
When is AI safety research harmful?

Ha! 

Personally, I've gotten a lot of value from having a buddy look over my work and chat with me about it -- a fresh perspective is really useful, not just for copyedits but also for building on my first thoughts. If you don't yet know people you could ask for this, you might find it valuable to reach out to SERI, CERI, or other community orgs that aim to help junior x-risk researchers. (Presumably ZERI and JERI are next.) Happy to chat more via DM if that would be useful :) 

When is AI safety research harmful?

I think this is a pretty important topic, and one I haven't seen discussed as often as I'd like! Thanks for writing it up.

I think you could get more engagement with this topic if you spent some more time smoothing out the presentation of your writeup. For example, there are a few typos in the summary section that made me less excited to read the rest of the piece. Given that you now have an interesting piece of thinking written up, it should be fairly feasible to find a smart junior person who could give you copyedits and comments. 

Demandingness and Time/Money Tradeoffs are Orthogonal

Self-signaling value ain't something to sneeze at. Personally, a lot of my desire-for-demandingness is about reinforcing my identity as someone who's willing to make sacrifices in order to do good. ("Reinforcing" meaning both getting better at that skill, and assuring myself that that's what I'm like :) 

Will more AI systems be trained to make use of preexisting computational tools?

Epistemic status: "the best way to learn is by saying something wrong and being corrected." These statements are all intended as my best guess, coming from someone who's not super technical and could easily be wrong about AI progress.

Please Share Your Perspectives on the Degree of Societal Impact from Transformative AI Outcomes

In general, I'm skeptical of surveys like this -- I participated in a similar one a few years ago whose results didn't end up being very useful, though I think it did help clarify my own thinking. But that's a pretty outside-view take. Let me take a stab at making that general skepticism concrete -- trying to elucidate why people might struggle to answer, and why the questions you're asking might not yield very useful answers. 

I expect that the 'right' answer depends on carefully enumerating and considering a bunch of different plausible scenarios, and what you'll get instead is either uncertainty or vague intuitive guesses. If you mostly want vague intuitive guesses, great! I would guess you'd get more clarity from trying to elicit people's particular models / expected trajectories.

My rough experience is that people working in AI governance mostly think about particular trajectories/dynamics of AI progress that they consider especially plausible/important/tractable, so they might only have insight into particular configurations of the variables you consider. Or their insight might be at a more granular level, weighing e.g. the impact of AI development in particular corporate labs.

Skimming your survey, the answer that feels right to me is often that the effect depends a lot on circumstances. For example, fast takeoff worlds where fast takeoff is anticipated look extremely different from fast takeoff worlds where it comes as a surprise.

Useful Vices for Wicked Problems

This piece is...pretty amazing. I could see this being really useful for me as an AI governance researcher, possibly the most useful thing I've read this year. Thanks!

Do you have any advice for eliciting feedback from people when you're doing rapid iteration? I generally find it valuable to share Google Docs with people as I'm working through ideas, but it can be hard to communicate what kind of feedback is most useful on rough documents. Maybe it's good to flag "these are hot takes, I'm looking for strong arguments against them to refine my viewpoint, don't bother with small details for now"?