deep

195 karma · Joined Apr 2022

Comments (9)
Nice to see this! I remember being surprised a few years back that nobody in EA besides Drexler was talking about APM, so it's nice to see a formal public writeup clarifying what's going on with it. I'm leery of infohazards here, but conditional on it being reasonable to publish such an article at all, this seems like a solid version of that article. 

Re: key organizations, a few thoughts:

  • FHI seems like another natural place, since Drexler's there and (I assume) they're pretty open to hosting other people working on APM. 
  • I would be curious if Ben Snodin has a take on whether Rethink Priorities is a particularly good place to work, relative to e.g. being an independent researcher or working at FHI. RP's General Longtermism team could host work on APM risk, and proximity to Ben is useful, but AFAIK Ben is not currently doing APM-related work and doesn't particularly plan to.

I enjoyed this post -- I've wanted a scope-sensitive news source for ages. 

A resource I really like for getting a sense of what the world looks like "on average" is Dollar Street, which puts together info and images about households around the world. They estimate household income, so you can see what life at different income levels is like.

Thanks for this post! I always appreciate a pretty metaphor, and I generally agree that junior EAs should be less deferential and more ambitious. Maybe most readers will in fact mostly take away the healthy lesson of "don't defer", which would be great! But I worry a bit about the urgent tone of "act now, it's all on you", which I think can lead in some unhealthy directions.

To me, it felt like a missing mood within the piece was concern for the reader's well-being. The concept of heroic responsibility is in some ways very beautiful and important to me, but I worry that it can very easily mess people up more than it causes them to do good. (Do heroic responsibility responsibly, kids.)

When you feel like there are no lifeguards, and drowning children are everywhere, it's easy to exhaust yourself before you even get to the point of saving anyone at all. I've seen people burn themselves out over projects that, while promising, were really not organized with their sustainable well-being in mind.

If I were to write a version of this piece that reflected my approach to doing good, maybe I'd try to find a different metaphor that framed it more as an iterated game, to make it more natural to say something about conserving your strength / nurturing yourself / marathon-not-a-sprint. 

Some other comments I particularly resonated with: @levin's point about negative side effects due to unilateralist, uninformed action, and @VaidehiAgarwalla's point about implicitly reflecting an Eliezerish view of AI risk. I think the latter is part of what triggered my worry about this post potentially crushing people under the weight of responsibility.

Ha! 

Personally, I've gotten a lot of value from having a buddy look over my work and chat with me about it -- a fresh perspective is really useful, not just for copyedits but also for building on my first thoughts. If you don't yet know people you could ask for this, you might find it valuable to reach out to SERI, CERI, or other community orgs that aim to help junior x-risk researchers. (Presumably ZERI and JERI are next.) Happy to chat more via DM if that would be useful :)

I think this is a pretty important topic, and one I haven't seen discussed as often as I'd like! Thanks for writing it up.

I think you could get more engagement with this topic if you spent some more time smoothing out the presentation of your writeup. For example, there are a few typos in the summary section that made me less excited to read the rest of the piece. Given that you now have a pretty interesting piece of thinking written, it might be pretty feasible to find a smart junior person who could give you copyedits and comments. 

Self-signaling value ain't something to sneeze at. Personally, a lot of my desire-for-demandingness is about reinforcing my identity as someone who's willing to make sacrifices in order to do good. ("Reinforcing" meaning both getting good at that skill, and assuring myself that that's what I'm like :)

epistemic status: "the best way to learn is by saying something wrong and being corrected." These statements are all intended as "my best guess" from someone who's not super technical and could easily be wrong about AI progress.

In general, I'm skeptical of surveys like this -- I participated in a similar one a few years ago that didn't have super useful results, though I think it was kind of useful for clarifying my own thinking.  But that's pretty outside-viewy. Let me take a stab at making that general skepticism concrete -- trying to elucidate why people might struggle to answer, slash why the questions you're asking won't yield super useful answers. 

I expect that the 'right' answer depends on carefully enumerating and considering a bunch of different plausible scenarios, and what you'll get instead is either uncertainty or vague intuitive guesses. If you mostly want vague intuitive guesses, great! I would guess you'd get more clarity from trying to elicit people's particular models / expected trajectories.

My rough experience is that people working in AI governance mostly think about particular trajectories/dynamics of AI progress that they consider especially plausible/important/tractable, so they might only have insight into particular configurations of variables you consider. Or their insight might be at a more granular level, weighing e.g. the impact of AI development in particular corporate labs.

Skimming your survey, the answer that feels right to me is often that the effect depends a lot on circumstances. For example, fast takeoff worlds where fast takeoff is anticipated look extremely different from fast takeoff worlds where it comes as a surprise.

This piece is...pretty amazing. I could see this being really useful for me as an AI governance researcher, possibly the most useful thing I've read this year. Thanks!

Do you have any advice for eliciting feedback from people when you're doing rapid iteration? I generally find it valuable to share Google Docs with people as I'm working through ideas, but it can be hard to communicate the kind of feedback that's most useful for rough documents. Maybe it's good to flag "these are hot takes, I'm looking for strong arguments against them to refine my viewpoint, don't bother with small details for now"?