Ben Snodin

Senior Researcher @ Rethink Priorities
Working (6-15 years of experience)

Bio

I'm a Senior Researcher at Rethink Priorities in the General Longtermism team, where my work to date has included nanotechnology strategy research and co-founding EA Pathfinder, which I co-led from April to September 2022. 

If you're interested in learning more about nanotechnology strategy research, you could check out this database of resources I made.

Previously, I was a Senior Research Scholar at the Future of Humanity Institute, and before that I completed a PhD in DNA nanotechnology at Oxford University and spent 5 years working in finance as a quantitative analyst.

Feel free to send me a private message here, or to email me at hello [at] bensnodin dot com.

You can also give me anonymous feedback with this form!

Comments

Thanks for these!

I think my general feeling on these is that it's hard for me to tell if they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (e.g. because of this, linking to Neel's longlist of theories for impact was helpful, so thank you for that!)

E.g. my impression is that some people with relevant knowledge think that technical safety work currently can't achieve very much.

(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)

I think my motivation comes from a few things: supporting my personal motivation for working on existential risk, helping me form accurate beliefs about the general tractability of work on existential risk, and helping me advocate to other people for the importance of work on existential risk.

Thinking about it, maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (Or maybe someone did this already and I don't know about it.)

I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.

Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?

(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)

I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.

Sure, I think I or Claire Boine might write about that kind of thing some time soon :).

This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.

Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!

Thanks, I'd be interested to discuss more! I'll give some reactions here for the time being.

This sounds astonishingly high to me (as does 1-2% without TAI)

(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)

Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.

I definitely agree that it seems like we're not at all on track to get to advanced nanotechnology in 20 years, and I'm not sure I disagree with anything you said about what needs to happen to get there etc. 

I'll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I'm not especially convinced about the number I gave).

  • I think getting to 99.99% confidence is pretty hard -- like in the 0.01% fastest-development scenarios I feel like we're far into "wow I made some very wrong assumptions I wasn't even aware I was making" territory. (In general with prediction, I feel like in the 10% most extreme scenarios an assumption I thought was rock solid turns out to be untrue.)
  • Apart from the "reluctance to be extremely confident in anything" thing:
    • I think the main scenario I have in mind for pre-TAI advanced nanotechnology by 2040 is one where some very powerful AI that isn't powerful enough to count as TAI gets developed and speeds up (relevant parts of) science R&D a lot
    • I think there's also some (very small) chance that advanced nanotechnology is much easier than it currently seems, since (maybe) we haven't really tried yet. Either through roughly Drexler's path, or through some other path.

Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.

So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.

I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of great rebuttals is some evidence here. (I don't think that represents a shift in my view -- I guess I just didn't go into enough detail in the post to get to this kind of nuance, which may have been a mistake.)

Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders etc) that the chance that advanced nanotechnology arrives by 2040 is less than 1 in 10,000 (1e-4) would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were, I'd potentially be very keen to support that. (Similarly for roughly showing something like near-infeasibility of Drexler's sketch.)

Answer by Ben Snodin · Jun 18, 2022

a) Has anyone ever thought about this question in detail? 

  • I haven't thought about this in detail but I have a weakly held view that senior people should do more mentoring
  • (Without wanting to imply that I'm a "senior EA") I've thought about it for myself, and I'm generally inclined to think about it more carefully. I think the last time I did, I basically concluded that I'd like to do more mentoring but was bottlenecked on not having anyone to mentor (though I'm not sure I currently think I should do more mentoring).

b) What factors would such a decision depend on? Intuitively, the senior's ability to mentor and the urgency of the problem play a role but there is surely more. 

  • For myself I might try to do a Fermi estimate, maybe something like (just for illustration, extremely rough and not thought through, etc.): career capital for me (maybe made concrete as "how many hours of productive time am I willing to sacrifice for the expected career capital") + career capital for the mentee (maybe "by how many days do I accelerate their career progression x how much impact will they have compared to me") - time cost for me. (There's a rough worked sketch of this after this list.)
  • I will say I think some people enjoy mentoring (and similar things) way more than others, and this probably matters a lot. Maybe in the above fermi you can apply a factor to the time cost to convert it from "actual clock time" to "counterfactual difference in time spent on other productive things" or whatever
  • Maybe another factor is how disruptive it is for you to add (say) another meeting per month to your diary. E.g. if you usually have around 2 meetings per week and otherwise can focus on research, maybe adding more meetings is very costly.
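To make the shape of that Fermi estimate a bit more concrete, here's a minimal worked sketch in Python. Every number in it is a made-up placeholder for illustration (none come from the comment above), and converting "days of career progression" into hours at ~8 productive hours per day is just an assumed convenience.

```python
# Very rough, purely illustrative Fermi sketch of the mentoring decision above.
# All inputs are placeholder guesses, not estimates from the original comment.

# Career capital for the mentor: "how many hours of productive time am I willing
# to sacrifice for the expected career capital from mentoring?"
mentor_career_capital_hours = 5

# Career capital for the mentee: days of career progression gained, converted to
# hours (assuming ~8 productive hours/day), scaled by their expected impact
# relative to the mentor.
mentee_days_accelerated = 10
mentee_relative_impact = 0.5  # mentee's impact per unit time vs. the mentor's
mentee_career_capital_hours = mentee_days_accelerated * 8 * mentee_relative_impact

# Time cost for the mentor, converted from clock time to "counterfactual time
# spent on other productive things" (people who enjoy mentoring get a smaller factor).
clock_hours_spent = 20
displacement_factor = 0.7
mentor_time_cost_hours = clock_hours_spent * displacement_factor

net_value_hours = (
    mentor_career_capital_hours
    + mentee_career_capital_hours
    - mentor_time_cost_hours
)
print(f"Net value of mentoring: {net_value_hours:.0f} productive-hour equivalents")
```

With these placeholder numbers the mentee's accelerated progress dominates, which matches the intuition that the answer is very sensitive to how much impact you expect the mentee to have relative to you and to how costly the interruptions are.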

c) Are there options to combine mentorship and direct work, i.e. can senior people reliably outsource simple tasks to their mentees?

  • I think outsourcing simple tasks is surprisingly hard. But maybe a good version of this looks something like having an RA/PA (and maybe senior people should have more RAs/PAs).