I'm a Senior Researcher at Rethink Priorities in the General Longtermism team, where my work to date has included nanotechnology strategy research and co-founding EA Pathfinder, which I co-led from April to September 2022.
If you're interested in learning more about nanotechnology strategy research, you could check out this database of resources I made.
Previously, I was a Senior Research Scholar at the Future of Humanity Institute, and before that I completed a PhD in DNA nanotechnology at Oxford University and spent 5 years working in finance as a quantitative analyst.
Feel free to send me a private message here, or to email me at hello [at] bensnodin dot com.
You can also give me anonymous feedback with this form!
I think my motivation comes from a few things: sustaining my personal motivation for work on existential risk, helping me form accurate beliefs about the general tractability of work on existential risk, and helping me make the case to other people for the importance of work on existential risk.
Thinking about it, maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (Or maybe someone has already done this and I just don't know about it.)
I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so, and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.
Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?
(Not saying this is an original suggestion -- you basically mention it yourself in your thoughts on what you could have done differently.)
I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.
Sure, I think I or Claire Boine might write about that kind of thing some time soon :).
This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.
Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!
Thanks, would be interested to discuss more! I'll give some reactions here for the time being.
This sounds astonishingly high to me (as does 1-2% without TAI).
(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.
I definitely agree that it seems like we're not at all on track to get to advanced nanotechnology in 20 years, and I'm not sure I disagree with anything you said about what needs to happen to get there etc.
I'll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I'm not especially convinced about the number I gave).
Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.
So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.
I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of great rebuttals is some evidence here. (I don't think that represents a shift in my view -- I guess I just didn't go into enough detail in the post to get to this kind of nuance; it's possible that was a mistake.)
Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders, etc.) that the chance that advanced nanotechnology arrives by 2040 is less than 1 in 10,000 (i.e. 1e-4) would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were, I'd potentially be very keen to support that. (Similarly for ~showing something like "near-infeasibility of Drexler's sketch".)
a) Has anyone ever thought about this question in detail?
b) What factors would such a decision depend on? Intuitively, the senior person's ability to mentor and the urgency of the problem play a role, but there is surely more.
c) Are there options to combine mentorship and direct work, i.e. can senior people reliably outsource simple tasks to their mentees?
Thanks for these!
I think my general feeling on these is that it's hard for me to tell if they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (E.g. because of this, your link to Neel's longlist of theories for impact was helpful, so thank you for that!)
E.g. my impression is that some people with relevant knowledge seem to think that technical safety work currently can't achieve very much.
(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I'll get round to this someday.)