I lead the Existential Security team (previously known as the General Longtermism team) at Rethink Priorities. We are currently focused on helping launch entrepreneurial projects that reduce existential risk. See here for a blog post explaining our team strategy for the year.
My previous work has included nanotechnology strategy research and co-founding EA Pathfinder, which I co-led from April to September 2022. Before joining Rethink Priorities in early 2022, I was a Senior Research Scholar at the Future of Humanity Institute, and before that I completed a PhD in DNA nanotechnology at Oxford University and spent 5 years working in finance as a quantitative analyst.
If you're interested in learning more about nanotechnology strategy research, you could check out this database of resources I made.
Feel free to send me a private message here, or to email me at hello [at] bensnodin dot com.
You can also give me anonymous feedback with this form!
Thanks for writing this, Joey, very interesting!
Since the top 20% of founders who enter your programme generate most of the impact, and it's fairly predictable who these founders will be, it seems like getting more applicants in that top 20% bracket could be pretty huge for the impact you're able to have. Curious if you have any reaction to that? I don't know whether expanding the applicant pool at the top end is a top priority for the organisation currently.
Thanks for these!
I think my general feeling on these is that it's hard for me to tell whether they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (Because of this, for example, the link to Neel's longlist of theories for impact was helpful, so thank you for that!)
For example, my impression is that some people with relevant knowledge think that technical safety work currently can't achieve very much.
(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)
I think my motivation comes from a few things: sustaining my personal motivation for work on existential risk, helping me form accurate beliefs about the general tractability of work on existential risk, and helping me make the case to other people for the importance of work on existential risk.
Thinking about it, maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (Or maybe someone did already and I don't know about it.)
I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so, and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.
Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?
(Not that this is an original suggestion; you basically mention this in your thoughts on what you could have done differently.)
I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.
This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.
Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!
Thanks for sharing, Ben! As a UK national and resident, I'm grateful for an easy way to stay at least a little aware of relevant UK politics, which I otherwise struggle to do.