I'm interested in helping mid-career people who want to switch to work that's impactful according to EA considerations. Please get in touch if you fit that description!
I'm a Senior Researcher at Rethink Priorities, where I work on nanotechnology strategy research.
You can reach me by sending me a private message on the forum, or by emailing hello[at]bensnodin dot com.
Thanks, I'd be interested to discuss more! I'll give some reactions here for the time being.
This sounds astonishingly high to me (as does 1-2% without TAI)
(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.
I definitely agree that it seems like we're not at all on track to get to advanced nanotechnology in 20 years, and I'm not sure I disagree with anything you said about what needs to happen to get there etc.
I'll try to say some things that might make it clearer why we're currently giving different numbers here (though, to be clear, as is hopefully apparent in the post, I'm not especially convinced of the number I gave).
Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.
So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.
I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of strong rebuttals is some evidence here. (I don't think that represents a shift in my view; I just didn't go into enough detail in the post to get to this kind of nuance, which may have been a mistake.)
Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders) that the chance that advanced nanotechnology arrives by 2040 is less than 1 in 10,000 would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were, I'd potentially be very keen to support that. (Similarly for ~showing something like "near-infeasibility" of Drexler's sketch.)
a) Has anyone ever thought about this question in detail?
b) What factors would such a decision depend on? Intuitively, the senior person's ability to mentor and the urgency of the problem play a role, but there are surely more.
c) Are there options to combine mentorship and direct work, i.e. can senior people reliably outsource simple tasks to their mentees?
Ah, I was looking forward to listening to this using the Nonlinear Library podcast, but Twitter screenshots don't work well with that. If someone made a version of this with the screenshots converted to normal text, that would be helpful for me and maybe others.
Nice, sounds like a cool project!
Some quick thoughts on this from me:
Honestly for me it's probably at the "almost too good to be true" level of surprisingness (but to be clear it actually is true!). I think it's a brilliant community / ecosystem (though of course there's always room for improvement).
I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it's a big factor here.
I also agree that not all roles are research roles, although I don't know how much this weakens the surprisingness because some people probably don't find research roles appealing but do find e.g. project management appealing. (Also I do feel like most research is pretty tough one way or another, whether or not it's "EA" research.)
I guess there are also the "downsides" I mentioned in the post. One that particularly comes to mind is that there still aren't a ton of great EA jobs to just slot into, and the ones that exist often seem to be very over-subscribed. That partly depends on your existing profile of skills, of course :).
Yeah, I think that progress in nanotech stuff has been very slow over the past 20 years, whereas progress in AI stuff has sped up a lot (and investment has increased a huge amount). Based on that, it seems reasonable to focus more on making the development of powerful AI go well for the world and to think less about nanotech, so I think this is at least part of the story.
Or to message me :)
Thanks for sharing your thoughts!
For mid-career people, it feels like runway may be less of an issue relative to the knowledge that you may be giving up something with a guaranteed impact, even if it may not be optimal, on the basis of uncertain factors.
If you're thinking purely about maximising impact, you probably want to go for the highest expected value thing, in which case accepting a bit more uncertainty in your lifetime impact to explore other options is (in the kind of situation you described) maybe well worth it in many cases. Of course, one important factor is how easy it is to return to the current career path after (say) a year of trying other stuff.
(If this is more of a gut-level concern, maybe it's a different story, of course.)
At a high level, I'd say that in the ~2 years I've spent doing "EA work", my average motivation has been towards the upper end of my motivation level over the previous 8-9 years of doing a PhD and working in finance. (I might have been significantly less motivated working in finance if I hadn't been kind of doing an "earning to give" type thing.)
I think the biggest difficulties with motivation for me in "EA work" have been doing research-type things that are many steps removed from impact, and at times not having huge amounts of management / guidance (but there are lots of pluses, as I implied in the post, I guess).
Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!