Ben Snodin

Senior Researcher @ Rethink Priorities
Working (6-15 years of experience)

Bio


I’m a career advisor at EAPathfinder.org, helping mid-career professionals who want to switch to work that's impactful according to EA considerations.

I'm also a Senior Researcher at Rethink Priorities, where I work on nanotechnology strategy research. 

If you're interested in learning more about nanotechnology strategy research, you could check out this database of resources I made.

Previously, I was a Senior Research Scholar at the Future of Humanity Institute, and before this, I completed a PhD in DNA nanotechnology at Oxford University.

Feel free to send me a private message here, email hello [at] bensnodin dot com, or apply for career advice through the EA Pathfinder website’s application form.

You can also give me anonymous feedback with this form!

Comments

Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?

(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)

I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.

Sure, I think I or Claire Boine might write about that kind of thing some time soon :).

This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.

Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!

Thanks, I'd be interested to discuss more! I'll give some reactions here for the time being:

This sounds astonishingly high to me (as does 1-2% without TAI).

(For context, and as a slight warning about the quality of what follows: I haven't thought about this for a while, and I'm mostly relying on old notes plus my current sense of whether I still agree with them.)

Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.

I definitely agree that it seems like we're not at all on track to get to advanced nanotechnology in 20 years, and I'm not sure I disagree with anything you said about what needs to happen to get there etc. 

I'll try to say some things that might make it clearer why we are currently giving different numbers here (though to be clear, as is hopefully apparent in the post, I'm not especially convinced about the number I gave).

  • I think getting to 99.99% confidence is pretty hard -- like in the 0.01% fastest-development scenarios I feel like we're far into "wow, I made some very wrong assumptions I wasn't even aware I was making" territory. (In general with prediction, I feel like in the 10% most extreme scenarios an assumption I thought was rock solid turns out to be untrue.)
  • Apart from the "reluctance to be extremely confident in anything" thing:
    • I think the main scenario I have in mind for pre-TAI advanced nanotechnology by 2040 is one where some very powerful AI that isn't powerful enough to count as TAI gets developed and speeds up (relevant parts of) science R&D a lot
    • I think there's also some (very small) chance that advanced nanotechnology is much easier than it currently seems, since (maybe) we haven't really tried yet. Either through roughly Drexler's path, or through some other path.

Scientists convince themselves that Drexler's sketch is infeasible more often than one might think. But to someone at that point there's little reason to pursue the subject further, let alone publish on it. It's of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley's participation in the debate certainly didn't redound to his reputation.

So there's not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that's at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn't go through in generality or can't be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn't put much weight on the apparent lack of rebuttals.

I definitely agree with the points about incentives for people to rebut Drexler's sketch, but I still think the lack of great rebuttals is some evidence here. (I don't think that represents a shift in my view -- I guess I just didn't go into enough detail in the post to get to this kind of nuance; it's possible that was a mistake.)

Kind of reacting to both of the points you made / bits I quoted above: I think convincing me (or someone more relevant than me, like major EA funders, etc.) that the chance that advanced nanotechnology arrives by 2040 is less than 1 in 10,000 would be pretty valuable. I don't know if you'd be interested in working to try to do that, but if you were I'd potentially be very keen to support that. (Similarly for ~showing something like "near-infeasibility" for Drexler's sketch.)

a) Has anyone ever thought about this question in detail? 

  • I haven't thought about this in detail, but I have a weakly held view that senior people should do more mentoring.
  • (Without wanting to imply that I'm a "senior EA":) I've thought about this for myself, and I'm generally inclined to think about it more carefully. The last time I did, I basically concluded that I'd like to do more mentoring but was bottlenecked on not having anyone to mentor (though I'm not sure I currently think I should do more mentoring).

b) What factors would such a decision depend on? Intuitively, the senior's ability to mentor and the urgency of the problem play a role but there is surely more. 

  • For myself I might try to do a Fermi estimate, maybe something like (just for illustration, extremely rough and not thought through): career capital for me (maybe made concrete with "how many hours of productive time am I willing to sacrifice for the expected career capital") + career capital for mentee (maybe "by how many days do I accelerate their career progression x how much impact will they have compared to me") - time cost for me. (There's a rough sketch of this kind of calculation just after this list.)
  • I will say I think some people enjoy mentoring (and similar things) way more than others, and this probably matters a lot. Maybe in the above Fermi estimate you can apply a factor to the time cost to convert it from "actual clock time" to "counterfactual difference in time spent on other productive things" or whatever.
  • Maybe another factor is how disruptive it is for you to add (say) another meeting per month to your diary. E.g. if you usually have around 2 meetings per week and otherwise can focus on research, maybe adding more meetings is very costly.
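To make that a bit more concrete, here's a minimal sketch of the kind of calculation I have in mind, written in Python. The structure and every number below are illustrative assumptions only, not considered estimates:

```python
# A very rough sketch of the Fermi estimate described above. All numbers are
# made-up placeholders for illustration, not real estimates.

# Cost: my time spent mentoring, in hours of productive time
hours_per_month = 2            # e.g. one meeting plus prep and follow-up
months = 12
clock_time_cost = hours_per_month * months

# If I enjoy mentoring, the counterfactual cost per clock hour is lower
enjoyment_factor = 0.7         # 1.0 = pure cost; lower if it partly replaces leisure
effective_time_cost = clock_time_cost * enjoyment_factor

# Benefit 1: career capital for me, expressed as hours of productive time
# I'd be willing to sacrifice for it
career_capital_for_me = 5

# Benefit 2: career capital for the mentee, converted into the same units
days_accelerated = 30          # days by which I accelerate their career progression
mentee_impact_ratio = 0.5      # their expected impact per day relative to mine
hours_per_workday = 6
career_capital_for_mentee = days_accelerated * mentee_impact_ratio * hours_per_workday

net_value = career_capital_for_me + career_capital_for_mentee - effective_time_cost
print(f"Net value of mentoring (in my-productive-hours equivalents): {net_value:.0f}")
```

The inputs are doing all the work here, of course; the point is just that converting everything into a common unit (hours of my productive time) makes the trade-off easier to eyeball.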

c) Are there options to combine mentorship and direct work, i.e. can senior people reliably outsource simple tasks to their mentees?

  • I think outsourcing simple tasks is surprisingly hard. But maybe a good version of this looks something like having an RA/PA (and maybe senior people should have more RAs/PAs).

Ah, I was looking forward to listening to this using the Nonlinear Library podcast, but Twitter screenshots don't work well with that. If someone made a version of this with the screenshots converted to normal text, that would be helpful for me and maybe others.

Nice, sounds like a cool project!

Some quick thoughts on this from me:

Honestly for me it's probably at the "almost too good to be true" level of surprisingness (but to be clear it actually is true!). I think it's a brilliant community / ecosystem (though of course there's always room for improvement).

I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it's a big factor here.

I also agree that not all roles are research roles, although I don't know how much this weakens the surprisingness because some people probably don't find research roles appealing but do find e.g. project management appealing. (Also I do feel like most research is pretty tough one way or another, whether or not it's "EA" research.)

I guess there are also the "downsides" I mentioned in the post. One that particularly comes to mind is that there still aren't a ton of great EA jobs to just slot into, and the ones that exist often seem to be very over-subscribed. This partly depends on your existing profile of skills, of course :).
