80,000 Hours has outlined many career paths in which it is possible to do an extraordinary amount of good. To maximize my impact, I should consider these careers. Many of these paths are very competitive and require enormous specialization. I will not be done with my studies for potentially many years to come. How will the landscape look then? Will there still be the same need for AI specialists, or will entirely new pressing issues have crept up on us, as operations management recently did so swiftly?
80,000 Hours is working hard to identify key bottlenecks in the community. MIRI has long stated that a talent gap has been its main limitation in hiring, a sentiment shared by many top AI research institutions. Justifiably, 80,000 Hours has recommended AI research as a top career path.
Attending EAGx Netherlands in 2018, I was surprised to see so many young, bright, and enthusiastic people proudly stating that they were pursuing a career in AI research, with some even switching from unrelated fields to an MSc in Machine Learning!
Not long ago, when it became clear that there was an operations management bottleneck, 80,000 Hours swiftly released podcasts and articles advocating the value of pursuing expertise in the field.
I didn't get to attend EAG London, but I was told the operations management workshop was so packed they had to add an extra room! If there were half as many Effective Altruists excited to pursue operations at EAG London as there were aspiring AI researchers at EAGx Netherlands, I'm certain we have operations covered.
Only one problem: many of these brilliant people will not be ready for years, and the bottleneck will remain until then. If we keep recommending careers that alleviate current bottlenecks for too long after they've been identified, then by the time those bottlenecks are finally alleviated, a flood of talented people will arrive behind them, crowding over the same limited jobs.
I'm concerned that too little effort is put into tracking how many Effective Altruists are pursuing each problem profile. Having met more than a hundred Effective Altruists early in their careers, I can count on a single finger the people I've met who are dedicated to, for example, improving institutional decision-making.
80,000 Hours has coached around a thousand students and must have the best picture of which careers Effective Altruists are pursuing, but there is little to no public information about this that we can take into account when trying to figure out which paths to pursue. When planning our careers, we shouldn't only look at neglected areas. We should also look at the neglected neglected areas, so that we avoid crowding over the same subset of neglected areas that are more or less bottlenecked by the time it takes to attain expertise. Currently, this is very hard to do unless you know many young Effective Altruists, and even then that's a biased sample.
The career coaches of 80,000 Hours are already strained, and I'm asking them to spread their time even thinner, but I think this is important enough to warrant it. As someone with a severe lack of talent, steering clear of competition is my go-to strategy. It would be hugely valuable for me, and hopefully for others like me, to have better insight into which careers other EAs are choosing, which problems you believe will remain as important in five years, and which will not.
This potential failure mode is hard to regard as a flaw in 80,000 Hours' career advice; rather, it is a symptom of their smashing success. We are really taking their advice to heart! I suspect 80,000 Hours thought about these issues years ago and is well prepared, but on the off chance I had an original idea, I figured I'd voice it!
Thanks to Sebastian Schmidt for providing feedback on a draft of this; any semblance of coherent thought is solely due to his help.
This is a good thought! I actually spent a month or two early last year being pretty excited about doing something like this. Unfortunately, I think there are quite a few issues around how well the data we have from advising represents the paths EAs in general are aiming for, such that we (80,000 Hours) are not the natural home for this project. We discussed including a question on this in the EA Survey with Rethink last year, though I understand they ran out of time/space for it.
I think there’s an argument that we should start collecting/publicising whatever (de-identified) data we can get anyway, because any additional info on this is useful and it’s not that hard for 80,000 Hours to get. I think the reason that doing this feels less compelling to me is that this information would only answer a small part of the question we’re ultimately interested in.
We want to know the expected impact of a marginal person going to work in a given area.
To answer that, we’d need something like:
I think that without doing that extra analysis, I wouldn't really know how to interpret the results, and we've found that releasing substandard data can get people on the wrong track. I think that doing this analysis well would be pretty great, but it's also a big project with a lot of tricky judgement calls, so it doesn't seem to be at the top of our priority list.
What should be done in the meantime? I think this piece is currently the best guide we have on how to systematically work through your career decisions. Many of the factors you mentioned are considered (although not precisely quantified) when we recommend priority paths, because we try to account for neglectedness (both now and our best guess for the next few years). For example, we think AI policy and AI technical safety could both absorb a lot more people before hitting large diminishing returns, so we're happy to recommend that people invest in the relevant career capital. Even if lots of people do so, we expect this investment to still pay off.
I've seen indications and arguments suggesting this is true when 80,000 Hours releases data or statements it doesn't want people to take too seriously. Do you (or does anyone else) have thoughts on whether anyone releasing "substandard" (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on that topic?
Basically, I'm tentatively inclined to think that some explicit data i...