80,000 Hours has outlined many career paths where it is possible to do an extraordinary amount of good. To maximize my impact I should consider these careers. Many of these paths are very competitive and require enormous specialization, and I will not be done with my studies for potentially many years to come. How will the landscape look then? Will there still be the same need for AI specialists, or will entirely new pressing issues have crept up on us, as operations management recently did so swiftly?

80,000 Hours is working hard at identifying key bottlenecks in the community. MIRI has long stated that a talent gap has been its main limitation in hiring, a sentiment shared among many top AI research institutions. Justifiably, 80,000 Hours recommended AI research as a top career path.

Attending EAGx Netherlands in 2018, I was surprised to see so many young, bright, and enthusiastic people proudly stating they were pursuing a career in AI research, with some even switching from unrelated fields to an MSc in Machine Learning!

Not too long ago, when it became clear there was an operations management bottleneck, 80,000 Hours swiftly released podcasts and articles advocating the value of pursuing expertise in this field.

I didn't get to attend EAG London, but I was told the workshop on operations management was so packed they had to add an extra room! If there were half as many effective altruists excited to pursue operations at EAG London as there were aspiring AI researchers at EAGx Netherlands, I'm certain we've got operations covered.

There is only one problem: many of these brilliant people will not be ready until years from now, and the bottleneck will remain until then. If we keep recommending careers that alleviate current bottlenecks for too long after they've been identified, then by the time the bottlenecks are finally alleviated, a flood of talented people will follow, competing for the same limited jobs.

I'm concerned that too little effort is put into tracking how many effective altruists are pursuing the different problem profiles. Having met more than a hundred effective altruists early in their careers, I can count on a single finger the people I've met dedicated to, for example, improving institutional decision-making.

80,000 Hours has coached around a thousand students and must have the best idea of what careers effective altruists are pursuing, but there is little to no public information about this that we can take into consideration when we try to figure out what paths to pursue. When planning our careers we shouldn't only look at neglected areas; we should also look at the neglected neglected areas, so we avoid crowding into the same subset of neglected areas that are more or less bottlenecked by the time it takes to attain expertise. Currently, this is very hard to do unless you know many young effective altruists, and even that is a biased sample.

The career coaches of 80,000 Hours are already strained, and I’m asking them to spread their time even thinner, but I think this is important enough to warrant it. As someone with a severe lack of talent, steering clear of competition is my go-to strategy. It would be hugely valuable for me, and hopefully for others like me, to have better insight into what careers other EAs are choosing, which problems 80,000 Hours believes will remain as important in five years, and which will not.

It is hard to regard this potential failure mode as a flaw in 80,000 Hours’ career advice; rather, it is a symptom of their smashing success. We are really taking their advice to heart! I suspect 80,000 Hours thought about these issues years ago and is well prepared, but on the off chance I had an original idea, I figured I’d voice it!

Thanks to Sebastian Schmidt for providing feedback on a draft of this; any resemblance to coherent thought is solely due to his help.

Comments

This is a good thought! I actually went through a month or two of being pretty excited about doing something like this early last year. Unfortunately I think there are quite a few issues around how well the data we have from advising represents what paths EAs in general are aiming for, such that we (80,000 Hours) are not the natural home for this project. We discussed including a question on this in the EA survey with Rethink last year, though I understand they ran out of time/space for it.

I think there’s an argument that we should start collecting/publicising whatever (de-identified) data we can get anyway, because any additional info on this is useful and it’s not that hard for 80,000 Hours to get. I think the reason that doing this feels less compelling to me is that this information would only answer a small part of the question we’re ultimately interested in.

We want to know the expected impact of a marginal person going to work in a given area.

To answer that, we’d need something like:

  • The number of EAs aiming at a given area, weighted by dedication, seniority and likelihood of success.
  • The same data for people who are not EAs but are aiming to make progress on the same problem. In some of our priority paths, EAs are a small portion of the relevant people.
  • An estimate of the extent to which different paths have diminishing returns and complementarity. (That linked post might be worth reading for more of our thoughts on coordinating as a community.)
  • Something around timing: how close to making an impact are the people currently aiming at this path, how long it takes someone without experience to make an impact, how much we want talent there now versus later, etc.

I think without doing that extra analysis, I wouldn’t really know how to interpret the results and we’ve found that releasing substandard data can get people on the wrong track. I think that doing this analysis well would be pretty great, but it’s also a big project with a lot of tricky judgement calls, so it doesn’t seem at the top of our priority list.

What should be done in the meantime? I think this piece is currently the best guide we have on how to systematically work through your career decisions. Many of the factors you mentioned are considered (although not precisely quantified) when we recommend priority paths because we try to consider neglectedness (both now and our guess at the next few years). For example, we think AI policy and AI technical safety could both absorb a lot more people before hitting large diminishing returns so we’re happy to recommend that people invest in the relevant career capital. Even if lots of people do so, we expect this investment to still pay off.

we’ve found that releasing substandard data can get people on the wrong track

I've seen indications and arguments that suggest this is true when 80,000 Hours releases data or statements they don't want people to take too seriously. Do you (or does anyone else) have thoughts on whether it's the case that anyone releasing "substandard" (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on a topic?

Basically, I'm tentatively inclined to think that some explicit data is often better than no explicit data, as long as it's properly caveated, because people can then update their beliefs by the appropriate amount. (Though that's definitely not fully or always true; see e.g. here.) But 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to.

So maybe it'd be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?

I agree we have a coordination problem. It might be easier to address this through the Annual EA Census rather than through 80,000 Hours. It would also be worth emphasizing future needs more strongly.

I had some thoughts on how to use the survey in this comment.

I don't think this idea was originally mine, but it would go a long way just to have two pie charts: the current distribution of careers in EA, and the optimal distribution.

I actually don't think that would help a ton, because 80K already prioritizes careers based on their perceived delta between supply and demand. The coordination problem comes because it can take years to generate additional supply, and 80K has only limited visibility into that supply as it's being generated.

I think there is enough difficulty in achieving specialization that you are better off ignoring coordination concerns here in favor of choosing based on personal inclination. It's hard to put in all the time it takes to become an expert in something, and it's even harder when you don't love that something for its own sake; my own suspicion is that without that love you will never reach the highest level of expertise. So it's better to look for the confluence of what you most love and what is most useful than to worry about coordinating over usefulness. You and everyone else are not sufficiently interchangeable when it comes to developing enough specialization to be helpful to EA causes.

I'd agree with the idea people should take personal fit very seriously, with passion/motivation for a career path being a key part of that. And I'd agree with your rationale for that.

But I also think that many people could become really, genuinely fired up about a wider range of career paths than they might currently think (if they haven't yet tried or thought about those paths). And I also think that many people could be similarly good fits for, or similarly passionate about, multiple career paths. For these people, knowing which career path will have the greatest need for more people like them in a few years can be very useful as a way of shortlisting the paths to test one's ability to become passionate about, and/or as a tie-breaker between paths one has already shortlisted based on passion, motivation, and fit.

For example, I'm currently quite passionate about research, but have reason to believe I could become quite passionate about operations-type roles, about roles at the intersection of those two paths (like research management), and maybe about other paths like communications or non-profit entrepreneurship. So which of those roles - rather than which roles in general - will be the most marginally useful in a few years' time seems quite relevant for my career planning.

(I think this is probably more like a different emphasis to your comment, rather than a starkly conflicting view.)

Just wanted to mention that this problem is orthogonal to the related problem of generating enough work to do in the first place; before you start thinking about how to cut up the pie better, you might want to consider making the pie bigger instead.

Unless making the pie bigger is less neglected. I guess this problem can be applied to itself :)