Question
I wonder whether 80,000 Hours should be more transparent about how they rank problems and careers. I think so:
- I suspect 80,000 Hours' rankings play a major role in shaping the career choices of people who get involved in EA.
- According to the 2022 EA Survey, 80,000 Hours was an important factor in getting involved in EA for 58.0% of the 3.48k respondents overall, and for 52% of the people who got involved in 2022.
- The rankings have changed a few times. 80,000 Hours briefly explained why in their newsletter, but I think more detail on the whole process would be valuable.
- Greater reasoning transparency facilitates constructive criticism.
I understand the rankings are informed by 80,000 Hours' research process and principles, but I would also like a mechanistic understanding of how the rankings are produced. For example, do the rankings result from aggregating the personal ratings of some of the people working at and advising 80,000 Hours (as sketched below)? If so, whose ratings, and how much weight does each person carry? Might this type of information be an infohazard? If so, why?
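To make concrete what I mean by a mechanistic understanding, here is a minimal sketch of one such aggregation procedure. This is purely illustrative: the raters, scores, and weights are all hypothetical, and I have no idea whether 80,000 Hours does anything like this.

```python
# Hypothetical illustration (not 80,000 Hours' actual method): each rater
# scores each problem, and the final ranking is the weighted average of
# those scores, sorted from highest to lowest.
from typing import Dict


def aggregate_rankings(
    ratings: Dict[str, Dict[str, float]],  # rater -> {problem: rating}
    weights: Dict[str, float],             # rater -> weight of their rating
) -> Dict[str, float]:
    """Return each problem's weighted-average rating, highest first."""
    totals: Dict[str, float] = {}
    weight_sums: Dict[str, float] = {}
    for rater, scores in ratings.items():
        w = weights[rater]
        for problem, score in scores.items():
            totals[problem] = totals.get(problem, 0.0) + w * score
            weight_sums[problem] = weight_sums.get(problem, 0.0) + w
    averages = {p: totals[p] / weight_sums[p] for p in totals}
    return dict(sorted(averages.items(), key=lambda kv: -kv[1]))


# Illustrative inputs only; names, scores, and weights are made up.
ratings = {
    "researcher_a": {"AI risk": 9, "biosecurity": 7, "global health": 6},
    "advisor_b": {"AI risk": 8, "biosecurity": 8, "global health": 7},
}
weights = {"researcher_a": 2.0, "advisor_b": 1.0}
print(aggregate_rankings(ratings, weights))
```

Even just publishing inputs like these (who rates, and with what weights) would go a long way toward the transparency I have in mind.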
In any case, I am glad 80,000 Hours does have rankings. The current ones are presented as follows:
- Problems:
- 5 ranked "most pressing world problems".
- "These areas are ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar (though there’s a lot of variation in the impact of work within each issue as well)".
- 10 non-ranked "similarly pressing but less developed areas".
- "We’d be equally excited to see some of our readers (say, 10–20%) pursue some of the issues below — both because you could do a lot of good, and because many of them are especially neglected or under-explored, so you might discover they are even more pressing than the issues in our top list".
- "There are fewer high-impact opportunities working on these issues — so you need to have especially good personal fit and be more entrepreneurial to make progress".
- 10 "world problems we think are important and underinvested in". "We’d also love to see more people working on the following issues, even though given our worldview and our understanding of the individual issues, we’d guess many of our readers could do even more good by focusing on the problems listed above".
- 2 non-ranked "problems many of our readers prioritise". "Factory farming and global health are common focuses in the effective altruism community. These are important issues on which we could make a lot more progress".
- 8 non-ranked "underrated issues". "There are many more issues we think society at large doesn’t prioritise enough, where more initiatives could have a substantial positive impact. But they seem either less neglected and tractable than factory farming or global health, or the expected scale of the impact seems smaller".
- 5 ranked "most pressing world problems".
- Careers:
- 10 ranked "highest-impact career paths our research has identified so far".
- "These are guides to some more specific career paths that seem especially high impact. Most of these are difficult to enter, and it’s common to start by investing years in building the skills above before pursuing them. But if any might be a good fit for you, we encourage you to seriously consider it".
- "We’ve ranked these paths roughly in terms of our take on their expected impact, holding personal fit for each fixed and given our view of the world’s most pressing problems. But your personal fit matters a lot for your impact, and there is a lot of variation within each path too — so the best opportunities in one lower on the list will often be better than most of the opportunities in a higher-ranked one".
- 14 non-ranked "high-impact career paths we’re excited about".
- "Our top-ranked paths won’t be right for everybody, and there are lots of ways to have an impactful career. Here we list some additional paths we think can be high impact for the right person. These aren’t ranked in terms of impact, and there are surely many promising paths we haven’t written about at all".
- 10 ranked "highest-impact career paths our research has identified so far".
Side note
80,000 Hours is great! It was my entry point to effective altruism in early 2019 via the slide below, where following its advice was presented as the opposite of doing frivolous research.
The slide was presented in one of the last classes of the course Theory and Methodology of Science (Natural and Technological Science), which I took during my Erasmus studies at KTH. I did not check 80,000 Hours' website after class, but a few months later I came across the slide again while studying for the exam. Maybe because I was a little bored, that time I decided to look up 80,000 Hours. I found the ideas so interesting that I told myself I had better look into them with more peace of mind later, so as not to get distracted from the exams.
Hey there, thank you both for the helpful comments.
I agree the shorttermist/longtermist framing shouldn't be understood as too deep a divide or too reductive a category, but I think it serves a decent purpose in marking the distinction between different foci in EA (e.g. global health/factory farming vs. AI risk/biosecurity).
The comment above really helped me in seeing how prioritization decisions are made. Thank you for that, Ardenlk!
I'm a bit less bullish than Vasco on it being good that 80k does its own prioritization work. I don't think it is bad per se, but I am not sure what is gained by 80k research on the topic versus other EA people trying to figure out prioritization. I do worry that what is lost are advocates and recommendations for causes that are not currently well-represented in the opinions of the research team, but that are well-represented among other EAs more broadly. This makes it harder for people like me to funnel folks toward EA-principles-based career advising, as I'd be worried the advice they receive would not be representative of the considerations of EA folks, broadly construed. Again, I realize I may be overly worried here, and I'd be happy to be corrected!
I read the Thorstad critique as somewhat stronger than the summary you give: certainly, merely invoking x-risk should not by default justify assuming astronomical value. But my sense from the two examples (one from Bostrom, one on the cost-effectiveness of biorisk interventions) was that more plausible modeling assumptions seriously undercut at least some current cost-effectiveness models in that space, particularly for individual interventions (as opposed to, e.g., systemic interventions that plausibly reduce risk long-term). I did not take it to imply that risk reduction is not a worthwhile cause, but that current models seem to arrive at its dominance as a cause based on implausible assumptions (e.g. about background risk).
I think my perception of 80k as "partisan" stems from posts such as these, as well as from the deprioritization of global health and animal welfare reflected on the website. If I read the post right, the four positive examples all concern longtermist causes, including one person who shifted from global health to longtermist causes after interacting with 80k. I don't mean to suggest that any of these shifts were mistaken; I merely note that the only appearance of global health or animal welfare is in that one example of someone who seems to have been moved away from those causes toward a longtermist one.
I may be reading too much into this. If you have any data (or even guesses) on what percentage of the people you advise end up funneled toward global health and animal welfare causes, and what percentage toward risk reduction broadly construed, that would be really helpful.