TL;DR:
EA cause prioritisation frameworks rely on subjective probabilities that aren't grounded in empirical evidence. This makes EA cause prioritisation highly uncertain and imprecise, so the optimal distribution of career focuses among engaged EAs should be less concentrated in a small number of "top" cause areas.
Post:
80,000 Hours express uncertainty about the prioritisation of causes in their list of the most pressing problems:
“We begin this page with some categories of especially pressing world problems we’ve identified so far, which we then put in a roughly prioritised list.”
“We then give a longer list of global issues that also seem promising to work on (and which could well be more promising than some of our priority problems), but which we haven’t investigated much yet.”
Their list is developed in part using this framework and this definition of impact, with a list of example scores from 2017 at https://80000hours.org/articles/cause-selection/.
Importantly, the framework relies on subjective probabilities that aren't grounded in empirical evidence, which makes them highly uncertain and susceptible to irrationality. Even if 80,000 Hours' own staff were to independently estimate some of the relevant probabilities, I would expect very little agreement between them. When the inputs to the framework are this uncertain and susceptible to irrationality, the outputs should be treated as very uncertain too.
80,000 Hours do express uncertainty about their example scores:
“Please take these scores with a big pinch of salt. Some of the scores were last updated in 2016. We think some of them could easily be wrong by a couple of points, and we think the scores may be too spread out.”
Taking the caveat that "some of them could easily be wrong by a couple of points" literally, factory farming could easily be on a par with AI, and land use reform could easily be more pressing than biosecurity.
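To make this concrete, here is a minimal sketch of what "wrong by a couple of points" can do to a ranking. The cause scores and the uniform ±2 error model below are my own illustrative assumptions, not 80,000 Hours' published figures:

```python
import random

# Hypothetical total scores in the style of 80,000 Hours' cause ranking
# (illustrative assumptions only, not their published figures).
scores = {"AI": 25, "Biosecurity": 23, "Factory farming": 21, "Land use reform": 19}

def noisy_ranking(scores, max_error=2):
    """Re-rank causes after perturbing each score by up to +/- max_error points."""
    noisy = {cause: s + random.uniform(-max_error, max_error)
             for cause, s in scores.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

# How often does the original ordering survive "a couple of points" of error?
trials = 10_000
unchanged = sum(noisy_ranking(scores) == list(scores) for _ in range(trials))
print(f"Original ranking preserved in {unchanged / trials:.0%} of trials")
```

Under these assumed numbers the ordering changes in a substantial fraction of trials; the exact figure depends entirely on the score gaps and error model chosen, but it illustrates how easily "a couple of points" can move causes up or down the ranking.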
Meanwhile, the 2020 EA survey tells us about the cause priorities of highly engaged EAs, but it doesn’t tell us what highly engaged EAs are focusing their careers on.
I don’t think the distribution of causes that engaged EAs are focusing their careers on reflects the uncertainty we should have around cause prioritisation.
I’m mostly drawing my sense of what engaged EAs are focusing their careers on from reading Swapcard profiles for EAG London 2022, where careers seemed largely focused on EA movement building, randomista development, animal welfare and biosecurity (it’s plausible that people working on AI are less likely to live in the UK and attend EAG London).
I think if EAs better appreciated uncertainty when prioritising causes, people’s careers would span a wider range of cause areas.
I think 80,000 Hours could emphasise uncertainty more, but also that the EA community as a whole needs to be more conscious of uncertainty in cause prioritisation.
Work in the randomista development wing of EA, and prioritisation between interventions in that area, is highly empirical: it can draw on high-quality evidence and is unusually resistant to irrationality. Since this is the wing of EA that initially draws many people to the movement, it can give them the misconception that cause prioritisation in EA is also highly empirical and unusually resistant to irrationality, when it is not.
A useful thought experiment is to imagine 100 different timelines in which effective altruism emerged. How consistent do you think the movement’s cause priorities (and their rankings) would be across those 100 timelines?
I believe the largest source of irrationality is likely to be the subjective, non-empirical probabilities used in cause prioritisation, but other potential sources include:
- Founder effects - if an individual was a strong advocate for prioritising a particular cause area early in EA’s history, chances are that this cause area is now larger than it ideally should be.
- Cultural biases - 69% of EAs live in the US, the UK, Germany, Australia, and Canada. This may create blind spots and unknown unknowns, and may make the goal of being truly impartial harder to achieve than if more EAs lived in other countries.
- Gender biases - there are gender differences in the cause priorities EAs state. The more rational EA cause prioritisation is, the smaller I’d expect these differences to be.
Comment:
Thanks for clarifying.
I'm an example of someone in that position (I'm trying to work out how to contribute via direct work to a cause area), so I appreciate the opportunity to discuss the topic.
Upon reflection, maybe the crux of my disagreement here is that I just don't agree that the uncertainty is wide enough to affect the rankings (except within each tier) or to make the direct-work decision rule robust to personal fit.
I think that x-risks have non-overlapping confidence intervals with non-x-risks because of the scale of the problem, and I don't feel like this changes from a near-term perspective. Even small chances of major catastrophic events this century seem to dwarf other problems.
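As a rough sketch of the expected-value arithmetic behind that intuition (every number below is an illustrative assumption of mine, not a figure from the post or from 80k):

```python
# Illustrative expected-value comparison; all numbers are assumptions.
p_catastrophe = 0.01                    # assume a 1% chance of a global catastrophe this century
deaths_if_catastrophe = 8_000_000_000   # roughly everyone alive today
expected_catastrophe_deaths = p_catastrophe * deaths_if_catastrophe  # 80 million in expectation

deaths_other_problem = 10_000_000       # a hypothetical non-catastrophic problem's century-long toll

print(f"Catastrophe, in expectation: {expected_catastrophe_deaths:,.0f}")
print(f"Other problem:               {deaths_other_problem:,.0f}")
```

The comparison is driven almost entirely by the assumed probability and scale, so it is only as solid as the error bars around those inputs.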
80k's second-tier priority areas are nuclear security, climate change (extreme risks) and improving institutional decision-making. The first two seem to be associated with major catastrophes (maybe not x-risks), which might also be considered not to overlap with the next set of issues (factory farming/global health).
With respect to concerns that demographics might be heavily affecting cause prioritisation, I think it would be helpful to have specific examples of causes you think are underestimated, along with the biases associated with them.
For example, I've heard lots of different arguments that x-risks are concerning even if you don't buy into longtermism. Similarly, I can't think of any causes that would be undervalued because of not caring adequately about balance/harmony.