TL;DR:
EA cause prioritisation frameworks rely on probabilities derived from belief rather than empirical evidence. As a result, EA cause prioritisation is highly uncertain and imprecise, so the optimal distribution of career focuses for engaged EAs should be less concentrated amongst a small number of "top" cause areas.
Post:
80000 Hours expresses uncertainty over the prioritisation of causes in their lists of the most pressing problems:
“We begin this page with some categories of especially pressing world problems we’ve identified so far, which we then put in a roughly prioritised list.”
“We then give a longer list of global issues that also seem promising to work on (and which could well be more promising than some of our priority problems), but which we haven’t investigated much yet.”
Their list is developed in part using this framework and this definition of impact, with a list of example scores from 2017 at https://80000hours.org/articles/cause-selection/.
Importantly, the framework uses probabilities derived from belief rather than empirical evidence, which makes them highly uncertain and susceptible to irrationality. Even if 80000 Hours’ own staff were to independently come up with some of the relevant probabilities, I would expect very little agreement between them. When the inputs to the framework are this uncertain and susceptible to irrationality, the outputs should be treated as very uncertain too.
80000 Hours do express uncertainty with their example scores:
“Please take these scores with a big pinch of salt. Some of the scores were last updated in 2016. We think some of them could easily be wrong by a couple of points, and we think the scores may be too spread out.”
Taking literally the concern that “some of them could easily be wrong by a couple of points”, factory farming could easily be on par with AI, and land use reform could easily be more pressing than biosecurity.
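As a rough illustration (the numbers below are hypothetical, not 80000 Hours’ actual scores): if two causes’ total scores sit a few points apart and each score could be off by a couple of points, their plausible ranges overlap and either cause could come out on top.

```python
# Hypothetical illustration: these scores and the +/-2 error are assumptions,
# not 80,000 Hours' actual figures.
ai_score, farming_score = 24, 21   # hypothetical total scores
error = 2                          # "wrong by a couple of points"

ai_range = (ai_score - error, ai_score + error)                 # (22, 26)
farming_range = (farming_score - error, farming_score + error)  # (19, 23)

# The ranges overlap, so the ranking could plausibly flip.
print(ai_range, farming_range, farming_range[1] >= ai_range[0])
```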
Meanwhile, the 2020 EA survey tells us about the cause priorities of highly engaged EAs, but it doesn’t tell us what highly engaged EAs are focusing their careers on.
I don’t think the distribution of causes that engaged EAs are focusing their careers on reflects the uncertainty we should have around cause prioritisation.
I’m mostly drawing my sense of what engaged EAs are focusing their careers on from reading Swapcard profiles for EAG London 2022, where careers seemed largely focused on EA movement building, randomista development, animal welfare and biosecurity (it’s plausible that people working in AI are less likely to live in the UK and attend EAG London).
I think if EAs better appreciated uncertainty when prioritising causes, people’s careers would span a wider range of cause areas.
I think 80000 Hours could emphasise uncertainty more, but also that the EA community as a whole needs to be more conscious of uncertainty in cause prioritisation.
Work in the randomista development wing of EA, and prioritisation between interventions in that area, are highly empirical, able to use high-quality evidence, and unusually resistant to irrationality. Since this is the wing of EA that initially draws many EAs to the movement, I think it can give them the misconception that cause prioritisation in EA is also highly empirical and unusually resistant to irrationality, when this is not true.
A useful thought experiment is to imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 different timelines?
I believe the largest source of irrationality is likely to be the probabilities used in cause prioritisation that aren’t based on empirical evidence, but other potential sources include:
- Founder effects - if an individual was a strong advocate for prioritising a particular cause area early in the history of EA, chances are that this cause area is larger now than it ideally should be
- Cultural biases - 69% of EAs live in the US, the UK, Germany, Australia, and Canada. This may create blind spots and unknown unknowns, and may make the goal of being truly impartial harder to achieve than in a scenario where more EAs lived in other countries.
- Gender biases - there are gender differences in stated priorities by EAs. The more rational EA cause prioritisation is, the smaller I’d expect these differences to be.
From one of my other comments:
"The way I'm thinking about it, is that 80K have used some frameworks to come up with quantitative scores for how pressing each cause area is, and then ranked the cause areas by the point estimates.
But our imagined confidence intervals around the point estimates should be very large and presumably overlap for a large number of causes, so we should take seriously the idea that the ranking of causes would be different in a better model.
This means we need to take more seriously the idea that the true top causes are different to those suggested by 80K's model."
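A minimal sketch of what I mean, using made-up point estimates and an assumed error size (neither taken from 80K's model): resample each cause's score with noise and see how often the implied top cause changes.

```python
import random

# Hypothetical point estimates for four unnamed causes, plus an assumed
# standard deviation of error in each score - none of these numbers are 80K's.
point_estimates = {"Cause A": 24, "Cause B": 22, "Cause C": 21, "Cause D": 19}
score_noise_sd = 2.0

# Resample the scores many times and count how often each cause comes out on top.
n_samples = 10_000
top_counts = {cause: 0 for cause in point_estimates}
for _ in range(n_samples):
    sampled = {c: s + random.gauss(0, score_noise_sd) for c, s in point_estimates.items()}
    top_counts[max(sampled, key=sampled.get)] += 1

for cause, count in sorted(top_counts.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: ranked first in {count / n_samples:.0%} of samples")
```

With these made-up numbers, the nominal runner-up ends up on top in a meaningful share of samples, which is the sense in which overlapping intervals should weaken our confidence in the ranking.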
So I think EAs should approach the uncertainty over what the top cause is by spending more time individually thinking about cause prioritisation, and by placing more weight on personal fit in career choices. I think this would produce a distribution of career focuses that is less concentrated in randomista development, animal welfare, meta-EA and biosecurity.
With gender, the 2020 EA Survey shows that male EAs are less likely to prioritise near-term causes than female EAs. So it seems likely that if EA were 75% female instead of 75% male, the distribution of career focuses of EAs would be different, which indicates some kind of model error to me.
With culture, I mentioned that I expect unknown unknowns here, but another useful thought experiment would be: how similar would EA’s cause priorities (and rankings of them) be if it had emerged in India, Brazil, or Nigeria instead of the US or UK? For example, it seems plausible to me that we value animal welfare less than an EA movement with more Hindu / Buddhist cultural influences would, or that we prioritise promoting liberal democracy less than an imagined EA movement with more influence from people in less democratic countries. Also, maybe we value improving balance and harmony less than an EA movement that originated in Japan would, which could affect cause prioritisation.