TL;DR:
EA cause prioritisation frameworks rely on subjective probabilities that aren't based on empirical evidence. This makes EA cause prioritisation highly uncertain and imprecise, which means the optimal distribution of career focuses for engaged EAs should be less concentrated in a small number of "top" cause areas.
Post:
80,000 Hours express uncertainty about the prioritisation of causes in their list of the most pressing problems:
“We begin this page with some categories of especially pressing world problems we’ve identified so far, which we then put in a roughly prioritised list.”
“We then give a longer list of global issues that also seem promising to work on (and which could well be more promising than some of our priority problems), but which we haven’t investigated much yet.”
Their list is developed in part using this framework and this definition of impact, with a list of example scores from 2017 at https://80000hours.org/articles/cause-selection/.
Importantly, the framework relies on subjective probabilities that aren't based on empirical evidence, which makes them highly uncertain and susceptible to irrationality. Even if 80,000 Hours’ own staff were to independently come up with some of the relevant probabilities, I would expect very little agreement between them. When the inputs to the framework are this uncertain and susceptible to irrationality, the outputs should be treated as very uncertain too.
80,000 Hours do express uncertainty with their example scores:
“Please take these scores with a big pinch of salt. Some of the scores were last updated in 2016. We think some of them could easily be wrong by a couple of points, and we think the scores may be too spread out.”
Taking the concern that “some of them could easily be wrong by a couple of points” literally, factory farming could easily be on par with AI, or land use reform could easily be more pressing than biosecurity.
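To get a feel for how much “a couple of points” of error matters, here is a minimal Monte Carlo sketch. The total scores below are hypothetical (not 80,000 Hours’ actual figures), and I’m assuming errors of roughly ±2 points on each total; the only point is that errors of that size can reshuffle the ranking a meaningful fraction of the time.

```python
import random

# Hypothetical total scores, purely for illustration (NOT 80,000 Hours' actual figures).
base_scores = {
    "AI": 15,
    "Biosecurity": 14,
    "Factory farming": 12,
    "Land use reform": 11,
}

def noisy_top(noise=2.0):
    """Add up to +/- `noise` points of error to each score and return the top-ranked cause."""
    perturbed = {cause: score + random.uniform(-noise, noise)
                 for cause, score in base_scores.items()}
    return max(perturbed, key=perturbed.get)

trials = 100_000
counts = {cause: 0 for cause in base_scores}
for _ in range(trials):
    counts[noisy_top()] += 1

for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: ranked first in {n / trials:.0%} of trials")
```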
Meanwhile, the 2020 EA survey tells us about the cause priorities of highly engaged EAs, but it doesn’t tell us what highly engaged EAs are focusing their careers on.
I don’t think the distribution of causes that engaged EAs are focusing their careers on reflects the uncertainty we should have around cause prioritisation.
I’m mostly drawing my sense of what engaged EAs are focusing their careers on from reading Swapcard profiles for EAG London 2022, where careers seemed largely focused on EA movement building, randomista development, animal welfare, and biosecurity (it’s plausible that people working on AI are less likely to live in the UK and attend EAG London).
I think if EAs better appreciated uncertainty when prioritising causes, people’s careers would span a wider range of cause areas.
I think 80,000 Hours could emphasise uncertainty more, but also that the EA community as a whole needs to be more conscious of uncertainty in cause prioritisation.
Work in the randomista development wing of EA, and prioritisation between interventions in this area, is highly empirical: it can use high-quality evidence and is unusually resistant to irrationality. Since this is the wing of EA that initially draws many EAs to the movement, I think it can give them the misconception that cause prioritisation in EA is also highly empirical and unusually resistant to irrationality, when this is not true.
A useful thought experiment is to imagine 100 different timelines where effective altruism emerged. How consistent do you think the movement’s cause priorities (and rankings of them) would be across these 100 different timelines?
I believe the largest sources of irrationality are likely to be the subjective, non-empirical probabilities used in cause prioritisation, but other potential sources are:
- Founder effects - if an individual was a strong advocate for prioritising a particular cause area early in the history of EA, chances are that this cause area is larger now than it ideally should be.
- Cultural biases - 69% of EAs live in the US, the UK, Germany, Australia, and Canada. This may create blindspots, unknown unknowns and may make the goal of being truly impartial more difficult to achieve, compared to a scenario where more EAs lived in other countries.
- Gender biases - there are gender differences in stated priorities by EAs. The more rational EA cause prioritisation is, the smaller I’d expect these differences to be.
Sorry for the slow reply.
Talking about allocation of EAs to cause areas.
I agree that confidence intervals between x-risks are more likely to overlap. I haven't really looked into super-volcanoes or asteroids, and I think that's because what I currently know about them doesn't lead me to believe they're worth working on over AI or biosecurity.
Possibly, a suitable algorithm would be to defer to or check with prominent EA organisations like 80k to see whether they are allocating 1 in every 100 or every 1,000 EAs to rare but possibly important x-risks. Without a coordinated effort by a central body, I don't see how you'd calibrate adequately (use a random number generator and, if the number is less than some threshold, work on a neglected but possibly important cause?).
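For concreteness, the random-number idea in the parenthetical could look something like the sketch below. The cause areas and target weights are made up purely for illustration; in practice they'd presumably come from a central body like 80k.

```python
import random

# Hypothetical target allocation of marginal EA careers across cause areas.
# These weights are made up for illustration; in practice a central body
# (e.g. 80k) might publish and update them.
target_allocation = {
    "AI": 0.40,
    "Biosecurity": 0.25,
    "Animal welfare": 0.15,
    "Randomista development": 0.15,
    "Rare but possibly important x-risks": 0.05,
}

def draw_cause(weights=target_allocation):
    """Randomly pick a cause area in proportion to the target allocation."""
    causes, probs = zip(*weights.items())
    return random.choices(causes, weights=probs, k=1)[0]

print(draw_cause())  # e.g. "Biosecurity"
```

If each person drew independently from the same weights, the community would reproduce the target allocation in expectation without anyone being individually assigned a cause.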
My thoughts on EA allocation to cause areas have evolved quite a bit recently (partly due to talking to 80k and others, mainly in biosecurity). I'll probably write a post with my thoughts, but the bottom line is basically that the sentiment expressed here is correct, and that it's socially easier to have humility in the form of saying you have high uncertainty.
Responding to the spirit of the original post, my general sense is that plenty of people are not highly uncertain about AI-related x-risk - you might have gotten the email from 80k titled "A huge update to our problem profile — why we care so much about AI risk". That said, they're still using phrases like "we're very uncertain". Maybe their uncertainty about some relevant facts is low enough to fall below whatever threshold their decision rule requires. For example, in the problem profile, they write:
Different Views under Near-Termism
This seems tempting to believe, but I think we should substantiate it. Which current x-risks are not ranked higher than non-x-risk causes (or how much smaller is their lead over them) from a near-term perspective?
I think this post gives a reasonably detailed summary of how your views might change when moving from a longtermist to a near-termist perspective. Scott says:
His arguments here are convincing because I find an AGI event this century likely; if you didn't, you would disagree. Still, I think that even if AI didn't have short timelines, other existential risks like engineered pandemics, super-volcanoes, or asteroids might have milder, merely catastrophic variants that near-termists would prioritise just as much, leading to little practical variation in what people work on.
Talking about different cultures and EA
Can you spell out your reasoning for why "there would be an effect"?