Right as I was about to write this post, I saw Scott Alexander had made a post discussing existential risk vs. longtermism. Luckily Scott's post didn't tackle this question, so I can still write this post!
Disclaimer: I'm relatively new to Effective Altruism
Longtermism is pretty obvious to me: things with long-term impact are much larger in effect than things with only short-term impact, under utilitarianism or most any other ethical system you adhere to.
I have never really understood existential risk, though. I still think studying existential risk factors is important, but only because basically every existential risk is also a major s-risk factor. I may not be using these terms exactly right based on this post; I take existential risk to mean "permanently limiting or completely extinguishing intelligent life", while s-risks "create very large amounts of suffering". From my view, s-risks are clearly the main threat, while existential risks are just a thought-provoking way to get people interested in longtermism. Here's why.
1. Existential risk is really unlikely
Whenever I talk to EAs, I get the sense that they're using "existential risk" as a substitute for "events that will drastically change the world for the worse". Not only are these not the same thing - as I will discuss later, I see them as very different - but the latter is far more likely than the former, even for events ordinarily associated with existential risk.
Let's take two of the most commonly cited existential risks: nuclear war and advanced AI. The nuclear war scenario is pretty easy to think about. Say there is a nuclear war between NATO and Russia-China, and all the biggest cities in Europe, Russia, China and the US get nuked. The 100 largest cities in the US make up about 20% of the US population, according to this website. This other website estimates that only about 41% of Europeans live in cities (rather than "towns" or "suburbs"). I can't find information about the suburban population in China, but 40% of Chinese people still live in rural areas. All of these urban areas getting nuked is, I think, close to a worst-case scenario, and even that leaves a large surviving population in these countries and in the rest of the world, not to mention non-human animals. The aftermath would bring large economic decline and deaths from secondary causes, but it must be said: nuclear war is not an existential risk.
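To make the arithmetic concrete, here's a rough back-of-envelope sketch. The urban shares are the percentages cited above; the total population figures are approximate 2020s numbers I'm supplying myself, not from those sources:

```python
# Back-of-envelope: population remaining after a worst-case strike on all
# major urban areas, using the percentages cited in the text.
populations = {   # approximate totals, in millions (my own rough figures)
    "US": 330,
    "Europe": 750,
    "China": 1400,
}
urban_share = {   # fraction of population in the targeted urban areas
    "US": 0.20,      # 100 largest cities ≈ 20% of US population
    "Europe": 0.41,  # ~41% of Europeans live in "cities"
    "China": 0.60,   # ~40% rural, so at most ~60% urban
}

survivors = {c: populations[c] * (1 - urban_share[c]) for c in populations}
total = sum(survivors.values())

for country, n in survivors.items():
    print(f"{country}: ~{n:.0f} million survivors")
print(f"Total: ~{total:.0f} million")  # on the order of 1.3 billion people
```

Even under these deliberately pessimistic assumptions, well over a billion people remain in just these three regions, before counting the rest of the world.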
AI is a bit trickier because the scenarios discussed are quite drastic yet pretty vague. But let's say we think there is a high risk of the world becoming dominated by super-intelligent, power-seeking AI, which we're not going to classify as sentient life. Why would this permanently limit all other intelligent life? It's true that human dominance has drastically reduced the populations of most species, but it has definitely not permanently limited the population of all other intelligent life. Given that life is capable of thriving on its own via evolution, an AI would have to see the existence of any life as a threat in order to actively pursue extinction, which seems exceedingly unlikely.
Similar arguments can be made for all other existential threats I have heard of. Every big problem is much more likely to be an s-risk than to actually extinguish intelligent life.
This might all seem pedantic, and you know what, it is! But given the constant talk about existential risk, I would at least expect it to be a distinct possibility. And besides...
2. Extinguishing intelligent life may not be bad
The idea here is that total utility in the world - the total amount of pleasure minus the total amount of pain - may not be positive. In fact, I think (65% chance?) that it is negative.
I'm basically copying this argument from this paper. Essentially, there are three main groups to consider: humans, domesticated animals, and wild animals.
First, humans. The vast majority of the world is living under conditions that are, I imagine, pretty miserable. Here's one example of data that shows this:
Though things are getting a lot better - and I will talk about the change in utility over time in part 3 - conditions are still awful on basically every development index. I'm open to the idea that for most people, living on 5 dollars per day without basic education or quality health services is still a net-pleasurable life. This sentiment is often brought up and is reflected in global happiness metrics - though I think there is a huge psychological bias toward believing your life is "pretty good", regardless of actual living conditions. However, I strongly suspect that the distribution of human utility has a large negative tail, and that the people in poor countries who deal with e.g. chronic pain or active discrimination bring down the average a lot.
Next, domesticated animals. 70% of global farm animals are raised on factory farms, in conditions which, as you probably know, are pretty terrible. People who have tried to directly estimate this suffering give pretty grim numbers:
Agricultural economist F. Bailey Norwood and Sara Shields, an animal welfare expert, estimate the lifetime welfare of US farm animals on a scale from –10 to 10. I list their scores as pairs, where the first score in the pair is Norwood’s and the second is Shields’.13 Their ratings are as follows: cows raised for meat 6, 2; dairy cows 4, 0; chickens raised for meat 3, –8; pigs –2, –5; and egg-laying hens in cage systems –8, –7. Shields rates the welfare of fish –7. Importantly and unfortunately, the most numerous of these animals (the chickens, hens and fish) tend, with some exceptions, to get scores around –7 and –8.14
-Knutsson, 2019, the paper I linked before
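To see what those paired ratings imply, here's a small sketch that averages the two experts' scores per species (fish has only Shields' rating). The species labels are my own shorthand for the categories in the quote:

```python
# Expert welfare scores quoted above, on a -10 to 10 scale:
# (Norwood's score, Shields' score); fish was rated by Shields only.
scores = {
    "beef cows":        (6, 2),
    "dairy cows":       (4, 0),
    "broiler chickens": (3, -8),
    "pigs":             (-2, -5),
    "caged hens":       (-8, -7),
    "fish":             (-7,),
}

# Simple per-species mean across the available ratings.
means = {animal: sum(s) / len(s) for animal, s in scores.items()}

# Sorted worst-first: the most numerous animals (chickens, hens, fish)
# all land on the negative side.
for animal, m in sorted(means.items(), key=lambda kv: kv[1]):
    print(f"{animal}: {m:+.1f}")
```

The upshot, as Knutsson notes, is that the animals farmed in by far the greatest numbers are exactly the ones with the worst average scores.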
And finally, wild animals. Given the wide variety of sentient animals, it's hard to generalize about their conditions. However, I think it's safe to say that if humans live net-negative lives, then organisms with no technology, facing many of the same challenges (disease, famine, predation), definitely do too, at least in aggregate.
I'm a strict hedonistic utilitarian, so if total utility in the world is negative, I'd have no problem with extinguishing intelligent life. I understand other people may not be so eager, but I still think it's weird to treat existential risk as obviously bad when there is reason to believe total suffering in the world far outweighs total happiness.
3. We are not destined toward infinite pleasure
Even if you agree that existential risk is very unlikely, and that total utility today is net negative, you might still argue that existential risk is very bad and deserves focus. You might look not at the current utility value of the world but at its change over time. Humanity has gotten a lot happier in the past century, attributable to many factors, a trend that is arguably not slowing. There has also been advancement in animal rights in the West, so you might think the same would happen for animals one day.
I find this position hard to reconcile with the long-term threats constantly discussed in EA. Say we think there is a good chance of some s-risk occurring in the next 100 years, which, as I argued in part 1, is much more likely than an actual extinction-level event. That directly contradicts the idea that utility is destined to improve forever. These s-risk events aren't one-offs, either: if some portion of intelligent life survives and starts to repopulate, another s-risk will eventually come along and hurt life again!
From my view, existential risk is a lot less bad (not bad at all, in fact) and a lot less likely than s-risks. So what am I missing about EAs' obsession with existential risk?
I use "arguably" here because I think progress is actually going to slow in two of the categories seen in the image I linked, "vaccination rate" and "democracy", due to the rise of far-right parties around the world.