Right as I was about to write this post, I saw Scott Alexander had made a post discussing existential risk vs. longtermism. Luckily Scott's post didn't tackle this question, so I can still write this post!

Disclaimer: I'm relatively new to Effective Altruism

Longtermism is pretty obvious to me: things with long-term impact matter far more than things with only short-term impact, under utilitarianism or whatever other ethical system you adhere to.

I have never really understood existential risk, though. I still think studying existential risk factors is important, but only because basically every existential risk is also a major s-risk factor. Judging by this post, I may not be using these terms exactly right: I view existential risk as "permanently limiting or completely extinguishing intelligent life", while s-risks "create very large amounts of suffering". From my view, s-risks are clearly the main threat, while existential risks are just a thought-provoking way to get people interested in longtermism. Here's why.

1. Existential risk is really unlikely

Whenever I talk to EAs, I get the sense that they're using "existential risk" as a substitute for "events that will drastically change the world for the worse". Not only are these not the same thing - as I will discuss later, I see them as very different - but the latter is far more likely than the former, even for the events ordinarily associated with existential risk.

Let's take two of the most commonly cited existential risks: nuclear war and advanced AI. The nuclear war scenario is pretty easy to think about. Say there is a nuclear war between NATO and Russia-China, and all the biggest cities in Europe, Russia, China and the US get nuked. The 100 largest cities in the US make up about 20% of the US population, according to this website. This other website estimates that only about 41% of Europeans live in cities (rather than "towns" or "suburbs"). I can't find information about the suburban population in China, but 40% of Chinese people still live in rural areas. All of these urban areas getting nuked is, I think, close to a worst-case scenario, and even that leaves a large population remaining in these countries and in the rest of the world, not to mention non-human animals. There would be an aftermath of large economic decline and deaths from secondary causes, but it must be said: nuclear war isn't an existential risk.
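Here is a rough sketch of that back-of-the-envelope reasoning. The population figures and urban shares below are illustrative assumptions (the Russia and China shares in particular are guesses), not careful estimates:

```python
# Rough back-of-the-envelope sketch of the worst-case urban-strike scenario.
# All numbers are illustrative assumptions, not careful estimates.
populations = {      # approximate populations, in millions
    "US": 330,
    "Europe": 745,
    "Russia": 145,
    "China": 1410,
}
urban_share_hit = {  # assumed fraction of each population inside targeted cities
    "US": 0.20,      # ~100 largest US cities
    "Europe": 0.41,  # city dwellers, excluding "towns" and "suburbs"
    "Russia": 0.50,  # rough guess
    "China": 0.60,   # roughly the share not living in rural areas
}

total = sum(populations.values())
deaths = sum(populations[c] * urban_share_hit[c] for c in populations)
print(f"Direct urban deaths: ~{deaths:.0f}M of {total}M ({deaths / total:.0%})")
# Even this extreme scenario leaves most of these countries' populations alive,
# and the rest of the world (plus non-human animals) untouched by the blasts.
```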

AI is a bit trickier because the situations discussed are quite drastic yet pretty vague. But let's say we think there is a high risk of the world becoming dominated by super-intelligent, power-seeking AI, which we're not going to classify as sentient life. Why would this permanently limit all other intelligent life? It's true that human dominance has had a drastic negative effect on the population of most species, but it has definitely not permanently limited the population of all other intelligent life. Given that life is capable of thriving all on its own via evolution, AI would have to see the existence of any life as a threat for it to actively pursue extinction, which seems exceedingly unlikely.

Similar arguments can be made for all other existential threats I have heard of. Every big problem is much more likely to be an s-risk than to actually extinguish intelligent life. 

This might all seem pedantic, and you know what, it is! But given the constant talk about existential risk, I would at least expect it to be a distinct possibility. And besides...

2. Extinguishing intelligent life may not be bad

The idea here is that total utility in the world - the total amount of pleasure minus the total amount of pain - may not be positive. In fact, I think (65% chance?) that it is negative.

I'm basically copying this argument from this paper, but essentially there are 3 main groups to consider: humans, domesticated animals, and wild animals.

First, humans. The vast majority of the world is living under conditions that are, I imagine, pretty miserable. Here's one example of data that shows this:

[Figure: number of people living in various wage groups, over time]

Though things are getting a lot better - and I will talk about the change in utility over time in part 3 - conditions are still awful. This is true on basically every development index. I'm open to the idea that for most people, living on 5 dollars per day, without basic education or quality health services, is still a net-pleasurable life. This sentiment is often brought up and is reflected in global happiness metrics - though I think there is a huge psychological bias toward thinking that your life is "pretty good", regardless of actual living conditions. However, I strongly suspect that the distribution of human utility has a large negative tail, and that the people in poor countries who deal with e.g. chronic pain or active discrimination bring down the average a lot.
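To make the "negative tail" point concrete with made-up numbers: even if most people report mildly positive lives, a small fraction living with severe suffering can pull the average below zero.

```python
# Toy illustration (made-up numbers): a small, very negative tail can outweigh
# a large majority of mildly positive lives.
shares    = [0.85, 0.15]   # assumed fraction of people in each group
utilities = [+1.0, -8.0]   # assumed average lifetime utility per group

mean_utility = sum(s * u for s, u in zip(shares, utilities))
print(mean_utility)        # 0.85 * 1 + 0.15 * (-8) = -0.35, i.e. net negative
```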

Next, domesticated animals. 70% of global farm animals are raised on factory farms, in conditions which, as you probably know, are pretty terrible. People who have tried to directly estimate this suffering come to pretty grim conclusions:

Agricultural economist F. Bailey Norwood and Sara Shields, an animal welfare expert, estimate the lifetime welfare of US farm animals on a scale from –10 to 10. I list their scores as pairs, where the first score in the pair is Norwood’s and the second is Shields’. Their ratings are as follows: cows raised for meat 6, 2; dairy cows 4, 0; chickens raised for meat 3, –8; pigs –2, –5; and egg-laying hens in cage systems –8, –7. Shields rates the welfare of fish –7. Importantly and unfortunately, the most numerous of these animals (the chickens, hens and fish) tend, with some exceptions, to get scores around –7 and –8.

-Knutsson, 2019, the paper I linked before
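To get a rough sense of how these scores aggregate, here is a population-weighted average of the ratings quoted above. The head counts are placeholder weights I made up for illustration, chosen only to reflect the quoted point that the most numerous animals (chickens, hens and fish) score lowest:

```python
# Population-weighted average of the quoted welfare scores (-10 to 10 scale).
# Head counts are placeholder weights, not real statistics.
animals = {
    # name: (Norwood score, Shields score, assumed relative head count)
    "beef cattle":      ( 6,  2,   1),
    "dairy cows":       ( 4,  0,   1),
    "broiler chickens": ( 3, -8,  20),
    "pigs":             (-2, -5,   2),
    "caged hens":       (-8, -7,  10),
    "farmed fish":      (None, -7, 15),   # only Shields rates fish
}

weighted_sum, total_weight = 0.0, 0.0
for norwood, shields, count in animals.values():
    scores = [s for s in (norwood, shields) if s is not None]
    weighted_sum += count * sum(scores) / len(scores)
    total_weight += count
print(f"Population-weighted mean welfare: {weighted_sum / total_weight:+.1f}")
# With these assumed weights the mean comes out strongly negative (about -4.7).
```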

And finally, wild animals. Given the wide variety of sentient animals, it's hard to generalize about their conditions. However, I think it's safe to say that if humans experience net-negative lives, organisms that have no technology and face many of the same challenges (disease, famine, being attacked by other creatures) definitely do too, at least in aggregate.

I'm a strict hedonistic utilitarian, so if total utility in the world were negative I'd have no problem with extinguishing intelligent life. I understand other people may not be so eager, but I still think it's weird to treat existential risk as obviously bad when there is reason to believe total suffering in the world far outweighs total happiness.

3. We are not destined toward infinite pleasure

Even if you agree that existential risk is very unlikely, and that total utility today is net negative, you might still argue that existential risk is very bad and deserves focus. You might look not at the current utility value of the world but at the change in utility value. Humanity has gotten a lot happier in the past century, attributable to many factors, a trend that is arguably not slowing[1]. There has also been progress on animal rights in the West, so you might think that the same improvement will reach animals one day.

I find this position hard to reconcile with the long-term threats constantly discussed in EA. Say we think there is a good chance of some s-risk in the next 100 years, which, as I argued in part 1, is much more likely than an actual extinction-level event. This directly contradicts the idea that utility is destined to improve forever. And these s-risk events aren't one-offs, either: if some portion of intelligent life survives and starts to repopulate, another s-risk will come one day and hurt life again!
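As a toy illustration of the "it only has to happen once" point (the 10% per-century risk here is an assumption for illustration, not an estimate):

```python
# Toy calculation: if each century carries an independent chance p of a
# major s-risk event, the odds of avoiding all of them shrink quickly.
p = 0.10  # assumed per-century probability of a major s-risk
for centuries in (1, 5, 10, 50):
    p_clean = (1 - p) ** centuries
    print(f"{centuries:>3} centuries: {1 - p_clean:.0%} chance of at least one s-risk")
```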

From my view, existential risk is a lot less bad (not bad at all, in fact) and a lot less likely than s-risks. So what am I missing about EAs' obsession with existential risk?

  1. ^

    I use "arguably" here because I think progress is actually going to slow in two of the categories seen in the image I linked, "vaccination rate" and "democracy", due to the rise of far-right parties around the world.


7 comments

Hi, welcome to the forum. 

You raise some interesting points. Some quick notes/counterpoints:

  1. Not all existential risk is extinction risk.
    1. Existential risk doesn't have an extremely clean definition, but in the simple extinction/doom/non-utopia ontology, most longtermist EAs' intuitive conception of "existential risk" is closer to risk of "doom" than risk of "extinction"
  2. Nuclear war may not be a large direct existential risk, but it's an existential risk factor.
    1. The world could be made scarier after large-scale nuclear war, and thus less hospitable for altruistic values (plus other desiderata)
  3. AI may or may not kill us all. But this point is academic and only mildly important, because if unaligned AI takes over, we (humanity and our counterfactual descendants) have lost control of the future.
  4. Almost all moral value in the future is in the tails (extremely good and extremely bad outcomes).
    1. Those outcomes likely require being optimized for, and it seems likely that our spiritual descendants will optimize more heavily for good stuff than for bad stuff.
      1. Bad stuff might happen incidentally (historical analogues include factory farming and slavery), but it isn't being directly optimized for, so it will be a small fraction of the badness of maximally bad outcomes.

Thank you for the response!

Yeah, I think my biggest problem is with (4), something that I probably should have expressed more in the post.

It's true that humans are in theory trying to optimize for good outcomes, and this is a reason to expect utility to diverge to infinity. However, there are in my view equally good reasons for utility to diverge to negative infinity - namely, that the world is not designed for humans. We are inherently fragile creatures, only suited to live in a world with a specific temperature, air composition, etc. There are a lot of large-scale phenomena causing these factors to change - s-risks - that could send utility plunging. This, plus the fact that current utility is below 0, means that I think existential risk is probably a moral benefit.

I also agree that this whole thing is pretty pedantic, especially in cases like AI domination.

"the world is not designed  for humans"

I think our descendants are unlikely to be flesh-and-blood humans but rather digital forms of sentience: https://www.cold-takes.com/how-digital-people-could-change-the-world/

I think the main question here is: What can we do today to make the world better in the future? If you believe AI could make the world a lot worse, or even just lock in the already existing state, it seems really valuable to work on preventing that. If you additionally believe AI could solve problems such as wild animal suffering or unhappy humans, then it seems like an even more important problem area to spend your time on.

(I think this might be less clear for biorisk where the main concern really is extinction.)

Here is my take on the value of extinction risk reduction, from some years ago: https://forum.effectivealtruism.org/posts/NfkEqssr7qDazTquW/the-expected-value-of-extinction-risk-reduction-is-positive

This post also contains links to many other posts related to the topic.

Some other posts, that come to different conclusions:

https://forum.effectivealtruism.org/posts/BY8gXSpGijypbGitT/why-i-prioritize-moral-circle-expansion-over-artificial

https://forum.effectivealtruism.org/posts/RkPK8rWigSAybgGPe/a-longtermist-critique-of-the-expected-value-of-extinction-2

 

One final thing: Generally, I think people make a distinction between existential risk (roughly: permanent, irreversible, and drastic loss of value of the future) and extinction risk (extinction of humans), where extinction risk is just one type of existential risk.

Even if you think all sentient life is net negative, extinction is not a wise choice. Unless you completely destroy Earth, animal life will probably evolve again, so there will be suffering in the future.

Moreover, what if there are sentient aliens somewhere? What if some form of panpsychism is true and there is consciousness embedded in most systems? What if some multiverse theory is true?

If you want to truly end suffering, your best bet would be something like creating a non-sentient AGI that transforms everything into some non-sentient matter, and then spends eternity thinking and experimenting to determine if there are other universes or other pockets of suffering, and how to influence them.

Of course this would entail human extinction too, but it's a very precise form of extinction. Even if you create an AGI, it would have to be aligned with your suffering-minimizing ethics.

So for now, even if you think life is net negative, preventing ourselves from losing control of the future is a very important instrumental goal. And anything that threatens that control, even if it's not an existential threat, should be avoided.

Congrats on your first post! I appreciate reading your perspective on this – it's well articulated. 

I think I disagree about how likely existential risk from advanced AI is. You write:

Given that life is capable of thriving all on its own via evolution, AI would have to see the existence of any life as a threat for it to actively pursue extinction

In my view, an AGI (artificial general intelligence) is a self-aware agent with a set of goals and the capability to pursue those goals very well. Sure, if such an agent views humans as a threat to its own existence it would wipe us out. It might also wipe us out because we slightly get in the way of some goal it's pursuing. Humans have very complex values, and it is quite difficult to match an AI's values to human values. I am somewhat worried that an AI would kill us all not because it hates us but because we are a minor nuisance to its pursuit of unrelated goals. 

When humans bulldoze an ant hill in order to make a highway, it's not because we hate the ants or are threatened by them. It's because they're in the way of what we're trying to do. Humans tend to want to control the future, so if I were an advanced AI trying to optimize for some values, and they weren't the same exact values humans have, it might be easiest to just get rid of the competition – we're not that hard to kill.

I think this is one story of why AI poses existential risk, but there are many more. For further reading, I quite like Carlsmith's piece! Again, welcome to the forum!

Existential risk might be worth talking about because of normative uncertainty.  Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later.  So, if we don't know what "value" is, or, as a movement, EA doesn't "know" what "value" is, a priori, we might want to keep our options open, and if everyone is dead, then we can't figure out what "value" really is or ought to be.