The following question was submitted to the EA librarian. Any views expressed in questions or the Librarian’s response do not necessarily represent the Centre for Effective Altruism’s views.

We are sharing this question and response to give a better idea of what using the service is like and to benefit people who have similar questions. Questions and answers may have been edited slightly to improve clarity or preserve anonymity.

Question: What are the strongest arguments against working on existential risk?

Answer

An existential risk is a risk that threatens the destruction of humanity's long-term potential. Existential risks include both natural risks, such as those posed by asteroids or supervolcanoes, and anthropogenic risks, like mishaps resulting from synthetic biology or artificial intelligence.

Although many in the effective altruism community are persuaded by the arguments for prioritizing existential risk reduction, we can still note a number of potential reasons for focusing on other cause areas.

A first reason for deprioritizing work on existential risk is pure time discounting. If the value of helping someone diminishes with their temporal distance from the present, humanity's potential is drastically reduced. Even a comparatively low pure discount rate of 1% implies most future value is concentrated in the next hundred years, rather than spread over the billions or trillions of years for which humanity and its descendants could survive. This kind of discounting would thus seriously undermine the standard arguments for prioritizing existential risk reduction.

Still, there is general agreement among moral philosophers that pure time discounting should be rejected. First, arguments for pure discounting often rely on an invalid analogy with discounting in financial contexts. It does make sense to discount the value of money, because the future is uncertain and because investments compound over time. But these are instrumental reasons for discounting. By contrast, pure time discounting concerns the intrinsic value of welfare. Second, pure time discounting has highly counterintuitive implications. Suppose a government decides to dispose of radioactive waste without taking the necessary safety precautions. A girl is exposed to this waste and dies as a result. This death is a moral tragedy regardless of whether the girl lives now or 10,000 years from now. Yet a pure discount rate of 1% implies that the death of the present girl is more than 10^43 times as bad as the death of the future girl.
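To put numbers on this, here is a minimal Python sketch. The constant population and the fixed unit of welfare per year are illustrative assumptions, not part of the original argument; only the ratios matter.

```python
# Rough check of the discounting arithmetic above, assuming a constant
# population producing a fixed amount of welfare each year (illustrative).
r = 0.01          # 1% pure time discount rate
d = 1 / (1 + r)   # per-year discount factor

def share_in_first_century(horizon):
    """Fraction of total discounted welfare falling in the first 100 years."""
    total = sum(d ** t for t in range(horizon))
    early = sum(d ** t for t in range(100))
    return early / total

print(share_in_first_century(1_000_000))  # ~0.63: most value in the first century
print((1 + r) ** 10_000)                  # ~1.6e43: the ratio between the two deaths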

A second reason for deprioritizing work on existential risk is population ethics. Consider two different ways of trying to help people: one tries to make the lives of existing people better, while the other tries to bring new people with good lives into existence. Although both approaches increase the sum total of welfare, many share the intuition that they are not morally equivalent. As the philosopher Jan Narveson once observed, “we are in favour of making people happy, but neutral about making happy people.” Views that try to capture this intuition are known as "person-affecting views". Like views that discount the future, person-affecting views imply that humanity's potential value is a tiny fraction of what would be suggested by the sheer number of lives the future could contain. This is because the vast majority of these lives do not yet exist and so, from a person-affecting perspective, do not count morally.

With that said, person-affecting views also have counterintuitive implications, and incorporating them into a satisfactory theory of value has proved elusive. In addition, an existential catastrophe would likely be very bad even on person-affecting views, because a huge number of existing people would almost certainly be harmed. Consider, for example, an event that causes human extinction. From a person-affecting perspective, the fact that this event prevents all subsequent generations from existing is not in itself bad, but the deaths of everyone alive at the time would still be a major tragedy. (For more on this, see Gregory Lewis's 'The person-affecting value of existential risk reduction'.)

The third reason for deprioritizing work on existential risk is uncertainty about the long-term future. Predicting events even a few months or years into the future is often very difficult. But, as noted, humanity's potential extends over billions or trillions of years. So the case for working on existential risk seems to assume that we can make a predictable difference to what happens in the very long run, and that assumption looks very implausible. We can reinforce this point by considering what a "proto existential risk reducer" living in, say, Ancient Greece could have done to help people in the 21st century. Such a person couldn't possibly have predicted the relevant historical developments, such as the printing press, the industrial revolution, or the modern computer. Aren't we in a similar position relative to future generations?

This is a forceful objection. One possible response is to accept that the future is very hard to predict, but deny that this is a reason for deprioritizing work on existential risk reduction. Instead, the objection warrants a focus on broad rather than targeted ways to influence the long-term future. Targeted interventions attempt to positively influence the long-term future by focusing on specific, identifiable risks or scenarios. By contrast, broad interventions try to have a long-term influence by pursuing general approaches with the potential to be useful in a wide range of contexts, such as building effective altruism or promoting global cooperation. (See broad vs. narrow interventions.) Another possible response is to focus on short-term existential risks: risks that threaten our long-term potential but that exist in the near term, so reducing them doesn't face the challenges associated with long-range forecasting. For example, we can try to reduce existential risks from synthetic biology: these are risks to which we are already exposed, and we can make progress without detailed predictions about how the far future will unfold.

The three reasons discussed above all concern the value of existential risk reduction as a cause area. But there are also reasons specific to a person's degree of personal fit for work in this cause. For example, you may think that the general case for prioritizing existential risk is strong, but still decide to focus on some other cause because your personal skills and circumstances put you in a unique position to make a difference there.

For more on personal fit, see Benjamin Todd, 'Personal fit: why being good at your job is even more important than people think'.

To learn more about possible reasons against working on existential risks, you can listen to this 80,000 Hours podcast episode with Alexander Berger.

Comments

"Even a comparatively low pure discount rate of 1% implies most future value is concentrated in the next hundred years"

This is not correct! Suppose the human population grows at a constant rate for 1000 years. If you discount the moral worth of future people by 1% per year, but the growth rate is anything above 1%, most of the value of humanity is concentrated in the last hundred years, not the first hundred years.
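A quick numerical sketch of this point; the 2% growth rate and 1,000-year horizon are illustrative assumptions:

```python
# When the population growth rate g exceeds the discount rate r,
# discounted value piles up at the END of the horizon (illustrative numbers).
r, g, T = 0.01, 0.02, 1000

weights = [((1 + g) / (1 + r)) ** t for t in range(T)]  # discounted population each year
total = sum(weights)
print(sum(weights[:100]) / total)   # first century: ~0.0001 of total value
print(sum(weights[-100:]) / total)  # last century:  ~0.63 of total value
```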

There's this very surprising, maybe counterintuitive moral implication of cosmopolitanism: if you think future people have moral value and you believe in discount rates of 1-3%, you should basically disregard any present-day considerations and make all of your decisions based solely on how they affect the distant future; but if you use a discount rate of 5%, you should help one person today rather than a billion trillion people a thousand years from now.[1]

  1. ^

    https://www.wolframalpha.com/input?i=1000000000000000000000*0.95%5E1000.0
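    For illustration, the footnote's calculation reproduced in Python:

    ```python
    # A billion trillion (1e21) people a thousand years from now,
    # discounted at 5% per year:
    print(1e21 * 0.95 ** 1000)  # ~0.053, i.e. worth less than one person today
    ```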

Hi Robi,

The answer assumed a constant population rather than a growing one, although (confusingly) that assumption was not made explicit.

However, I hadn't appreciated the points you make in the second paragraph. That's very interesting.

Note that there are normative views other than discounting and person-affecting views that do not prioritize reducing existential risks—at least extinction risks specifically, which seem to be the large majority of existential risks that the EA community focuses on. I discuss these here.
