Ethics of existential risk

The ethics of existential risk is the study of the ethical issues related to existential risk, including questions of how bad an existential catastrophe would be, how good it is to reduce existential risk, why those things are as bad or good as they are, and how this differs between specific existential risks. There is a range of perspectives on these questions, which have implications for how much to prioritise reducing existential risk in general and which specific risks to prioritise reducing.

In The Precipice, Toby Ord discusses five different "moral foundations" for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]

The future

In one of the earliest discussions of the topic, Derek Parfit offers the following thought experiment:[2]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

  1. Peace.
  2. A nuclear war that kills 99% of the world's existing population.
  3. A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

The scale of what is lost in an existential catastrophe is determined by humanity's long-term potential—all the value that would be realized if our species survived indefinitely. The universe's resources could sustain a total of around 10^35 biological human beings, or around 10^58 digital human minds.[3] And this may not exhaust all the relevant potential, if value supervenes on other things besides human or sentient minds, as some moral theories hold.

The present

Some philosophers have defended views on which future or contingent people do not matter morally.[4] Even on such views, however, an existential catastrophe could be among the worst things imaginable: it would cut short the lives of every living moral patient, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, a case for reducing existential risk could be grounded in concern for presently existing beings.

This present-focused moral foundation could also be described as a "near-termist" or "person-affecting" argument for existential risk reduction.[5] In the effective altruism community, it appears to be the most commonly discussed non-longtermist ethical argument for existential risk reduction.

The past

Humanity can be considered as a vast intergenerational partnership, engaged in the task of gradually increasing its stock of art, culture, wealth, science and technology. In Edmund Burke's words, "As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born."[6] On this view, a generation that allowed an existential catastrophe to occur could be regarded as failing to discharge a moral duty owed to all previous generations.[7]

Civilizational virtues

Instead of focusing on the impacts of individual human action, one can consider the dispositions and character traits displayed by humanity as a whole, which Ord calls civilizational virtues.[8] An ethical framework that attached intrinsic moral significance to the cultivation and exercise of virtue would regard the neglect of existential risks as showing "a staggering deficiency of patience, prudence, and wisdom."[9]

Cosmic significance

At the beginning of On What Matters, Parfit writes that "We are the animals that can both understand and respond to reasons. [...] We may be the only rational beings in the Universe."[10] If this is...
