Abstract
Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation, so much so that pessimism threatens to falsify the astronomical value thesis. I argue that the best way to reconcile existential risk pessimism with the astronomical value thesis relies on a questionable empirical assumption. I conclude by drawing out philosophical implications of this discussion, including a transformed understanding of the demandingness objection to consequentialism, reduced prospects for ethical longtermism, and a diminished moral importance of existential risk mitigation.
Introduction
Derek Parfit (1984) invites us to consider two scenarios. In the first, a war kills ninety-nine percent of the world’s human population. Such an event, Parfit urges, would be a great tragedy. Billions would die and the rest would suffer terribly. Nations would fall. Cities and monuments would be destroyed. Recovering from such a catastrophe would take centuries.
In a second scenario, the same war kills every living human. This event, Parfit holds, would be many times worse than the first. Unthinkably many future lives would fail to be lived (Greaves and MacAskill 2021). Our projects would remain forever incomplete and our purposes unfulfilled (Bennett 1978; Knutzen forthcoming; Riedener 2021). All nations, cultures and families would end (Scheffler and Kolodny 2013). There would be no more art, science, music or philosophy (Parfit 1984). A human species that might have flourished for billions of years would find itself extinguished in its infancy (Bostrom 2003; Ord 2020).
Some followers of Parfit have drawn the lesson that it is overwhelmingly important to mitigate existential risks: risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might work to regulate chemical and biological weapons or to reduce the threat of nuclear conflict (Bostrom and Ćirković 2011; MacAskill 2022; Ord 2020). Mitigating existential risk is frequently held to be not only valuable, but also astronomically more valuable than tackling important global challenges such as poverty, inequality, global health or racial injustice (Bostrom 2013; Ord 2020). The reason given is that existential risk mitigation offers a chance of tremendous gain: the continued survival and development of humanity. Given the mind-boggling scale of what might be lost, anything that we can do to prevent existential catastrophe may have astronomical value. Let the astronomical value thesis be the claim that the best available options for reducing existential risk today have astronomical value.
The astronomical value thesis is often combined with alarmingly high estimates of current existential risk. Toby Ord puts the risk of existential catastrophe by 2100 at “one in six: Russian roulette” (Ord 2020, p. 46). The Astronomer Royal Martin Rees gives a 50% chance of civilizational collapse by 2100 (Rees 2003). And participants at the Oxford Global Catastrophic Risk Conference in 2008 estimated a median 19% chance of human extinction by 2100 (Sandberg and Bostrom 2008).
Let existential risk pessimism be the view that existential risk this century is very high — for concreteness, say twenty percent. It is often supposed that existential risk pessimism bolsters the case for the astronomical value thesis. After all, we should usually do more to address probable threats than to address improbable threats. In this paper, I use a series of models to draw a counterintuitive conclusion. Across a range of assumptions, existential risk pessimism not only fails to increase the value of existential risk mitigation, but in fact substantially decreases it, so much so that existential risk pessimism threatens to falsify the astronomical value thesis (Sections 2-3). I suggest that the best way to reconcile existential risk pessimism with the astronomical value thesis relies on a questionable empirical assumption, the time of perils hypothesis that risk is high now, but will soon fall to a very low level (Sections 4-6). I argue that the time of perils hypothesis is not well supported. If that is right, then existential risk pessimism threatens to tell against the astronomical value thesis. Existential risk mitigation may yet be valuable, but perhaps not astronomically so.
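To preview the mechanism behind this conclusion, consider a deliberately simple sketch (a minimal toy model for illustration, not one of the paper's own models, which are developed in Sections 2-3). Assume a constant per-century existential risk r and a fixed value v for each century humanity survives. The expected value of the future is then a geometric series summing to (1-r)v/r, and an intervention that cuts this century's risk from r to r-d adds dv/r in expected value. Both quantities shrink as r grows, so the pessimist's high r leaves less expected future to save:

```python
# A minimal sketch (an illustrative assumption, not the paper's own models):
# constant per-century existential risk r, and each century humanity survives
# contributes a fixed value v (normalized here to v = 1), with no discounting.

def expected_future_value(r: float, v: float = 1.0) -> float:
    """Expected total value of the future: sum of v * (1-r)^n for n >= 1."""
    return (1 - r) * v / r  # geometric series

def value_of_mitigation(r: float, d: float, v: float = 1.0) -> float:
    """Expected value gained by cutting this century's risk from r to r - d."""
    return d * v / r

for r in (0.2, 0.01, 0.001):
    print(f"r = {r:<6}: future worth {expected_future_value(r):7.1f} centuries; "
          f"a 1-point risk cut gains {value_of_mitigation(r, 0.01):6.2f} centuries")
```

On these assumptions, the pessimist's r = 0.2 makes the expected future worth only 4 centuries of value, so a one-percentage-point risk reduction gains just 0.05 centuries; at r = 0.001 the same reduction gains 10 centuries. The time of perils hypothesis (Sections 4-6) escapes this pattern by letting r fall to a very low level after the present period.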
I conclude by drawing out four philosophical consequences of this discussion: a transformed understanding of the demandingness objection to consequentialism (Section 7.1); a challenge to ethical longtermism (Section 7.2); a reduced need for controversial forms of temporal discounting (Section 7.3); and a diminished moral importance of existential risk mitigation (Section 7.4). Proofs are in Appendix A, with additional models in Appendix B.
An important feature of my argument is that it does not rely on ethical or decision-theoretic assumptions which defenders of the astronomical value thesis are likely to reject. Many recent arguments against the astronomical value thesis have questioned decision-theoretic, consequentialist or population-ethical assumptions used to motivate it ([removed x3]; Lloyd 2021; Mogensen 2022). My argument does not rely on any such maneuvers. The argument in this paper is compatible with standard versions of expected utility theory, interpreted in consequentialist fashion, across a range of population axiologies including totalism. In doing so, my aim is to meet the pessimist on her own turf, building a case against the astronomical value thesis that may be persuasive even to the pessimist herself.[1]
Read the rest of the paper
[1] However, these arguments would strengthen the conclusions of this paper by further reducing the axiological or deontic importance of existential risk mitigation. In that sense, they might be viewed as important complements to the present project.
Meta-comment: For the future, it might be better for GPI not to post several summaries/working papers at the same time. I can currently see four GPI posts on the EA Forum homepage, and this makes it a bunch less likely that I will read all 4 (personally, I can only handle so much academic global priorities research at once). Spreading this content out over e.g. 4 weeks might increase uptake/reading, but that's just my personal opinion!
Thanks for the feedback. We'll consider this for the future.
Completely agree. Even though I love the summaries of a lot of these papers, it's very intimidating to engage with them when they all drop at once!