
Introduction

Let’s say with Nick Bostrom that an ‘existential risk’ (or ‘x-risk’) is a risk that ‘threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development’ (2013, 15). There are a number of such risks: nuclear wars, developments in biotechnology or artificial intelligence, climate change, pandemics, supervolcanoes, asteroids, and so on (see e.g. Bostrom and Ćirković 2008). So the future might bring

Extinction: We die out this century.

In fact, Extinction may be more likely than most of us think. In an informal poll, risk experts reckoned that we’ll die out this century with a 19% probability (Sandberg and Bostrom 2008). The Stern Review on the Economics of Climate Change, commissioned by the UK government, assumed a 9.5% likelihood of our dying out in the next 100 years (UK Treasury 2006). And a recent report by the Global Challenges Foundation suggests that climate change, nuclear war and artificial intelligence alone might ultimately result in extinction with a probability of between 5% and 10% (Pamlin and Armstrong 2015).[1] But the future needn’t be so grim. It may also bring

Survival: We survive for another billion years, and on average there are always 10 billion people, who live very good 100-year-long lives. So there are 100 million billion future people with very good lives.
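As a quick check of this figure, using only the scenario’s own stipulations (a constant 10 billion people, 100-year lives, for 1 billion years):

$$10^{10}\ \text{people alive at any time} \times \frac{10^{9}\ \text{years}}{10^{2}\ \text{years per life}} = 10^{17} = 100\ \text{million billion lives}.$$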

This may sound optimistic. But it’s also possible. The earth seems able to sustainably support at least 10 billion people (United Nations 2001, 30), and to remain habitable for another 1.75 billion years (Rushby et al. 2013). The quality of our lives seems to have increased continuously (see e.g. the data collected in Pinker 2018), and it seems possible for this trend to continue. Notably, it partly depends on us whether Extinction or Survival will materialise. In fact it may depend on what we do today. We could now promote academic research on x-risks (Bostrom and Ćirković 2008), global political measures for peace, sustainability or AI safety (Cave and ÓhÉigeartaigh 2019), the development of asteroid defence systems (Bucknam and Gold 2008), shelters (Hanson 2008), risk-proof food technologies (Denkenberger and Pearce 2015), and so on. And while none of these measures will bring x-risks down to zero, they’ll arguably at least reduce them. So all of this raises a very real practical question. How important is it, morally speaking, that we now take measures to make Survival more likely?

The answer depends on the correct moral theory. It’s most straightforward on standard utilitarianism. Suppose we increase the probability of Survival over Extinction by just 1 millionth of a percentage point. In terms of overall expected welfare (setting nonhuman sentience aside), this is equivalent to saving about 1 billion lives, with certainty. So according to utilitarianism, even such tiny increases in probability are astronomically important. Indeed, Nick Bostrom suggested that x-risk reduction ‘has such high utility that standard utilitarians ought to focus all their efforts on it’ (2003, 308ff.; see also Parfit 1984, 452f., Beckstead 2013). But this implication isn’t restricted to utilitarianism. Something very similar will emerge on any view that assigns weight to expected impartial welfare increases. Consider Effective Altruism (or ‘EA’), the project of using evidence and reasoning to determine how we can do the most good, and of taking action on this basis (see MacAskill 2015). This doesn’t presuppose any specific moral theory about what the ‘good’ is, or about what other reasons we have beyond doing the most good. But Effective Altruists typically give considerable weight to impartial expected welfare considerations. And as long as we do, the utilitarian rationale will loom large. Thus according to a 2018 survey, EA leaders consider measures targeted at the far future (e.g. x-risk reduction) 33 times more effective than measures targeted at poverty reduction (Wiblin and Lempel 2018). The EA organisation 80,000 Hours suggests that ‘if you want to help people in general, your key concern shouldn’t be to help the present generation, but to ensure that the future goes well in the long-term’ (Todd 2017). And many Effective Altruists already donate their money towards x-risk reduction rather than, say, short-term animal welfare improvements.
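To spell out the expected-welfare arithmetic behind this equivalence (a rough sketch, using the 10^17 future lives from the Survival scenario above and reading ‘1 millionth of a percentage point’ as a probability increase of 10^-8):

$$\underbrace{10^{-8}}_{\text{probability increase}} \times \underbrace{10^{17}\ \text{lives}}_{\text{lives in Survival}} = 10^{9}\ \text{lives} \approx 1\ \text{billion lives in expectation}.$$

This is why, on any view that weights expected impartial welfare, even minuscule probability shifts come out as astronomically important.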

In this paper, I’ll ask how the importance of x-risk reduction should be assessed on a Christian moral theory. My main claim will be that Christian morality largely agrees with EA that x-risk reduction is extremely important—albeit for different reasons. So Christians should emphatically support the above-mentioned measures to increase the probability of Survival.

Let me clarify. First, there’s no such thing as the Christian moral doctrine. One of the philosophically most elaborate and historically most influential developments of Christian thought is the work of Aquinas. So I’ll take this as my starting point, and argue first and foremost that core Thomist assumptions support x-risk reduction. Thomas’s specific interpretations of these assumptions are often unappealing today. But I’ll also claim that they can be interpreted more plausibly, that their core idea is still authoritative for many Christians, and that on any plausible interpretation they ground an argument for x-risk reduction. So while I’ll start with Thomas, my conclusions should appeal to quite a few contemporary Christians. Indeed, I’ll ultimately indicate that these assumptions needn’t even be interpreted in a specifically Christian manner, but emerge in some form or other from a number of worldviews (cf. section 4). Second, there are different x-risk scenarios, and they raise different moral issues. I think the case is clearest for ways in which humanity might literally go extinct, without being superseded by non-human intelligence, and as a direct consequence of our own actions. I’ll refer to these cases as ‘non-transitionary anthropogenic extinction’, and it’s on these cases that I’ll focus. It would be interesting to explore other scenarios: cases in which we’re replaced by another form of intelligence, non-extinction ‘x-risks’ (which Bostrom’s definition includes) like an extended stage of suffering, or scenarios of natural extinction like supervolcano eruptions. My arguments will have upshots for such cases too. But I won’t explore them here. Third, there are different ways in which, or different agents for whom, x-risk reduction might be ‘important’. In what follows, I’ll mostly be considering a collective perspective. That is, I’ll assume that we as humanity collectively do certain things. And I’ll focus on whether we ought to do anything to mitigate x-risks—rather than on whether you individually ought to. The existence of this collective perspective might be controversial. But I think it’s plausible (see e.g. Dietz 2019). Christian moral theory, or at least Thomas, also seems to assume it (cf. section 2.1). And many important issues emerge only or more clearly from it. So I’ll assume it in what follows.

In short, my question is about how important it is, on roughly Thomist premises, for us to reduce risks of non-transitionary anthropogenic extinction. I’ll first present three considerations to the effect that, if we did drive ourselves extinct, this would be morally very problematic—a hubristic failure in perfection with cosmologically bad effects (section 2). I’ll then discuss some countervailing considerations, suggesting that even though such extinction would be bad, we needn’t take measures against it—because God won’t let it happen, or because we wouldn’t intend it, or because at any rate it isn’t imminent (section 3). I’ll argue that none of these latter considerations cut any ice. So I’ll conclude that on roughly Thomist premises it’s extremely important for us to reduce x-risks.

Read the rest of the paper


  1. More precisely, the report distinguishes between ‘infinite impact’, ‘where civilisation collapses to a state of great suffering and does not recover, or a situation where all human life ends’, and an ‘infinite impact threshold’, ‘an impact that can trigger a chain of events that could result first in a civilisation collapse, and then later result in an infinite impact’ (2015, 40). The above numbers are their estimates for infinite impact thresholds.
