Image: the submerged Church of the Sacred Heart of Jesus, Petrolândia, Brazil

Mistakes in the moral mathematics of existential risk (Part 1: Introduction and cumulative risk)

Even if we use … conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.

Nick Bostrom, “Existential risk prevention as global priority”

1. Introduction

This is Part 1 of a series based on my paper, “Mistakes in the moral mathematics of existential risk.”

(Almost) everyone agrees that human extinction would be a bad thing, and that actions which reduce the chance of human extinction have positive value. But some authors assign quite high value to extinction mitigation efforts. For example:

  • Nick Bostrom argues that even on the most conservative assumptions, reducing existential risk by just one millionth of one percentage point would be as valuable as saving a hundred million lives today.
  • Hilary Greaves and Will MacAskill estimate that early asteroid-detection efforts saved lives at an expected cost of fourteen cents per life.

These numbers are a bit on the high side. If they are correct, then on many philosophical views the truth of longtermism will be (nearly) a foregone conclusion.

I think that these, and other similar estimates, are inflated by many orders of magnitude. My paper and blog series “Existential risk pessimism and the time of perils” brought out one way in which these numbers may be too high: they will be overestimates unless the Time of Perils Hypothesis is true.

My aim in this paper is to bring out three novel ways in which many leading estimates of the value of existential risk mitigation have been inflated. (The paper should be online as a working paper within a month.)

I’ll introduce the mistakes in detail throughout the series, but it might be helpful to list them now.

  • Mistake 1: Focusing on cumulative risk rather than per-unit risk.
  • Mistake 2: Ignoring background risk.
  • Mistake 3: Neglecting population dynamics.

I show how many leading estimates make one, or often more than one, of these mistakes.

Correcting these mistakes in the moral mathematics of existential risk has two important implications.

  • First, many debates have been mislocated, insofar as factors such as background risk and population dynamics are highly relevant to the value of existential risk mitigation, but these factors have rarely figured in recent debates.
  • Second, many authors have overestimated the value of existential risk mitigation, often by many orders of magnitude.

In this series, I review each mistake in turn. Then I consider implications of this discussion for current and future debates. Today, I look at the first mistake, focusing on cumulative rather than per-unit risk.

2. Bostrom’s conservative scenario

Nick Bostrom (2013) considers what he terms a conservative scenario in which humanity survives for a billion years on the planet Earth, at a stable population of one billion humans.

We will see throughout this series that this is far from a conservative scenario. Modeling background risk (correcting the second mistake) will put pressure on the likelihood of humanity surviving for a billion years. And modeling population dynamics (correcting the third mistake) will raise the possibility that humanity may survive at a population far below one billion people. However, let us put aside these worries for now and consider Bostrom’s scenario as described.

In this scenario, there are 10^18 human life-years yet to be lived, or just over 10^16 lives at current lifespans. Bostrom uses these figures to make a startling claim: reducing existential risk by just one millionth of one percentage point is, in expectation, as valuable as saving one hundred million people.

Even if we use … conservative estimates, which entirely ignor[e] the possibility of space colonization and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10^16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.

It can seem obvious that Bostrom must be correct here. After all, a reduction of existential risk by one millionth of one percentage point gives a 10^-8 chance of saving just over 10^16 people, and so in expectation it saves just more than 10^8 lives, or a hundred million lives.
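To make the arithmetic explicit, here is a minimal sketch in Python (the variable names are mine, chosen only for illustration; the figures are the ones in Bostrom’s scenario):

    # Bostrom's conservative scenario: one billion people on Earth for one billion years.
    people = 1e9        # stable population
    years = 1e9         # duration of the scenario
    lifespan = 100      # years per human life, roughly

    future_lives = people * years / lifespan    # 1e16 lives yet to be lived
    risk_reduction = 1e-8                       # one millionth of one percentage point

    expected_lives_saved = risk_reduction * future_lives
    print(expected_lives_saved)                 # 100000000.0, i.e. one hundred million lives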

Today, we will see that this estimate is not strictly speaking false. It is rather worse than false: it is badly misleading. Once the estimate is described in more revealing terms, we will see that the seemingly small reduction of 10^-8 in the chance of existential catastrophe required to deliver an expected value equivalent to a hundred million lives saved is better described as a very large reduction in existential risk.

3. Relative and absolute risk reduction

To see the point, we need to make two distinctions. First, reductions in risk can be described in two ways.

Typically, we speak about relative reductions in risk, which chop a specified fraction off the current amount of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 72% risk, from 20% to 18% risk, or from 2% to 1.8% risk. (Formally, relative risk reduction by f takes us from risk r to risk (1-f)r).

More rarely, we talk about absolute reductions, which subtract an absolute amount from the current level of risk. It is in this sense that a 10% reduction in risk takes us from 80% to 70% risk, from 20% to 10% risk, or from 10% to 0% risk. (Formally, absolute risk reduction by f takes us from risk r to risk r - f).
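A minimal sketch of the distinction (the function names are mine, purely for illustration):

    def relative_reduction(r, f):
        # A relative reduction by f chops the fraction f off the current risk r.
        return (1 - f) * r

    def absolute_reduction(r, f):
        # An absolute reduction by f subtracts f outright from the current risk r.
        return r - f

    print(relative_reduction(0.80, 0.10))   # about 0.72: 80% risk falls to 72%
    print(absolute_reduction(0.80, 0.10))   # about 0.70: 80% risk falls to 70%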

Bostrom must be concerned with absolute risk reduction for his argument to make sense. Otherwise, the benefit of existential risk reduction would have to be multiplied by the current level of risk r.

In general, a focus on absolute risk reduction isn’t especially nefarious, just a bit nonstandard. It does tend to overstate the size of a risk reduction, since an absolute risk reduction by f is equivalent to a relative risk reduction by f from a starting point of 100% risk. In this way, stating a risk reduction in absolute rather than relative terms overstates it by a factor of 1/r, where r is the starting level of risk. This can be quite a strong boost if starting levels of risk are low. However, many effective altruists think that levels of existential risk are rather high, in which case the overstatement may be one order of magnitude or less. Let’s not dwell on this.
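To see the 1/r point concretely, here is a small illustration (the 20% starting risk and the two-percentage-point reduction are arbitrary numbers of mine, not estimates):

    r = 0.20    # hypothetical starting level of risk
    f = 0.02    # an absolute reduction of two percentage points

    # Described relatively, the same change removes the fraction f/r of current risk:
    print(f / r)    # about 0.10, i.e. a 10% relative reduction

    # So the same nominal figure f describes a change 1/r times larger when read
    # as an absolute rather than a relative reduction:
    print(1 / r)    # 5.0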

4. Cumulative and per-unit risk

Over a long period (say, a billion years), we can report risk in two ways. On the one hand, we can report the cumulative risk r_C that a catastrophe will occur at least once throughout the period. Cumulative risk over a billion years can be quite high: it’s hard to go a billion years without catastrophe.

On the other hand, we can divide a long period into smaller units (say, centuries). Then we can report the per-unit risk r_U that a catastrophe will occur in any given unit.

How is cumulative risk related to per-unit risk? Well, if there are N units, then we have:

r_C = 1 - (1 - r_U)^N

Therein lies the rub, for if N is very high (in this case, N = ten million!), then for almost any value of r_U, the rightmost term will be driven exponentially towards zero, so that r_C is driven almost inescapably towards one. This means that over a long period, driving r_C meaningfully away from one requires very low values of r_U.
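Here is a quick numerical illustration (the 0.1% per-century figure below is an arbitrary choice of mine, not an estimate of actual risk):

    N = 10_000_000      # ten million centuries in a billion years
    r_unit = 0.001      # a hypothetical per-century risk of 0.1%

    r_cumulative = 1 - (1 - r_unit) ** N
    print(r_cumulative)     # 1.0 to machine precision: catastrophe is a near-certainty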

This is where the trouble starts for Bostrom, because he is concerned with cumulative risk. For an intervention to be credited with increasing the survival probability not only of current humans, but also of all future humans, by 10^-8, that intervention must reduce cumulative risk by 10^-8.

No problem, you say. Surely it cannot be so hard to reduce cumulative risk by a mere one millionth of one percentage point. However, since cumulative risk can be at most one, an absolute reduction of cumulative risk by 10^-8 requires (by definition) driving cumulative risk to at most 1 - 10^-8. Again, you say, that must be easy. Not so. Driving cumulative risk this low requires driving per-century risk below roughly 1.8*10^-6, less than two in a million.
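Here is the calculation behind that figure (a minimal sketch, assuming ten million centuries and a post-intervention cumulative risk of at most 1 - 10^-8):

    N = 10_000_000          # centuries in a billion years
    survival_floor = 1e-8   # cumulative risk of at most 1 - 1e-8 means a survival
                            # probability of at least 1e-8 over the whole period

    # Solve (1 - r)^N >= survival_floor for the per-century risk r:
    r_per_century = 1 - survival_floor ** (1 / N)
    print(r_per_century)    # roughly 1.8e-06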

5. First mistake: Focusing on cumulative over per-unit risk

We can describe this intervention in two ways. On the one hand, we can describe it flatteringly, as Bostrom does, in terms of (absolute) cumulative risk reduction. All that’s needed, Bostrom says, is a reduction by one millionth of one percent.

On the other hand, we can describe it unflatteringly, in the more standard terms of relative per-unit risk reduction. Now what’s needed is driving per-century risk down to roughly two in a million. By contrast, many effective altruists think that per-century risk is currently above 10%. That would amount to a relative risk reduction by a factor of more than fifty thousand: not shaving one millionth off current risk, but cutting it by several orders of magnitude.
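To put a rough number on it (the 10% current per-century risk is used purely for illustration; the required level comes from the calculation above):

    current_per_century = 0.10      # a hypothetical current level of per-century risk
    required_per_century = 1.84e-6  # roughly what the calculation above delivers

    print(current_per_century / required_per_century)   # about 54,000: a reduction by
                                                         # a factor of tens of thousands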

Our first mistake in the moral mathematics of existential risk is therefore focusing on cumulative rather than per-unit risk. This is a mistake for two reasons.

First, as we saw, focusing on cumulative risk significantly understates the size of the change involved, by describing very, very large changes in per-century risk as very, very small changes in cumulative risk. This gives the misleading sense that what is, in quite a natural and intuitive sense, an astronomically large change is in fact only a tiny change.

Second, focusing on cumulative risk moves debates away from what we can affect. Our actions may reduce risk in our own century, and perhaps if we are lucky they will even affect risk in nearby centuries. But it is unlikely that we can predictably affect risk in far distant centuries with anything approaching the ease with which we can affect risk in nearby centuries. For this reason, in assessing the value of feasible acts taken to mitigate existential risk, we should focus on per-unit rather than cumulative risk, in order to bring the discussion back to quantities that our actions can predictably affect.

6. Conclusion

Today’s post introduced the series and discussed a first mistake in the moral mathematics of existential risk: focusing on cumulative over per-century risk. We saw that a leading paper by Nick Bostrom makes this mistake, and that once the mistake is corrected, what appeared to be a very small change in existential risk turned out in a natural sense to be a very large change. We will also see in the next post that another leading paper makes the same mistake.

We will also look at two more mistakes in the moral mathematics of existential risk: ignoring background risk, and neglecting population dynamics. We will see how these mistakes combine in leading papers to overestimate the value of existential risk mitigation, and how they serve to mislocate debates about that value.

Comments


  1. Evelyn

    I think the focus on cumulative risk as you’ve defined it is also misleading. After all, we care about the expected amount of welfare over the long arc of the future, not merely whether humanity makes it to the hundred millionth century; there can be people alive at every unit of time up until that point. I think it makes more sense to talk about reducing incremental x-risk for the next N centuries (for any N ≥ 1) or reducing incremental x-risk at every point in time in the future. It also makes more sense to talk about relative risk reductions as the math is simpler: for example, reducing incremental x-risk at every point in the future by half doubles the expected value of future well-being.

    Also, cumulative x-risk after N centuries is always 1 for high enough values of N. For example, adopting Bostrom’s extremely conservative assumptions in which humanity never settles other planets and survives on Earth until the oceans dry up, the probability of humanity surviving past a billion years is exactly zero. Even if they were to settle other planets, biological humans would not be able to survive past the deaths of the last stars in the universe about 120 trillion years from now. Digital minds could probably continue to exist past the point where the universe decays into a soup of black holes and subatomic particles, but it’s unlikely that they’d be able to delay the heat death of the universe forever.

    1. David Thorstad

      Thanks Evelyn! It’s good to hear from you.

      I certainly agree that it is misleading to focus on cumulative risk in this context. That is why I called a focus on cumulative risk the first of three mistakes in the moral mathematics of existential risk.

      I think that your remarks on mathematical simplicity might tell the other way from what you intended. To discuss reducing incremental x-risk at every point in the future by half is to discuss a relative, cumulative risk reduction of 50%. We could, I suppose, redescribe this as a relative, per-unit risk reduction of 50% in all periods at once, but that is just a drawn out way of talking about cumulative risk reduction.

      While the mathematics of cumulative risk reduction is indeed simpler than that of relative risk reduction, it is not quite as simple as what you describe: if we assume that all centuries have constant value, then a 50% relative, cumulative risk reduction multiplies the value of the future by 1/r, where r is the level of per-century background risk (see Section 3.2 of my x-risk pessimism paper, on global risk reduction, for proof). It is because background risk matters to the value of risk reduction, but is often omitted from calculations of the value of existential risk reduction, that the second mistake I will discuss in this paper is ignoring background risk.

      (The risk reduction of 50% may also be hiding an additional mathematical complication here: the equation in Section 3.2 has an f/(1-f) term in it that vanishes only for f = 0.5, a special case).

      You’re absolutely right that cumulative risk is nearly always very close to one unless we introduce additional premises, such as the time of perils hypothesis. That is precisely why it is so misleading to talk about “small” reductions in cumulative risk, because providing a small reduction in cumulative risk requires a very large reduction in per-unit risk.

      1. Evelyn

        I wrote my first comment in a rush, so my apologies for any ambiguity/mistakes.

        This is my understanding of Bostrom’s argument:

        Maximum number of humans that could live on Earth = 10^9 humans at a time * 10^9 years / 10^2 years/human lifespan = 10^16 human lives

        Expected value of intervention that reduces “cumulative x-risk” by 10^-8 = 10^16 lives * 10^-8 = 10^8 lives

        I think Bostrom is making a sloppy back-of-the-envelope calculation that bakes in dodgy assumptions without explaining them. In particular, it seems to assume that the current time is a time of perils: x-risk is nonzero in the current century and zero for every century thereafter until the Earth ends. Since almost all the potential value of the future occurs after the time of perils, reducing x-risk now by a given absolute amount f increases the value of the future by f times the maximum value of the future. Otherwise, we would have to discount background risk at every point in the future. (It is curious that this mistake and its baked-in assumptions were not caught by the reviewers of the paper.)

        1. David Thorstad

          Thanks Evelyn! Yes, that’s my view of what happened as well.

          I share your consternation with the reviewers. I guess all I can say is that I hope readers will understand why I place so much importance on publishing papers in leading journals. The best journals don’t (usually) let this kind of mistake slide. Lower-ranking journals and internet fora do.

          1. David Mathers

            ‘ The best journals don’t (usually) let this kind of mistake slide. Lower-ranking journals and internet fora do.’

            Insofar as the point you’re making here was new to people at the GPI in Oxford, it seems to me that cuts against the idea that most high-ranking journals in philosophy would have caught the mistake. After all, most of the people at the GPI have published in top 10 generalist journals, and some of them seem to do so quite regularly. I assume the people who publish in those journals (especially regularly!) are better philosophers than the median reviewer at those journals. I am a bad (former) philosopher with a rubbish publication record, but I once *reviewed* a paper for AJP, a far better journal than I have ever published in. And I assume people at the GPI have spent far more time thinking about the astronomical waste argument than the 2-3 days I spent reviewing that paper for AJP. (And my comments were far longer than most of the reviews I’ve received, including the R&Rs that eventually led to publication.) Sometimes mistakes seem obvious once pointed out but are actually quite hard to catch. (Or people might just have been building in a “time of troubles” assumption when reading Bostrom, since he explicitly makes it elsewhere.)

      2. Evelyn

        Also, I tend to intuit about the value of x-risk reduction using continuous notions of time – precisely because time is continuous and it produces (imho) more elegant math – whereas your paper uses a discrete model. In the continuous case, the expected value of the future given instantaneous catastrophe rate r is:

        V[W] = integral (0 to infty) e^(-rt) dt = 1/r

        If we reduce the catastrophe rate at every point in time by relative amount f, the value of V[W] is increased by a factor of 1 / (1 – f).

        In the discrete case, the *ratio* between the value of the world with the intervention and the value without is:

        V[W_X] = [1 – (1 – f)r] / [(1 – f)r]

        V[W] = (1 – r) / r

        V[W_X] / V[W] = [1 – (1 – f)r] / [(1 – f)r] / [(1 – r) / r] = [1 – (1 – f)r] / [(1 – f)(1 – r)]

        For sufficiently small r, it converges to 1 / (1 – f), which is the same as the continuous case. But if r is large, then the answers from the continuous and discrete models differ by a lot. For example, if r = 1/2, then reducing x-risk by f = 1/2 raises the value of the future by a factor of 2 in the continuous model and by a factor of 3 in the discrete one.

        (By the way, the formula in section 3.3 of your paper is for the value of the intervention, which is the *difference* between the value of the world with the intervention *minus* the value of the world without. I’m talking about the ratio.)

        Hope this clarifies.

        (By the way, is LaTeX supported in the comments?)

        1. David Thorstad

          Thanks! Yes, that helps a lot – and the continuous case is cleaner :).

          I think LaTeX might be supported in the comments. To use LaTeX on wordpress I type, where D is a dollar sign and c is the code that I want, Dlatex c D, with spacing as shown. Let’s see if that works.

          a^3 + b^3 = c^3

        2. Evelyn

          Ah, here we go:

          V[W] = \int_0^\infty e^{-rt} dt = 1/r

          \begin{align}
            \frac{V[W_X]}{V[W]} &= \frac{1 - (1 - f)r}{(1 - f)r} \div \frac{1 - r}{r} \\
            &= \frac{1 - (1 - f)r}{(1 - f)(1 - r)}
          \end{align}
