
This post is a summary of Tiny probabilities and the value of the far future, a Global Priorities Institute Working Paper by Petra Kosonen. This post is part of my sequence of GPI Working Paper summaries.

If you’d like a brief summary, skip to “Conclusion/Brief Summary.”

Introduction

Many people reasonably doubt that our actions today have more than a tiny probability of making the distant future go better, which leads them to eschew longtermism. After all, something feels wrong with a theory that lets tiny probabilities of enormous value dictate our actions. For instance, it leaves us vulnerable to Pascal’s mugging.

However, Kosonen argues that even if we discount small probabilities—which protects us from cases like Pascal’s mugging—longtermism is not undermined. She refutes three arguments:

  1. The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them.
  2. The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.
  3. The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.

The Low Risks Argument

Naive Discounting

Before discussing arguments against longtermism, Kosonen introduces one way to discount probabilities, which she calls Naive Discounting:

  • Naive Discounting: Ignore probabilities below some threshold.[1]
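
As a minimal sketch of how such a rule blocks Pascal’s mugging, suppose expected value is computed after simply zeroing out any outcome whose probability falls below the threshold (the threshold value and the lottery below are illustrative, not from the paper):

```python
# A toy version of Naive Discounting: compute expected value after treating
# any outcome with probability below the threshold as if it had probability
# zero. The threshold and the lottery are illustrative.

THRESHOLD = 1e-6  # illustrative discounting threshold

def naive_discounted_ev(lottery):
    """Expected value of a list of (probability, value) pairs, ignoring
    outcomes whose probability falls below THRESHOLD."""
    return sum(p * v for p, v in lottery if p >= THRESHOLD)

# A Pascal's-mugging-style offer: a minuscule chance of astronomical value.
mugging = [(1e-12, 1e15), (1 - 1e-12, 0)]
print(naive_discounted_ev(mugging))   # 0.0, so the mugger's offer is ignored
# Undiscounted expected value would instead be 1e-12 * 1e15 = 1000.
```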

The Low Risks Argument

  • The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them.

In response, Kosonen argues it’s unclear what Naive Discounting says about The Low Risks Argument:

  1. Probabilities of existential risks from various sources in the next century seem to fall above any reasonable discounting threshold.[2]
  2. For any risk, we can describe it specifically enough that its probability falls below the threshold (e.g., extinction from an asteroid on Jan. 4, 2055) or broadly enough that it falls above it (e.g., extinction from a pandemic this century); see the sketch after this list.
  3. We don’t have guidelines for choosing how specific to make these risks when assessing their probabilities, so whether they fall above or below the threshold is arbitrary.
  4. Hence, it’s unclear what Naive Discounting says, and so it doesn’t undermine longtermism via the Low Risks Argument.
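
To make steps 2 and 3 concrete, here is an illustrative sketch (the probabilities and threshold are invented for illustration, not taken from the paper): the very same risk lands above or below the threshold depending only on how narrowly it is described.

```python
# Illustrative only: the same risk falls above or below the threshold
# depending on how narrowly it is described.

THRESHOLD = 1e-6

p_pandemic_this_century = 0.03    # broad description (made-up probability)
days_in_century = 36_525
p_pandemic_on_one_day = p_pandemic_this_century / days_in_century  # narrow description

print(p_pandemic_this_century >= THRESHOLD)   # True: the broad risk is not ignored
print(p_pandemic_on_one_day >= THRESHOLD)     # False: every day-sized piece is ignored
# Discounting each narrowly described piece amounts to ignoring the whole
# century-level risk, even though that risk clears the threshold when
# described broadly.
```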

Tail Discounting

Another, more plausible way to discount probabilities, which Kosonen calls Tail Discounting, also fails to undermine longtermism.

  • Tail Discounting: Ignore the very best and very worst outcomes (the tails of the outcome distribution). That is, ignore the grey tail areas of the curve in Figure 1.

Figure 1

However, Kosonen argues Tail Discounting doesn’t ignore the possibility of extinction, because extinction plausibly doesn’t sit in the tails: a world worse than extinction[3] and a world better than extinction are each likely enough (see Figure 2).

Figure 2
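
Here is a minimal sketch of that point, assuming Tail Discounting works by dropping a fixed share of probability mass from the best-outcome and worst-outcome ends of the distribution; the outcomes, probabilities, and tail fraction are illustrative, and boundary outcomes are kept whole rather than partially trimmed for simplicity:

```python
# A toy implementation of Tail Discounting: sort outcomes by value and
# drop whole outcomes from each end until up to `tail_mass` of probability
# has been removed. The lottery below is illustrative.

TAIL_MASS = 0.01  # ignore roughly the most extreme 1% of probability on each side

def trim_tails(lottery, tail_mass=TAIL_MASS):
    """Return the (probability, value) pairs that survive tail trimming."""
    ordered = sorted(lottery, key=lambda pv: pv[1])  # worst to best by value
    lo, mass = 0, 0.0
    while lo < len(ordered) and mass + ordered[lo][0] <= tail_mass:
        mass += ordered[lo][0]   # drop from the worst end
        lo += 1
    hi, mass = len(ordered), 0.0
    while hi > lo and mass + ordered[hi - 1][0] <= tail_mass:
        mass += ordered[hi - 1][0]   # drop from the best end
        hi -= 1
    return ordered[lo:hi]

lottery = [
    (0.001, 1e6),    # extreme best case: a genuine tail outcome
    (0.939, 1e3),    # a broadly ordinary future
    (0.030, 0.0),    # extinction
    (0.030, -1e2),   # a future worse than extinction (net-negative welfare)
]
print([value for _, value in trim_tails(lottery)])
# [-100.0, 0.0, 1000.0]: only the extreme best case is trimmed. Because a
# worse-than-extinction outcome is likely enough, extinction does not sit
# in the worst tail and is not ignored.
```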

Hence, Kosonen argues the Low Risks Argument (that existential risks are so unlikely we should ignore them) doesn’t undermine longtermism, because:

  1. Naive Discounting struggles with how to specify risks, so it’s unclear what it says.
  2. Tail Discounting doesn’t ignore extinction risks as long as there are meaningful probabilities of outcomes better and worse than extinction.

The Small Future Argument

Kosonen addresses another argument against longtermism.

  • The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.

She compares the cost-effectiveness of giving $10,000 to the Against Malaria Foundation (AMF) with giving it to existential risk reduction. She calculates[4] the following effects:

  • AMF: Save 2.5 lives
  • Asteroid detection: Reduce the chance of extinction from an asteroid this century by 1 in 33,000 trillion[5]
  • Pandemic prevention: Reduce the chance of extinction from a pandemic this century by 1 in 50 trillion[5]
  • AI safety: Reduce the chance of extinction from AI this century by 1 in 10 billion[5]

Using these, she calculates the size of the future population required for existential risk reduction to beat the AMF by saving more than 2.5 lives in expectation (see Figure 3).

Figure 3

Assuming the population reaches and stays at 11 billion and people live for 80 years on average, she calculates the future population if humanity continues for…

  • 87,000 years (based on extinction risks from natural causes[6])
    • There will be 12 trillion future humans, meaning AI safety beats the AMF
  • 1 million years (the average lifespan of hominins)
    • The future holds 140 trillion people, meaning AI safety and pandemic prevention beat the AMF
  • 1 billion years (until Earth becomes uninhabitable)
    • The future population is 140,000 trillion, meaning AI safety, pandemic prevention, and asteroid detection beat the AMF
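
A rough reconstruction of these comparisons using the post’s round numbers (the paper’s own calculations on pages 11 to 14 may differ in the details):

```python
# A rough check of the comparison above, using the post's round numbers.
# Expected lives saved by an intervention = (absolute reduction in
# extinction probability) x (number of future people); it beats the AMF
# once that product exceeds 2.5 lives.

AMF_LIVES = 2.5
PEOPLE_PER_YEAR = 11e9 / 80   # 11 billion alive at a time, 80-year lives

risk_reduction_per_10k = {    # absolute reductions in extinction probability
    "AI safety":           1 / 10e9,        # 1 in 10 billion
    "Pandemic prevention": 1 / 50e12,       # 1 in 50 trillion
    "Asteroid detection":  1 / 33_000e12,   # 1 in 33,000 trillion
}

scenarios_years = {"87,000 years": 87_000, "1 million years": 1e6, "1 billion years": 1e9}

for label, years in scenarios_years.items():
    future_people = years * PEOPLE_PER_YEAR
    winners = [name for name, dp in risk_reduction_per_10k.items()
               if dp * future_people > AMF_LIVES]
    print(f"{label}: ~{future_people:.1e} future people; beats the AMF: {winners}")

# Expected output (matching the scenarios above):
#   87,000 years:    AI safety only
#   1 million years: AI safety and pandemic prevention
#   1 billion years: all three interventions
```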

In fact, with these assumptions, humanity must continue for only 265 years for AI safety to beat the AMF. Humanity is expected to continue this long even with a 1/6 probability of extinction in the next 100 years (Toby Ord’s estimate).

Kosonen also argues that assigning a negligible probability to space settlement or digital minds is overconfident, and if we don’t ignore these possibilities, the expected number of future lives becomes far greater.

Overall, Kosonen thinks the future seems large enough for longtermism to ring true, even if we ignore unlikely possibilities.

The No Difference Argument

  • The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.

On its face, the No Difference Argument seems to undermine longtermism. However, Kosonen argues it faces collective action problems. For instance, imagine an asteroid is about to hit Earth. We have multiple defenses, but each one has a very low probability of working and incurs a small cost.

Since each defense probably won’t make a difference, the No Difference Argument says we shouldn’t launch them, even though launching many of them would almost certainly stop the asteroid.
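
A sketch with invented numbers (not from the paper) shows the structure: each defense is very unlikely to work, and even less likely to be the one that makes the difference, yet many defenses together almost certainly stop the asteroid.

```python
# Illustrative numbers only: each defense works independently with a tiny
# probability, so any single defense is very unlikely to matter, but the
# collective effort almost certainly succeeds.

individual_p = 1e-5       # chance that one defense works (illustrative)
n_defenses = 500_000      # number of defenses launched (illustrative)

p_at_least_one = 1 - (1 - individual_p) ** n_defenses
print(f"P(asteroid stopped) = {p_at_least_one:.3f}")   # ~0.993
# If each agent discounts their own tiny probability of making the
# difference, the No Difference Argument tells every one of them not to
# launch, and then the asteroid hits with certainty.
```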

Kosonen argues our decisions about whether we work on existential risk reduction are similar to the asteroid defenses; we plausibly ought to take others’ decisions into account and consider whether the collective has a non-negligible probability of making a difference. She discusses the justifications for this view as well as the problems with it on pages 27 to 33.

Moreover, she thinks the collective has a non-negligible probability of preventing extinction. For instance, spending $1 billion on AI safety would plausibly provide an absolute reduction in extinction risk of at least 1 in 100,000—though this estimate is conservative.[7]

Conclusion/Brief Summary

To recap, Kosonen argues that even if we discount small probabilities, longtermism holds true. She refutes three arguments.

  1. The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them. She finds this isn’t true because…
    1. If you ignore probabilities below some threshold (Naive Discounting), it’s unclear what the view recommends, because the verdict depends on how narrowly each risk is specified.
    2. If you ignore the best and worst extremes (Tail Discounting), you won’t ignore extinction risk, as there are (plausibly) meaningful probabilities of outcomes better and worse than extinction.
  2. The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.
    1. She finds this isn’t true because, even ignoring unlikely possibilities, humanity plausibly persists long enough for longtermism to hold.
  3. The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.
    1. She offers reasons you should consider our collective probability of making a difference instead of your individual probability.
    2. She argues our collective probability of preventing extinction is non-negligible.
  1. ^

    Kosonen finds it implausible that there is a specific required threshold. For instance, Monton, who defends probability discounting, considers this threshold "subjective within reason."
    Others have proposed thresholds, such as 1 in 10,000 (Buffon) and 1 in 144,768 (Condorcet). Buffon's threshold is the probability of a 56-year-old dying on a given day, a chance most people ignore. Condorcet had a similar justification.

  2. ^

    See pages 6 and 7 for the estimates she consults.

  3. ^

    She thinks there's a non-negligible probability that human and non-human suffering make the world's value net negative (an outcome worse than extinction).

  4. ^

    See pages 11 to 14 for her calculations.

  5. ^

    These are absolute (not relative) reductions in probabilities.

  6. ^

    Snyder-Beattie et al. (2019).

  7. ^

    This calculation assumes extinction from AI has a 0.1% risk in the next 100 years (while the median expert estimate is 5%), and it assumes $1 billion will decrease that risk by only 1% (a relative reduction), which works out to an absolute reduction of 0.1% × 1% = 1 in 100,000.

Comments

I suppose my main objection to the collective decision-making response to the no difference argument is that there doesn't seem to be any sufficiently well-motivated and theoretically satisfying way of taking into account the collective probability of making a difference, especially as something like a welfarist consequentialist, e.g. a utilitarian. (It could still be the case that probability difference discounting is wrong, but it wouldn't be because we should take collective probabilities into account.)

Why should I care about this collective probability, rather than the probability differences between outcomes of the choices actually available to me? As a welfarist consequentialist, I would compare probability distributions over population welfare values[1] for each of my options, and use rules that rank them pairwise, or within subsets of options, or select maximal options from subsets of options. Collective probabilities don't seem to be intrinsically important here. They wouldn't be part of the descriptions of the probability distributions of population welfare values for options actually available to you, or calculable from them, if and because you don't have enough actual influence over what others will do. You'd need to consider the probability distributions of unavailable options to calculate them. Taking those distributions into account in a way that would properly address collective action problems seems to require non-welfarist reasons. This seems incompatible with act utilitarianism.

The differences in probabilities between options, on the other hand, do seem potentially important, if the probabilities matter at all. They're calculable from the probability measures over welfare for the options available to you.

Two possibilities Kosonen discusses that seem compatible with utilitarianism are rule utilitarianism and evidential decision theory. Here are my takes on them:

  1. For rule utilitarianism, you could assume you should follow universalizable procedures that lead to the best outcomes (or best distributions of outcomes) if everyone (had the same beliefs and) followed them (e.g. Greaves et al., 2022 (section 4.5), Kant (his first formulation of the categorical imperative), Parfit and others). Pairwise probability difference discounting without collective thresholds leads to collective defeat, so it would be ruled out.
    1. I'm not really convinced that collective defeat under these unrealistic universalizations is a big deal at all. I don't get to choose what procedures everyone else follows (or what they believe), so why should I care about that?
    2. I expect that changing my preferences or procedures to satisfy this constraint will lead to worse outcomes according to my current preferences. That seems self-defeating in a way, too.
    3. Mendola (1986, p.158, and responses to objections to his argument in the rest of the paper) argues that this is too strong as a formal constraint, because even consequentialists couldn't meet it: imagine aliens or machines that harshly punish you and ensure your actions are net negative for following your theory. That being said, I'm not entirely convinced by this. Maybe you just need to follow the right decision theory. Or, we could still think avoiding collective defeat under universalization counts in favour of some procedures over others, and we should try to (more approximately) satisfy it in more situations than fewer.
    4. You'd probably need to be only boundedly sensitive to stakes, at least in many conceivable decision situations, e.g. my post and Pruss, 2022. Depending on how exactly you are sensitive to the stakes, this could undermine longtermism.
  2. Evidential decision theory and other acausal decision theories seem pretty plausible to me, but I'm very skeptical they make enough difference unless you take into account correlations with agents across a large (e.g. infinite) multiverse. I do in fact think it's pretty likely we live in the right kind of multiverse that longtermism seems pretty plausible (or at least worthy of being a big part of my portfolio) even with probability difference discounting, though!

 

  1. ^

    Or aggregate welfare, or pairwise differences in individual welfare between outcomes.
