Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.

**Summary: A Paradox for Tiny Probabilities and Enormous Values**

*This is a summary of the GPI Working Paper **"A Paradox for Tiny Probabilities and Enormous Values" by Nick Beckstead and Teruji Thomas**. The summary was written by Tomi Francis.*

Many decisions in life involve balancing risks with their potential payoffs. Sometimes, the risks are small: you *might* be killed by a car while walking to the shops, but it would be unreasonably timid to sit at home and run out of toilet paper in order to avoid this risk. Other times, the risks are overwhelmingly large: your lottery ticket *might* win tomorrow, but it would be reckless to borrow £20,000 from a loan shark named “Killer Clive” in order to pay the 50% upfront cost of tomorrow’s massive celebratory party (even if you *really, really, really* enjoy massive parties). The correct decision theory should tell us to be neither too timid nor too reckless, right? Wrong.

In “A Paradox for Tiny Probabilities and Enormous Values”, Nick Beckstead and Teruji Thomas argue that every decision theory is either “timid”, in the sense that it sometimes tells us to avoid small-probability risks no matter the size of the potential payoff, or “reckless”, in the sense that it sometimes tells us to accept almost-certain disaster in return for a small probability of getting a large enough payoff.

Beckstead and Thomas’s central case goes like this. Imagine that you have one year left to live, but you can swap your year of life for a ticket that will, with probability 0.999, give you ten years of life, but which will otherwise (with probability 0.001) kill you immediately. You can also take multiple tickets: two tickets will give a 0.999² probability of getting 100 years of life, otherwise death; three tickets will give a 0.999³ probability of getting 1,000 years of life, otherwise death; and so on. It seems objectionably timid to say that taking n + 1 tickets is not better than taking n tickets. Given that we don’t want to be timid, we should say that taking one ticket is better than taking none; two is better than one; three is better than two; and so on.

We now need to introduce a crucial assumption. According to *transitivity*, if A is better than B and B is better than C, then A must be better than C. So, if one ticket is better than none, and two tickets is better than one, then two tickets is better than none. Since three tickets is better than two, and two tickets is better than none, three tickets is also better than none. Carrying on in this way, it can be shown that, for any n, taking n tickets is better than taking none. But that seems objectionably reckless: taking, say, 50,000 tickets will result in almost certain death.^{[1]} We thus have a paradox: it seems unreasonable to be timid and it also seems unreasonable to be reckless, but given transitivity, if we’re not timid, we have to be reckless.
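As a quick sketch of how the two intuitions come apart (assuming, as above, that each ticket succeeds with probability 0.999 and multiplies the prize by ten):

```python
# Sketch of the ticket gamble, assuming each ticket succeeds with
# probability 0.999 and multiplies the prize by ten.
def survival_probability(n_tickets):
    """Probability that every one of the n tickets pays off."""
    return 0.999 ** n_tickets

def expected_life_years(n_tickets):
    """Expected payoff: 10**n years with probability 0.999**n, else death (0)."""
    return 10 ** n_tickets * survival_probability(n_tickets)

# Each extra ticket multiplies the expected payoff by 9.99, so expected
# value keeps climbing...
assert all(expected_life_years(n + 1) > expected_life_years(n) for n in range(100))

# ...while the chance of surviving 50,000 tickets collapses below 10**-21.
print(survival_probability(50_000))
```

The expected payoff grows without bound even as the probability of getting anything at all shrinks towards zero, which is exactly the tension the paradox exploits.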

It’s worth flagging that there are some philosophers who think we should reject transitivity in light of puzzles like these. But that’s a pretty unattractive option, at least on the face of it: if the deals keep getting better, how could the last one fail to be better than the first one? Beckstead and Thomas don’t much discuss the rejection of transitivity, and I shall follow them in setting it aside. Having done so, we have to accept either timidity or recklessness: Beckstead and Thomas’s argument leaves us no choice.

How bad would it be to be timid? You might think that it’s not so obvious that it’s always better to live ten times as long, with a very slightly smaller probability, rather than to live a tenth as long with a slightly larger probability. If we only had to accept something like this in order to avoid recklessness, things wouldn’t be too bad. But unfortunately, Beckstead and Thomas’s paradox doesn’t just apply to cases where we’re deciding what’s best for ourselves. It also applies to cases where we’re choosing on behalf of others: that is, it applies to moral decision-making. And timidity for moral decision-making is really hard to accept.

To see why, let’s modify the central example a little. Imagine that each successive ticket, rather than allowing us to live for ten times as long, allows us to save ten times as many lives. Taking an additional ticket then results in a *very slightly* smaller probability of saving *many* more lives. Indeed, each time you take another ticket, the expected number of lives saved becomes almost ten times greater. So it seems pretty clearly morally better to take another ticket, no matter how many tickets you’ve taken already.

Brute intuitions like these aside, Beckstead and Thomas show that timid theories face a number of other objections. For example, if you’re “timid” about saving lives, then your moral decision-making is going to depend in strange ways on events in distant regions of the universe, over which you have no control.

Let’s see how that works. Suppose that you have two choices, A and B. A gives a slightly higher probability, p + q, of saving n lives, while B gives a slightly lower probability, p, of saving N + n lives, where N is much larger than n. These two choices are summarised in the table below:

| Probabilities | p | q | 1 − p − q |
| --- | --- | --- | --- |
| A | n lives saved | n lives saved | 0 lives saved |
| B | N + n lives saved | 0 lives saved | 0 lives saved |

If you’re timid about saving lives, you think that it’s better to save some large number of lives n with some probability p + q, rather than saving any larger number of lives with a slightly smaller probability p. So you’re going to have to say that A is better than B in some version of this case.

Now, here’s an interesting fact about this case. No matter what you do, at least n lives will be saved with probability p. If we imagine that these lives correspond to the same people either way, then whether or not these people are saved has nothing to do with your choice between A and B. It could be that these people are located in some distant galaxy, and that the reason there is a probability p that they will be saved is that somebody else in that distant galaxy chose to attempt to save them. Now consider the following case, which is just like the preceding one, but where the distant agent chose *not* to attempt to save these lives:

| Probabilities | p | q | 1 − p − q |
| --- | --- | --- | --- |
| A′ | 0 lives saved | n lives saved | 0 lives saved |
| B′ | N lives saved | 0 lives saved | 0 lives saved |

If N is greater than n and p is greater than q, B′ is *obviously* better than A′ from a moral perspective: it gives a greater chance of saving a greater number of lives. But hold on: timidity told us that A is better than B. So we’re saying that A is better than B, and we’re also saying that B′ is better than A′, even though the two pairs of cases differ only in the matter of whether some *other* agent in a distant galaxy decided to attempt to save n lives. That’s very strange, and very hard to believe!
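Plugging hypothetical numbers into the two tables (my own choices for illustration: n = 10, N = 1,000,000, p = 2/6, q = 1/6) makes it easy to see that the distant agent’s choice only shifts one state’s payoff, identically for both options:

```python
from fractions import Fraction

# Hypothetical numbers for illustration only.
n, N = 10, 1_000_000
p, q = Fraction(2, 6), Fraction(1, 6)

# Lives saved in each mutually exclusive state, ordered (p-state, q-state, rest).
A       = (n,     n, 0)   # first table
B       = (N + n, 0, 0)
A_prime = (0,     n, 0)   # second table: the distant agent didn't try
B_prime = (N,     0, 0)

# The distant agent's attempt adds exactly n lives in the p-state, for
# both options alike -- it is independent of the choice between A and B.
assert [a - x for a, x in zip(A, A_prime)] == [b - y for b, y in zip(B, B_prime)] == [n, 0, 0]

def expected_lives(option):
    """Expected number of lives saved under the three-state distribution."""
    probs = (p, q, 1 - p - q)
    return sum(pr * saved for pr, saved in zip(probs, option))

# B-prime gives a greater chance (p > q) of saving a greater number (N > n).
print(expected_lives(A_prime), expected_lives(B_prime))
```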

Is accepting recklessness more palatable than accepting timidity? It might seem so at first: while recklessness is somewhat counter-intuitive, at least there seems to be a consistent way of being reckless: just maximise the expectation of the thing you think is good, like the number of lives saved, or the number of years of life you’re going to enjoy. But even if we can swallow almost-certain death, Beckstead and Thomas point out that reckless decision theories face further challenges in infinite cases: they *obsess* over infinities in a troubling way, and they have trouble dealing with certain infinite gambles. To make it easy to see how the two challenges arise, we’ll consider a reckless agent who, in finite cases, wants to maximise the expected number of years of good life they will have. (This will make the problems easier to see, but they generalise to all reckless decision-makers.)

First, infinity obsession. For any small probability p, a reckless agent will think that, rather than having n years of good life for certain, it would be better to get a probability p of N years of good life, otherwise certain death, provided N is sufficiently large. Infinitely many years of good life is presumably better than any finite number of years, and so a reckless agent should likewise prefer any probability, no matter how small, of getting infinitely many years of good life to the certainty of *any* finite number of years of good life. In other words, it seems that reckless decision-makers will obsessively pursue infinite amounts of value, no matter how unlikely it is that their pursuit will yield anything at all.

Reckless decision-makers face another kind of problem when it comes to the “St. Petersburg” gamble. In this gamble, a fair coin is flipped until it lands tails, no matter how long that takes. The player then gets a payoff of 2^n life years, where n is the number of times the coin landed heads. The payoffs are illustrated by the table below.

| Number of heads | 0 | 1 | 2 | 3 | 4 | … | n | … |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Life years received | 1 | 2 | 4 | 8 | 16 | … | 2^n | … |

Compared to getting n years of good life for certain, it would be better to instead get a truncated version of the St. Petersburg gamble which ends prematurely if the first 2n flips all land heads, since this will yield an expected n + 1 years of good life. Clearly, any truncated St. Petersburg gamble is worse than the un-truncated gamble: it’s better to have the small chance of continuing after 2n flips than it is to not have this chance. So, for reckless decision-makers, it must be better to get the St. Petersburg gamble than to get any finite number of years of good life for certain. This is especially odd in that it means that it must be better to get the St. Petersburg gamble than to get any of its possible outcomes. That’s another kind of paradox.
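The arithmetic behind the truncated gamble can be checked directly (a sketch, assuming the truncated gamble pays 2^k when the first k flips all land heads):

```python
from fractions import Fraction

def truncated_expected_value(k):
    """Expected payoff of the St. Petersburg gamble truncated after k flips.

    Assumption: j heads followed by a tail (j < k) pays 2**j, which happens
    with probability 2**-(j+1); if the first k flips all land heads, the
    gamble ends prematurely and pays 2**k.
    """
    ev = sum(Fraction(2**j, 2**(j + 1)) for j in range(k))  # k terms, each 1/2
    ev += Fraction(2**k, 2**k)                              # premature-end payoff
    return ev  # equals k/2 + 1

# Truncating after 2n flips is worth an expected n + 1 years, so it beats
# the certainty of n years -- for every n.
assert all(truncated_expected_value(2 * n) == n + 1 for n in range(1, 50))
```

Each possible length of the game contributes exactly half an expected year, which is why the expected value grows linearly in the truncation point even though every payoff is finite.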

To summarise, Beckstead and Thomas show that we’ve got some hard choices to make when it comes to decision-making under risk. Perhaps we’ve sometimes got to be very timid, in which case we also need to think that what we should do sometimes depends in strange ways on parts of the world we can’t affect at all. Or, we’ve sometimes got to be very reckless, and obsess over infinities. Or we’ve got to deny transitivity: we’ve got to believe that A can be better than B, and B can be better than C, without A being better than C.

None of these options looks good. But then nobody said decision theory was going to be easy.

**References **

Beckstead, N. and Thomas, T. (2021), A paradox for tiny probabilities and enormous values. GPI Working Paper No. 7–2021.

^{[1]} The probability of survival if 50,000 tickets are taken is 0.999^50,000, which is less than 10^-21.

From an expository perspective, I think it would be better for you to explain the issues with non-transitivity in moral theories in more depth, even though the paper does not. This summary is targeted at people without a ton of philosophical knowledge, whereas the paper is targeted at philosophers who are already familiar with the debates around non-transitivity. Giving context to the paradox requires knowing more about transitivity as a philosophical criterion.

But otherwise, good summary!

Thanks for reading! I totally agree with you that there's a lot to talk about when it comes to non-transitive moral theories. I did consider going into it in more depth. I agree with you that there's a good reason to do so: it might not be clear, especially to non-philosophers, how secure principles like transitivity really are. But there are also two good reasons on the other side for not going into it further, and I thought on balance they were a bit stronger.

The first one is that I was summarising the paper, so I didn't want to spend too much time giving my own views (and it would have to be my own views, given that the original paper doesn't really discuss it). The second reason, which is probably more important, is that I was really trying hard to keep the word count down, and I felt that if I were to say more about non-transitivity than I already did, it would probably take a lot of space/words to do so.

(Suppose I did something very quick - for example, suppose I just gave Broome's standard line that we should accept transitivity because it's a consequence of the logic of comparatives. Setting aside whether that's actually a good argument, if I just said that without explaining it further I think there are very few people it would help: people who aren't familiar with that argument won't find out what it means from my saying that, while people who do know what it means already know what it means! And if I wanted to explain the argument in detail, I think it would take a couple of paragraphs at least.)

I really appreciated this, having watched GPI come out with papers that seemed really neat but incomprehensible to me.

Thanks for this - the summary is very useful and means I read a paper I would otherwise not have known of! I've got a question which might be really obvious/stupid, but I'm no moral philosopher, so apologies if it is.

I don't quite follow the example with the choices about A & B, and the probabilities p & q. I looked at the original paper and they formulated it differently (with the addition of C) so I was wondering if you could clear up this question:

As I read it, A gives a probability p + q of saving n lives, and a (1 − p − q) probability of saving 0 lives, while B gives a probability p of saving N + n lives, otherwise saving 0 lives. But the table shows A as giving a probability p of saving n lives *and* a probability q of saving n lives, so surely a probability of p + q means you save 2n lives?

Thanks for reading James! It's a good question, let me get to it.

It's probably easier to see what's going on if we set some concrete numbers down. So let's say n is ten, and the states of nature are decided by rolling a six-sided die. The state with probability p (= 2/6) is where the die rolls 1 or 2, and the state with probability q (= 1/6) is where the die rolls 3. The last state with probability 1 - p - q (= 1/2) is where the die rolls anything else, so 4-6.

The table's then supposed to mean that on A, you save 10 lives if the die rolls 1 or 2, you also save 10 lives if the die rolls 3, and you save nobody if it rolls 4-6. Or, putting it another way, you save 10 lives if the die rolls between 1 and 3 (with probability 1/6 + 2/6 = 1/2) and save nobody otherwise.

I think something that maybe wasn't clear is that the probabilities in the tables are supposed to be attached to mutually exclusive events. That is, if you rolled a 1, you can't also have rolled a 3. So there's no way of saving 10 + 10 lives, because if you save 10 lives in one way (by rolling a 1), that means you didn't save 10 lives in another way (by rolling a 3).
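The die-roll version of the reply can be sketched directly, which makes the mutual exclusivity explicit (the 10-saved states add their probabilities, they don't stack their payoffs):

```python
# Option A from the reply's die-roll version: lives saved, by die face.
# The faces are mutually exclusive states, so two "save 10" states can
# never both occur -- there is no 10 + 10 outcome.
lives_saved_A = {1: 10, 2: 10, 3: 10, 4: 0, 5: 0, 6: 0}

faces_saving_10 = [face for face, saved in lives_saved_A.items() if saved == 10]
prob_saving_10 = len(faces_saving_10) / 6  # p + q = 2/6 + 1/6 = 1/2

print(prob_saving_10)  # 0.5
```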

Probability-theoretic "better" is intransitive; see non-transitive dice.

Imagine your life is a die, and you have three options: peace, adventure, or the lottery.

If we compare them: peace < adventure < lottery < peace, so I would deny transitivity.

The intransitive dice work because we do not care about the margin of victory. In expected-value calculations the same trick does not work, so these three lives are all equal, with expected value 7/2.
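A standard illustrative set of non-transitive dice (my own choice of numbers, standing in for the peace/adventure/lottery cycle) shows both halves of the point: each die beats the next with probability better than 1/2, yet all three share expected value 7/2:

```python
from fractions import Fraction
from itertools import product

# A standard set of non-transitive dice, each with mean 7/2. Illustrative
# stand-ins for the comment's "peace < adventure < lottery < peace" cycle.
dice = {
    "A": (1, 4, 4, 4, 4, 4),
    "B": (3, 3, 3, 3, 3, 6),
    "C": (2, 2, 2, 5, 5, 5),
}

def beats(x, y):
    """Probability that die x rolls strictly higher than die y."""
    wins = sum(1 for a, b in product(dice[x], dice[y]) if a > b)
    return Fraction(wins, 36)

# A beats B, B beats C, and yet C beats A: "more likely to win" is intransitive.
assert beats("A", "B") > Fraction(1, 2) < beats("B", "C") and beats("C", "A") > Fraction(1, 2)

# Expected values ignore the margin of victory, so all three dice are equal there.
assert all(Fraction(sum(faces), 6) == Fraction(7, 2) for faces in dice.values())
```

The cycle survives pairwise comparison but disappears under expectation, which is exactly why intransitive dice don't by themselves undermine expected-value reasoning.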