
Note: The Global Priorities Institute (GPI) has started to create summaries of some working papers by GPI researchers with the aim of making our research more accessible to people outside of academic philosophy (e.g. interested people in the effective altruism community). We welcome any feedback on the usefulness of these summaries.

Summary: A Paradox for Tiny Probabilities and Enormous Values

This is a summary of the GPI Working Paper "A Paradox for Tiny Probabilities and Enormous Values" by Nick Beckstead and Teruji Thomas. The summary was written by Tomi Francis.

Many decisions in life involve balancing risks with their potential payoffs. Sometimes, the risks are small: you might be killed by a car while walking to the shops, but it would be unreasonably timid to sit at home and run out of toilet paper in order to avoid this risk. Other times, the risks are overwhelmingly large: your lottery ticket might win tomorrow, but it would be reckless to borrow £20,000 from a loan shark named “Killer Clive” in order to pay the 50% upfront cost of tomorrow’s massive celebratory party (even if you really, really, really enjoy massive parties). The correct decision theory should tell us to be neither too timid nor too reckless, right? Wrong.

In “A Paradox for Tiny Probabilities and Enormous Values”, Nick Beckstead and Teruji Thomas argue that every decision theory is either “timid”, in the sense that it sometimes tells us to avoid small-probability risks no matter the size of the potential payoff, or “reckless”, in the sense that it sometimes tells us to accept almost-certain disaster in return for a small probability of getting a large enough payoff.

Beckstead and Thomas’s central case goes like this. Imagine that you have one year left to live, but you can swap your year of life for a ticket that will, with probability 0.999, give you ten years of life, but which will otherwise kill you immediately with probability 0.001. You can also take multiple tickets: two tickets will give a probability 0.999^2 of getting 100 years of life, otherwise death; three tickets will give a probability 0.999^3 of getting 1,000 years of life, otherwise death; and so on. It seems objectionably timid to say that taking n + 1 tickets is not better than taking n tickets. Given that we don’t want to be timid, we should say that taking one ticket is better than taking none; two is better than one; three is better than two; and so on.

We now need to introduce a crucial assumption. According to transitivity, if A is better than B, and B is better than C, then A must be better than C. So, if one ticket is better than none, and two tickets is better than one, then two tickets is better than none. Since three tickets is better than two, and two tickets is better than none, three tickets is also better than none. Carrying on in this way, it can be shown that for any n, taking n tickets is better than taking none. But that seems objectionably reckless: taking, say, 50,000 tickets will result in almost certain death.[1] We thus have a paradox: it seems unreasonable to be timid and it also seems unreasonable to be reckless, but given transitivity, if we’re not timid, we have to be reckless.
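To get a feel for the numbers, here is a quick Python sketch (my own illustration, not code from the paper) of how the survival probability collapses even as the expected payoff explodes:

```python
import math

# A minimal sketch: each ticket multiplies your remaining life-years by 10
# but independently kills you with probability 0.001.
P_SURVIVE = 0.999   # survival probability per ticket
FACTOR = 10         # payoff multiplier per ticket

def survival_probability(n: int) -> float:
    """Probability of surviving all n tickets: 0.999^n."""
    return P_SURVIVE ** n

def log10_expected_years(n: int) -> float:
    """log10 of expected life-years: n * log10(0.999 * 10) = n * log10(9.99).
    Computed in log space because 10^n overflows a float for large n."""
    return n * math.log10(P_SURVIVE * FACTOR)

for n in (1, 2, 10, 1_000, 50_000):
    print(f"{n:>6} tickets: P(survive) = {survival_probability(n):.3e}, "
          f"E[years] = 10^{log10_expected_years(n):.1f}")
```

At 50,000 tickets the survival probability is below 10^-21 (matching footnote 1), while the expected number of life-years is astronomically large.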

It’s worth flagging that there are some philosophers who think we should reject transitivity in light of puzzles like these. But that’s a pretty unattractive option, at least on the face of it: if the deals keep getting better, how could the last one fail to be better than the first one? Beckstead and Thomas don’t much discuss the rejection of transitivity, and I shall follow them in setting it aside. Having done so, we have to accept either timidity or recklessness: Beckstead and Thomas’s argument leaves us no choice.

How bad would it be to be timid? You might think that it’s not so obvious that it’s always better to live ten times as long, with a very slightly smaller probability, rather than to live a tenth as long with a slightly larger probability. If we only had to accept something like this in order to avoid recklessness, things wouldn’t be too bad. But unfortunately, Beckstead and Thomas’s paradox doesn’t just apply to cases where we’re deciding what’s best for ourselves. It also applies to cases where we’re choosing on behalf of others: that is, it applies to moral decision-making. And timidity for moral decision-making is really hard to accept.

To see why, let’s modify the central example a little. Imagine that each successive ticket, rather than allowing us to live for ten times as long, allows us to save ten times as many lives. Taking an additional ticket then results in a very slightly smaller probability of saving many more lives. Indeed, each time you take another ticket, the expected number of lives saved becomes almost ten times greater. So it seems pretty clearly morally better to take another ticket, no matter how many tickets you’ve taken already.
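(A quick check of that last claim, using my own arithmetic rather than the paper’s: if one ticket saves 10 lives with probability 0.999, then k tickets save 10^k lives with probability 0.999^k, for an expected 0.999^k × 10^k = 9.99^k lives saved. Each extra ticket therefore multiplies the expectation by 0.999 × 10 = 9.99, just shy of ten times.)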

Brute intuitions like these aside, Beckstead and Thomas show that timid theories face a number of other objections. For example, if you’re “timid” about saving lives, then your moral decision-making is going to depend in strange ways on events in distant regions of the universe, over which you have no control.

Let’s see how that works. Suppose that you have two choices, A and B. A gives a slightly higher probability, p + q, of saving n lives, while B gives a slightly lower probability, p, of saving N + n lives, where N is much larger than n. These two choices are summarised in the table below:

| | Probability p | Probability q | Probability 1 − p − q |
|---|---|---|---|
| A | n lives saved | n lives saved | 0 lives saved |
| B | N + n lives saved | 0 lives saved | 0 lives saved |

If you’re timid about saving lives, you think that it’s better to save some large number of lives n with some probability p + q, rather than saving any larger number of lives N + n with a slightly smaller probability p. So you’re going to have to say that A is better than B in some version of this case.

Now, here’s an interesting fact about this case. No matter what you do, at least n lives will be saved with probability p. If we imagine that these n lives correspond to the same people either way, then whether or not these people are saved has nothing to do with your choice between A and B. It could be that these people are located in some distant galaxy, and that the reason there is a probability p that they will be saved is that somebody else in that distant galaxy chose to attempt to save them. Now consider the following case, which is just like the preceding one, but where the distant agent chose not to attempt to save these lives:

| | Probability p | Probability q | Probability 1 − p − q |
|---|---|---|---|
| A | 0 lives saved | n lives saved | 0 lives saved |
| B | N lives saved | 0 lives saved | 0 lives saved |

If N is greater than n, and p is greater than q, then B is obviously better than A in this new case from a moral perspective: it gives a greater chance of saving a greater number of lives. But hold on: timidity told us that A is better than B in the original case. We’re saying that, and we’re also saying that B is better than A in the new case, even though the two pairs of cases differ only in the matter of whether some other agent in a distant galaxy decided to attempt to save n lives. That’s very strange, and very hard to believe!
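To make that structure concrete, here is a small sketch of my own (the numbers p = 2/6, q = 1/6, n = 10, N = 1,000 are made up for illustration; the paper argues abstractly). It checks that the two decision problems differ only by the same n lives in the probability-p state, whichever option you choose:

```python
from fractions import Fraction

p, q = Fraction(2, 6), Fraction(1, 6)    # made-up state probabilities
n, N = 10, 1000                          # made-up numbers of lives

# Lives saved in each state of nature: (state p, state q, state 1 - p - q)
original = {"A": (n, n, 0), "B": (N + n, 0, 0)}  # distant agent tries to save n
modified = {"A": (0, n, 0), "B": (N,     0, 0)}  # distant agent does not

# The cases differ only by the constant n in the p-state, for both options:
for option in ("A", "B"):
    diff = tuple(a - b for a, b in zip(original[option], modified[option]))
    print(option, "original - modified =", diff)   # (10, 0, 0) both times
```

A theory that ranks A above B in the first case but B above A in the second is therefore sensitive to lives whose fate is settled independently of your choice.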

Is accepting recklessness more palatable than accepting timidity? It might seem so at first: while recklessness is somewhat counter-intuitive, at least there seems to be a consistent way of being reckless: just maximise the expectation of the thing you think is good, like the number of lives saved, or the number of years of life you’re going to enjoy. But even if we can swallow almost-certain death, Beckstead and Thomas point out that reckless decision theories face further challenges in infinite cases: they obsess over infinities in a troubling way, and they have trouble dealing with certain infinite gambles. To make the two challenges easy to see, we’ll consider a reckless agent who, in finite cases, wants to maximise the expected number of years of good life they will get. (This makes the problems easier to see, but they generalise to all reckless decision-makers.)

First, infinity obsession. For any small probability p > 0, a reckless agent will think that, rather than having n years of good life for certain, it would be better to get a probability p of N years of good life, otherwise certain death, provided N is sufficiently large. Infinitely many years of good life is presumably better than any finite number of years, and so you should likewise prefer any probability, no matter how small, of getting infinitely many years of good life, to the certainty of any finite number of years of good life. In other words, it seems that reckless decision-makers will obsessively pursue infinite amounts of value, no matter how unlikely it is that their pursuit will yield anything at all.
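For the simplest reckless theory, expected-value maximisation, the threshold is just N > n / p. A tiny sketch (mine, with made-up numbers):

```python
# A minimal sketch: for an expected-value maximiser, a gamble paying N years
# with probability p (death otherwise) beats n years for certain as soon as
# p * N > n, i.e. N > n / p.
def payoff_needed(n_certain_years: float, p: float) -> float:
    """Smallest payoff (in years) at which the gamble's expected value
    exceeds n_certain_years."""
    return n_certain_years / p

# Even a one-in-a-trillion chance is worth taking for some finite payoff:
print(f"{payoff_needed(50, 1e-12):.2e} years")   # 5.00e+13 years
```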

Reckless decision-makers face another kind of problem when it comes to the “St. Petersburg” gamble. In this gamble, a fair coin is flipped until it lands tails, no matter how long it takes. The player then gets a payoff of 2^n life years, where n is the number of times the coin landed heads. The payoffs are illustrated by the table below.

| Number of heads | 1 | 2 | 3 | 4 | … |
|---|---|---|---|---|---|
| Life years received | 2 | 4 | 8 | 16 | … |

Compared to getting n years of good life for certain, it would be better to instead get a truncated version of the St. Petersburg gamble which ends prematurely if the first 2n flips land heads, since this will yield an expected n + 1 years of good life. Clearly, any truncated St. Petersburg gamble is worse than the un-truncated gamble: it’s better to have the small chance of continuing after 2n flips than it is to not have this chance. So, for reckless decision-makers, it must be better to get the St. Petersburg gamble than to get any finite number of years of good life for certain. This is especially odd in that it means that it must be better to get the St. Petersburg gamble than to get any of its possible outcomes. That’s another kind of paradox.
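Here is a quick numerical check of the claim about truncated gambles (a sketch of my own; it assumes that a gamble truncated at m flips pays out 2^m in the case where all m flips land heads):

```python
from fractions import Fraction

def truncated_ev(m: int) -> Fraction:
    """Expected life-years of the St. Petersburg gamble truncated at m flips.
    The first tails on flip j (after j - 1 heads) has probability 2^-j and
    pays 2^(j - 1); if all m flips land heads (probability 2^-m), the gamble
    ends and pays 2^m. The total works out to m/2 + 1."""
    ev = sum(Fraction(1, 2**j) * 2**(j - 1) for j in range(1, m + 1))
    return ev + Fraction(1, 2**m) * 2**m

for n in (1, 5, 10):
    print(f"certainty of {n} years vs truncation at {2*n} flips:",
          f"EV = {truncated_ev(2*n)}")   # n + 1 > n in every case
```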

To summarise, Beckstead and Thomas show that we’ve got some hard choices to make when it comes to decision-making under risk. Perhaps we’ve sometimes got to be very timid, in which case we also need to think that what we should do sometimes depends in strange ways on parts of the world we can’t affect at all. Or, we’ve sometimes got to be very reckless, and obsess over infinities. Or we’ve got to deny transitivity: we’ve got to believe that A can be better than B, and B can be better than C, without A being better than C.

None of these options looks good. But then nobody said decision theory was going to be easy. 

References 

Beckstead, N. and Thomas, T. (2021), A paradox for tiny probabilities and enormous values. GPI Working Paper No. 7–2021.

1. ^ The probability of survival if 50,000 tickets are taken is 0.999^50,000, which is less than 10^-21.

Comments (7)



From an expository perspective, I think it would be better for you to explain the issues with non-transitivity in moral theories in more depth, even though the paper does not. This summary is targeted at people without a ton of philosophical knowledge, whereas the paper is targeted at philosophers who are already familiar with the debates around non-transitivity. Giving context to the paradox requires knowing more about transitivity as a philosophical criterion.

But otherwise, good summary!

Thanks for reading! I totally agree with you that there's a lot to talk about when it comes to non-transitive moral theories. I did consider going into it in more depth. I agree with you that there's a good reason to do so: it might not be clear, especially to non-philosophers, how secure principles like transitivity really are. But there are also two good reasons on the other side for not going into it further, and I thought on balance they were a bit stronger. 

The first one is that I was summarising the paper, so I didn't want to spend too much time giving my own views (and it would have to be my own views, given that the original paper doesn't really discuss it). The second reason, which is probably more important, is that I was really trying hard to keep the word count down, and I felt that if I were to say more about non-transitivity than I already did, it would probably take a lot of space/words to do so. 

(Suppose I did something very quick - for example, suppose I just gave Broome's standard line that we should accept transitivity because it's a consequence of the logic of comparatives. Setting aside whether that's actually a good argument, if I just said that without explaining it further I think there are very few people it would help: people who aren't familiar with that argument won't find out what it means from my saying that, while people who do know what it means already know what it means! And if I wanted to explain the argument in detail, I think it would take a couple of paragraphs at least.)

I really appreciated this, having watched GPI come out with papers that seemed really neat but incomprehensible to me.

Thanks for this - the summary is very useful and means I read a paper I would otherwise not have known of! I've got a question which might be really obvious/stupid, but I'm no moral philosopher so apologies if it is.

I don't quite follow the example with the choices about A & B, and the probabilities p & q. I looked at the original paper and they formulated it differently (with the addition of C) so I was wondering if you could clear up this question:

  • I don't fully understand how you've structured the table. What I think you mean from the text is:
    • Option A means there is a probability of p+q of saving n lives, and a (1-p-q) probability of saving 0 lives
    • Option B means there is a probability p of saving N+n lives, otherwise saving 0 lives
  • However, from the table, under choice A it looks like you have a probability p of saving n lives, and a probability q of saving n lives, so surely a probability of p+q means you save 2n lives? 

Thanks for reading, James! It's a good question, let me get to it.

It's probably easier to see what's going on if we set some concrete numbers down. So let's say n is ten, and the states of nature are decided by rolling a six-sided die. The state with probability p (= 2/6) is where the die rolls 1 or 2, and the state with probability q (= 1/6) is where the die rolls 3. The last state, with probability 1 - p - q (= 1/2), is where the die rolls anything else, so 4-6.

The table's then supposed to mean that on A, you save 10 lives if the die rolls 1 or 2, you also save 10 lives if the die rolls 3, and you save nobody if it rolls 4-6. Or, putting it another way, you save 10 lives if the die rolls between 1 and 3 (with probability 1/6 + 2/6 = 1/2) and save nobody otherwise.

I think something that maybe wasn't clear is that the probabilities in the tables are supposed to be attached to mutually exclusive events. That is, if you rolled a 1, you can't also have rolled a 3. So there's no way of saving 10 + 10 lives, because if you save 10 lives in one way (by rolling a 1), that means you didn't save 10 lives in another way (by rolling a 3).
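If it helps, here's the same mutual-exclusivity point as a tiny sketch (my own, not from the original comment):

```python
from fractions import Fraction

# Each die roll is one mutually exclusive state of nature.
# Under option A you save 10 lives on rolls 1-3 and nobody on rolls 4-6.
lives_saved_under_A = {1: 10, 2: 10, 3: 10, 4: 0, 5: 0, 6: 0}

# Every roll saves either 10 lives or 0 lives, never 10 + 10 = 20,
# because no two rolls can happen together.
assert set(lives_saved_under_A.values()) == {0, 10}

prob_save_10 = Fraction(sum(1 for v in lives_saved_under_A.values() if v == 10), 6)
print(prob_save_10)   # 1/2
```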

Probability-theoretic "better" is intransitive. See non-transitive dice.

Imagine your life is a die, and you have three options:

  • 4 4 4 4 4 1
    • You live a mostly peaceful life, but there is a small chance of doom.
  • 5 5 5 2 2 2
You go on a big adventure: either a treasure or a disappointment.
  • 6 3 3 3 3 3
    • You put all your cards in a lottery for epic win, but on fail, you will carry that with you.

If we compare them: peace < adventure < lottery < peace, so I would deny transitivity.

The intransitive dice work because we do not care about the margin of victory. In expected value calculations the same trick does not work, so these three lives are all equal, with an expected value of 7/2.
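Both claims in this exchange are easy to verify; a quick sketch (mine, for completeness):

```python
from fractions import Fraction
from itertools import product

peace     = [4, 4, 4, 4, 4, 1]
adventure = [5, 5, 5, 2, 2, 2]
lottery   = [6, 3, 3, 3, 3, 3]

def beats(x, y):
    """Probability that a roll of die x strictly exceeds a roll of die y."""
    return Fraction(sum(1 for a, b in product(x, y) if a > b), 36)

print(beats(adventure, peace))    # 7/12 > 1/2: adventure beats peace
print(beats(lottery, adventure))  # 7/12 > 1/2: lottery beats adventure
print(beats(peace, lottery))      # 25/36 > 1/2: peace beats lottery

# Expected values are all equal, so expected-value reasoning cannot cycle:
print([Fraction(sum(d), 6) for d in (peace, adventure, lottery)])  # all 7/2
```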
