
Summary: Many people have the intuition that it is preferable to have a guaranteed small positive payoff than to gamble on a tiny chance of a huge outcome. This is sometimes used to justify avoiding speculative research in favor of more grounded approaches. But this intuition sometimes conflicts with standard decision theory tools like expected value. In a working paper, Christian Tarsney comes up with a clever resolution to this conflict: if you have extreme uncertainty about how much moral value there is in the universe (which we probably do), then paradoxically the “gamble” is actually better “across the board” than the supposed “guarantee” (for a certain technical definition of “across the board”). The result is a decision procedure which agrees with expected value calculations in non-Pascalian situations, but disagrees in Pascalian ones. His paper is somewhat technical, but I think the intuition can be understood by anyone with a grasp of basic probability theory, so I’m attempting to write an explanation.

I’m trying to give an accurate summary of his paper while avoiding technical details, which is a challenging task. Any mistakes are mine, and concerned readers can find a rigorous version of these theorems in his paper. 

Intro

Pascal’s wager scenarios (where we are forced to choose between a guaranteed small outcome and an extremely small chance of a gigantic outcome) conflict with many people’s intuitions. Seemingly reasonable principles imply that we should prefer a small chance of doing something exceptionally great over a guarantee of doing something pretty good, but many people find this hard to accept.

Tarsney argues that these scenarios seem counterintuitive because the wagers are phrased in a misleading way. In particular, they are asked in a vacuum: even though you care about maximizing the total amount of goodness in the universe, the scenario usually asks whether or not you personally would accept some wager, but says nothing about what happens in the rest of the universe. Since your decision about accepting the wager doesn't affect those other parts of the universe, philosophers assume that information about them can be left out of the thought experiment. But Tarsney shows, quite counterintuitively, that this is not true in circumstances where we have extreme uncertainty about what’s going on outside of our actions. In these scenarios, not only do Pascalian wagers not rely on small probabilities, they are in some sense guaranteed to be better than the "sure thing".

Furthermore, he argues that we do, in fact, have extreme uncertainty about how good/bad the rest of the universe is (e.g. because there might be some unexplored branch of the galaxy that has huge levels of suffering or happiness). 

How do we reconcile the apparent usefulness of expected value (EV) with the counterintuitive conclusions it gives in Pascalian cases? Tarsney argues that we should use an alternative decision criterion called stochastic dominance, which agrees with EV in non-Pascalian situations but, when combined with the above argument about uncertainty, disagrees with EV in Pascalian ones.
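
(Here “stochastically dominates” means first-order stochastic dominance: option A stochastically dominates option B if, for every threshold x, A is at least as likely as B to produce an outcome better than x, and for at least one threshold it is strictly more likely.)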

Thus stochastic dominance gives us the best of both worlds: we can gain the benefits of EV in non-Pascalian situations without the counterintuitive implications in Pascalian ones.

Note: this paper assumes that we are trying to maximize the total amount of good in the universe, rather than the difference one individually makes to the total good in the universe. The argument may not apply to moral frameworks which aren’t doing this.

Intuition

(Lightly adapted from the original paper)

Suppose you can choose between:

  • Safe: you will be guaranteed to make one person happier
  • Risky: 10% chance of making 20 people happier, 90% chance of making zero people happier

You may prefer Safe over Risky, even if Risky is better in expectation.
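
(Concretely, Risky’s expected value is 0.1 × 20 = 2 people made happier, versus exactly 1 for Safe.)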

But now someone comes in and says: I'm going to choose a possibly huge random number (which could be negative or positive) and add that to whatever you chose. So if my random number is 1 million and you chose Safe, 1,000,001 people will be made happier. If my random number is -1 million and you chose Safe, 999,999 people will be made sadder.

The original appeal of Safe was that you would be guaranteed to cause some positive amount of happiness, but now that guarantee is gone. In fact, if the random numbers are chosen in the right way, the probability of you causing a positive amount of happiness is actually higher if you choose Risky than if you choose Safe. (The next section gives a simple example of how you can choose random numbers to make this true.) Even more strongly: we can choose random numbers such that, for any value X, the probability that total happiness in the universe is greater than X is at least as high, and sometimes higher, if you choose Risky than if you choose Safe. (I.e. Risky stochastically dominates Safe.)

Tarsney claims that this is the moral situation we are all in: huge amounts of value and disvalue are possibly being created in distant branches of the universe (analogous to someone adding or subtracting huge numbers from our result), and there is no realistic scenario in which we can guarantee even a tiny positive outcome. Therefore, many (most? all?) supposedly Pascalian wagers are actually not wagers at all, and are strictly better than the supposedly "safe" options.

Slightly more technical intuition

Again suppose you are choosing between Safe and Risky. Someone comes in and says that, whatever you choose, they are going to subtract one from the final value. So if you choose Safe you will make 1-1 = 0 people happier, and if you choose Risky there is a 10% chance that you will make 20-1 = 19 people happier, and a 90% chance that you will make 0-1 = -1 people happier (i.e. one person sadder).

What is the probability that you will end up helping some positive number of people? If you choose Safe, the probability is 0% (since you are guaranteed to end up at exactly zero). But if you choose Risky there is a 10% chance of the final number being greater than zero.

So, somewhat counterintuitively, Risky is actually more likely to result in a positive outcome.

Now suppose we have the same scenario, but instead of definitely subtracting 1, the person will subtract an integer from 1 through 19, chosen uniformly at random. Again, Risky has a greater chance of ending up above zero, for the same reason: Safe can never produce a positive outcome (its best case is 1 - 1 = 0), whereas in 10% of cases Risky will produce a positive outcome.

But now Risky also has a greater chance of being greater than -1: Safe will be greater than -1 only if the randomly chosen number is 1, which happens 1/19 of the time. But Risky will be greater than -1 a full 10% of the time.

Now Risky is more likely to result in an outcome greater than -1.

We can continue to add larger sets of possible random numbers, creating scenarios in which Risky is also more likely to result in an outcome greater than -2, -3, -4, … If we continue this process indefinitely, we find that there are probability distributions of random numbers such that, for every threshold, Risky is at least as likely as Safe to exceed it, and strictly more likely for some thresholds; i.e., Risky stochastically dominates Safe.
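
To make the construction concrete, here is a small script (my own sketch, not code from the paper) that enumerates the exact outcome distributions for a few uniform background ranges n, and counts the thresholds at which Safe is still more likely than Risky to reach them:

```python
from fractions import Fraction

def sf(dist, v):
    """P(outcome >= v) for a distribution given as {outcome: probability}."""
    return sum(p for val, p in dist.items() if val >= v)

def minus_uniform(base, n):
    """Distribution of (base outcome) - D, where D is uniform on {1, ..., n}."""
    out = {}
    for val, p in base.items():
        for d in range(1, n + 1):
            out[val - d] = out.get(val - d, Fraction(0)) + p / n
    return out

safe = {1: Fraction(1)}                            # guaranteed +1
risky = {20: Fraction(1, 10), 0: Fraction(9, 10)}  # 10% chance of +20

for n in (1, 19, 1000):
    s, r = minus_uniform(safe, n), minus_uniform(risky, n)
    support = sorted(set(s) | set(r))
    # Exact (Fraction) arithmetic, so ties are not misjudged by float error.
    # Thresholds where Safe is still strictly more likely to reach v:
    losses = [v for v in support if sf(r, v) < sf(s, v)]
    print(f"n={n:>4}: Safe beats Risky at {len(losses)} of {len(support)} "
          f"thresholds (v from {min(losses)} to {max(losses)})")
```

As n grows, the thresholds where Safe still wins retreat toward the very bottom of the possible range, but with any bounded uniform background they never disappear entirely. That is why the next point matters.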

One technical point is that this distribution of random numbers has to have not just infinite range but also fat tails, because otherwise the really large numbers wouldn’t matter “enough”. In practical terms, this means that it’s not enough to be uncertain about what else is going on in the universe; we also have to give substantial credence to the possibility that something extremely bad or extremely good is happening.
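
To see what fat tails buy, here is a numerical check along the same lines (again my own sketch; the Cauchy and normal backgrounds and their scale parameters are illustrative stand-ins, not Tarsney's model). Both backgrounds have unbounded range and an interquartile range of roughly 200, but only one is fat-tailed:

```python
import math

# Safe total = 1 + B; Risky total = 20 + B with probability 0.1, else B,
# where B is a background term added to both options.

def cauchy_cdf(x, scale=100.0):
    """Fat-tailed background; interquartile range = 2 * scale = 200."""
    return 0.5 + math.atan(x / scale) / math.pi

def normal_cdf(x, sigma=148.0):
    """Thin-tailed background with a similar interquartile range (~200)."""
    return 0.5 * math.erfc(-x / (sigma * math.sqrt(2)))

def gap(cdf, x):
    """P(Risky total > x) - P(Safe total > x), written in terms of CDFs
    so tiny lower-tail differences aren't lost to float cancellation."""
    return cdf(x - 1) - 0.1 * cdf(x - 20) - 0.9 * cdf(x)

for name, cdf in (("fat-tailed (Cauchy)", cauchy_cdf),
                  ("thin-tailed (normal)", normal_cdf)):
    worst = min(gap(cdf, 0.5 * i) for i in range(-20_000, 20_001))
    verdict = "Risky never falls behind" if worst >= 0 else "Safe wins somewhere"
    print(f"{name}: minimum gap over grid = {worst:.1e} -> {verdict}")
```

In this setup the Cauchy gap stays non-negative across the whole grid, so Risky is at least as likely as Safe to clear every threshold. With the normal background, Safe wins a couple of thousand units below zero: with thin tails, the sure +1 eventually beats the 10% shot at +20 deep in the lower tail, so there is no dominance even though the range is unbounded.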

Some Final Notes

  1. Tarsney gives a concrete heuristic for deciding whether a small probability of a very large payoff actually stochastically dominates a smaller sure-thing payoff: if the probability of the large payoff is greater than the ratio of the sure-thing payoff to the interquartile range of the background distribution, then the “long shot” option is probably stochastically dominant. (A one-line version of this check appears after this list.) He gives example numbers for certain longtermist interventions to see whether they stochastically dominate short-term interventions, but I don’t think anyone has done this rigorously.
  2. Tarsney summarizes some implications: "An initially counterintuitive feature of the preceding arguments is their implication that what an agent rationally ought to do can depend on her uncertainties about seemingly irrelevant features of the world. To put the point as sharply as possible: Whether I am rationally required, for instance, to take a risky action in a life-or-death situation can depend on my uncertainties about the existence, number, and welfare of sentient beings in distant galaxies, perhaps outside the observable universe, with whom I will never and can never interact, on whom my choices have no effect, and whose existence, number, welfare, etc, make no difference to the local effects of my choices."
  3. When given the choice between two options, it’s possible that neither will stochastically dominate the other. In contrast, EV will always tell us that one is better than another, or that they are equally valuable. So there may be some wager where stochastic dominance tells us that either accepting or rejecting the wager is permissible, but EV says one is strictly better than the other. This is either an advantage or disadvantage of stochastic dominance, depending on your perspective.
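
For what it's worth, the heuristic in note 1 is a one-line comparison in code (function and parameter names are mine):

```python
def longshot_probably_dominates(p_large, sure_payoff, background_iqr):
    """Rough test from the heuristic above: the long shot is 'probably
    stochastically dominant' when its success probability exceeds the
    ratio of the sure-thing payoff to the background IQR."""
    return p_large > sure_payoff / background_iqr

# This post's toy numbers, with the illustrative background IQR of 200
# from the sketch above: 0.1 > 1/200, so the long shot likely dominates.
print(longshot_probably_dominates(0.10, 1, 200))  # True
```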

I would like to thank Christian Tarsney and Max Dalton for their comments on a draft.

Comments

I find that this approach undermines one of the major intuitions behind utilitarianism in the first place: what is permissible, obligatory, etc., should not depend on parts of the universe that are independent of (unaffected by) my actions, i.e. (a stochastic version of) separability. It is no longer the case that what's best depends only on the ex ante prospects each individual faces, basically one of the assumptions in Harsanyi's argument for utilitarianism (Postulate c in the paper, assumption 3 here) and this generalization (Anteriority), because now the statistical dependence between individuals' prospects matters. You could assume separability (independence of unconcerned agents) in uncertainty-free cases and still arrive at utilitarianism, but you've still undermined the intuition. Why use an additive theory at all now?

Could you elaborate why this violates Pareto? I'm used to that assumption being phrased in terms of sure things, but even if you make it stochastic it still seems fine to say "if A stochastically dominates B for each person, then A > B".

And for what it's worth, this is not one of my major intuitions behind utilitarianism. Cluelessness already implies that I need to consider a butterfly flapping its wings before deciding whether to donate to AMF; stating that the butterfly could be outside my light cone doesn't seem qualitatively different.

(Possibly it is a key intuition that Harsanyi had, not sure. Also I do agree that considering consequences unaffected by my actions is a counterintuitive thing for any decision theory to do, moral or otherwise.)

Could you elaborate why this violates Pareto?

You can't get them to give opposite strict inequalities, i.e. A<B according to Pareto and A>B according to stochastic dominance, since a Pareto improvement implies higher expected total utility, which implies not stochastically dominated. But you can get a Pareto improvement that doesn't stochastically dominate (being incomparable). "Gamble A first-order stochastically dominates gamble B if and only if every expected utility maximizer with an increasing utility function prefers gamble A over gamble B.", which means that stochastic dominance with total utility is compatible with (but weaker than) the order implied by the expected value of any increasing function of total utility, including ones with very different risk preferences over total utility. So, you could apply the ordering given by E[f(total utility)], where f is any increasing function, etc.

Let X be a random variable that's 0 or 1, with probability 0.5 each. Consider two options with the following utility prospects for a single person:

  1. X
  2. 0.5(1 - X)

1 is better, with expected value 0.5, while 2 has expected value 0.25. 1 also stochastically dominates 2. Pareto and stochastic dominance agree here.

Suppose there's another individual, with prospect 0.5X in both 1 and 2. Summing the utilities, we get:

  1. X + 0.5X = 1.5X
  2. 0.5(1 - X) + 0.5X = 0.5

But neither stochastically dominates the other. 2 has a 100% probability of being at least 0.5, but 1 only has a 50%. Pareto would rule out 2, but stochastic dominance does not. Both are permissible. So, this violates your definition of Pareto, although it's compatible with a weak Pareto definition.

We can make it slightly worse with 3 options and 3 people. For instance, keep the two prospects above, give a third individual the same prospect (say, 0) in every option, and add a new option in which the first individual's prospect is X + 0.1, so that the three options' totals are:

  1. X + 0.5X = 1.5X
  2. (X + 0.1) + 0.5X = 1.5X + 0.1
  3. 0.5(1 - X) + 0.5X = 0.5

Again, 3 is not stochastically dominated by 1 or 2, but 1 is stochastically dominated by 2, so ruled out. So 3 is permissible, while the Pareto improvement over it, 1, is not (although a better Pareto improvement, 2, is permissible). So, stochastic dominance permits an option (3) while ruling out another option (1) that Pareto dominates it. This of course doesn't mean 3 stochastically dominates 1, though.

And for what it's worth, this is not one of my major intuitions behind utilitarianism. Cluelessness already implies that I need to consider a butterfly flapping its wings before deciding whether to donate to AMF; stating that the butterfly could be outside my light cone doesn't seem qualitatively different.

Cluelessness seems to me to be a practical concern about prediction, not how you evaluate uncertain outcomes when distributions are specified. If we are assuming the butterfly is completely independent from what you're doing locally, then you're pretty much biting the bullet on the Egyptology objection, and what you should do or are allowed to do now can depend on how well off you think the long-dead ancient Egyptians were, for non-instrumental reasons (not because this knowledge changes predictions about future events). I'm personally willing to bite this bullet, though; I don't see why I can't just care about the whole distribution.

And then we can make it worse still with infinitely many options :P For instance:

  1. The first individual gets nX and the second gets 0.5X, so the total is (n + 0.5)X, for n = 1, 2, 3, …
  2. The first individual gets 0.5(1 - X) and the second gets 0.5X, so the total is a guaranteed 0.5.

Here, each option in 1 is Pareto dominated and stochastically dominated by any option from 1 for larger n, and 2 is the only option which is not stochastically dominated. If you are not allowed to choose stochastically dominated options, then 2 is the only permissible option, despite being Pareto dominated by all the others. In general, though, I think you just want to go with something like "scalar utilitarianism" and allow yourself to choose stochastically dominated options when there are infinitely many of them, or else you may have no permissible options.

Oh interesting, thanks for sharing. These are compelling counterexamples.

More technical blog post about this result here.

In a working paper, Christian Tarsney comes up with a clever resolution to this conflict

Fwiw, I was expecting that the "resolution" would be an argument for why you shouldn't take the wager.

If you do consider it a resolution: if Alice said she would torture a googol people if you didn't give her $5, would you give her the $5? (And if so, would you keep doing it if she kept upping the price, after you had already paid it?)

Thanks! I think the "stochastic dominance + background uncertainty" decision criterion makes two claims about muggings:

  1. If the mugging is not too Pascalian, it stochastically dominates "safe" options, which is a pretty strong argument for accepting it (and probably agrees with what an expected value calculation would dictate)
  2. If it is too Pascalian, neither it nor the safe option stochastically dominates the other, giving a principled reason for rejecting it

The hope is that your example would fall under case (2), but of course this depends on a bunch of particular assumptions about the background uncertainty.

While I think this is a fascinating concept, and probably pretty useful as a heuristic in the real, hugely uncertain world, I don't think it addresses the root of the decision-theoretic puzzles here. I - and I suspect most people? - want decision theory to give an ordering over options even assuming no background uncertainty, which SD can't provide on its own. If option A is a 100% chance of -10 utility, and option B is a 50% chance of -10^20 utility else 0, it seems obvious to me that B is a very, very terrible choice, not a rationally permitted one. But in a world with no background uncertainty A would not stochastically dominate B.

I think it's worth keeping in mind that if action A's expected value is higher than B's, then B can never stochastically dominate A, and there's (in theory) some background uncertainty according to which A dominates B. So, if you have enough deep uncertainty about the background uncertainty and entertain multiple distributions, A might be better according to at least one, so that's a reason to prefer A, breaking the indifference.

On the other hand, you might also have deep uncertainty/complex cluelessness about which has higher expected value, anyway.

if you have extreme uncertainty about how much moral value there is in the universe (which we probably do), then paradoxically the “gamble” is actually better “across the board” than the supposed “guarantee”

Doesn't this depend more on the particulars? I.e. if it's "sufficiently" Pascalian, neither dominates the other, as you've written in the post and your reply to Rohin, and each is permissible.
