**Pascal's mugging** is a thought experiment intended to raise a problem for expected value theory. Unlike Pascal's wager, Pascal's mugging does not involve infinite utilities or probabilities, so the problem it raises is separate from any of the known paradoxes of infinity.

The thought experiment and its name first appeared in a blog post by Eliezer Yudkowsky.^{[1]} Nick Bostrom later elaborated it in the form of a fictional dialogue.^{[2]}

In Yudkowsky's original formulation, a person is approached by a mugger who threatens to kill an astronomical number of people unless the person agrees to give them five dollars. Even a tiny probability assigned to the hypothesis that the mugger will deliver on his promise seems sufficient to make the prospect of giving the mugger five dollars better than the alternative, in expectation. The minuscule chance that the mugger is willing and able to save astronomically many people is more than compensated by the enormous value of what is at stake. (If one thinks the probability too low, the number of lives the mugger threatens to kill could be arbitrarily increased.) The thought experiment supposedly raises a problem for expected value theory because it seems intuitively absurd that we should give money to the mugger, yet this is what the theory apparently implies.
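The expected-value arithmetic driving the thought experiment can be sketched with a toy calculation. All numbers below are purely illustrative assumptions, not figures from the original formulation:

```python
# Toy expected-value comparison for Pascal's mugging.
# The probability and stakes are made-up illustrative values.
p_mugger_honest = 1e-20      # tiny credence that the mugger can deliver
lives_threatened = 10**30    # astronomically many lives
value_per_life = 1.0         # utility per life saved
cost_of_paying = 5.0         # the five dollars, treated as utility here

ev_pay = p_mugger_honest * lives_threatened * value_per_life - cost_of_paying
ev_refuse = 0.0

print(ev_pay > ev_refuse)  # True: paying dominates in expectation
# If the listener lowers p_mugger_honest, the mugger simply raises
# lives_threatened until the product dominates again.
```

However small `p_mugger_honest` is set, some value of `lives_threatened` makes `ev_pay` positive, which is exactly the mugger's trick.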

A variety of responses have been developed. One common response is to revise or reject expected value theory. A frequent revision is to ignore scenarios whose probability is below a certain threshold.

This response, however, has a number of problems. One problem is that the threshold seems arbitrary, regardless of where it is set. A critic could always ask: "Why set the threshold at that value, rather than, say, one order of magnitude higher or lower?" A more fundamental problem is that whether a scenario falls below or above a given threshold is contingent on how the space of possibilities is carved up. For example, an existential risk of 1-in-100 per century can be redescribed as an existential risk of 1-in-5.2 billion per minute. If the threshold is set to a value between those two numbers, whether one should or should not ignore the risk will depend merely on how one describes it.
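The redescription arithmetic in the example above can be checked directly, assuming the risk is spread uniformly across the century's minutes:

```python
# Checking the redescription used in the text: a 1-in-100 existential risk
# per century, spread uniformly across the century's minutes.
minutes_per_century = 100 * 365.25 * 24 * 60   # = 52,596,000 minutes

risk_per_century = 1 / 100
risk_per_minute = risk_per_century / minutes_per_century

# 1 / risk_per_minute is about 5.26e9, i.e. roughly the "1-in-5.2 billion
# per minute" figure quoted in the text.
print(f"1 in {1 / risk_per_minute:.2e}")
```

The same risk, under two equally accurate descriptions, falls on opposite sides of any threshold set between roughly 10^-2 and 10^-10 per decision period.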

Another response is to adopt a prior that penalizes hypotheses in proportion to the number of people they imply we can affect. That is, one could adopt a view in which there is roughly a 1 in 10^{n} chance that someone will have the power to affect 10^{n} people. Given this penalty, the mugger can no longer resort to the expedient of increasing the number of people they threaten to kill in order to make the offer sufficiently attractive. As the number of people increases, the probability that they will be killed by the mugger decreases commensurately, and the expected value of their successive proposals remains the same.

Regardless of how one responds to Pascal's mugging, it does not appear to affect the value assigned to "high-stakes" causes or interventions prioritized within the effective altruism community, such as AI safety research or other forms of existential risk mitigation. The case for working on these causes is not fundamentally different from more mundane arguments which do not plausibly fall under the scope of Pascal's mugging, such as voting in an election.^{[3]}^{[4]}
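The penalized prior described above can be sketched numerically. Exact rational arithmetic is used because probabilities like 10^-100 underflow ordinary floating point:

```python
# Sketch of the leverage-penalty prior: credence that an agent can affect
# 10**n people is about 10**-n, so the expected number of lives at stake
# does not grow as the mugger inflates the threat.
from fractions import Fraction

def expected_lives(n):
    """Expected lives saved when the mugger threatens 10**n people,
    under the penalized prior P(can affect 10**n people) = 10**-n."""
    p = Fraction(1, 10**n)   # exact, avoids float underflow for large n
    lives = 10**n
    return int(p * lives)

print([expected_lives(n) for n in (1, 10, 100)])  # [1, 1, 1]
```

However many zeros the mugger appends to the threat, the expected value of the offer stays flat, so the five dollars never becomes worth paying.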

**Related entries:** alternatives to expected value theory | altruistic wager | decision theory | fanaticism | risk aversion

LessWrong (2020) Pascal’s mugging, *LessWrong Wiki*, August 3 (updated 23 September 2020).

