This response, however, has a number of problems. One problem is that the threshold seems arbitrary, regardless of where it is set. A critic could always ask: "Why set the threshold at that value, rather than, say, one order of magnitude higher or lower?" A more fundamental problem is that whether a scenario falls below or above a given threshold is contingent on how
one carves up the space of possibilities. For example, an existential risk of 1-in-100 per century can be redescribed as an existential risk of 1-in-5.2 billion per minute. If the threshold is set to a value between those two numbers, whether one should or should not ignore the risk will depend merely on how one decides to describe it.
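The redescription above is simple arithmetic, which can be checked directly (this sketch uses plain division; the exact per-minute probability under independence differs only negligibly at these scales):

```python
# Redescribe an existential risk of 1-in-100 per century as a per-minute risk.
minutes_per_century = 100 * 365.25 * 24 * 60   # 52,596,000 minutes
risk_per_century = 1 / 100
risk_per_minute = risk_per_century / minutes_per_century
print(f"1 in {1 / risk_per_minute:,.0f} per minute")  # roughly 1 in 5.26 billion
```

The same risk, carved into minutes rather than centuries, lands on the other side of any threshold set between the two figures.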
Another response is to adopt a prior that penalizes hypotheses in proportion to the number of people they imply we can affect. That is, one could adopt a view
on which there is roughly a 1 in 10^n chance that someone will have the power to affect 10^n people. Given this penalty, the mugger can no longer resort to the trick of increasing the number of people they threaten to kill in order to make the offer sufficiently attractive. As the number of people threatened increases, the probability that the mugger can in fact harm them decreases correspondingly, so the expected value of the mugger's successive proposals remains the same.
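The constancy of the expected value under this penalized prior can be shown in a few lines (the numbers are purely illustrative; exact rational arithmetic is used to avoid floating-point noise):

```python
from fractions import Fraction

# Under the penalized prior, the chance that someone can affect 10^n people
# is roughly 1 in 10^n, so the expected number of people affected is constant
# no matter how high the mugger escalates n.
for n in (3, 6, 9, 12):
    people = 10 ** n
    prior = Fraction(1, 10 ** n)       # penalty proportional to the stakes claimed
    print(n, prior * people)           # expected people affected: always 1
```

Escalating the threat multiplies the stakes by exactly the factor by which the prior divides the probability, so the product never moves.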
Regardless of how one responds to Pascal's mugging, it is important to note that it does not appear to affect the value assigned to "high-stakes" causes or interventions prioritized within the effective altruism community, such as AI safety research or other forms of existential risk mitigation. The case for working on these causes is not fundamentally different from more mundane arguments that are not normally regarded as instances of Pascal's mugging, such as the case for voting in an election, where a small probability of being decisive is multiplied by a large payoff.