
Summary: We routinely act to prevent, mitigate, or insure against risks with P = 'one in a million'. Risks at least as probable as this should not prompt concerns about 'Pascal's mugging' and the like.

Motivation

Reckless appeals to astronomical stakes often prompt worries about Pascal's mugging or similar. Sure, a 10^-20 chance of 10^40 has the same expected value as 10^20 with P = 1, but treating them as equivalent when making decisions is counter-intuitive. Thus one can (perhaps should) be wary of lines which amount to "The scale of the longterm future is so vast we can basically ignore the probability - so long as it is greater than 10^-lots - to see that x-risk reduction is the greatest priority."

Most folks who work on (e.g.) AI safety do not think the risks they are trying to reduce are extremely (or astronomically) remote. Pascalian worries are unlikely to apply to attempts to reduce a baseline risk of 1/10 or 1/100. They are also unlikely to apply if the risk is a few orders of magnitude lower (or a few orders of magnitude less tractable to reduce) than some suppose.

Despite this, I sometimes hear remarks along the lines of "I only think this risk is 1/1000 (or 1/10 000, or even 'a bit less than 1%'), so me working on this is me falling for Pascal's wager." This is mistaken: an orders-of-magnitude lower risk (or likelihood of success) makes something, all else equal, orders of magnitude less promising, but it does not mean it can be dismissed out of hand.

Exactly where the boundary should be drawn for pascalian probabilities is up for grabs (10^-10 seems reasonably pascalian, 10^-2 definitely not). I suggest a very conservative threshold of '1 in a million': human activity in general (and our own in particular) is routinely directed at reducing, mitigating, or insuring against risks between 1/1000 and 1/1 000 000, and we typically consider these activities 'reasonable prudence' rather than 'getting mugged by mere possibility'.

Illustrations

Among many other things:

Aviation and other 'safety critical' activities

One thing which can go wrong when flying an airliner is an engine stopping working. Besides all the engineering and maintenance to make engines reliable, airlines take many measures to mitigate this risk:

  • Airliners have more than one engine, and are designed and operated so that they are able to fly and land at a nearby airport 'on the other engine' should one fail at any point in the flight.
  • Pilots practice in initial and refresher simulator training how to respond to emergencies like an engine failure (apparently engine failure just after take-off is the riskiest).
  • Pilots also make a plan before each flight for what to do 'just in case' an engine fails whilst they are taking off.

This risk is very remote: the rate of (jet) engine failure is something like 1 per 400 000 flight hours. So for a typical flight, maybe the risk is something like 10^-4 to 10^-5. The risk of an engine failure resulting in a fatal crash is even more remote: the most recent examples I could find happened in the 90s. Given the millions of airline flights a year, '1 in a million flights' is a comfortable upper bound.
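
As a minimal BOTEC sketch of the per-flight figure above (the failure rate is the rough figure quoted; the flight length is an assumption of mine):

```python
# Rough per-flight engine failure risk, using the figure quoted above.
# The flight duration is an assumed illustrative value, not a statistic.
failure_rate_per_flight_hour = 1 / 400_000   # ~1 engine failure per 400,000 flight hours
typical_flight_hours = 2                     # assumed short/medium-haul flight

p_failure_this_flight = failure_rate_per_flight_hour * typical_flight_hours
print(f"P(engine failure on a typical flight) ~ {p_failure_this_flight:.0e}")  # ~5e-06
```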

Similarly, the individual risk-reduction measures mentioned above are unlikely to be averting that many micro(/nano?) crashes. A pilot who (somehow) manages to skive off their recurrency training or skip the pre-flight briefing may still muddle through if the risk they failed to prepare for materialises. I suspect most consider the diligent practice by pilots for events they are unlikely to ever see in their career admirable rather than getting suckered by Pascal's mugging.

Aviation is the poster child of safety engineering, but it is not unique. Civil engineering disasters (think building collapses) share similar properties with aviation ones: the absolute rate is very low (plausibly of the order of '1 per million structure-years' or lower); this low risk did not happen by magic, but rather through concerted efforts across design, manufacture, maintenance, and operation; by design, a failure in one element should not lead to disaster (cf. the famous 'swiss cheese model'); this also means the marginal effect of each contribution to avoiding disaster is very small. A sloppy inspection or shoddy design does not guarantee disaster, yet thorough engineers and inspectors are lauded rather than ridiculed. The same points can be made across most safety and many security fields.

Voting (and similar collective actions)

There's a well-worn discussion about the rationality of voting, given the remote likelihood that a marginal vote is decisive. A line popular in EA-land is to consider voting as rational charity: although the expected value to you of your vote being decisive in getting the better party into government might only be cents, if one is voting for the party one believes best improves overall welfare across the electorate, this expected value (roughly) gets multiplied by the population, and so climbs into the thousands of dollars.

Yet this would not be rational if in fact individuals should dismiss motivating reasons based on 1/1 000 000 probabilities as 'Pascal's mugging'. Some initial work making this case suggested the key probability of being the decisive vote was k/n (where n is the number of voters in the election), with k between 1 and 10 depending on how close the election was expected to be. So, if sub-1/1 000 000 probabilities could be dismissed, voting in larger elections (>10M voters) could not be justified by this rationale.
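
As a minimal sketch of that 'rational charity' arithmetic (every figure below is an illustrative assumption, not a number from the cited work):

```python
# Illustrative 'voting as rational charity' arithmetic; all inputs are assumptions.
k = 5                         # closeness factor, between 1 and 10 per the work cited above
n_voters = 1_000_000          # voters in the election
p_decisive = k / n_voters     # chance a single vote decides the result

value_per_person = 1_000      # assumed $ benefit per resident of the better party winning
population = 1_500_000        # residents affected by the outcome

ev_to_self = p_decisive * value_per_person                   # a fraction of a dollar
ev_as_charity = p_decisive * value_per_person * population   # thousands of dollars
print(f"P(decisive) = {p_decisive:.0e}, EV to self = ${ev_to_self:.3f}, EV as charity = ${ev_as_charity:,.0f}")
```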

The same style of problem applies to other collective activity: it is similarly unlikely you will provide the decisive signature for a petition to be heeded, the decisive attendance for a protest to gain traction, or the decisive act of veganism which removes a granule of animal product production. It is doubtful these activities are irrational in virtue of Pascal's mugging.

Asteroid defence

Perhaps the OG x-risk reducers are those who work on planetary defence: looking for asteroids which could hit Earth, and planning how, if one were headed our way, a collision could be avoided. These efforts have been ongoing for decades, and have included steady improvements in observation and detection, and more recently tests of particular collision-avoidance methods.

The track record of asteroid impacts (especially the most devastating) indicates this risk is minute, and it is lower still when conditioned on diligent efforts to track near-earth objects finding that none of the big ones is on a collision course. Once again, the rate of a 'planet killer' collision is somewhere around 10^-6 to 10^-7 per century. Also once again, I think most are glad this risk is being addressed rather than ignored.

Conclusion

Three minor points, then one major one.

One is that Pascal's mugging looking wrong at 1 in a million does not make the worry generally misguided; even if 'standard rules apply' at 10^-6, maybe something different is called for at (say) 10^-60. I only argue pascalian worries are inapposite at or above the 10^-6 level.

Two is that you can play with the interval or aggregation to multiply the risk up or down. Even if 'not looking both ways before you cross the road' once incurs only a minute risk of serious injury (as a conservative BOTEC: ~2000 injuries/year in the UK, ~70M population, crossing a road once a day on average, RR = 100 for not looking both ways ≈ 8 per million per event?), following this as a policy across one's lifetime increases the risk by a few orders of magnitude - and everyone following this policy would significantly increase the number of pedestrians struck and killed by cars each year.
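
A minimal sketch of this BOTEC (using the rough figures above, plus an assumed 80 years of daily crossings):

```python
# Reproducing the road-crossing BOTEC above; inputs are the rough figures quoted,
# plus an assumed lifetime of 80 years of one crossing per day.
injuries_per_year = 2_000          # serious pedestrian injuries per year in the UK (rough)
population = 70_000_000
relative_risk_not_looking = 100    # assumed relative risk when not looking both ways

crossings_per_year = population * 365                                # one crossing per person per day
risk_per_crossing_looking = injuries_per_year / crossings_per_year   # ~8e-8 (treating the average as the 'looking' rate)
risk_per_crossing_not_looking = relative_risk_not_looking * risk_per_crossing_looking  # ~8e-6 per event

lifetime_crossings = 365 * 80
lifetime_risk = 1 - (1 - risk_per_crossing_not_looking) ** lifetime_crossings
print(f"per event: {risk_per_crossing_not_looking:.0e}, lifetime policy: {lifetime_risk:.0%}")  # ~8e-06, ~20%
```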

This highlights a common challenge for naive (and some not-so-naive) efforts to 'discount low probabilities': we can slice up some composite risk-reduction measures such that individually every one should be rejected ("the risk of getting hit by a car but-for looking both ways as you cross the road is minuscule enough to be the subject of Pascal's mugging"), yet we endorse the efforts as a whole given their aggregate impact. Perhaps the main upshot is that a 1/1 000 000 risk as the 'don't worry about Pascal's mugging' threshold was too conservative - maybe it should be more like 'at least 1/1 000 000 on at least one reasonable way to slice it'.

Three is that we might dispute how reasonable the above illustrations are. Maybe we err towards risk-intolerance or over-insure ourselves; maybe the cost savings of slightly laxer airline safety would be worth an aviation disaster or two a year; maybe the optimal level of political engagement should be lower than the status quo; maybe asteroid impacts are so remote that planetary defence practitioners should spend their laudable endeavour elsewhere. Yet even if these activities are unreasonable, they are unreasonable because the numbers do not add up (/multiply together) per orthodox expected utility theory, and not because they should have been ruled out in principle by a 'Pascal's mugging'-style objection.

Finally, returning to x-risk. The examples above were also chosen to illustrate a different 'vibe' that could apply to x-risk besides 'impending disaster and heroic drama'. Safety engineering is non-heroic by design: a saviour snatching affairs from the jaws of disaster indicates an intolerable single point of failure. Rather, success is a team effort which is resilient to an individual's mistake, and their excellence only slightly notches down the risk even further. Yet this work remains both laudable and worthwhile: a career spent investigating not-so-near misses to tease out human factors to make them even more distant has much to celebrate, even if not much of a highlight reel.

'Existential safety' could be something similar. Risks of AI, nukes, pandemics etc. should be at least as remote as those of a building collapsing, a plane crashing, or a nuclear power plant melting down. Hopefully these risks are similarly remote, and hopefully one's contribution amounts to a slight incremental reduction. Only the vainglorious would wish otherwise.

Not all hopes are expectations, and facts don't care about how well we vibe with them. Most people working on x-risk (including myself) think the risks they work on are much more likely than an airliner crashing. Yet although the scale of the future may be inadequate stakes for pascalian gambles, its enormity is sufficient to justify acting on most non-pascalian probabilities. Asteroid impacts, although extremely remote, still warrant some attention. If it transpired that all other risks were similarly unlikely, I'd still work on mine.

Whether x-risk reduction is the best thing one can do if the risks are more 'bridge collapse' than 'Russian roulette' turns on questions like how one should price the value of the future, and how it stacks up against contributions to other causes. If you like multiplying things at approximate face value as much as I do, the answer plausibly still remains 'yes'. But if 'no', Pascal's mugging should not be the reason why.

Comments

Great post!  I like your '1 in a million' threshold as a heuristic, or perhaps sufficient condition for being non-Pascalian.  But I think that arbitrarily lower probabilities could also be non-Pascalian, so long as they are sufficiently "objective" or robustly grounded.

Quick argument for this conclusion: just imagine scaling up the voting example.  It seems worth voting in any election that significantly affects N people, where your chance of making a (positive) difference is inversely proportional (say within an order of magnitude of 1/N, or better).  So long as scale and probability remain approximately inversely proportional, it doesn't seem to make a difference to the choice-worthiness of voting what the precise value of N is here.

Crucially, there are well-understood mechanisms and models that ground these probability assignments.  We're not just making numbers up, or offering a purely subjective credence.  Asteroid impacts seem similar.  We might have robust statistical models, based on extensive astronomical observation, that allow us to assign a 1/trillion chance of averting extinction through some new asteroid-tracking program, in which case it seems to me that we should clearly take those expected value calculations at face value and act accordingly.  However tiny the probabilities may be, if they are well-grounded, they're not "Pascalian".

Pascalian probabilities are instead (I propose) ones that lack robust epistemic support.  They're more or less made up, and could easily be "off" by many, many orders of magnitude.  Per Holden Karnofsky's argument in 'Why we can't take explicit expected value estimates literally', Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.

I like the previous paragraph as a quick solution to "Pascal's mugging".  But even if you don't think it works, I think this distinction between robust vs non-robustly grounded probability estimates may serve to distinguish intuitively non-Pascalian vs Pascalian tiny-probability gambles.

Conclusion: small probabilities are not Pascalian if they are either (i) not ridiculously tiny, or (ii) robustly grounded in evidence.

I agree with this.

People generally find it difficult to judge the size of these kinds of small probabilities that lack robust epistemic support. That means they could be susceptible to conmen telling them stories of potential events which, though unlikely (according to the listener's estimate), have a substantial expected value due to huge payoffs were they to occur (akin to Pascal's mugging). It may be that people have developed defence mechanisms against this, and reject claims of large expected value involving non-robust probabilities to avoid extortion. I once had plans to study this psychological hypothesis empirically, but abandoned them.

Thanks for this, Richard.

As you (and other commenters) note, another aspect of Pascalian probabilities is their subjectivity/ambiguity. Even if you can't (accurately) answer "what is the probability I get hit by a car if I run across this road now?", you have "numbers you can stand somewhat near" to gauge the risk - or at least 'this has happened before' case studies (cf. asteroids). Although you can motivate more longtermist issues via similar means (e.g. "Well, we've seen pandemics at least this bad before", "What's the chance folks raising grave concern about an emerging technology prove to be right?"), you typically have less to go on and are reaching further from it.

I think we share similar intuitions: this is a reasonable consideration, but it seems better to account for it quantitatively (e.g. with a sceptical prior or discount for 'distance from solid epistemic ground') rather than a qualitative heuristic. E.g. it seems reasonable to discount AI risk estimates (potentially by orders of magnitude) if it all seems very outlandish to you - but then you should treat these 'all things considered' estimates at face value.

 

The problem (of worrying that you're being silly and getting mugged) doesn't arise merely when probabilities are tiny; it arises when probabilities are tiny and you're highly uncertain. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that "spending the next year of my life on AI Safety research" will prevent x-risk.

In the former cases, we have base rates and many trials. In the latter case, I'm just doing a very rough Fermi estimate. Say I have 5 parameters with an order of magnitude of uncertainty on each one, which, when multiplied out, is just really horrendous.
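
A minimal sketch of how that compounding plays out, assuming (purely for illustration) each parameter is lognormally distributed with its central 90% spanning roughly one order of magnitude:

```python
import numpy as np

# Hypothetical illustration: five multiplied parameters, each uncertain by roughly
# an order of magnitude, yield a product whose plausible range is far wider.
rng = np.random.default_rng(0)
n_params, n_samples = 5, 100_000

# sigma chosen so each parameter's 5th-95th percentile ratio is ~10x
sigma = np.log(10) / (2 * 1.645)
samples = rng.lognormal(mean=0.0, sigma=sigma, size=(n_samples, n_params))
product = samples.prod(axis=1)

lo, hi = np.percentile(product, [5, 95])
print(f"90% interval for the product: {lo:.2f} to {hi:.2f}")  # spans over two orders of magnitude
```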

Anyway, I mostly agree with what you're saying, but it's possible that you're somewhat misunderstanding where the anxieties you're responding to are coming from.


 

Thank you for writing this.

My preferred resolution of Pascal's Mugging is not to set an arbitrary threshold, such as 1-in-1,000,000 (although I agree that probabilities above 1-in-1,000,000 likely don't count as Pascal's Muggings).

I prefer the Reversal/Inconsistency Test:

If the logic seems to lead to taking action X, and seems to equally validly lead to taking an action inconsistent with X, then I treat it as a Pascal's Mugging.

Examples:

  • Original Pascal's Mugging:
    • The original Pascal's Mugging suggests you should give the mugger your 10 livres in the hope that you get the promised 10 quadrillion Utils.
    • The test: It seems equally valid that there's an "anti-mugger" out there who is thinking "if Pascal refuses to give the mugger the 10 livres, then I will grant him 100 quadrillion Utils". There is no reason to privilege the mugger who is talking to you, and ignore the anti-mugger whom you can't see.
    • Conclusion: fails the Reversal/Inconsistency Test, so treat as a Pascal's Mugging and ignore.
  • Extremely unlikely s-risk example:
    • I claim that the fart goblins of Smiggledorf will appear on the winter solstice of the year 2027, and magically keep everyone alive for 1 googolplex years, but subject them to constant suffering by having to smell the worst farts you've ever imagined. The smells are so bad that the suffering each person experiences in one minute is equivalent to 1 million lifetimes of suffering.
    • The only way to avoid this horrific outcome is to earn as much money as you can, and donate 90% of your income to a very nice guy with the EA Forum username "sanjay".
    • The test: Is there any reason to believe that donating all this money will make the fart goblins less likely to appear, as opposed to more?
    • Conclusion: fails the Reversal/Inconsistency Test, so treat as a Pascal's Mugging and ignore.
  • Extremely likely x-risk example: 
    • In the distant land of Utopi-doogle, everyone has a wonderful, beautiful life, except for one lady called Cassie who runs around anxiously making predictions. Her first prediction is incredibly specific and falsifiable, and turns out to be correct. Same for her second, and her third, and after 100 highly specific, falsifiable and incredibly varied predictions, with a 100% success rate, she then predicts that Utopi-doogle will likely explode, killing everyone.
    • The only way to save Utopi-doogle is for every able-bodied adult to stamp their foot while saying Abracadabra. Unfortunately, you have to get the correct foot -- if some people are stamping their right foot and some are stamping their left foot, it won't work. If everyone is stamping their left foot, this will either mean that Utopi-doogle is saved, or that Utopi-doogle will be instantly destroyed.
    • A politician sets up a Left Foot movement arguing that we should try to save Utopi-doogle by arranging a simultaneous left foot stamp.
    • The test: The simultaneous left foot stamp has equal chance of causing doom as of saving civilisation.
    • Conclusion: fails the Reversal/Inconsistency Test, so treat the politician's suggestion as a Pascal's Mugging and ignore. 
    • Note, interestingly, that other actions -- such as further research -- are not necessarily a Pascal's Mugging. (Could we ask Cassie about simultaneous stamping of the right foot?)
  • How some people perceive AI safety risk:
    • Let's assume that, despite recent impressive successes by AI capabilities researchers, human-level AGI has a low (10^-12) chance of happening in the next 200 years
    • Let's also concede that, if such AGI arose, humanity would have a <50% chance of survival unless we had solved alignment.
    • Let's continue being charitable to the importance of AI safety and assume that in just over 200 years, humanity will reach a state of utopia which lasts for millennia, as long as we haven't wiped ourselves out before then, which means that extinction in the next 200 years would mean 10^20 lives lost
    • The raw maths seems to suggest that work on AI safety is high impact.
    • The test: If we really are that far from AGI, can any work we do really help? Are we sure that any AI safety research we do now will actually make safe AI more likely and not less likely? There are a myriad of ways we could make things worse, e.g. we could inadvertently further capabilities research; the research field could be path-dependent, and our early mistakes could damage the field more than just leaving it be until we understand it better; we might realise that we need to include some ethical thinking, but incorporate the ethics of 2022 and later realise the ethics of 2022 was flawed; etc.
    • Conclusion: fails the Reversal/Inconsistency Test, so treat as a Pascal's Mugging and ignore.
    • Note that in this scenario, it is true that the AGI scenario is highly unlikely, but the important thing is not that it's unlikely, it's that it's unactionable.

Glad to see someone already wrote out some of my thoughts. To just tag on, some of my key bullet points for understanding Pascalian wager problems are:

• You can have offsetting uncertainties and consequences (as you mention), and thus you should fight expected value fire with EV fire.

• Anti-Pascalian heuristics are not meant to directly maximize the accuracy of your beliefs, but rather to improve the effectiveness of your overall decision-making in light of constraints on your time/cognitive resources. If we had infinite time to evaluate everything--even possibilities that seem like red herrings--it would probably usually be optimal to do so, but we don't have infinite time so we have to make decisions as to what to spend our time analyzing and what to accept as "best-guesstimates" for particularly fuzzy questions. Thus, you can “fight EV fire with EV fire” at the level of “should I even continue entertaining this idea?”

• Very low probabilities (risk estimates) tend to be associated with greater uncertainty, especially when the estimates aren’t based on clear empirical data. As a result, really low probability estimates like “1/100,000,000” tend to be more fragile to further analysis, which crucially plays into the next bullet point.

• Sometimes the problem with Pascalian situations (especially in some high school policy debate rounds I’ve seen) is that someone fails to update based on the velocity/acceleration of their past updates: suppose one person presents an argument saying “this very high impact outcome is 1% likely.” The other person spends a minute arguing that it’s not 1% likely, and it actually only seems to be 0.1% likely. They spend another minute disputing it and it then seems to be only 0.01% likely. They then say “I have 5 other similar-quality arguments I could give, but I don’t have time.” The person that originally presented the argument could then say “Ha! I can’t dispute their arguments, but even if it’s 0.01% likely, the expected value of this outcome still is large” … the other person gives a random one of their 5 arguments and drops the likelihood by another order of magnitude, etc. The point being, given the constraints on information flow/processing speed and available time in discourse, one should occasionally take into account how fast they are updating and infer the “actual probability estimate I would probably settle on if we had a substantially greater amount of time to explore this.” (Then fight EV fire with EV fire)

This says "200 hundred". Do you mean 200 or 20,000?

Thanks, I've edited it to be 200 rather than 200 hundred

I agree, and though it doesn't matter from an expected value point of view, I suspect part of what people object to in those risks is not just the probabilities being low but also there being lots of uncertainty around them.

Or actually, it could change the expected value calculation too if the probabilities aren't normally distributed, e.g. one could look at an x-risk and judge most of the probability density to be around 0.001% but feel pretty confident that it's not more than 0.01% and not at all confident that it's not below 0.0001% or even 0.00001% etc. This makes it different from your examples, which probably have relatively narrow and normally distributed probabilities (because we have well-grounded base rates for airline accidents and voting and -- I believe -- robust scientific models of asteroid risks).

Edit: I see that Richard Y Chappell made this point already.

Hi Greg! I basically agree with all of this. But one natural worry about (e.g.) x-risk reduction is not that the undesirable event itself has negligible/Pascalian probability, but rather that the probability of making a difference with respect to that event is negligible/Pascalian. So I don't think it fully settles things to observe that the risks people are working on are sufficiently probable to worry about, if one doesn't think there's any way to sufficiently reduce that probability. (For what it's worth, I myself don't think it's reasonable to ignore the tiny reductions in probability that are available to us—I just think this slightly different worry may be more plausible than the one you are explicitly addressing.) 

Low probabilities don't seem like the appropriate crux. They can be generated by e.g. considering tiny time intervals. The issue seems more like "reality is underpowered" or "we don't have well-tested or believable models". For airplanes we've flown the huge numbers of miles and crashed the planes needed to find the most common failure modes, and developed models/checklists/practices to mitigate them. We intentionally simulate the low probability events to maximize our chances of surviving them. For asteroids and voting we have statistical models that give us reasonable confidence in the chances of impact.

The challenge is how do we model existential risks without incurring the risk itself? We can't crash humanity thousands of times to figure out how to not do it as often. In some cases (e.g. pandemics) we may have a small sample to look at to find the more frequent low-impact events to build a swiss cheese model from (pandemic causes, reactions to pandemics), but in others (AI, unknown risks) modeling without any data seems very hard.

I agree the basic version of this objection doesn't work, but my understanding is there's a more sophisticated version here: 

https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/

Where he talks about how the case for an individual being longtermist rests on a tiny probability of shifting the entire future.

I think the response to this might be that if we aggregate together the longtermist community, then collectively it's no longer pascalian. But this feels a bit arbitrary.

Anyway, I partly wanted to post this paper here for further reading, and am partly interested in responses.

I think the impulse to call AGI safety a Pascal's Mugging does not stem from extremely low probabilities. In fact, I don't think extremely low probabilities are necessary or sufficient for a Pascal's Mugging.

Instead, I think Pascal's Mugging is about epistemic helplessness in evaluating reasonably low priors. Even if I have no hope of evaluating the mugger's claim, at least until it's too late, I'm mathematically prohibited  from assigning his promises a probability of zero. This bug lets the mugger increase the size of his promises or threats until I give in.

AGI safety in particular suffers a similar problem. How is the layperson to evaluate the arguments of AGI safety researchers?

  • There's no consensus in the field of AI that AGI poses a real risk.
  • Laypeople can't understand the math.
  • AGI safety researchers tend to "keep the problems" while "rejecting the solutions" for analogous problems in the human world.
  • What evidence exists resembles the ordinary weirdness of buggy software, and has typically been correctable by a patch.
  • Even if they design and implement an AGI safety program to avert FOOM, humanity will never get direct evidence that it's working, because it will prevent them from ever experiencing the problem in the first place.

I personally think AGI safety is a realistic concern, and I support the research program continuing. But I also think it does intrinsically suffer from being extremely hard for laypeople to evaluate. This, to me, makes it fit what I think is people's true objection when they complain about Pascal's Mugging. They have to start with their prior on whether or not AGI safety researchers are bullshitting them, frankly, and "inevaluable" claims are a classic bullshitter's tool. I guess AGI safety researchers are Pascalian saints?

There's no consensus in the field of AI that AGI poses a real risk.

I'm not sure what the threshold is for consensus but a new survey of ML researchers finds:

"Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016. ...

The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. ... Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome. Though another 25% put it at 0%."

Thank you for bringing the data!

I'm a little skeptical about this survey due to its 17% response rate. I also worry about conflict of interest: AI Impacts is led by people associated with the rationalist community, and the rationalist community has its inception in trying to figure out ways to convince people of the threat of AGI.

However, I think it's great that these surveys are being created and support further efforts to make the state of expert opinion on this subject more legible.

Shouldn't that last sentence be 'Pascal's mugging should not be the reason why' ?

Yes, corrected. Thanks!

I think roughly the most reasonable approach for an agent who wishes to ignore small probabilities is to ignore probability differences adding up to at most some specified threshold over the sequence of all of their own future actions and the entire sequence of outcomes in the future.* We can make some finer-grained distinctions on such an account, and commonsense personal prudence seems to have a much higher probability of making a difference than x-risk work, so defining a threshold based on the lifetime probabilities of commonsense personal prudence could still exclude x-risk work as "Pascalian".

  1. While we take some precautions in our daily lives to avoid low probability risks, these probabilities are often not tiny over the course of our lives.
    1. The risk of ever dying in a car crash is about 1% over the course of your life and consistently wearing a seatbelt seems to have made a decent dent in this on average. Someone may forego a seatbelt rarely, but they should wear it almost all of the time when available, unless they ignore probabilities below at least ~0.1% (and even then, there are other personal risks to include besides car crashes, so the threshold may need to be even higher) or they prefer the comfort of not wearing a seatbelt to the reduction in risk of injury or death.
    2. Based on what you wrote, I think looking both ways before crossing the road would reduce the risk of lifetime injury by at least around 1 in 1000, i.e. 0.1%, similar to seatbelts.
    3. A good share of people will have some kind of health condition or medical emergency at some point in their lives and I'd guess most people would benefit from care near the end of their lives, so health insurance might be justified even if we ignore up to 10% of probability differences. That being said, for routine health expenses like regular checkups and dental visits, just saving what you would spend on insurance and paying out of pocket could be more efficient.
  2. Someone allocating funding or deciding policy (or with a significant influence over these decisions) for aviation safety and other safety critical activities may be in a position similar to 1 relative to others' risks, because of (semi-)independent trials over multiple separate possible events across many people, although replaceability could sometimes cut against this significantly. Boeing and Airbus have had multiple accidents with fatalities since 2000, although these may have been human error that manufacturers couldn't be reasonably expected to address. I'd guess increased security after the September 11 attacks probably actually saved lives, and some individuals may have been in a special position to influence this. I don't see multiple crashes from the same commercial airline since 2000, though, so someone working for an airline probably wouldn't make a counterfactual difference through higher standards than the next person who would have been in their position, and it's hard to estimate the baseline risk in such circumstances.
    1. That being said,  people are probably irrationally afraid of plane crashes, lawsuits for avoidable crashes may be very expensive and reputation-tarring, and people may feel comforted by higher standards, so aviation safety may pay for itself for commercial flights and be in the financial interest of shareholders and executives.
  3. Extinction is (basically) a single event that eliminates future extinction risks, so (semi-)independent trials don't count in its favour the same way as in 2,** although someone might believe extinction risk is high enough to make up for it.
  4. I think those actually working on small probability or unrepeatable risks (like extinction**) more directly would usually be much less likely to make a difference than those allocating funding or deciding policy, but financial incentives are often enough to get them to work on these problems without altruistic motivations (and/or they're badly mistaken about their probability of making a difference), so other people will work on it even if you don't think you should yourself. A first-order approximate upper bound of an individual's probability of impact is the combined probability of impact from all those working on the problem divided by the number of individuals working on it (or better, dividing the individual's future work-hours by the total future work-hours), and possibly much lower if they're highly replaceable or there are quickly diminishing marginal returns. I expect the probability to almost always fall below 1/1000 and often fall below 1 in a million, but it'll depend on the particular problem. It's plausible to me that the average individual working directly on AI safety has a better chance than 1 in a million of averting extinction because of how few people have been working on these risks, but I'm not sure, and the growth of resources and people working on the problem may mean a much lower probability. Biosecurity may be similar, but I'm much less informed. Both AI safety and biosecurity could have much better chances of averting human deaths than averting extinction, but then other considerations could dominate, e.g. farmed and wild animal welfare.
  5. Voting in federal elections and diet change (unless you eat locally produced products?) are probably not supported by their "direct" impacts** for the average individual over the course of their lives for a threshold around 1 in a million, although there may be other reasons to engage in these behaviours.

* Even if you're skeptical of the persistence of personal identity or its moral relevance, it's still worth considering how your commitment to ignore some low enough probability differences will affect how much your future selves' will ignore.

** However, acausal influence in a multiverse may increase the probability of making a difference significantly with semi-independent trials (conditional on some baseline factors like the local risk of extinction and difficulty of AI safety), even possibly making them more likely than not have a large impact. I'm mostly thinking about correlated decisions across spatially and acausally separated agents in a universe that's spatially unbounded/infinitely large. There's also the many-worlds interpretation of quantum mechanics.

The demand of the original Pascal's wager is (depending on the religion) devoting a significant share of your life to religion and forcing yourself to believe something that you believe is false. This may be a big sacrifice even if you thought the probability of the religion in question being correct were 1% (and there were no competing religions or infinities to consider). I would feel aversion to giving in to the wager at such probabilities and it still feels Pascalian to me. Those devoting their careers (and donations, if any) primarily to extinction risk reduction and perhaps existential risk reduction generally* are committing most of their altruistic efforts to making a difference to the outcome they're primarily targeting with only very small probability**. Part of me would like to probably make things much better with my career over my life (and probably not make them much worse).

Someone can have normative uncertainty (decision-theoretic uncertainty) about how much probability they can ignore over the course of their entire life, and this could span a wide range, even approaching or passing 50%. Under some approaches to decision-making under normative uncertainty, this might recommend devoting some resources to small risks, but not devoting all or most resources to them. Someone could have a "longtermist" EA bucket, but it need not be their largest bucket. Of course, it could still very well be their largest EA bucket, depending on their beliefs. It's currently not my largest bucket.

 

* It's possible that the far future is predictably and continuously sensitive to the state of affairs today and in the next few decades with decent probability, e.g. the distributions of values and tendencies in the population. Extinction, on the other hand, doesn't really come in degrees.

** However, acausal influence in a multiverse may increase the probability of making a difference significantly with semi-independent trials (conditional on some baseline factors like the local risk of extinction and difficulty of AI safety), even possibly making them more likely than not have a large impact. I'm mostly thinking about correlated decisions across spatially and acausally separated agents in a universe that's spatially unbounded/infinitely large. There's also the many-worlds interpretation of quantum mechanics.

Finally, returning to x-risk. The examples above were also chosen to illustrate a different 'vibe' that could apply to x-risk besides 'impending disaster and heroic drama'. Safety engineering is non-heroic by design: a saviour snatching affairs from the jaws of disaster indicates an intolerable single point of failure. Rather, success is a team effort which is resilient to an individual's mistake, and their excellence only slightly notches down the risk even further. Yet this work remains both laudable and worthwhile: a career spent investigating not-so-near misses to tease out human factors to make them even more distant has much to celebrate, even if not much of a highlight reel.

'Existential safety' could be something similar. Risks of AI, nukes, pandemics etc. should be at least as remote as those of a building collapsing, a plane crashing, or a nuclear power plant melting down. Hopefully these risks are similarly remote, and hopefully one's contribution amounts to a slight incremental reduction. Only the vainglorious would wish otherwise.

Unfortunately, for the biggest risk, AI, the fact that we depend on EA, MIRI and other organizations to be heroes is a dangerous deficiency in civilizational competence. A lot of that issue is that AGI is weird to outsiders, and politicization is strong enough to make civilizational competence negative. Hell, everything in existential risks could be like this.
