In the 17th century, the mathematician Blaise Pascal devised the argument now known as Pascal’s Wager.
Should you become a Christian? If God exists and you’re Christian, then He will reward you with an eternity of bliss in Heaven. If God exists and you’re not Christian, He’ll punish you with an eternity in the fires of Hell. If God doesn’t exist and you’re Christian, you waste many tedious hours praying and at services. If God doesn’t exist and you’re not Christian, you can spend Sunday morning scrolling TikTok. Since an eternity of bliss in Heaven is infinitely better than a finite amount of time spent on TikTok, you should believe in God, no matter how small you think the probability of His existence is.
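If you like seeing the machinery laid bare, here’s a toy expected-value sketch of the Wager in Python. Every number in it is a placeholder I made up; the only thing doing any work is the infinity, which swamps any nonzero probability you assign to God’s existence.

```python
# Toy expected-value sketch of Pascal's Wager. All numbers are
# made-up placeholders; only the infinite payoff matters.
p_god = 1e-9                # any nonzero credence that God exists
heaven = float("inf")       # infinite reward for believers if God exists
hell = float("-inf")        # infinite punishment for nonbelievers
church_cost = -1_000        # finite cost: tedious hours of prayer and services
tiktok = 0                  # finite baseline: free Sunday mornings

ev_believe = p_god * heaven + (1 - p_god) * church_cost
ev_disbelieve = p_god * hell + (1 - p_god) * tiktok

print(ev_believe, ev_disbelieve)  # inf -inf: belief "wins" for any p_god > 0
```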
Pascal’s Wager has two major flaws.
First, many religions promise an eternity of bliss for believers and an eternity of punishment for nonbelievers. Maybe instead of following Jesus you should be following Muhammad or Amitabha. Even if you’ve settled on Christianity, you have to pick one of the many mutually contradictory branches of Christianity, many of which believe all the others are going to Hell. Not to mention the many more outré possibilities. How do you know there isn’t a God of Atheism, who has deliberately hidden Himself from the world to test the skepticism of His creation, and who punishes anyone who believes in God anyway with an eternity of torture?
Second, people are bad at reasoning about infinities and very small probabilities. We can do okay with 1 in 100 probabilities and even 1 in 10,000 probabilities—at least if we’ve learned to think with numbers. But when you start getting down into 1 in 1 billion territory, you’re going to end up confusing yourself more often than you’re going to say anything insightful.
Some very silly people have used Pascalian reasoning about the very long-term future. The long-term future of humanity could contain septillions of sentient beings. So if you have a 1 in 1 sextillion chance of positively influencing that future, this is the most important thing you could possibly be doing!
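The arithmetic behind that exclamation point, using these made-up numbers, fits in a few lines:

```python
# A 1-in-1-sextillion chance of helping septillions of future beings
# still looks great in expectation. (Both numbers are illustrative.)
p_influence = 1e-21        # 1 in 1 sextillion
future_beings = 1e24       # septillions of sentient beings

print(p_influence * future_beings)  # 1000.0 beings helped "in expectation"
```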
Like Pascal’s original Wager, this has two major flaws.
First, people are very bad at reasoning about 1 in 1 sextillion probabilities, and it is unlikely that they will come up with anything useful to say about them.
Second, when you’re reasoning about 1 in 1 sextillion probabilities, suddenly you have to consider all kinds of extremely unlikely possibilities. What if the primary effect of your life is getting a complete stranger to run late coming home from work, have sex with his wife a bit later, emit different sperm than he otherwise would, and thus conceive von Neumann Socrates Gandhi, the smartest, wisest, and most ethical human being to ever live? What if it’s good to drive humanity extinct because humanity’s extinction would lead to the evolution of ecstatically happy, fulfilled, and virtuous sapient crabs? Admittedly, it doesn’t seem very likely. Is it 1 in 1 sextillion likely? How do you know? (A lot of reasoning about this falls under the heading of “cluelessness.”)
Specifically, you fall victim to the optimizer’s curse. Whenever you try to figure out how good a course of action is, you make mistakes. The action that looks best to you right now is likely very good, but you’re also probably making a bunch of mistakes that make it look better than it is. The more uncertain you are, naturally, the more mistakes you could be making. The more speculative an intervention, the more it will predictably underperform your best estimate of how good it is. Indeed, under some fairly reasonable assumptions, always choosing a nonspeculative intervention leads to better outcomes than sometimes choosing a speculative intervention—no matter how much better the speculative intervention looks!
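Here’s a quick Monte Carlo sketch of the optimizer’s curse. It’s my own toy model with invented parameters, not anyone’s canonical formulation: every option is truly worth nothing, you see only noisy estimates, and the best-looking option overshoots its true value by more when the noise is bigger.

```python
import random

random.seed(0)

def best_looking_overshoot(noise, n_options=20, trials=10_000):
    """Average (estimate - true value) for the option that looked best.

    Every option's true value is 0, so whatever the best-looking
    estimate is, all of it is estimation error.
    """
    total = 0.0
    for _ in range(trials):
        estimates = [random.gauss(0, noise) for _ in range(n_options)]
        total += max(estimates)
    return total / trials

print(best_looking_overshoot(noise=1))   # careful estimates: overshoot ~1.9
print(best_looking_overshoot(noise=10))  # speculative estimates: overshoot ~19
```

The speculative options don’t just have a wider spread: conditional on looking best, they are predictably further above their true value.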
Some equally silly people have taken the “if you have a 1 in 1 sextillion chance of positively influencing the future, this is the most important thing you could possibly be doing” line of reasoning and used it to dismiss all work on existential risk.
For an article I’m working on for Asterisk, I’ve been asking people who work in AI safety how likely they think it is that humanity is going to go extinct in the next decade. They give numbers like 10%, or 20%, or 50%. Sometimes they give numbers like 99%. No one ever gives a number like “1 in 1 sextillion, but since human extinction might prevent the existence of septillions of future sapients, it is still the most important thing you could be doing.”
“10% chance of human extinction” isn’t Pascalian! It is a normal probability, of the sort we reason about constantly in normal life! If I said to you “my local library has free ice cream one day per week, but I can’t remember which day it is, let’s swing by and check it out”, you would not accuse me of being misled by an infinitesimal chance of infinite cold deliciousness.
Do I think there’s a 10% chance—much less a 99% chance—of human extinction in the next ten years? Uh, no. If you look into the evidence and conclude that AIs are going to cause imminent human extinction, you get a job trying to stop them from doing that; if you look into the evidence and conclude that they’re just going to automate software engineering, you do something else with your life. I’m conscious of the risks of groupthink: if you’re around people who believe in a 99% chance of imminent human extinction, suddenly a 50% chance of human extinction seems positively moderate and restrained. And, frankly, AI has been most effective in the realm of software, which AI researchers have the most expertise in; I expect that broader deployment will make people realize that atoms are much harder than bits.1
But when I talk to the people developing a new technology and they say to me “yeah, we think it might kill everyone on Earth”... I, uh, I am a little concerned about that? I don’t dismiss this entirely out of hand? If you put a gun to my head and make me pick a number, maybe 1%?
1% isn’t a Pascalian number either. We do all kinds of reasoning about 1% chances. About 1% of people in the United States are incarcerated at any given time. About 1% of people speak Korean as their native language. 1% is a little less than the chance that, in a group of four random people, at least two share the same birthday. It’s about the chance that someone has schizophrenia, or celiac disease, or red hair.
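If you want to check the birthday figure, here’s the standard computation, ignoring leap years; math.perm counts the ways four people can have four different birthdays.

```python
from math import perm

# Chance that at least two of four people share a birthday,
# assuming 365 equally likely birthdays.
p = 1 - perm(365, 4) / 365**4
print(p)  # ~0.0164, so 1% is indeed a little less than this
```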
If I talked to people working on other cutting-edge technologies—electric airplanes, or sodium-ion batteries, or personalized gene-editing therapies, or malaria vaccines—and I asked them “is your technology going to kill everyone on the planet?”, they would look at me like I had grown a second head.
Most technologies have a truly Pascalian chance of causing human extinction, which is why no one ever publishes thinkpieces like “malaria vaccines: will they lead to fifty-foot-tall supermosquitoes with an unquenchable thirst for blood?”
The engineers who do say “my technology might kill everyone on the planet” are working on nuclear bombs and bioengineering, which, yes, are technologies that ought to be carefully regulated. I realize that this is a controversial position that might get me #cancelled, but neither Sam Altman nor Dario Amodei nor Elon Musk should be allowed to develop their own nuclear arsenals.
If you are uncertain about whether a technology will drive humanity extinct, that is very bad and justifies heavy-handed regulatory oversight. We should only yolo technologies that we know for certain won’t cause human extinction, i.e., all the other ones.
It drives me nuts that left-of-center people keep making this mistake, because it is straight out of the anti-environmentalist climate change denialist corporate playbook. “Oh, we’re not certain what the effects of climate change are, so we shouldn’t do anything about it.” “Oh, we’re not certain whether the hole in the ozone layer is bad, so we can keep spewing CFCs into the atmosphere.” “Oh, we’re not certain that acid rain will drive vulnerable fish species extinct, so let’s wait until they’re extinct before installing sulfur scrubbers.”
No! By the time we were certain what climate change does, multiple Pacific islands HAD ALREADY SUNK INTO THE OCEAN!
Sometimes, you need to take steps to prevent a bad outcome, even if you aren’t sure how bad it will be—or even if you aren’t sure whether it’s going to happen at all. The time to switch to renewable energy was before the Pacific islands sank into the ocean. And the time to make it illegal to deploy an advanced AI that isn’t verifiably aligned is before grey goo devours my living room.
What makes reasoning Pascalian is the 1 in 1 sextillion probabilities. Preparing for a bad thing that is less than 50% likely to happen isn’t Pascalian. It is called other things, like “responsibility” and “foresight” and “common sense.”

I think this misunderstands what people mean when they compare arguments about the importance of AI safety to a Pascal's wager.
Pascal's wager refers to situations where a tiny probability of enormous value seemingly leads to ridiculous conclusions if you try to do naive expected value calculations with it. When people say that strong longtermism is a Pascal's wager, the "small probability" they are talking about is not the probability of extinction, which, as you point out, is significant. The small probability is the probability that the future will contain "septillions of future sapients." And it gets even smaller if the probability of extinction soon is high! So a large probability of extinction this century makes the Pascal's wager comparison more relevant as a critique of strong longtermism, not less. It is multiplying this small probability by the value of those septillions of potential "sapients" that gives you the astronomical value that says existential risk reduction should almost automatically dominate our concerns.
I think you're completely right to point out that people should care a lot about things which might carry a 10% chance of causing human extinction, even ignoring their stance on longtermism. But some people believe that reducing existential risk has astronomically more value than just the impact it will have on the next few generations, and that therefore tiny changes in the probability of existential risk almost automatically trump any other concern, however small those changes are. When people talk about Pascal's wager in the context of strong longtermism or AI safety, I think it is this claim that they are challenging, not the claim that we should care about extinction at all. And that criticism is just as valid, actually more valid, if the probability of extinction from AI is high (though I of course agree that if there are people who use the Pascal's Wager argument to dismiss all work on AI risk then they are making a serious mistake).