I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there were a magical potion that almost certainly killed you immediately but, with a tiny probability, gave you (and your family and friends) an extremely long, happy life. If the probability of a happy life were one in a billion and the resulting life lasted one trillion years, would you drink this potion? According to Expected Value Fanaticism, you should accept gambles like that.
This view may seem, frankly, crazy, but there are some very good arguments in its favor. Basically, if you reject Expected Value Fanaticism, you'll end up violating some very plausible principles. You would have to believe, for example, that what happens on faraway exoplanets, or what happened thousands of years ago, could influence what we ought to do here and now, even when we cannot affect those distant events. This seems absurd: we don't need a telescope to decide what we morally ought to do.
However, the story is a bit more complicated than that... Well, read the article! Here's the link: https://utilitarianism.net/gue.../expected-value-fanaticism/
I am also skeptical of small percentages, though more because I think probability estimates close to 0 or 1 tend to be a lot more uncertain (perhaps because they’re based on rare or unprecedented events that have only been observed a few times).
I’m no statistician, but I’m not sure we can say that small percentages tend to be exaggerated. For one, I recall reading in Superforecasting that there’s evidence that people tend to underestimate the likelihood of rare events and overestimate the likelihood of common ones in forecasting exercises, which is a piece of evidence pointing towards small probabilities generally being too low rather than too high. Secondly, a low probability can equally be framed as a high probability of that event not happening, so any systematic bias towards overestimating small probabilities would imply a mirror-image bias towards underestimating large ones. So, in short: I agree that probability estimates close to 0 or 1 tend to be less certain, but not that estimates close to 0 tend to be overestimates any more than underestimates.