I have this idea which I haven't fully fleshed out yet, but I'm looking to get some feedback. To keep things simple, I'll embody the idea in a single hypothetical Effective Altruist called Alex, and assume away complications like inflation. I also use 'lives saved' as a proxy for 'good done'; this is grossly oversimplified, but it doesn't affect the argument.
Alex is earning to give, and estimates that they will be able to give $1 million over their lifetime. They have thought a lot about existential risk, and agree both that reducing existential risk would be a good thing and that the problem is at least partially tractable. Alex also accepts that future lives are just as valuable as lives today. However, Alex is somewhat risk averse.
After careful modelling, Alex estimates that they could save a life for $4,000, and thus could save 250 lives over their lifetime by giving to a poverty charity. Alex also thinks that their $1 million might slightly reduce the risk of some catastrophic event, though most likely it would make no difference at all. In expected value terms, they estimate that donating to an X-risk organisation is about ten times as good as donating to a poverty charity ('saving' 2,500 lives on average).
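To make the arithmetic concrete, here is a minimal sketch of Alex's comparison. The probability of averting a catastrophe and the number of lives at stake are hypothetical numbers I've chosen purely so the expected value comes out at the 2,500 figure above:

```python
# Alex's expected-value comparison. p_avert and lives_at_stake are
# illustrative assumptions, picked only to match the 2,500-lives figure.

budget = 1_000_000           # Alex's lifetime donations ($)
cost_per_life = 4_000        # estimated cost to save one life via poverty charity

poverty_lives = budget / cost_per_life           # 250 lives, near-certain

p_avert = 1e-6               # hypothetical chance the donation averts a catastrophe
lives_at_stake = 2.5e9       # hypothetical lives saved if it does
xrisk_expected_lives = p_avert * lives_at_stake  # 2,500 lives in expectation

print(f"Poverty charity: {poverty_lives:.0f} lives (near-certain)")
print(f"X-risk org:      {xrisk_expected_lives:.0f} lives (in expectation)")
print(f"Ratio:           {xrisk_expected_lives / poverty_lives:.0f}x")
```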
However, all things considered, Alex still decides to donate to the poverty organisation: they are risk averse, and the chance of their donation to the X-risk organisation making any difference is very low indeed.
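One way to formalise Alex's risk aversion is to apply a concave utility function to the lives Alex personally saves, rather than valuing lives linearly. Under that purely illustrative assumption (square-root utility, same hypothetical X-risk numbers as above), the near-certain option wins despite its much lower expected value:

```python
import math

# Risk aversion modelled as concave (square-root) utility over the lives
# saved by Alex's *own* donation. Illustrative only.

def sqrt_utility(lives):
    return math.sqrt(lives)

# Poverty charity: 250 lives with (near) certainty.
u_poverty = sqrt_utility(250)            # ~15.8

# X-risk org: 2.5 billion lives with probability 1e-6, else nothing.
u_xrisk = 1e-6 * sqrt_utility(2.5e9)     # ~0.05

print(u_poverty > u_xrisk)  # True: risk-averse Alex prefers the poverty charity
```

Note that the concavity here is applied to Alex's personal impact, not to the world's total wellbeing, and that is exactly the move I want to question below.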
This seems to embody the attitude of many EAs I know. However, the question I'd like to pose is: is this selfish?
It seems like some kind of moral narcissism to prefer increasing the chance that one's personal actions make a difference, at the expense of overall wellbeing in expectation. If a world where everyone gave to X-risk meant a meaningful reduction in the probability of a catastrophe, shouldn't we all be working towards that, rather than trying to maximise the chance that our personal dollars make a difference?
As I said, I'm still thinking this through, and don't mean to imply that anyone donating to a poverty charity instead of an X-risk organisation is selfish. I'm very keen on criticism and feedback here.
Things that would imply I'm wrong include: existential risk reduction not being tractable, or not being good; some argument for risk aversion that I'm overlooking; an argument for discounting future lives; or an ethical framework that doesn't assume a hardline classical hedonistic utilitarianism (or anything else I've overlooked).
For what it's worth, my donations to date have been overwhelmingly to poverty charities, so to date at least, I am Alex.
It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.
This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I'm sure there's a lot more literature on this; that's about as far as I've looked into it.
You could also reject maximizing expected utility as the proper method of practical reasoning. Weird things happen with subjective expected utility theory, after all: the St. Petersburg paradox, Pascal's Mugging, anything involving infinity, dependence on possibly meaningless subjective probabilities, and so on. Of course, giving to poverty charities might still be suboptimal under your preferred decision theory.
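For instance, here's a rough simulation of the St. Petersburg game mentioned above: the pot doubles on each consecutive heads, so the expected payoff is infinite, yet the sample mean of many simulated plays stays modest.

```python
import random

def st_petersburg():
    """One play: pot starts at $2 and doubles for each consecutive heads."""
    pot = 2
    while random.random() < 0.5:   # flip until the first tails
        pot *= 2
    return pot

# EV = 2*(1/2) + 4*(1/4) + 8*(1/8) + ... = 1 + 1 + 1 + ... = infinity,
# yet almost every play pays out a small amount.
plays = [st_petersburg() for _ in range(100_000)]
print(sum(plays) / len(plays))     # modest sample mean despite infinite EV
```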
FWIW, strict utilitarianism isn't concerned with "selfishness" or "moral narcissism", just maximizing utility.
For something so important, it seems this question is hardly ever discussed. The only literature on the issue is a blog post? It seems like it's often taken for granted that x-risk reduction is net positive. I'd like to see more analysis on whether non-negative utilitarians should support x-risk reduction.