I have this idea which I haven't fully fleshed out yet, but I'm looking to get some feedback. To simplify things, I'll embody the idea in a single, hypothetical Effective Altruist called Alex, and assume away complications like inflation. I also use 'lives saved' as a proxy for 'good done'; although this is grossly oversimplified, it doesn't affect the argument.
Alex is earning to give, and estimates that they will be able to give $1 million over their lifetime. They have thought a lot about existential risk, and agree both that reducing it would be a good thing and that the problem is at least partially tractable. Alex also accepts the premise that future lives are just as valuable as lives today. However, Alex is somewhat risk averse.
After careful modelling, Alex estimates that they could save a life for $4,000, and thus could save 250 lives over their lifetime. Alex also thinks that their $1 million might slightly reduce the risk of some catastrophic event, but it probably won't. In expected value terms, they estimate that donating to an X-risk organisation is about ten times as good as donating to a poverty charity (roughly 2,500 lives 'saved' in expectation).
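To make the comparison concrete, here is a minimal sketch of the arithmetic. The only X-risk figure the scenario gives directly is the 2,500-lives expectation; the probability of averting a catastrophe and the number of lives at stake below are illustrative assumptions chosen to reproduce that figure.

```python
# Illustrative sketch only: p_avert and lives_at_stake are assumptions,
# chosen so the expected value matches the 2,500-lives figure above.
donation = 1_000_000        # Alex's lifetime giving ($)
cost_per_life = 4_000       # assumed cost to save a life via a poverty charity ($)

poverty_lives = donation / cost_per_life         # 250 lives, with near certainty

p_avert = 1e-6              # assumed chance the donation tips the balance on a catastrophe
lives_at_stake = 2.5e9      # assumed lives lost if the catastrophe occurs
xrisk_expected_lives = p_avert * lives_at_stake  # 2,500 lives, but only in expectation

print(poverty_lives, xrisk_expected_lives)       # 250.0 2500.0
```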
However, all things considered, Alex still decides to donate to the poverty organisation, because they are risk averse, and the chances of them making a difference by donating to the X-risk organisation are very low indeed.
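For what it's worth, one standard (though contestable) way to model this kind of risk aversion is a concave utility function over lives saved, under which a near-certain 250 lives can outweigh a one-in-a-million shot at billions. The square-root utility below is purely illustrative and isn't meant to capture Alex's actual preferences.

```python
import math

# Purely illustrative: risk aversion modelled as square-root (concave) utility over lives saved.
def expected_utility(p_success: float, lives_if_success: float) -> float:
    return p_success * math.sqrt(lives_if_success)

poverty_eu = expected_utility(1.0, 250)     # ~15.8: near-certain 250 lives
xrisk_eu = expected_utility(1e-6, 2.5e9)    # ~0.05: long shot at 2.5 billion lives

print(poverty_eu > xrisk_eu)                # True: this utility function favours the poverty charity
```

The choice to apply the concave utility to Alex's own marginal impact, rather than to outcomes overall, is doing all the work here; that choice is what the rest of this post questions.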
This seems to embody the attitude of many EAs I know. However, the question I'd like to pose is: is this selfish?
It seems like some kind of moral narcissism to prefer increasing the chance that one's personal actions make a difference at the expense of overall wellbeing in expectation. If a world where everyone gave to X-risk meant a meaningful reduction in the probability of a catastrophe, shouldn't we all be working towards that instead of trying to maximise the chances that our personal dollars make a difference?
As I said, I'm still thinking this through, and don't mean to imply that anyone donating to a poverty charity instead of an X-risk organisation is selfish. I'm very keen on criticism and feedback here.
Things that would imply I'm wrong include: existential risk reduction not being tractable or not being good, some argument for risk aversion that I'm overlooking, an argument for discounting future life, or an objection that doesn't assume a hardline classical hedonistic utilitarian take on ethics (or anything else I've missed).
For what it's worth, my donations to date have been overwhelmingly to poverty charities, so to date at least, I am Alex.
I agree with a lot of the other folk here that risk aversion should not be seen as a selfish drive (even though, as Gleb mentioned, it can serve that drive in some cases), but rather as an important part of rational thinking. To directly address your question about 'discounting future life', though: I've been wondering about this a bit too. I think it's fair to say that there are some risks involved with pursuing X-risks: there's a decent chance you'll be wrong, you may divert resources from other causes, your donation now may be insignificant compared to future donations made when the risk is better known and better understood, and you'll never really know whether or not you're making any progress. Many of these risks are accurately represented in EA cost-benefit models of X-risks (I'm sure yours involved some version of these, even if just the uncertainty one).
My recent worry is the possibility that, when a given X-risk becomes associated with the EA community, these risks become magnified, which in turn needs to be factored into our analyses. I think this can happen for three reasons:
First, the EA community could create an echo chamber for incorrect X-risks, which increases bias in support of those X-risks. In this case, rational people who would otherwise have dismissed the risk as conspiratorial would now be more likely to agree with it. We'd like to think that the broad support for various X-risks in the EA community exists because EAs have more accurate information about these risks, but that's not necessarily the case. Being in the EA community changes who you see as 'experts' on a topic: there isn't a vocal majority of experts working on AI globally who see the threat as legitimate, which to an ordinary rational person may make the risk seem a little overblown. However, the vast majority of AI experts who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.
Second, if an 'incorrect' X-risk is grasped by the community, there's a lot of resource diversion at stake – EA has the power to move a lot of resources in a positive direction, and if certain X-risks are way off base then their popularity in EA carries an outsized opportunity cost.
Lastly, many X-risks turn a lot of reasonable people away from EA, even when the concerns are correct – if we believe that EA is a great boon to humanity, then this reputational risk has very real implications for the analysis.
Those are my rough initial thoughts, which I've elaborated on a bit here. It's a tricky question though, so I'd love to hear people's critiques of this line of thinking - is this magnified risk something we should take into account? How would we account for it in models?