I have this idea which I haven't fully fleshed out yet, but I'm looking to get some feedback. To simplify things, I'll embody the idea in a single, hypothetical Effective Altruist called Alex. For simplicity I'll assume away complications like inflation. I also use 'lives saved' as a proxy for 'good done'; although this is grossly oversimplified, it doesn't affect the argument.
Alex is earning to give, and estimates that they will be able to give $1 million over their lifetime. They have thought a lot about existential risk, agree that reducing it would be a good thing, and agree that the problem is at least partially tractable. Alex also accepts things like the notion that future lives are just as valuable as lives today. However, Alex is somewhat risk averse.
After careful modelling, Alex estimates that they could save a life for $4,000, and thus could save 250 lives over their own lifetime. Alex also thinks that their $1 million might slightly reduce the risk of some catastrophic event, but it probably won't. In expected value terms, they estimate that donating to an X-risk organisation is about ten times as good as donating to a poverty charity (they estimate 'saving' 2,500 lives in expectation).
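To spell out the arithmetic (the probability and the size of the catastrophe below are purely illustrative numbers chosen so the figures match Alex's estimates, not outputs of any actual model):

$$\text{Poverty: } \frac{\$1{,}000{,}000}{\$4{,}000 \text{ per life}} = 250 \text{ lives, with near-certainty}$$

$$\text{X-risk: } \underbrace{10^{-6}}_{\text{chance the donation averts a catastrophe}} \times \underbrace{2.5 \times 10^{9} \text{ lives}}_{\text{lives at stake}} = 2{,}500 \text{ lives in expectation, but almost certainly zero}$$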
However, all things considered, Alex still decides to donate to the poverty organisation, because they are risk averse, and the chances of them making a difference by donating to the X-risk organisation are very low indeed.
This seems to embody the attitude of many EAs I know. However, the question I'd like to pose is: is this selfish?
It seems like some kind of moral narcissism to prefer increasing the chances that one's personal actions make a difference at the expense of expected overall wellbeing. If a world where everyone gave to X-risk meant a meaningful reduction in the probability of a catastrophe, shouldn't we all be working towards that instead of trying to maximise the chances that our personal dollars make a difference?
As I said, I'm still thinking this through, and don't mean to imply that anyone donating to a poverty charity instead of an X-risk organisation is selfish. I'm very keen on criticism and feedback here.
Things that would imply I'm wrong include existential risk reduction not being tractable or not being good, a case for risk aversion that I'm missing, an argument for discounting future lives, or an argument that doesn't assume a hardline classical hedonistic utilitarian take on ethics (or anything else I've overlooked).
For what it's worth, my donations to date have been overwhelmingly to poverty charities, so to date at least, I am Alex.
I don't think that risk aversion does what you think it does. Let's say that Alex only wants to perform interventions which are certain to be helpful.
Whether they are selfish depends on their reasons for acting that way. If they think it's morally better to support X-risk but gain more personal satisfaction from their donations, then they are selfish. But if they believe that there are moral or other reasons to support more robust interventions, then it sounds like they aren't selfish.
If someone did make their allocations based on risk aversion rather than utility maximization, they would be operating according to a fairly reasonable decision model, so probably not selfish there either. (Unless they really believed that utility maximization was correct but derived personal satisfaction from being risk averse, which I don't think describes many people.)
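To make that concrete, here is a minimal sketch of the two decision models. It uses the post's numbers plus a hypothetical probability and payoff for the X-risk donation and an arbitrary concave (log) utility function; none of these are meant as real estimates.

```python
import math

def expected_utility(outcomes, utility):
    """Expected utility over a list of (probability, lives_saved) pairs."""
    return sum(p * utility(lives) for p, lives in outcomes)

# Hypothetical payoff profiles chosen to match the estimates in the post:
poverty = [(1.0, 250)]                        # ~certain 250 lives saved
x_risk  = [(1e-6, 2.5e9), (1 - 1e-6, 0.0)]    # tiny chance of a huge payoff; EV = 2,500 lives

risk_neutral = lambda lives: lives              # plain expected value (utility = lives)
risk_averse  = lambda lives: math.log1p(lives)  # a concave utility: diminishing returns

print(expected_utility(poverty, risk_neutral), expected_utility(x_risk, risk_neutral))
# 250.0 vs 2500.0 -> expected value says X-risk wins by a factor of ten
print(expected_utility(poverty, risk_averse), expected_utility(x_risk, risk_averse))
# ~5.53 vs ~0.00002 -> a (strongly) risk-averse utility over one's own impact prefers poverty
```

The ranking flips simply because the concave utility is applied to the donor's own counterfactual impact rather than to total expected lives, which is the distinction between the two decision models above.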
Thanks, there are some good points here.
I still have this feeling, though, that some people support some causes over others simply because 'my personal impact probably won't make a difference', which seems hard to justify to me.