I have noticed the following distasteful motivations for my interest in EA surface within me from time to time. I'm disclosing them as they may also be reasons why people are suspicious of EA.
- I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)
- I like to feel powerful and in control, and EA makes me feel I am having an impact on the world. More lives affected = more impact = I feel more powerful. I'm not so small and insignificant if my effective actions can have outsized impacts
- Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
- If I believe that other people's suffering can be reduced, then there is hope that my own potential suffering can be reduced too
- I'm fragile, and EA makes me feel that other people are more fragile by drawing attention to all of the suffering in the world. If I'm in a position to be an EA, I must be stronger than I feel, so it makes me feel good about myself
- EA helps satisfy my need to feel like what I do matters, and that an almighty judge would pat me on the back and let me into heaven for my good deeds and intentions (despite being an atheist, I was socialised with Christian values)
- EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong
- It is a way to feel morally superior to other people, to craft a moral dominance hierarchy in which I rank above others
- EA lets me signal my values to like-minded people, and feel part of an in-group
- I don't have to get my hands dirty helping people, yet I can still feel as legitimate as, or more legitimate than, someone who is actually on the front line
Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.
I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they're not unique to EA. (Although the 'intellectual puzzle' piece may be somewhat distinctive to EA).
We shouldn't beat ourselves up about these motivations, IMHO. There's no shame in them. We're hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that's a huge win.
We don't need pristine motivations. Don't buy into the Kantian nonsense that only disinterested or purely 'altruistic' reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It's not an evolutionarily stable strategy, in game theory terms.
We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.
The only trouble comes if we pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA. The same goes for undermining the payoffs for these incentives through some misguided puritanism about what motives we can expect EAs to have.
This seems plausible. On the other hand, some nuance may be important here. In the realms of anthropogenic x-risks and meta-EA, it is often very hard to judge whether a given intervention is net-positive or net-negative, and conflicts of interest can make people less likely to make good decisions from an EA perspective.