I have noticed the following distasteful motivations for my interest in EA surface within me from time to time. I'm disclosing them as they may also be reasons why people are suspicious of EA.
- I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)
- I like to feel powerful and in control, and EA makes me feel I am having an impact on the world. More lives affected = more impact = I feel more powerful. I'm not so small and insignificant if my effective actions can have outsized impacts
- Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
- If I believe that other people's suffering can be reduced, I believe there is hope for my own potential suffering to be reduced too
- I'm fragile and EA makes me feel that other people are more fragile by drawing attention to all of the suffering in the world. I must be stronger than I feel if I'm in a position to be an EA, so it makes me feel good about myself
- EA helps satisfy my need to feel like what I do matters and that an almighty judge would pat me on the back and let me into heaven for my good deeds and intentions (despite being an atheist, I was socialised with Christian values)
- EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong
- It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people
- EA lets me signal my values to like-minded people, and feel part of an in-group
- I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
For me, it's some subset of the above, plus some related points:
Upvoting for honesty on under-the-surface things!
Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.
I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they're not unique to EA. (Although the 'intellectual puzzle' piece may be somewhat distinctive to EA.)
We shouldn't beat ourselves up about these motivations, IMHO. There's no shame in them. We're hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that's a huge win.
We don't need pristine motivations. Don't buy into the Kantian nonsense that only disinterested or purely 'altruistic' reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It's not an evolutionarily stable strategy, in game theory terms.
We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.
The only trouble comes if we try to pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA. Likewise, if we undermine the payoffs for any of these incentives through some misguided puritanism about what motives we can expect EAs to have, we weaken the movement.
This seems plausible. On the other hand, it may be important to be nuanced here. In the realms of anthropogenic x-risks and meta-EA, it is often very hard to judge whether a given intervention is net-positive or net-negative. Conflicts of interest can cause people to be less likely to make good decisions from an EA perspective.
What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.
Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.
The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:
The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact--indeed, in some contexts it's essential--and for that reason we shouldn't reject it on principle. And just like I think it's okay to enjoy money, I think it's okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it's easy to mistake social signals of importance for actual importance (aka LARPing at impact.)
I grouped the "intellectual puzzle" and "get my hands dirty" items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it's a big reason why I keep coming back to this community, but it mustn't come at the expense of the A in EA. I group this with "get my hands dirty" because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.
The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA's core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).
Small point, but the linked tweet in your last para doesn't come across as someone who feels EAs are morally arrogant, at least if I read the thread without any other context. He's both appreciative and critical of EA, and his criticisms seem mostly about the actual work rather than the attitudes or traits of the people involved.
I admit, some of these apply to me as well. I would be interested in reading further on the phenomenon, which I can't seem to find a term for, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows that this variety of good outcomes for others is being produced but is in it for other reasons".
Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (rereading it now to illustrate some points), and could probably be grouped under some of the categories in the final list. I would recommend reading this book, generally speaking.
Simler and Hanson then cover each of the listed entities in greater depth.
I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.
It's all good -- what matters is whether we make a (the biggest possible) positive difference in the world, not how the motivational system decided to pick this as a goal.
I do think it is important for the EA community/system/whatever it is to successfully point the stuff that is done for making friends and feeling high-status towards stuff that actually makes that biggest possible difference.
I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we're making. Goodharting ourselves.
Ummmm, so we say we want to do good, but we actually want to make friends and get laid, so we figure out ways to 'do good' that lead to lots of hanging out with interesting people, and chances to demonstrate how cool we are to them. Often these ways of 'doing good' don't actually benefit anyone who isn't part of the community.
This is at least the worry, which I think is a separate problem from Goodharting. I.e., when CEA provides money to fly someone from the US to an EAGx conference in Europe, I don't think there is any metric being maximized, but rather just a vague sense that this might something something person becomes effective and then lots of impact.
Now it could interact with Goodharting in a case where, for example, community organizers get funds and status primarily based on the number of people attending events, when what actually matters is finding the right people and having the right sorts of events.
Thanks a lot for writing this down with so much clarity and honesty!
I think I share many of those feelings, but would not have been able to write this.
I felt this, but none of the other points on OP's list, then I realized that the people I signaled to were not in fact like-minded. So as I am finishing this paragraph, no point on the list applies to me any more.
Thanks for posting. I endorse a subset of these, another subset is quite alien to me.
I want to zero in on
Because I find it odd that you conflated relieving guilt and maintaining privilege into a single point. The idea that installing oneself as an altruist in a cruel system (economic, ecological, or otherwise) is a hedge against losing relative status or power within that system is a claim that needs to be justified.
As an example, surely many of us will have at least glanced at leftist comments to the effect that donating to AMF is a convenient smokescreen, keeping us blissfully ignorant of postcolonial mechanisms which are the true root cause of disvalue for the people AMF is (ostensibly) helping, and that if we were real altruists we would be anti-imperialism activists. These comments, at whatever level of quality we find them, often point at this very claim.
Those of us who have taken substantial pay cuts for (ostensibly) altruistic purposes may simply be trading cash for intra-community status. This observation can justify arguments that we're not genuine altruists (whatever that is), but it does not on its own point to a bid at maintaining privilege.
Obviously Joe Ineffective Philanthropy Schmoe, who donates to the opera for tax breaks and PR, can be accused of using the polite fiction of philanthropy to shore up their privilege. If Joe is laundering money for the paperclip mafia by starting an alignment foundation (via some inscrutable mechanism), this accusation only increases.
But such a line of attack seems orthogonal to actually existing effective altruism.
Moreover, I may be right about the orthogonality but wrong about the emotional substructure. The emotional substructure may not make 100% sense; it may be a voice that assimilates guilt about privilege into some monologue about how you're falling short of Franciscan altruism, or some self-sacrifice-emphasizing notion of altruism. This, however, is I think a mistake, because having an emotional substructure of guilt may not relate at all to the merits of Franciscan altruism, or to the mechanisms by which philanthropy fails to think systemically, and so on.
My two cents: guilt is a reasonable mechanism to draw one's attention to the stakes and the opportunities of one's privilege, but it is not "emotionally competitive" with responsibility. You, a member of the species that beat smallpox, are plausibly alive at a hinge of history. Who knows what levers are lying around under your nose. You, in a veil-of-ignorance sense, would prefer people of your privilege to at least try. There's a line in an old Jewish book about not being free to abandon it, nor obligated to complete it (where "it" is presumably the brokenness of the world, etc.), which is emotionally very effective for me.
Guilt seems like it wants to emphasize my feelings about the unjust (from a cosmopolitan point of view) situation we find ourselves in. My subjective state, my inner monologue. It seems indifferent to arguments that making myself suffer as much as the people I want to help may not help those people as much as possible. In other words, it is negative. Responsibility is positive: it asks "what actions can you take?" That is at least a reasonable place to start.
I think the correct steelmanning of dotsam's point is:
1. As a member of <group>, I have a great deal of privilege.
2. In order to remove this privilege, we need sweeping societal changes that upend the current power structures.
3. EA does not focus on upending current power structures in a radical way.
4. EA makes me feel less guilty about my privilege despite this.
5. Therefore, EA allows me to maintain my privilege by relieving my guilt through actions that don't actually require overthrowing current power structures, i.e., the actions that would affect me personally the most.
Under this set of assumptions, most people find ways to maintain their privilege not by actively reinforcing power structures, but by avoiding the moral imperative to overthrow them. EAs are at least slightly more principled, because their price for this is something like "Donate 10% of your income" instead of "Attend a protest", "Sign a petition", or "Decide that you're inherently worthy of what you have and privilege doesn't exist."
Personally, I don't agree with this chain of logic because I disagree with Point 2 above, but I think the chain of logic holds if you agree with points 1 and 2. (And I suppose you also need to add the assumptions that one can tractably work on upending these power structures, and that doing so won't cause more harm than good.)
What's the problem with enlightened self interest? :)
This is a list of EA biases to be aware of and account for.