I have noticed the following distasteful motivations for my interest in EA surface within me from time to time. I'm disclosing them as they may also be reasons why people are suspicious of EA.

  • I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)
  • I like to feel powerful and in control, and EA makes me feel I am having an impact on the world. More lives affected = more impact = I feel more powerful. I'm not so small and insignificant if my effective actions can have outsized impacts
  • Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
  • If I believe that other people's suffering can be reduced I believe that there is hope for my own potential suffering to be reduced too
  • I'm fragile and EA makes me feel that other people are more fragile by drawing attention to all of the suffering in the world. I must be stronger than I feel if I'm in a position to be an EA, so it makes me feel good about myself
  • EA helps satisfy my need to feel like what I do matters and that an almighty judge would pat me on the back and let me into heaven for my good deeds and intentions (despite being an atheist I was socialised with Christian values)
  • EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong
  • It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people
  • EA lets me signal my values to like-minded people, and feel part of an in-group
  • I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
Comments (17)



For me, it's some subset of the above, plus some related points: 

  • Most of my personal and professional successes are due to EA
  • I feel more successful and happy doing EA stuff than non-EA stuff, at least if I take a gradient descent approach. And there's a bunch of momentum in not changing.
    • This isn't exactly true if I zoom way out, for example FIRE would be within my reach if I wasn't so altruistically committed, and I suspect not working would make me happier.
      • On the other hand, I don't think I would necessarily have even recognized this without EA.
  • I feel smarter thinking about EA stuff than thinking about e.g. math or programming
  • Having a "noble" central purpose in my life makes the individual failures in (the rest of) my life feel more bearable
  • EA lets me justify putting off doing a bunch of things that in other cultures would be called "growing up," like learning to drive or having children
  • Non-EA liberal Western society feels increasingly identity-driven, and I like to feel appreciated for my intellectual and community contributions, regardless of how I look

Upvoting for honesty on under-the-surface things!

Good post with a fairly comprehensive list of the conscious, semi-conscious, covert, or adaptively self-deceived reasons why we may be attracted to EA.

I think these apply to any kind of virtue signaling, do-gooding, or public concern over moral, political, or religious issues, so they're not unique to EA. (Although the 'intellectual puzzle' piece may be somewhat distinctive with EA).

We shouldn't beat ourselves up about these motivations, IMHO.  There's no shame in them. We're hyper-social primates, evolved to gain social, sexual, reproductive, and tribal success through all kinds of moralistic beliefs, values, signals, and behaviors. If we can harness those instincts a little more effectively in the direction of helping other current and future sentient beings, that's a huge win. 

We don't need pristine motivations. Don't buy into the Kantian nonsense that only disinterested or purely 'altruistic' reasons for altruism are legitimate. There is no naturally evolved species that would be capable of pure Kantian altruism. It's not an evolutionarily stable strategy, in game theory terms. 

We just have to do the best we can with the motivations that evolution gave us. I think Effective Altruism is doing the best we can.

The only trouble comes if we try to pretend that none of these motivations should have any legitimacy in EA. If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA. And if we undermine the payoffs for any of these incentives through some misguided puritanism about what motives we can expect EAs to have, we might undermine EA. 

Ofer

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

This seems plausible. On the other hand, it may be important to be nuanced here. In the realms of anthropogenic x-risks and meta-EA, it is often very hard to judge whether a given intervention is net-positive or net-negative. Conflicts of interest can cause people to be less likely to make good decisions from an EA perspective.

If we shame each other for using our EA activities to make friends, find mates, raise status, make a living, or feel good about ourselves, we undermine EA.

What're the costs/benefits of reversing this shame? By "reversing shame" I mean explicitly pitching EA to people as an opportunity for them to pursue their non-utilitarian desires.

Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don't think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.

The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:

  • Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it's not meant to be)
  • EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I'm right and other people are wrong / I don't have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
  • It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people

The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact--indeed, in some contexts it's essential--and for that reason we shouldn't reject it on principle. And just like I think it's okay to enjoy money, I think it's okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it's easy to mistake social signals of importance for actual importance (aka LARPing at impact).

I grouped the "intellectual puzzle" and "get my hands dirty" items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it's a big reason why I keep coming back to this community, but it mustn't come at the expense of the A in EA. I group this with "get my hands dirty" because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.

The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA's core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).

I admit, some of these apply to me as well. I would be interested in reading further on the phenomenon, which I can't seem to find a term for, of "ugly intentions (such as philanthropy purely for status) that produce a variety of good outcomes for self and others, where the actor knows that this variety of good outcomes for others is being produced but is in it for other reasons".

Your post reminds me of some passages from the chapter on charity in the book The Elephant in the Brain (rereading it now to illustrate some points), and could probably be grouped under some of the categories in its final list. I would recommend reading this book, generally speaking.

Intro.

What Singer has highlighted with this argument is nothing more than simple, everyday human hypocrisy—the gap between our stated ideals (wanting to help those who need it most) and our actual behavior (spending money on ourselves). By doing this, he’s hoping to change his readers’ minds about what’s considered “ethical” behavior. In other words, he’s trying to moralize.

Our goal, in contrast, is simply to investigate what makes human beings tick. But we will still find it useful to document this kind of hypocrisy, if only to call attention to the elephant. In particular, what we’ll see in this chapter is that even when we’re trying to be charitable, we betray some of our uglier, less altruistic motives.

Warm Glow

Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is because the act of donating makes us feel good, regardless of the results.


Andreoni calls this the “warm glow” theory. It helps explain why so few of us behave like effective altruists. Consider these two strategies for giving to charity: (1) setting up an automatic monthly payment to the Against Malaria Foundation, or (2) giving a small amount to every panhandler, collection plate, and Girl Scout. Making automatic payments to a single charity may be more efficient at improving the lives of others, but the other strategy—giving more widely, opportunistically, and in smaller amounts—is more efficient at generating those warm fuzzy feelings. When we “diversify” our donations, we get more opportunities to feel good.

...

  • Visibility. We give more when we’re being watched.
  • Peer pressure. Our giving responds strongly to social influences.
  • Proximity. We prefer to help people locally rather than globally.
  • Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
  • Mating motive. We’re more generous when primed with a mating motive.

This list is far from comprehensive, but taken together, these factors help explain why we donate so inefficiently, and also why we feel that warm glow when we donate. Let’s briefly look at each factor in turn.

Simler and Hanson then cover each of these factors in greater depth.

I made my account to upvote this. EA would do well to think more clearly about the practical nature of altruism and self-deception.

Thanks a lot for writing this down with so much clarity and honesty!

I think I share many of those feelings, but would not have been able to write this.

It's all good -- what matters is whether we make a (the biggest possible) positive difference in the world, not how the motivational system decided to pick this as a goal.

I do think that it is important for the EA community/system/whatever it is to successfully point the stuff that is done for making friends and feeling high status towards stuff that actually makes that biggest possible difference.

I think the issue is that some of these motivations might cause us to just not actually make as much positive difference as we might think we're making. Goodharting ourselves.

Ummmm, so we say we want to do good, but we actually want to make friends and get laid, so we figure out ways to 'do good' that lead to lots of hanging out with interesting people, and chances to demonstrate how cool we are to them. Often these ways of 'doing good' don't actually benefit anyone who isn't part of the community.

This is at least the worry, which I think is a separate problem from Goodharting, i.e. when CEA provides money to fly someone from the US to an EAGx conference in Europe, I don't think there is any metric that is trying to be maximized, but rather just a vague sense that this might something something person becomes effective and then lots of impact.

Now it could interact with Goodharting in a case where, for example, community organizers get funds and status primarily based on numbers of people attending events, when what actually matters is finding the right people, and having the right sorts of events.

[anonymous]

EA lets me signal my values to like-minded people, and feel part of an in-group

I felt this, but none of the other points on OP's list, then I realized that the people I signaled to were not in fact like-minded. So as I am finishing this paragraph, no point on the list applies to me any more.

Thanks for posting. I endorse a subset of these, another subset is quite alien to me. 

I want to zero in on 

I feel guilty about my privilege in the world and I can use EA as a tool to relieve my guilt (and maintain my privilege)

Because I find it odd that you conflated relieving guilt and maintaining privilege into a single point, and because the idea that installing oneself as an altruist in a cruel system (economic, ecological, or otherwise) is a hedge against losing relative status or power within that system is a claim that needs to be justified.

As an example, surely many of us will have at least glanced at leftist comments to the effect that donating to AMF is a convenient smokescreen, keeping us blissfully ignorant of the postcolonial mechanisms which are the true root cause of disvalue for the people AMF is (ostensibly) helping, and that if we were real altruists we would be anti-imperialism activists. These comments, at whatever level of quality we find them, often point at this very claim.

Those of us who have taken substantial pay cuts for (ostensibly) altruistic purposes may simply be trading cash for intra-community status. This observation can justify arguments that we're not genuine altruists (whatever that is), but it does not on its own point to a bid at maintaining privilege.

Obviously Joe Ineffective Philanthropy Schmoe, who donates to the opera for tax breaks and PR, can be accused of using the polite fiction of philanthropy to shore up their privilege. If Joe is laundering money for the paperclip mafia by starting an alignment foundation (via some inscrutable mechanism), this accusation only increases. 

But such a line of attack seems orthogonal to actually existing effective altruism. 

Moreover, I may be right about the orthogonality but wrong about the emotional substructure. The emotional substructure may not make 100% sense; it may be a voice that assimilates guilt about privilege into some monologue about how you're falling short of Franciscan altruism, or of some self-sacrifice-emphasizing notion of altruism. This, however, is I think a mistake, because having an emotional substructure of guilt may not relate at all to the merits of Franciscan altruism, or to the mechanisms by which philanthropy fails to think systemically, and so on.

My two cents: guilt is a reasonable mechanism for drawing one's attention to the stakes and opportunities of one's privilege, but it is not "emotionally competitive" with responsibility. You, a member of the species that beat smallpox, are plausibly alive at a hinge of history. Who knows what levers are lying around under your nose. You, in a veil-of-ignorance sense, would prefer people of your privilege to at least try. There's a line in an old Jewish book about not being free to abandon it, nor obligated to complete it (where "it" is presumably the brokenness of the world, etc.), which is emotionally very effective for me.

Guilt seems like it wants to emphasize my feelings about the unjust, from a cosmopolitan point of view, situation we find ourselves in. My subjective state, my inner monologue. It seems indifferent to arguments that making myself suffer as much as the people I want to help may not help those people as much as possible. In other words, it is negative. Responsibility is positive, it asks "what actions can you take?" This is at least a reasonable place to start. 

I think the correct steelmanning of dotsam's point is:

1. As a member of <group>, I have a great deal of privilege.
2. In order to remove this privilege, we need sweeping societal changes that upend the current power structures.
3. EA does not focus on upending current power structures in a radical way.
4. EA makes me feel less guilty about my privilege despite this.
5. Therefore, EA allows me to maintain my privilege: it relieves my guilt through actions that don't actually require overthrowing current power structures, i.e., the actions that would affect me personally the most.

Under this set of assumptions, most people find ways to maintain their privilege not by actively reinforcing power structures, but by avoiding the moral imperative to overthrow them. EAs are at least slightly more principled, because their price for this is something like "Donate 10% of your income" instead of "Attend a protest", "Sign a petition", or "Decide that you're inherently worthy of what you have and privilege doesn't exist."

Personally, I don't agree with this chain of logic because I disagree with Point 2 above, but I think the chain of logic holds if you agree with points 1 and 2. (And I suppose you also need to add the assumptions that one can tractably work on upending these power structures, and that doing so won't cause more harm than good.)
 

What's the problem with enlightened self-interest? :)

This is a list of EA biases to be aware of and account for.
