
Cross-posting from my substack, where I ask: What would an "Alt-EA" beneficentrist movement look like?

Three Kinds of Critics

Some people just aren’t very altruistic, and so may quietly dislike Effective Altruism for promoting values that conflict with their interests. (It’s easy to see how wealthy academics might be better off with a moral ideology that prioritizes verbiage over material outcomes, for example.) One doesn’t often hear this perspective explicitly voiced, but—human nature being what it is—I expect it must be out there.

Others may be broadly enthusiastic about the idea of Effective Altruism, but have some concerns about the current state of the movement as it actually stands. From here one might offer friendly/internal critiques of EA: “Here’s how you might do better by your own lights!” And my sense is that good-faith critiques of this sort tend to get a very positive reception on the EA forum. (Indeed, there’s now a $100k incentive for criticism of EA and its current priorities.)

Finally, a third class of critics claims to agree with the beneficent values of effective altruism, but regard the actual EA movement as hopelessly misguided, ineffective, cultish, a mere smokescreen for political complacency, or what have you. (These sorts often use sneer quotes to speak of the “Effective” “Altruism” movement.) I find this final group more puzzling. As Jeff McMahan has noted, “the philosophical critics of effective altruism tend to express their objections in a mocking and disdainful manner… suggestive of bad faith.”

One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency. No need to really do anything beneficent so long as you give it lip-service, and insist that the rest is a “collective responsibility” best left to the state to take care of (just don’t hold your breath…). I feel like these critics are discouraging real-life beneficence, and thereby doing real harm.

Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way. If a large number of genuinely beneficent people believed that actually-existing-EA was going about this all wrong, I’m surprised that they haven’t set up an alternative movement that better pursues these goals while avoiding the shortcomings they associate with traditional EA. (Perhaps they’d prefer different branding. That’s fine; I’m not concerned here with the label, but with the underlying values and ideas.)

What might an Alt-EA movement look like?

I’d genuinely love to hear from critics what they think a better alternative might look like. (I think it’s now widely acknowledged that early EA was too narrowly focused on doing good with high certainty—as evidenced through RCTs or the like—perhaps in reaction to the aid skepticism that seemed like the major barrier to uptake at the time. But EA is now much more open to diverse approaches and uncertain prospects so long as a decent case can be made for their expected value being high.)

Maybe the alternative would involve a greater political focus, with local community organizing being a major cause priority (as an implicit form of community-building)? Maybe it would avoid utilitarian/cosmopolitan rhetoric, and focus more on meeting the median voter where they are—with appeals to more local and emotive values such as solidarity—with an eye to encouraging many small nudges towards a better world? Maybe it would be more optimistic about the likely outcomes of a political “revolution”, and less optimistic about technocratic interventions? I’m not too sure what the epistemic basis for any of this would be, but perhaps one could lean hard into “self-effacingness” and insist that globally better results can be achieved by not aiming too directly at this goal, along with being guided more by hope than by evidence?

Might it then turn out that already-existing popular political movements can be viewed as alternative (albeit highly indirect) implementations of beneficentrism after all? I’m dubious—it seems awfully fishy to just insist that one’s favoured form of not carefully aiming at the general good should somehow be expected to actually have the effect of best promoting the general good. While it clearly wouldn’t be optimal to make every single decision by appeal to explicit cost-benefit analysis, it seems crazily implausible that (in realistic circumstances) it somehow maximizes expected utility to never employ direct utilitarian reasoning. It’s notable that the utilitarian philosophers who have thought most about this issue end up advocating for a multi-level approach (using explicit utilitarian reasoning in unusual or unexpected high-stakes situations—e.g. pandemic policy—and during “calm, reflective moments” to help guide our choice of everyday heuristics, strategies, and virtues, for example).

But I’d be curious if others—especially those who sneer at actually-existing EA—are more inclined to defend the optimality of existing political movements. Or if they have an entirely different conception of what Alt-EA should look like?

Moral Sincerity

One obvious possibility is that those hostile to EA aren’t truly sympathetic to beneficentrism at all, and really just have worse values. I’d be happy to see that hypothesis refuted. I think it’d be especially exciting to see an entirely new Alt-EA ecosystem spring up around those other beneficentrists who sincerely pursue the general good in a different way, or with a different rhetorical/ideological framing, that maybe appeals better to a different audience than traditional EA does. (So long as this alternative movement has good epistemics and doesn’t seem likely to be positively counterproductive and bad for the world, that is!)

Given the risk of paying empty lip-service to good values, I think it’s worth making the challenge explicit: if not EA, how do you move beyond cheap talk and take your values seriously—promoting them in a scope-sensitive, goal-directed, outcome-oriented way?

I find it so frustrating that the hostile critics don’t even seem to be interested in this question! Whatever your values are, there are so many ways that you could more effectively promote them through donations, direct work, and advocacy (that is explicitly directed towards encouraging more donations and direct work for the best causes). So even if EA is somehow misguided, I think it could still do the world a great service by encouraging more people to actually (and effectively) do more good: to achieve the EA aim, even if they think that the existing EA movement is (for whatever reason) failing in its ambitions.

I really think the great enemy here is not competing values or approaches so much as failing to act (sufficiently) on values at all. Of course, we’re all driven by a variety of motivations, many no doubt less lofty than we would normally like to think. The extent to which our professed values are “sincere” is probably best understood as a matter of degree, rather than a sharp binary distinction between sincere akratics (who don’t always manage to live up to their ambitious values) and outright hypocrites (who don’t genuinely hold the professed values at all). No one with ambitious values always manages to live up to them, but I wouldn’t want fear of being labelled a “hypocrite” to disincentivize having ambitious values at all. (There are worse things in the world than hypocrisy!)

So I’m trying to find a way to frame my point without using the H-word—I grant that we’re all a messy mix of motivations, heavily influenced by the contingent circumstances in which we find ourselves. And, let’s face it, life can be hard—even those in privileged material circumstances aren’t always in a mental space to be able to do more than just get through the day. I want to explicitly grant all that.

But if some social movements or moral ideologies do more to bring our actions more in line with our (ambitious) expressed values, then that seems good, important, and worth encouraging. Good social norms can make it much easier for us to do good things. And it seems to me that EA is near unique in this regard. It just seems remarkably rare for people to treat their values seriously in the way that EA invites us to.

And so, while I guess non-EAs wouldn’t be thrilled to be charged with failing to take their values seriously, and I certainly don’t mean to be gratuitously offensive, I hope that pointing out this disturbingly common disconnect might help to make it less common. It would be great if, in order to avoid this objection, more non-EAs worked to make their own groups and practices more morally ambitious and goal-directed. It would be great to see more embrace an ethos that was oriented more towards promoting good outcomes and less towards expressive symbolism. It would be great, in short, for others to achieve what EA at least tries to achieve.






(So long as this alternative movement has good epistemics and doesn’t seem likely to be positively counterproductive and bad for the world, that is!)

I think this parenthetical misses what is actually hard about forming solidarity out of good intentions, which is that disagreements may run so deep that it feels mutually negative-sum, or like the alt team necessarily has to lose in order for us to win. I'm not saying it's definitely like this, but it's kind of worst-case/securitarian thinking to prepare your model for that kind of scenario.

I have some anecdotes. 

  • A guy at my last job bounced off EA quickly because he didn't like a conversation he had with one of us about mental health. He felt that mental health was obviously the number one cause area, and thought the fact that for us it's only vaguely in the top 10-15, if that, was a signal that we were totally borked. I was gravely disappointed that he didn't reason more like "the reason they're not serious about mental health is that they haven't met me yet, I'd better post my arguments on the forum" or "wow, someone should really do something about that, it might as well be me" and found an org. I encouraged him to do both of these things, but that wasn't his mindset at all. I think this is what missed opportunities for alt-EA look like: people have their pet criticisms but fail to take themselves seriously.
  • I was talking with one of my oldest friends, not an EA whatsoever at this point (she eventually grokked the idea that 1 in 900 mosquito nets saves a life and signed up for the newsletter, and is still far from card-carrying, but this was prior to any of that anyway), about the popularity of climate change. Few beliefs seem more conventional right now than "climate change really bad", and I asked her why, anecdotally, every single person who's told me they don't want to have kids because of climate change (not because of the broader GCR conversation, but strictly because of climate change) was failing to do energy science or related engineering; heck, I'd even settle for policy theories of change or serious activism. She said (and this is a point for intellectual diversity, because I don't think I would've encountered it if I only talked to EAs): "no, that's a militaristic 'draft' mindset. If everyone has to fight, then what is there left to fight for?", and broadly defended people's entitlement to believe there are problems that they're not personally fixing. This, plausibly, explains a cluster of the memespace around what we interpret as missed opportunities to start alt-EA movements! Is the mentality of observing broken stuff and deciding to fix it unusually soldiery? Could we slip some cash to a viral marketing expert to instill that mentality in people, without associating it with EA? Is this plausibly an actual crux separating the alt-EAs we'd like to see from actually-existing critics?

One more comment:

Others may be broadly enthusiastic about the idea of Effective Altruism, but have some concerns about the current state of the movement as it actually stands. From here one might offer friendly/internal critiques of EA: “Here’s how you might do better by your own lights!” And my sense is that good-faith critiques of this sort tend to get a very positive reception on the EA forum.

I think EA has "a Borg property" (i.e., like the entity/civilization from Star Trek that could assimilate anything), which expresses a fear of homogeny that some critics have called an affectation from the western end of the Cold War. I think EA is nimble, a minimal set of premises that admits lots of different stuff and adapts, and I think it is genuine about its enjoyment of criticism. But this means that it literally eats everyone above a certain quality bar (which is good). There's an old saying, "Who exactly is a rationalist? Simply someone who disagrees with Eliezer Yudkowsky", which I think sums up a lot about our culture. The difficult thing about separating a critic (someone who helps you find a path through action space that deletes their complaint) from a complainer (someone who's the opposite of that) is that, while you have to protect your attention from complainers to a nontrivial degree, you may accidentally block a high-quality adversary, because what seems like a complaint may actually be a criticism that's just really, really hard to address, and you can't tell the difference. Trashing your progress and going back to the drawing board is painful; we should expect cognitive biases to make it feel even more unpleasant, or to tip the scale against doing it! "So you're saying I have to throw out bourgeois economics and arm the malaria patients so they can fight imperialism?" may look to you like a hostile interaction while also being the critic's earnest attempt to help you be more morally correct with respect to their empirical beliefs. We have, as a tradition, heuristics for honing our sense of whose epistemics we trust, whose beliefs are most true, and so on, but they're not infallible. This only gets worse when you remember that if you're serious about intellectual diversity, you have to actually tolerate very different norms: we can't stay in our comfort zone, discourse-norms-wise, even if we think our norms of discourse are superior.

TLDR: a tepid defense of admitting more things that seem like complaints into the Overton window of proper criticisms.

I found this comment really interesting and helpful.  Thank you!

Viewed in this light, the absence of any competing explicitly beneficentrist movements is striking. EA seems to be the only game in town for those who are practically concerned to promote the general good in a serious, scope-sensitive, goal-directed kind of way.

Before EA, I think there were at least two such movements:

  1. a particular subset of the animal welfare movement that cared about effectiveness, e.g., focusing on factory farming over other animal welfare issues explicitly because it's the biggest source of harm
  2. AI safety

Both are now broadly considered to be part of the EA movement.

Also cost-effectiveness analyses in general, of which only a subset is in EA.

One major concern I have with the actually-existing wholesale criticisms of EA is that they tend to reinforce a kind of moral complacency.


I agree this is common and it was what I most commonly confronted in college at Cornell. Oh, I should actually just be focused on living sustainably, not being racist, and participating in democracy, and this will be an optimally ethical life? Convenient if true!

I have several friends who are members of Direct Action Everywhere. I think DXE, as I'm exposed to it, does present the sort of alt-EA that you are asking about. I think that many DXE members could non-hypocritically comment that EA is complacent, or that EAs are generally more complacent people than themselves.

While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic. 

I do think that you can interpret DXE as a general-good, "beneficentrist" org, given that if you are not longtermism-pilled it is IMO reasonable to say that animal welfare is the highest moral priority, and I think this is their actual belief. It's an org for people to do the most important thing as they see it, not for them to just do a thing.

RE: Complacency:

While DXE is not focused on the general good (per se), anecdotally it seems like you can persuade DXE folks of extreme conclusions about the importance of AI safety, at least if they are also autistic. 

The problem is that you can also convince them about many many things.


Unfortunately, an issue with orgs that draw on ideological tones, like "social movement" organizations, is almost constant churn and doubt over probably-well-understood ideas, like resource allocation, and over internal institutions, like long-term planning, that other orgs solved long ago.

On the other hand, they constantly indulge things that seem objectively bad, like ignoring evidence against theories of change, and spending enormous time on politics and abstract objects that seem unproductive, and even overshadow EA's excesses.

It may be prejudice, but having been inside and seen several organizations of various classes, this looks overdetermined for dysfunction once these orgs reach any scale.

Again, at the risk of bias, it's hard not to indulge my personal suspicion that these intensely chaotic environments select for self-replication and media attention, with the following results:

  1. Why these orgs exist, or at least why we hear about these particular orgs, is their ability to be aggressive
  2. Their ability to focus and gather resources is limited
  3. The aggressive orgs are selected for over more functional, slower orgs, crippling the ecosystem for strong social organizations
  4. The leaders and cultures arising from them are suspect, culturally and epistemically



Sorry, I am not sure I follow this post. I am not really commenting on how much DXE should grow; I'm not involved. However, if I were looking for those "moral optimizers" outside of EA that are surprisingly hard to find, I think that one place you can find them is DXE. It's an existence proof: there are, IMO, sincere critics of the sort the OP discusses.

If I were going to discuss whether DXE should grow, I would just try to list what they have accomplished and do some estimates of the costs. Heuristics about types of organization, the quality of the cultures involved, etc., would be of lower interest to me.

For some evidence of this, here is what one of the founders of Extinction Rebellion (Roger Hallam, who got cancelled or something, I don't know) wrote about infighting:

You say this to them, their eyes glaze over. They don't understand what you mean. Because they have no life experience of revolution. They have spent their comfortable lives in offices in front of computers, on social media, they can't conceive of any time that will be different than this. In practical terms, it means that they will never support anything which upsets those in power. 


The radical left are those people who say great stuff, but are totally hopeless at doing anything about it. They call for climate justice, they are into ‘intersectionality’, they are pro identity politics. But the main thing is not what they say they want. The main thing is they have no idea about how to make it happen. In fact, everything they actually do stops change from happening. In actual fact, they are not radical at all. They are reactionary.


The biggest disaster of the last 30 years has been the adoption of horizontalist dogma. The notion that you should not have leaders, hierarchies or clear structures. Indeed, for many years, I believed much of this ideology. But practical experience shows it to be nonsense. This is because it imposes moral ideas on timeless truths about how people make decisions together. As such, it prevents movements from reaching a fraction of their political potential.

Again, this is the hard-core former leader of XR (who got cancelled himself at one point), picking very basic fights over ideology and over primitive decisions like governance and management (and I think he got deposed or something because of it, but it's all a big soup).

I'm sure there's every permutation of this "left" vs "right" fighting going on constantly.

The point is that I'm skeptical that these orgs  and cultures are a positive example for anything besides self-replication.

DXE Bay is not very decentralized. It's run by the five people in 'Core Leadership'. The leadership is elected democratically, though there is a bit of complexity since Wayne is influential but not formally part of the leadership.

Leadership being replaced over time is not something to lament. I would strongly prefer more uhhhh 'churn' in EA's leadership. I endorse the current leadership quite a bit and strongly prefer that several previous 'Core' members lost their elections.

note: I haven't been very involved in DXE since I left California. It's really quite concentrated in the Bay.

The biggest disaster of the last 30 years has been the adoption of horizontalist dogma. The notion that you should not have leaders, hierarchies or clear structures.


I think this is a fairly common/prominent concern in left circles e.g. The Tyranny of Structurelessness.

I wouldn't really consider DXE particularly horizontalist? Paging @sapphire

I'm also not sure in what sense these quotes would be evidence of anything about DXE

A few thoughts:

I think that while there may be no competing movements with the community aspect of EA, there are lots of individuals (and orgs) out there who do charitable giving in an impact-driven/rational way, or who take well-paid positions with a view to using the income for good without branding it earning-to-give. Some might do this quietly. Some of these individuals might well agree with core EA ideas, and may have learnt from books like Doing Good Better. You can do all of this without being a movement. If a critic thinks EA is a cult, why would they respond by forming a competing cult?

EA has also changed over time; it looks very different today than it did 5 years ago. It may be a good exercise to look at whether the criticisms that people formulate of EA today would also have applied to EA 5 years ago. A good Alt-EA movement might look like whatever EA was before longtermism and AI x-risk seemingly overpowered other areas of concern. How would the 2017 EA movement compete with the 2022 EA movement?

Thirdly, it's pretty difficult to compete since EA hit the jackpot. In areas like hiring talent or funding students, there are limited resources that communities or cause areas compete over. If the EA community has this much more money, it sucks the air from adjacent areas like near-term AI safety or AI ethics. Why would you work on alignment of not-superintelligent but widely deployed ML if you can make three times as much training cool large language models next door? And for studentship funding, being EA-aligned will make an enormous difference to your funding prospects compared to other students who might work on the same thing but don't go to EA Global each year. I think this is where a lot of frustration originates.

Finally, it’s very common to point out that EA is open to good-faith criticism. There is indeed often very polite and thoughtful engagement on this forum, but I am not sure how easy it is to actually make people update their pre-existing beliefs on specific points.

I've read one alternative approach that is well written and made in good faith: Bruce Wydick's book "Shrewd Samaritan".

It's a Christian perspective on doing good, and arrives at many conclusions that are similar to effective altruism. The main difference is an emphasis on "flourishing" in a more holistic way than what is typically done by a narrowly-focused effective charity like AMF. Wydick relates this to the Hebrew concept of Shalom, that is, holistic peace and wellbeing and blessing.

In practical terms, this means that Wydick more strongly (compared to, say, GiveWell) recommends interventions that focus on more than one aspect of wellbeing. For example, child sponsorships or graduation approaches, where poor people get an asset (cash or a cow or similar) plus the ability to save (e.g., a bank account) plus training.

I believe that these approaches fare pretty well when evaluated, and indeed there are some RCTs evaluating them. These programs are more complex to evaluate, however, than programs that do one thing, like distributing bednets. That said, the rationale that "cash + saving + training > cash only" is intuitive to me, and so this might be an area where GiveWell/EA is a bit biased toward stuff that is more easily measurable.

A bit more generally: I think we can look at religions as a set of Alt-EA movements.

Most religions have strong prescriptions and incentives for their members to do good. Many of them also advocate for donating a part of one's income.

All these religions also have members that think hard about how to do the most good in a cost-effective way. Here, "good" follows the definition of the religion and might include aspects such as bringing people closer to God. However, it is usually correlated with EA notions of utility or wellbeing or freedom from suffering. And indeed one can find faith-based organizations with large positive effects: For example, AMF could not distribute its bednets without local partner organizations, and in that list are many faith-based ones like IMA or World Vision.

I'm not claiming that the effect of religion overall is robustly positive -- that's a very difficult question to answer -- but that EA-like intentions, and sometimes actions, can be found in many religious people and organizations.

Yeah, I had wondered about this, as certain religious subcommunities seem the main precedents for moral ambitiousness.  But of course there's also an awful lot of parochialism and explicit demonization of outgroups inherent in many religious communities. (Evangelical Christianity in the US does not seem accurately characterized as driven by universal beneficence, for example!)  Given the immense size of major religions, I'd be wary of attributing beneficentrism to religious institutions as a whole on the basis of what "can be found" amongst some (arguably non-representative) members.

But yes, I think at least some highly-specified religious sub-communities could be a good place to look here. (And I'd guess that's precisely where "EA for Christians" outreach is most successful.)

I came to the basic idea of EA, long before I found the movement, from a Christian perspective. So I think there's certainly the basis for it in a lot of religions. But I think at that point I was more devout than most Christians, even most of those who go to church every Sunday. This is probably a key factor.

 I'm not sure how seriously most people take any of their goals, even the selfish ones. Lack of commitment is a hell of a thing, and even more so when mental effort and uncertainty are required.  It kind of astounds me how often people say they want something and then don't follow through at all on even minimal efforts. A friend wanted a job in my field, so I introduced him to a connection in his area. He never met with her. Other friends have run for office, but then not bothered talking to any voters. A relative repeats the same financial mistakes over and over and over again despite my attempts to help her with financial planning and her swearing up and down each time that next time will be different. 

And all of these personal goals are a lot more straightforward to sort out than "how do I do the most good I can  do?". I could figure out a plan for all of these examples in an afternoon at most, and after years of effort I still don't know how to be a maximally effective altruist. Most people, when they can't round uncertainty off to "yes" or "no", seem to have this idea that it's uncertain so all actions are the same. I recently had a conversation with an acquaintance who accused me of "only thinking in black and white" because I believe with a high degree of confidence that donating to AMF is a better choice than randomly paying for groceries for the person behind you in line, "because maybe they need it and maybe the kindness will ripple through the world and have other effects".  And several other people witnessing this debate agreed with him!


So in addition to altruism, I think key personality traits that would be necessary for someone to be even an alt-EA are an abnormally high level of goal-commitment, and an unusually high level of comfort making decisions under uncertainty. 

Overall, would you recommend reading the book?

Whether you'd enjoy the book and benefit from it depends strongly on your background, I think.

To me, this was a good read because I learned about a broad range of interventions for helping people -- graduation programs and child sponsorships being probably the most notable examples. The book really changed my mind on child sponsorships. I had thought of them as a rather high-overhead intervention that was popular because it appeals to emotion to get donors' money... but now I think they can be cost-effective when done well.

That said, if your goal is to learn about various effective interventions (beyond the few that GiveWell writes about), then a good and free resource would be The Life You Can Save book.

The second reason to recommend the book is its good discussion on "flourishing", that is, a holistic view of health, wellbeing, and prosperity. Finally, a third reason to read it is to get a Christian perspective on the subject, or give the book to Christian friends.

Thank you for this article, full of nuance. 

I think what makes effective altruism unique is that it is trying without preconceptions to work out how to do the most good.  Beneficentric people may help neighbours, or civic groups, or charities, or religions, or pressure groups, or political parties, but these different approaches are not ranked by effectiveness.  

There have always been some saints, but it is a new idea to try to be an impartial moral maximiser, working through an information-hungry social movement.