The New Atlantis (an American religiously conservative magazine about science and ethics) has an article out about Effective Altruism. It endorses some parts of EA but is critical of EA as a whole. Main points (although the article is more nuanced than this summary can convey):

  • EA charities, at least the global health and development ones, do good
  • EA is closely linked to cultish elements of the rationalist community
  • The "pencil problem": in complex systems, it's hard to centrally plan
  • Emotional appeals are a functioning planning mechanism for the world of charity
  • EA is opposed to emotional appeals
  • EA doesn't include a role for friendship and personal relationships, but it should
  • The "paper towel problem": EA  doesn't include a role for maintaing social norms
  • EAs are more driven by wanting to show off their intellectual firepower than to help others
  • EAs don't follow through with their wilder claims
  • The author instead recommends a sort of virtue-ethics-ish approach to doing good

I have no affiliation with the people who produced this article; I came across it and thought it seemed interesting and better-informed than many of the other critiques of EA discussed here, although I don't agree with all of the author's points.

Comments

I agree with the comments that this post is better-informed than many EA critiques. Lots of the factual content is at least roughly correct, although I disagree with many of the judgement calls (e.g. how intertwined EA and rationality are in practice).

As a piece of criticism, though, I don't feel moved by it. (edit: to be clear this is not a criticism of making a linkpost here! I think it's good to be aware of this stuff. I just want to be frank about my take on it.)

The article includes a whole series of things that sound superficially (to my imagined EA-unaware reader) significant, but it just drops them in and shows seemingly no interest in following up on them:

  • wait, is it really a cult or what? what would the implications of that be?
  • those rationality workshops sound expensive, is that a scam or something?
  • one of its promoters did a multi-billion dollar fraud? we're just going to move on from that with no further comment?
  • wait why do they have two castles
  • sex redistribution for incels??
  • is it bad that they tried to fire Sam Altman?
  • why are we talking about toilet paper and none of these things

Overall it feels like the author had a checklist of points to hit but doesn't really have much to say about them, preferring instead to remain in a purely abstract critique of the foundation of what it is to be good to another person, to which a lot of the other content... doesn't really seem relevant. At the end the article seems decidedly confused about whether contributing to effective altruism is good or bad:

We should celebrate this work, and if more is to come, celebrate it too. But the rationalists err in seeing this all as a useful occasion to atone for our cognitive sins. And the effective altruists fail in urging us to see this as the whole story, or even the main act.

ok, but like, what is the import of that failure? the work is to be celebrated but it doesn't matter that much actually? should we, the virtuous, who consider our fellow person, donate to bednets or what?

I had a similar question to yours about what the essay is trying to say about GiveWell-style effective altruism. My interpretation, which could be wrong, is that the author is saying that GiveWell-style EA is a good thing, but not a moral obligation. I responded in a blog post (not aimed at EAs, but at people who may share the same hesitancies as the author), "How do you know how to save a drowning child across the world?".

Following from this, I think criticisms of effective altruism often end up with a conclusion that is too far in the other direction: that we only have moral obligations to people in our immediate circles and thus should focus on parochial charity. That conclusion does not leave room for moral concern, and yes, even obligation, on the part of the global rich toward people living in poverty far from us.

I don't think any argument that focuses solely on helping within communities that we are already in — communities that are, even in the US alone, highly segregated by income, and globally vastly more unequal — adequately addresses the moral ill that is global poverty.

I argue that people who might share the concerns of the author (as I understood them) about EA might want to take the option of donating to direct cash transfers or effective community-based organizations in low- and middle-income countries. 

I found this to be one of the better criticisms of EA that I've read. I appreciate that the tone wasn't highly aggressive or strident, and that it mentioned the virtue of Julia Wise/Scott Alexander-type arguments of "let's keep donating money for bednets."

I think this is good. I think it would be valuable for most people here to read a criticism from an uncommon angle, even if you disagree with the author’s argument. Thank you for sharing it here!
