Whilst Googling around for something entirely unrelated, I stumbled on a discussion paper published in January 2023 about Effective Altruism, which argues that Global Health & Wellbeing is basically a facade to draw people into the far more controversial core of longtermism. I couldn't find it posted elsewhere on the forum, so I'll try to summarise it here.

The paper argues that there is a big distinction between what it calls public-facing EA and core EA. The former cares about global health and wellbeing (GH&W), whereas the latter cares about x-risks, animal welfare and "helping elites get advanced degrees" (which I'll just refer to as core topics). There are several more distinctions between public-facing EA and core EA, e.g. about impartiality and the importance of evidence and reason. The author argues, based on quotes from a variety of posts by influential people within EA, that for the core audience GH&W is just a facade, there so that EA is perceived as 'good' by the broader public, whilst core members work on much more controversial topics such as transhumanism that go against many of the principles put forward by GH&W research and positions. The author seems to claim that this was done on purpose and that GH&W merely exists as a method to "convert more recruits" to the controversial transhumanist core that EA is today. This substantial distinction between GH&W and the core topics causes an identity crisis between people who genuinely believe that EA is about GH&W and people who have been convinced of the core topics. The author says that these distinctions have always existed, but have been purposely hidden behind nice-sounding GH&W topics by a few core members (such as Yudkowsky, Alexander, Todd, Ord and MacAskill), because a transhumanist agenda would be too controversial for the public, even though, on the author's account, it has been EA's goal all along.

To quote the final paragraph of the paper:

The ‘EA’ that academics write about is a mirage, albeit one invoked as shorthand for a very real phenomenon, i.e., the elevation of RCTs and quantitative evaluation methods in the aid and development sector. [...] Rather, my point is that these articles and the arguments they make—sophisticated and valuable as they are—are not about EA: they are about the Singer-solution to global poverty, effective giving, and about the role of RCTs and quantitative evaluation methods in development practice. EA is an entirely different project, and the magnitude and implications of that project cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front-cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.

Comments

I skimmed through the article; thanks for sharing!

Some quick thoughts:

community-members are fully aware that EA is not actually an open-ended question but a set of conclusions and specific cause areas

  • The cited evidence here is one user claiming this is the case; I think they are wrong. For example, if there were a dental hygiene intervention that could help, let's say, a hundred million individuals and government / other philanthropic aid were not addressing this, I would expect a CE-incubated charity to jump on it immediately.
    • There are other places where the author makes what I would consider sweeping generalizations or erroneous inferences. For instance:
      • "...given the high level of control leading organizations like the Centre for Effective Altruism (CEA) exercise over how EA is presented to outsiders" — The evidence cited here is mostly all the guides that CEA has made, but I don't see how this translates to "high level of control." EAs and EA organizations don't have to adhere to what CEA suggests. 
      • "The general consensus seems to be that re-emphasizing a norm of donating to global poverty and animal welfare charities provides reputational benefits..." — upvotes to a comment ≠ general consensus. 
  • Table 1, especially the Cause neutrality section, seems to draw a dividing line where one doesn't exist.
  • The author acknowledges in the Methodology section that they didn't participate in EA events or groups and mainly used internet forums to guide their qualitative study. I think this is the critical drawback of this study. Some of the most exciting things happen in EA groups and conferences, and I think the conclusion presented would be vastly different if the qualitative study included this data point.
  • I don't know what convinces the article's author to imply that there is some highly coordinated approach to funnel people into the "real parts of EA." If this were true (and this is my tongue-in-cheek remark), I would suggest these core people not spend >50% of the money on global health, as there could be cheaper ways of maintaining this supposed illusion.

    Overall, I like the background research done by the author, but I think the author's takeaways are inaccurate and seem too forced. At least to me, the conclusion is reminiscent of the discourse around conspiracies such as the deep state or the "plandemic," where there is always a secret group, a "they," advancing their agenda while puppeteering tens of thousands of others. 

    Much more straightforward explanations exist, which aren't entertained in this study.

    EA is more centralized than most other movements, and it would be ideal to have several big donors with different priorities and worldviews. However, EA is also more functionally diverse and consists of some ten thousand folks (and growing), each of whom is a stakeholder in this endeavor and will collectively define the movement's future.

I think the strategic ambiguity that the paper identifies is inherent to EA. The central concept of EA is so broad - "maximize the good using your limited resources" - that it can be combined with different assumptions to reach vastly different conclusions. For example, if you add assumptions like "influencing the long-term future is intractable and/or not valuable", you might reach the conclusion that the best thing to do with your limited resources is to mitigate global poverty through GiveWell-recommended charities or promoting economic growth. But if you tack on assumptions like "influencing the long-term future is tractable and paramount" and "the best way to improve the future is to reduce x-risk", then you get the x-risk and AI safety agenda.

This makes it challenging and often awkward to talk about what EA focuses on and why. But it's important to avoid describing EA in a way that implies it only supports either GH&W or the longtermist agenda. The paper cites this section of the EA Hub guide for EA groups, which addresses this pitfall.

That’s a pretty impressive and thorough piece of research, regardless of whether you agree with its conclusions. I think one of its central points — that x-risk/longtermism has always been a core part of the movement — is correct. Some recent critiques have overemphasised the degree to which EA has shifted toward these areas in the last few years. It was always, if not front and centre, ‘hiding in plain sight’. And there was criticism of EA for focusing on x-risk from very early on (though it was mostly drowned out by criticisms of EA’s global health work, which, along with some of the farmed animal welfare work being done, now seems less controversial).

For someone who disagrees empirically with estimates of existential risk, or who holds a person-affecting view of population ethics, the claim that EA is a front for longtermism is a legitimate criticism to make. Even more resources could be directed toward global health if it weren’t for these other cause areas. A bit less reasonably, people who hold non-utilitarian beliefs might even suspect that EA is just a way of rebranding ‘total utilitarianism’ (with the ‘total’ part slowly becoming more prominent over time).

At the same time, EAs still do a lot in the global health space (where a majority of EA funding is still directed), so the movement is in a sense being condemned because it has actually noticed these problems (see the Copenhagen Interpretation of Ethics).

This isn’t to say that the paper itself is criticising EA (it seems to be more of a qualitative study of the movement).

I don't know, but this critique feels about five years too late. There was a time when the focus of many within EA on longtermist issues wasn't as upfront, but there's been a sustained effort to be more open about this, and anyone who's done the intro course will know that it is a big focus of EA.

I'd love to know if anyone thinks there are parts of this critique that hold up today. There very well might be, as I've only read the summary above and not the original paper.

I think it holds up. Due to similar concerns, I wrote a highly upvoted post a month ago on organisations being transparent about their scope.

As far as I understand, the paper doesn't disagree with this and an explanation for it is given in the conclusion:

Communication strategies such as the ‘funnel model’ have facilitated the enduring perception amongst the broader public, academics and journalists that ‘EA’ is synonymous with ‘public-facing EA’. As a result, many people are confused by EA’s seemingly sudden shift toward ‘longtermism’, particularly AI/x-risk; however, this ‘shift’ merely represents a shift in EA’s communication strategy to more openly present the movement’s core aims.

Interesting. Seems from my perspective to be a shift towards AI, followed by a delayed update on EA’s new position, followed by further shifts towards AI.

FYI this paper seems to have a really good list of EA Organisations in it. This may well come in handy!

I put the whole list in a spreadsheet for ease of use, in case anyone wants to access it in a way that is a bit more editable than a PDF: https://docs.google.com/spreadsheets/d/1KDcDVpTKylk3qP3CqLFSscWmH01AkW4LLNwjOOcWpF8/edit?usp=sharing

I also thought it was a fairly good (and concise) history of EA. I've been reading EA material for a few years now, but I haven't seen such a clear tracing of its history before.
