I came across a critique of EA in an academic article recently[1], and I didn't see any reference to it on the EA Forum. The very short, simplified summary is something like "You aren't calculating the marginal utility right."

Like many critiques, it seems to target the way things are currently done rather than any core aspect of EA's ideas. There is a focus on the donation side of EA and a neglect of other aspects (such as career planning). I also only skimmed the paper rather than reading it closely, so it is possible that I am missing something. But this is the part that strikes me as the core idea/argument:

We’re looking to compare the expected marginal rates of return on additional donations of each charity, and keep only those charities that have the highest expected return. And in practice, effective altruists have followed a very simple heuristic for measuring expected rates of return, which we will refer to as ‘myopic marginalism’. Here is one way. Start by calculating the past rate of return on donations. This is easy enough: simply divide the total size of the benefit generated by some intervention by the total cost of the programme. This first measure is a bit crude, since it only tells you about average return on donation, not the return on the last dollar, but it is used by EAs and it does tell you something (see MacAskill 2015; Open Philanthropy Project 2017; GiveWell 2020b; Giving What We Can 2021; and especially GiveWell’s 2021 explicit cost-effectiveness calculations in spreadsheets). A more sophisticated measure becomes possible if you have a time-series plotting the evolution of the programme’s costs and benefits: instead of looking at total costs and benefits, look at the ratio of the most recent change in size of the benefits to the change in the costs of the programme. This measure does give you the marginal return on the last dollar spent (see Budolfson and Spears 2019 for further discussion). It is then predicted that the rate of return on the next dollar you donate to some organization will be very similar to the rate of return on the last dollar, up to however much more room the charity has for additional funding. With this information in hand, it is child’s play to identify the elite group of charities that will maximize the impact of the next dollar you donate (up to however much additional room for funding each charity has). And so, in just two easy steps, we’ve winnowed the space of charities worth considering donating to to just a handful, greatly simplifying the decision problems of donors.

There are many steps in the decision procedure we’ve described that one might take issue with. Critics of EA have, for example, criticized the optimizing logic of EA, its over-reliance and over-insistence on RCTs, and its use of cost-effectiveness analysis, which makes no room for permissible partiality and is claimed to overweigh the value of a statistical life. We take issue with none of this in the present paper, and will focus only on the inadequacy of myopic marginalism. As we will now see, myopic marginalism only yields accurate estimates of cost-effectiveness when the benefits of an intervention are continuous in its scale, because only then can we use past returns as a reliable guide to future returns.
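To make the mechanics concrete, here is a minimal sketch of the two estimators the passage describes and of where the extrapolation step breaks down. Everything in it is my own illustration, not the paper's: the programmes, the benefit numbers, and the threshold story are all hypothetical.

```python
# A minimal sketch (my own illustration, not from the paper) of the two
# estimators described in the quoted passage, and of where the
# extrapolation step breaks down. All numbers below are hypothetical.

def average_return(total_benefit, total_cost):
    """Crude measure: average benefit per dollar over the whole programme."""
    return total_benefit / total_cost

def last_dollar_return(benefits, costs):
    """Refined measure: ratio of the most recent change in benefits to the
    most recent change in costs, i.e. the return on the last dollar spent."""
    return (benefits[-1] - benefits[-2]) / (costs[-1] - costs[-2])

# A smooth programme: benefits vary continuously in scale, with mild
# diminishing returns.
costs = [1000, 2000, 3000, 4000]
smooth_benefits = [10, 19, 27, 34]

print(average_return(smooth_benefits[-1], costs[-1]))  # 0.0085 per dollar
print(last_dollar_return(smooth_benefits, costs))      # 0.007 per dollar
# Here the last-dollar return is a reasonable forecast of the return on
# the next dollar, so the myopic heuristic works.

# A lumpy programme: nothing visible happens until some threshold is
# crossed (say, enough funding to open a clinic), then benefits jump.
lumpy_benefits = [0, 0, 0, 40]

print(last_dollar_return(lumpy_benefits, costs))       # 0.04 per dollar
# That 0.04 reflects a one-off jump. The next dollar may buy nothing at
# all until the *next* threshold, so extrapolating the last-dollar
# return here is exactly the mistake the authors object to.
```

The point of the lumpy example is that the last observed ratio of changes tells you about a jump that has already happened; nothing in the time series tells you whether another jump is coming, which is why the continuity assumption is doing so much work in the heuristic.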

With my limited knowledge and background[2], that doesn't sound like an outrageous critique, and if it holds up it is probably something we can correct for when sufficient data are available. I am now ever-so-slightly less naïve about numbers relating to rates of return on charitable donations.

  1. ^

    Here is the full citation for anyone who is a stickler for that kind of thing: Côté, N., & Steuwer, B. (2023). Better vaguely right than precisely wrong in effective altruism: The problem of marginalism. Economics & Philosophy, 39(1), 152-169. doi:10.1017/S0266267122000062

  2. ^

I haven't done any cost-benefit analysis of charitable programs, nor made any other kind of effort to assess the efficacy of such programs.

Comments (1)



A clear and well-written critique that, in my opinion, misses the mark entirely.

"[...]The presupposition that the magnitude of the benefits (or harms) generated by some charity vary continuously in the scale of the intervention performed"

Many, if not most, assessments made within the EA framework do not rest on such a presupposition. There are plenty of examples of work that would be endorsed by a vast majority of effective altruists but which is justified through non-marginal, lumpy benefits.
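For a concrete (and, again, hypothetical) illustration of the commenter's point, consider the other side of the lumpy case: before a threshold is crossed, the observed last-dollar return can be zero even when the next dollars are extremely valuable.

```python
# The flip side of the lumpy case, with hypothetical numbers: before the
# threshold is crossed, the observed last-dollar return is zero, so the
# myopic estimate would screen this charity out entirely...
costs = [1000, 2000, 3000]
benefits = [0, 0, 0]  # no visible benefit yet
myopic_estimate = (benefits[-1] - benefits[-2]) / (costs[-1] - costs[-2])  # 0.0

# ...even though, if a further $1,000 is expected to push the programme
# past its threshold and unlock 40 units of benefit, the true expected
# return on the next dollar is high:
expected_next_dollar_return = 40 / 1000  # 0.04, not 0.0
```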
