I came across a critique of EA in an academic article recently, and I didn't see any reference to it on the EA Forum. The very short, simplified summary is something like "You aren't calculating the marginal utility right."
Like many critiques, it seems to target the way things are done in practice rather than a core aspect of EA's ideas. It also focuses on the donation side of EA and neglects other aspects (such as career planning). I only skimmed it rather than reading it closely, so it is possible that I am missing some stuff. But this is the part that strikes me as the core idea/argument:
We’re looking to compare the expected marginal rates of return on additional donations of each charity, and keep only those charities that have the highest expected return. And in practice, effective altruists have followed a very simple heuristic for measuring expected rates of return, which we will refer to as ‘myopic marginalism’. Here is one way. Start by calculating the past rate of return on donations. This is easy enough: simply divide the total size of the benefit generated by some intervention by the total cost of the programme. This first measure is a bit crude, since it only tells you about average return on donation, not the return on the last dollar, but it is used by EAs and it does tell you something (see MacAskill 2015; Open Philanthropy Project 2017; GiveWell 2020b; Giving What We Can 2021; and especially GiveWell’s 2021 explicit cost-effectiveness calculations in spreadsheets). A more sophisticated measure becomes possible if you have a time-series plotting the evolution of the programme’s costs and benefits: instead of looking at total costs and benefits, look at the ratio of the most recent change in size of the benefits to the change in the costs of the programme. This measure does give you the marginal return on the last dollar spent (see Budolfson and Spears 2019 for further discussion). It is then predicted that the rate of return on the next dollar you donate to some organization will be very similar to the rate of return on the last dollar, up to however much more room the charity has for additional funding. With this information in hand, it is child’s play to identify the elite group of charities that will maximize the impact of the next dollar you donate (up to however much additional room for funding each charity has).
And so, in just two easy steps, we’ve winnowed the space of charities worth considering donating to to just a handful, greatly simplifying the decision problems of donors.
There are many steps in the decision procedure we’ve described that one might take issue with. Critics of EA have, for example, criticized the optimizing logic of EA, its over-reliance and over-insistence on RCTs, and its use of cost-effectiveness analysis, which makes no room for permissible partiality and is claimed to overweigh the value of a statistical life. We take issue with none of this in the present paper, and will focus only on the inadequacy of myopic marginalism. As we will now see, myopic marginalism only yields accurate estimates of cost-effectiveness when the benefits of an intervention are continuous in its scale, because only then can we use past returns as a reliable guide to future returns.
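To make the quoted heuristic concrete, here is my own minimal sketch (with made-up numbers, not taken from the paper) of the two estimates it describes, plus a discontinuous case that previews the failure mode the authors identify:

```python
# Two "myopic marginalism" estimates, as described in the quoted passage.
# All numbers below are invented for illustration.

def average_return(total_benefit, total_cost):
    """Crude measure: average benefit per dollar over the whole programme."""
    return total_benefit / total_cost

def marginal_return(benefits, costs):
    """Refined measure: ratio of the most recent change in cumulative
    benefits to the most recent change in cumulative costs."""
    return (benefits[-1] - benefits[-2]) / (costs[-1] - costs[-2])

# A programme whose benefits scale smoothly with spending:
costs = [100, 200, 300]      # cumulative dollars spent
benefits = [50, 98, 144]     # cumulative units of benefit
avg = average_return(benefits[-1], costs[-1])   # 144 / 300 = 0.48
marg = marginal_return(benefits, costs)         # (144 - 98) / 100 = 0.46
# Myopic marginalism then predicts ~0.46 as the return on the next dollar.

# A discontinuous programme (e.g. a fixed-cost facility that only pays off
# once fully funded), where the last dollar's return misleads:
costs2 = [100, 200, 300]
benefits2 = [0, 0, 150]      # nothing until a spending threshold is crossed
marg2 = marginal_return(benefits2, costs2)      # (150 - 0) / 100 = 1.5
# The last dollar looked great, but the next 100 dollars may again yield 0.
```

The point of the second example: extrapolating from the last dollar only works when benefits are (roughly) continuous in scale, which is exactly the condition the authors flag.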
With my limited knowledge and background, that doesn't sound like an outrageous critique, and if it holds, it is probably something we can adopt when sufficient data are available. I am now ever-so-slightly less naïve about numbers relating to rates of return on charitable donations.
Here is the full citation for anyone who is a stickler for that kind of thing: Côté, N., & Steuwer, B. (2023). Better vaguely right than precisely wrong in effective altruism: The problem of marginalism. Economics & Philosophy, 39(1), 152-169. doi:10.1017/S0266267122000062
I haven't done any cost-benefit analysis of charitable programs, nor any other type of effort to demonstrate the efficacy of similar programs.