This is a linkpost for https://philpapers.org/rec/CHAWNE
Forthcoming in Public Affairs Quarterly:
Effective altruism sounds so innocuous—who could possibly be opposed to doing good, more effectively? Yet it has inspired significant backlash in recent years. This paper addresses some common misconceptions, and argues that the core "beneficentric" ideas of effective altruism are both excellent and widely neglected. Reasonable people may disagree on details of implementation, but all should share the basic goals or values underlying effective altruism.
Woah, a really nice article that identified the most common criticisms of EA that I've come across, namely, cause prioritization, earning to give, billionaire philanthropy, and longtermism. Funnily enough, I've come across these criticisms on the EA forum more than anywhere else!
But it's nice to see a well-researched, external, and in-depth review of EA's philosophy, and as a non-philosopher, I found it really accessible too. I would like to see an article of a similar style arguing against EA principles though. Does anyone know where I can find something like that? A search for EA criticism on the web brings up angry journalists and media articles that often miss the point.
You can take a look at the ‘Further reading’ section of criticism of effective altruism, the articles so tagged, and the other tags starting with “criticism of” in the ‘Related entries’ section.
Thanks for sharing! fyi, I've written up a summary of the main themes of the paper here.
(And seconding Jordan's request for "an article of a similar style arguing against EA principles". My suspicion is that none can exist because there's no reasonable way to make such an argument; insinuation and "political" critique is all that the critics have got. But I'd love to be proven wrong!)
I would strongly push back against the idea that “insinuation and ‘political’ critique” are all that critics have. Currently posting from my phone before bed, but happy to follow up at a later date, once I have some free time, with a more in-depth and substantive discussion on the matter if you’d be interested :)
For this quick message, though, I hope it is at least fair to suggest that dismissing critiques offhand is potentially risky: we are naturally inclined to steel-man our own favored conclusions and straw-man the arguments against them, which doesn't do us any favors epistemologically speaking.
Definitely interested to hear your substantive views when you have time! (All views are risky. I'm just honestly reporting my current opinion, based on what I've read to date. Happy to update after hearing more, though.)
*Edit: I accidentally hit Save before I was finished, then went back to finish.
*I started writing this the week after your reply but went too deep down a rabbit hole and didn't get around to finishing it. Apologies for the delay! Note: the first portion was written about 3 months ago (November-ish 2023) and the latter portion was written today (12 Feb 2024).
Preamble
Ok - I've had a bit more time to read through some of your writing and some of the comments to give myself a little context and hopefully I can contribute a bit more meaningfully now.
Before getting into details though, probably best to frame things:
^I hope this sounds reasonable; if you'd like to modify any points, please let me know :)
On another note, at some point (time permitting) I would love to flesh out a more comprehensive post synthesizing and summarizing criticism of EA in a more rigorous, systematic and thoughtful way. However, a project like that seems like it would take quite a bit of work and collaboration, so I'm not too optimistic I'll be able to take it on personally (at least not in the near future) :(
Examples of (semi-)Formal Criticism
Here I've collected an (incomplete) list of several critiques of EA, sorted by my best guess of where they fall along several relevant criteria.
Concerns about Narrow Goal-Posts and dismissing 'Political' Criticism
"As an academic, I think we should assess claims primarily on their epistemic merits, not their practical consequences." (from page 33 of your paper). From a purely academic philosophical perspective, I could understand this claim if the word 'epistemic' were replaced with a term like 'ethical', 'logical', or 'philosophical', as the basic tenets of EA are pretty defensible on paper. However, the word 'epistemic' relates to knowledge, and generally considers evidence alongside logic. To ignore 'practical consequences' would be to ignore a large body of evidence that may help inform our perspective on EA's merits. Of course, there are many confounding variables that obscure the relationship between the core philosophical tenets of EA and its 'practical consequences', and these should lead us to think carefully before updating our perspective on EA's merits based on any one piece of real-world evidence. However, deprioritizing practical consequences entirely seems likely to make us miss some key considerations.
Let's imagine that EA's core ideas are applied in many different scenarios and that, separately, a randomized sample of mainstream ethical frameworks is applied in those same scenarios. If we observed, after a statistically robust number of trials, that the EA-applied scenarios led to worse outcomes on average than the other group, it would certainly lead me to question the epistemic merits of EA's core claims. While this level of experimental rigor would be impractical, I believe a naturalistic observation comparing the successes and failings of EA vs. equivalent non-EA frameworks would be a reasonable proxy for modestly bolstering or weakening (updating) my perception of the merits of EA's core tenets.
Additionally, given the focus within effective altruism on applied ethics, highlighted by the word "Effective" in the name itself, it seems to me that one of the movement's core claims is that it is important to examine practical consequences when evaluating how good or bad an idea is. Assessing the merit of EA's core ideas purely on non-'political' critique seems to run counter to those very ideas. In fact, I would imagine that a good-faith interpretation of EA's core principles would lead one to rigorously assess all kinds of critiques, philosophical as well as political, and to constantly update our beliefs and actions.
Circling back to your paper, on page 33 & 34, you continue
Personally, I don't find this argument particularly compelling, as it 1) lumps all political opponents of EA into one group, 2) makes a very large claim with no supporting evidence, and 3) rests on the hypothetical 'political' wrongness of the critics, which doesn't affect the hypothetical 'political' wrongness of EA (this seems like a form of whataboutism[4]). Of course, I'm sure you have many more perfectly legitimate arguments for why we shouldn't place undue credence in political critiques, but that's a debate I would like to see fleshed out more fully than it has been in this discussion before I am convinced.
Side note, JerL's comment on your Substack Post raises some points I find compelling :)
Concerns about how we approach engaging with Criticism of EA
I posit that people in the EA space should be more receptive to criticism from outside of EA, even if it is flawed by EA standards, for several reasons:
Regardless of how 'correct' EA's principles are, the way that people in the EA orbit absorb, assess, and respond to criticism is important and can have real consequences. I have noticed a trend, both on the EA Forum and in discussions with people from EA-aligned organizations at EAGs and other EA events, that the most popular responses to external criticism of EA tend to be highly dismissive and focus more on tearing down the critic's arguments than on making a good-faith effort to engage with the underlying sentiment and intention of the critic.
EA, as you have cited, places a very high value on self-critique and has invested in a significant number of diverse initiatives to promote such critique, such as the red-teaming contest. However, this kind of criticism suffers from a huge blind spot: people who are already associated with EA closely enough to participate in that type of critique are a severely biased sample.
It can often seem like critiques of EA from people outside the EA space are only taken seriously by EAs if those critiques mold themselves to meet the specific criteria, argumentative formulations, and style preferred by people within the EA space. If that is the case (it could just be my personal perception!), then we risk missing out on the diverse perspectives of the vast majority of people who are not inclined to communicate their perspectives in an 'EA way'.
A portion of EA thought emphasizes the value of worldview diversification[5], in large part because there has been a significant amount of research on the practical value-add of diversity (though the evidence is much more nuanced than is often portrayed in common discussion)[6]. Part of worldview diversification involves engaging with styles of argument that do not align with our own, as well as with arguments coming from people whose beliefs and backgrounds are very different from our own. A very well-intentioned person who isn't comfortable speaking in academic jargon or assembling logical arguments to a forensic standard may still have great points, and we would benefit from engaging with those points.
Beyond the potential epistemic benefits of engaging with external critique, the way in which we engage with critique has an impact in and of itself. If EAs' most popular reactions to external criticism are negative, dismissive, patronizing, or just generally don't attempt to meet the critic where they are, then we may only perpetuate negative impressions of EA and create a chilling effect on dissent within the EA space.
I'm not sure if pro-EA responses to critiques of EA get more upvotes, agree-votes, and karma than critical-of-EA responses on the Forum, but it seems plausible. I'm also not present enough on X or other social media platforms to see what the average EA response to criticism looks like; it could be very respectful and well received! But it isn't hard to imagine that some responses by some EAs to criticism might be dismissive, come across as 'elitist', or at least be somewhat alienating to the non-EAs who see them. Regardless, such responses are bound to have at least a modest effect on the EA 'brand', and I would hope that we err on the side of good-faith, empathetic, personable responses when reasonable. (If the majority of EA responses to external criticism are already like that, great, let's keep it up! If they aren't, that's unfortunate.)
To get some sense of how this dynamic plays out (at least on the EA Forum), I spent some time looking through the Forum for external and internal critiques of EA, and luckily @JWS shared this list collecting some criticism of EA criticism. As a little exercise, reading through the pieces JWS linked and the comments below them, a couple of things popped out to me:
One last note
I really appreciate you engaging on this so openly! Really respect your ideas and everything you bring to the table :)
Apologies if any of my counter-arguments misunderstood your original points or don't seem fair; I'm sure I'm off base in a few places and am happy to update.
Unfortunately I don't have the time to make it through the full paper right now :( I'm sure you share a lot of very valuable arguments therein.
In my limited understanding, the distinction between "Political" and "Principle" critique is similar to the distinction between a "Consequentialist" and a "Deontological" approach, whereby "Political" criticism refers to how things have actually played out in the real world and "Principle"-based criticism refers to how good the underlying ideas themselves are.
I'm much more familiar with internal criticism shared on the EA Forum, during EA events, etc.
https://en.wikipedia.org/wiki/Whataboutism
Example from Open Philanthropy: https://www.openphilanthropy.org/research/worldview-diversification/
A couple relevant studies:
https://pubmed.ncbi.nlm.nih.gov/30765101/
https://journals.sagepub.com/doi/10.1177/0149206307308587
This is a rather uncharitable take on the ~weakest forms of the arguments presented. It's also the first published instance of a tendency (fortunately not a widespread one) I've seen in online EA spaces when responding to criticism: watering down the philosophy of EA to something close to its broadest, most comprehensive form, to the point where it becomes virtually indistinguishable from any other philanthropic enterprise.

I think this is where a kind of social/intellectual history of EA ideas would be extremely valuable: it seems to me that there is a gap between what someone who is entrenched in EA and EA spaces considers EA to be, versus what someone observing it from the outside and relying on published materials would understand it to be. [ETA because I forgot a sentence: this probably stems from the relatively fast evolution of EA philosophy over the past 7-8 years in particular, and the difficulty of understanding what is still considered fundamental and what is outdated.] This creates a disconnect between critics and EAs, and I think to some extent (to put it in very imprecise terms) between newer and older EAs, and between longtermist and neartermist EAs, regarding what the guiding principles are and how each of these principles and components is weighted relative to the others.

I'd love to see a robust article, or better yet an extended dialogue between EAs, discussing EA from a ship-of-Theseus-like perspective, to see how far you can push these boundaries and at what point EA stops being EA.