tl;dr:
I found Philosophy Tube's new video on EA enjoyable and its criticisms fair. I've written out some thoughts on those criticisms below. I'd recommend a watch.
Background
I’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.
I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.
Content, Criticisms, and Contemplations
EA and SBF
Firstly, Thorn outlines what EA is and what has happened over the past six months (FTX, a mention of the Time article, and other critical pieces), and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn't listen to criticisms. Although I don't think this was the only cause of the above scandals, I think there is some truth in Thorn's analysis. However, I disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people not to take work that does harm, even if they expect the positive outcomes to outweigh the harm.
EA and Longtermism
In the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She notes that there is no discussion of reproductive rights in a book about our duties to future people (an oversight, in my view, and not one that a woman would have made). She prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive; however, I think it is drier and harder to read than WWOTF, which is aimed at a broader audience.
There is a brief (and entertaining) illustration of Expected Value (EV) and the extreme case it leads to, Pascal's Mugging. Although MacAskill sets this aside, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results. She is right that MacAskill never tackles this issue, even though there is thinking within EA on how to overcome it (she gives the example of Peter Singer's The Most Good You Can Do, but also see this, this, and this for examples of EAs thinking about tackling measurability bias). (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)
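To make the EV logic she's poking at concrete, here is a toy version of the arithmetic (the numbers are mine, not from the video or from WWOTF):

EV = Σᵢ pᵢ × vᵢ (sum over possible outcomes of probability × value)
Ordinary intervention: 0.9 × 100 units of good = 90
Mugger's offer: 10⁻¹⁵ × 10²⁰ units of good = 100,000

Naive EV maximisation says to abandon the ordinary intervention and pay the mugger, which is the reductio that MacAskill sets aside and Thorn dwells on.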
EA and ~The System~
The last section contains the most important criticism of EA and is, I think, the part most worth watching. Thorn raises the classic leftist criticism of EA: it reinforces the 19th-century model of philanthropy, in which people get rich and then donate their money to deflect criticism of how they made it, and it doesn't directly tackle the unfair system that privileges some people over others.
Thorn brings Mr Beast into the discussion, and although she doesn't explicitly say that he's an EA, she uses him as an example of how an EA might frame things: "1,000 people were blind yesterday and can see today – isn't that a fact worth celebrating?" The question that neither Mr Beast nor the hypothetical EA asks is: "how do we change the world?" Changing the world, she implies, necessitates changing the system.
She points out here that systemic change is rarely ex-ante measurable. Thus, the same measurability bias that MacAskill sets aside yields a bias against systemic change.
EA and Clout
Though perhaps not the most important, the most interesting claim she makes (in my opinion) is that 'business clout' rushes in to fill the gap between what's measurable and what would do the most good. This, she argues, explains the multitude of Westerner-led charities on EA's lists of top-rated charities.
Thorn says: “MacAskill and Ord write a lot about progress and humanity’s potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert? Who decides what counts as evidence? Whose vision of the future gets listened to? In my opinion, those aren’t side-questions to hide in the footnotes. They’re core to the whole project.”
This analysis makes sense to me, and I would go a bit further: EA draws heavily from Rationalism, which views reason as the chief source of knowledge, and in particular it prioritises quantitative over qualitative analysis. Charity and intervention evaluations often stop at the quantitative analysis, when qualitative analysis (through techniques like thematic analysis or ethnography) may be what bridges the gap between what's measurable and what would do the most good. In my experience, regranting organisations do more qualitative analysis because of the high uncertainty of the projects they fund, but I think these techniques should be recognised and regarded more highly in the EA community, and not treated as second-class analysis (as much as it pains my quantitative brain to admit that).
Conclusion
Overall, I think it was an enjoyable, fair analysis of Effective Altruism, executed with the characteristic wit and empathy I have come to expect from Philosophy Tube. She paints EA in a slightly simplistic light (though one can't expect much more from a 40-minute video about a huge movement that's over a decade old), but I appreciated her criticisms, and the video made me think. I'd highly recommend a watch, and I look forward to the comments!
Thanks for the summary. I hope to make it through the video. I like Thorn and fully expect her to be one of EA's higher-quality outside critics.
I'm going to briefly jot down an answer to a (rhetorical?) question of hers. (epistemic status: far left for about 7 years)
It's a great question, and as far as I know EAs outperform any overly prioritarian standpoint theorist at facing it. I think an old Arbital article (probably by Eliezer) did the best job of distilling the exercise of generalising cosmopolitanism and walking you through it, but maybe Soares' version is a little more to the point; also see my shortform on how I think negative longtermism dominates positive longtermism. At the same time, Critch has been trying to get the alignment community to pay attention to social choice theory. I'm feeling a little "yeah, we thought of that", and I think the lack of enthusiasm for something like Doing EA Better's "indigenous ways of knowing" remark is a feature, not a bug.
It's a problem that terrifies me, and I fear its intractability, but at least EAs will share the terror with me and understand where I'm coming from. Leftists (or, more precisely, prioritarian standpoint theorists) tend to be extremely confident about everything: that we'd all see how right they were if we just gave them power, etc. I don't see any reasonable way of expecting them to be more trustworthy than us about "whose vision of the future gets listened to?"