tl;dr:
I found Philosophy Tube's new video on EA enjoyable and its criticisms fair. I've written out some thoughts on those criticisms below. I'd recommend a watch.
Background
I’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.
I wrote this quickly and roughly in the order the video unfolds, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others think.
Content, Criticisms, and Contemplations
EA and SBF
Firstly, Thorn outlines what EA is and what's happened over the past six months (FTX, a mention of the Time article, and other critical pieces), and essentially says that the movement's leaders ignored what was happening on the ground in the community and didn't listen to criticism. I don't think this was the only cause of the above scandals, but there is some truth in Thorn's analysis. However, I disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people not to take work that does harm, even if they expect the good done with the proceeds to outweigh the harm.
EA and Longtermism
In the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She notes that a book about our duties to future people contains no discussion of reproductive rights (which I see as an oversight – and not one that a woman would have made). She prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive; however, I think it is drier and harder to read than WWOTF, which is aimed at a broader audience.
There is a brief (and entertaining) illustration of Expected Value (EV) and the extreme case it leads to, Pascal's Mugging. Although MacAskill sets this aside, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that MacAskill never tackles this issue, even though there is thinking within EA on how to overcome it (she gives the example of Peter Singer's The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias). (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)
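For concreteness, here is the standard toy version of the mugging, with made-up numbers (mine, not the video's): a stranger claims that unless you hand over your wallet, $10^{100}$ future people will suffer. Even if you assign that claim a probability of just $10^{-50}$, naive EV maximisation says to pay up, because

$$\text{EV}(\text{pay}) = \underbrace{10^{-50}}_{\text{probability}} \times \underbrace{10^{100}}_{\text{claimed stakes}} = 10^{50}\ \text{lives},$$

which dwarfs anything else you could plausibly do with the wallet. The absurdity comes from letting an arbitrarily large claimed payoff swamp an arbitrarily small probability.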
EA and ~The System~
The last section contains the most important criticism of EA, and I think it's the part most worth watching. Thorn makes the classic leftist criticism of EA: it reinforces the 19th-century model of philanthropy, in which people get rich and then donate their money to deflect criticism of how they got it, and it doesn't directly tackle the unfair system that privileges some people over others.
Thorn brings Mr Beast into the discussion, and although she doesn't explicitly say that he's an EA, she uses him as an example of how an EA might frame things: “1000 people were blind yesterday and can see today – isn't that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA asks is: “how do we change the world?”. Changing the world, she implies, necessitates changing the system.
She points out here that systemic change is rarely ex-ante measurable. Thus, the same measurability bias that MacAskill sets aside yields a bias against systemic change.
EA and Clout
Though perhaps not the most important, the most interesting claim she makes (in my opinion) is that in the uncertainty between what's measurable and what would do the most good, ‘business clout’ rushes in to fill the gap. This, she argues, explains the multitude of Westerner-led charities on EA's lists of top-rated charities.
Thorn says: “MacAskill and Ord write a lot about progress and humanity’s potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert? Who decides what counts as evidence? Whose vision of the future gets listened to? In my opinion, those aren’t side-questions to hide in the footnotes. They’re core to the whole project.”
This analysis makes sense to me. I almost want to go a bit further: EA draws heavily from Rationalism, which views reason as the chief source of knowledge, and specifically it prioritises quantitative analysis over qualitative analysis. Charity and intervention evaluations often stop at the quantitative analysis, when in fact qualitative analysis (through techniques like thematic analysis or ethnography) may bridge the gap between what's measurable and what would do the most good. In my experience, regranting organisations do more qualitative analysis because of the high uncertainty of the projects they fund, but I think these techniques should be recognised and regarded more highly in the EA community, and not seen as second-class analyses (as much as it pains my quantitative brain to admit that).
Conclusion
Overall, I think it was an enjoyable, fair analysis of Effective Altruism, executed with the characteristic wit and empathy I have come to expect from Philosophy Tube. She paints EA in a slightly simplistic light (can’t expect much more from a 40-min video on a huge movement that’s over a decade old), but I appreciated her criticisms and the video made me think. I’d highly recommend a watch and I look forward to the comments!
I think you might have misunderstood my comment.
I, as someone who is at least trying to be an EA, who speaks two languages fluently and can survive in three more, would "count" as an EA who is not from the "Anglosphere West" and who has read world literature. So yes, I know I exist.
My point is that EA, as a community, should encourage that kind of thing among its members. And it really doesn't. Yes, people can do it as a personal project, but EA generally puts a lot of stock in people doing what are ultimately fairly difficult things (like self-directed study of AI) without providing a consistent community with accountability that would help them achieve those things. And I think that the WEIRD / Anglosphere West / etc. demographic bias of EA is part of the reason why this seems to be the case.
Yes, it is possible to want a perspective to survive into the future without being particularly well-versed in it. In theory, I can want Hinduism not to go extinct in 50 years without knowing a whole lot about Hinduism.
That said, in order to know what will allow certain worldviews and certain populations to thrive, you need to understand them at least a little – especially if you're going to try to maximize the good you do for people, a LOT of whom are not from the Anglosphere West. If I genuinely thought that Hinduism was under threat of extinction and wanted to do something about it, trying to do that without learning anything about Hinduism would be really short-sighted of me.
Given that most human beings for most of history have not been WEIRD in the Henrich sense, and that a lot of currently WEIRD people are becoming less so (rising antidemocratic sentiment, the affordability crisis, growing inequality), it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD. And if you want to do what is best for that population, more effort should go into ensuring they will be WEIRD in some fashion[1], or into ensuring that EA interventions will help non-WEIRD people a meaningful amount in ways that they will value – which is more than just malaria nets.
And like... I haven't seen that conversation.
I've seen allusions to it, but I haven't really seen it. Nor have I seen EA engage particularly well with the "a bunch of philosophers and computer scientists got together and determined that the most important thing you can be is a philosopher or computer scientist" critique, nor with the question of lowering the barriers to entry. (When I raised that last one, the fairly unhelpful response I received boiled down to "well, you understand all of the EA projects that you're not involved in and create lower barriers to entry for all of them" – which again comes back to the problem that EA creates a community and then doesn't seem to actually use it to do the things communities are good for.)
So I think it's kind of a copout to just say "well, you can care in this theoretical way about perspectives you don't understand", given that part of the plan of EA – and its success condition – is to affect those people's lives meaningfully.
Not to mention the question of "promoting" vs "understanding".
Should EA promote, iunno, fascism, on a community level? Obviously not.
Should EA seek to understand fascism, and authoritarianism more broadly, as a concerning potential threat that has arisen multiple times and could arise yet again with greater technological and military force in the future? Fucking definitely.
The closest thing to this is the "liberal norms" political career path, as far as I'm aware, but I think both paths should be taken concurrently – that "or" is inclusive – yet the second is largely neglected.