

I found Philosophy Tube's new video on EA enjoyable and its criticisms fair. I've written out some thoughts on those criticisms below. I'd recommend a watch.


I’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited.

I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought.

Content, Criticisms, and Contemplations

EA and SBF

Firstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means.

EA and Longtermism

In the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience.

There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that, although there is thinking within EA on how to overcome this (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this, and this for examples of EAs thinking about tackling measurability bias), the issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?)
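(As a quick aside for readers unfamiliar with EV: here's a minimal sketch of the logic, with numbers I've invented purely for illustration, showing how naive expected-value maximisation produces a Pascal's Mugging.)

```python
# A minimal sketch of naive expected-value reasoning, with invented numbers:
# a tiny probability of an astronomically large payoff can dominate a
# certain, modest good -- this is the structure of Pascal's Mugging.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

sure_thing = [(1.0, 1_000)]                   # a certain, modest benefit
mugging    = [(1e-10, 1e15), (1 - 1e-10, 0)]  # almost certainly nothing

# Naive EV maximisation prefers the mugging (EV of 100,000 vs 1,000),
# even though it will almost certainly deliver nothing.
print(expected_value(mugging) > expected_value(sure_thing))
```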

EA and ~The System~

The last section is the most important criticism of EA. I think this section is most worth watching. Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others. 

Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how an EA might see this: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA asks is: “how do we change the world?”. Changing the world, she implies, necessitates changing the system.

She points out here that systemic change is rarely ex-ante measurable. Thus, the same measurability bias that MacAskill sets aside yields a bias against systemic change.

EA and Clout

Though perhaps not the most important, the most interesting claim she makes (in my opinion) is that in the uncertainty between what’s measurable and what would do the most good, ‘business clout’ rushes in to fill the gap. This, she argues, explains the multitude of Westerner-led charities on EA’s top-rated list.

Thorn says: “MacAskill and Ord write a lot about progress and humanity’s potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert? Who decides what counts as evidence? Whose vision of the future gets listened to? In my opinion, those aren’t side-questions to hide in the footnotes. They’re core to the whole project.”

This analysis makes sense to me. I almost want to go a bit further: EA heavily draws from Rationalism, which views reason as the chief source of knowledge; specifically, EA heavily prioritises quantitative analysis over qualitative analysis. Often charity/intervention evaluations stop at the quantitative analysis, when in fact qualitative analysis (through techniques like thematic analysis or ethnography) may bridge the gap between what’s measurable and what would do the most good. In my experience, regranting organisations do more qualitative analyses due to the high uncertainty of the projects they fund, but I think these techniques should be recognised and regarded more highly in the EA community, and not seen as second-class analyses (as much as it pains my quantitative brain to admit that).


Overall, I think it was an enjoyable, fair analysis of Effective Altruism, executed with the characteristic wit and empathy I have come to expect from Philosophy Tube. She paints EA in a slightly simplistic light (can’t expect much more from a 40-min video on a huge movement that’s over a decade old), but I appreciated her criticisms and the video made me think. I’d highly recommend a watch and I look forward to the comments!



I also watched the video and was also pleasantly surprised by how fair it ended up feeling.

For what it's worth, I didn't find the EA and systemic change section to be that interesting, but that might just be because it's a critique I've spent time reading about previously. My guess is that most other forum readers won't find much new in that section relative to existing discussions around the issue. And Thorn doesn't mention anything about tradeoffs or opportunity costs in making that critique, which makes it feel like it's really missing something. Because for practical purposes, the systemic change argument she's making requires arguing that it's worth letting a substantial number of people die from preventable diseases (plus letting a substantial number of people suffer from lack of mental healthcare, letting a substantial number of animals be subject to terrible conditions on factory farms etc.) in the short run in order to bring about systemic change that will do more to save and improve lives in the long run. It's possible that's right, but I think making that case really requires a clear understanding of what those opportunity costs are and a justification of why they would be worth accepting. 

Also, I found the lack of discussion of animal welfare frustrating. That's one of the three big cause areas within EA (or one of four if you count community building)!

EA heavily draws from Rationalism, which views reason as the chief source of knowledge; specifically, EA heavily prioritises quantitative analysis over qualitative analysis.

This misunderstands what rationalism is (in the context of EA, LW, etc.).

You're thinking "reason, rationality, deliberation, explicit thought, etc. as opposed to emotion, intuition, implicit thought, etc." LessWrong-style "rationality" is instead about truth as opposed to falsehood ("epistemic rationality") and succeeding in your goals as opposed to failing ("instrumental rationality").

Classic LW content like What Do We Mean By "Rationality"?, The Straw Vulcan, Feeling Rational, and When (Not) To Use Probabilities talks about how different these concepts are: using deliberative, explicit, quantitative, etc. thought is rational (in the LW sense) if it helps you better understand things or helps you succeed in life, but emotions, hunches, intuitions, etc. are also a crucial part of understanding the world and achieving your goals.

Also, I think "reason as the chief source of knowledge" is not quite it, right? I think "observation is the chief source of knowledge" would pass an ideological Turing test a bit better.

"Observation is the chief source of knowledge" falls under the Empiricism school of thought in epistemology, as opposed to Rationalism, which is perhaps where my misunderstanding came about.

(A minor gripe I have about LW, and EA by extension, is that words with a specific meaning in philosophy are misused and therefore take on a different meaning – take "epistemic status", which has grown out of its original intended meaning of how confident one is in one's claim and is now used more to describe someone's background and raise general caveats and flags for where someone might have blind spots.)

In general, I'd agree that using different tools to help you better understand the world and succeed in life is a good thing; however, my point here is that LW and the Rationality community in general view certain tools and ways of looking at the world as "better" (or are only exposed to these tools and ways of looking at the world, and therefore don't come across other methods). I have further thoughts on this that I might write a post about in the future, but in short, I think this leads the Rationality community (and EA to some extent) to be biased in certain ways that could be mitigated by greater recognition of the value of other tools and worldviews, and maybe even some reading of academic philosophy (although I recognise not everyone has the time for this).

I think lesswrong and EA are gluttonous and appropriative and good at looting the useful stuff from a breadth of academic fields, but excluding continental philosophy is a deeply correct move that we have made and will continue to make

I think the point is that our subculture's "rationalism" and a historian of philosophy's "rationalism" are homonyms.

A minor gripe I have about LW, and EA by extension, is that words with a specific meaning in philosophy are misused and therefore take on a different meaning

The version of "rationalist" you're talking about is a common usage, but:

  • The oldest meaning of "rationalist" is about truth, science, inquiry, and good epistemics rather than about "observation matters less than abstract thought".
  • Rationalists' conception of "rationality" isn't our invention: we're just using the standard conception from cognitive science.
  • Lots of groups have called themselves "rationalist" in a LW-like sense prior to LessWrong. It's one of the more common terms humanists, secularists, atheists, materialists, etc. historically used to distinguish themselves from religionists, purveyors of pseudoscience, and the like.

Also, the rationalist vs. empiricist debate in philosophy is mostly of historical interest; it's not clear to me that it should matter much to non-historians nowadays.

take "epistemic status"

"Epistemic status" isn't philosophy jargon, is it?

I took it to be riffing on early LiveJournal posts that began with text like "status: bored" or "current mood: busy", adding the qualifier "epistemic" as a cute variation.

Epistemic status is 100% philosophy jargon. Hell, the word "epistemic" or the word "epistemology" is itself philosophy jargon. I only ever hear it from LW people/EAs and people in philosophy departments. 

The word "epistemic" is philosophy jargon. The phrase "epistemic status" in the link you gave isn't a separate piece of jargon, it's just the normal word "status" modified by the word "epistemic".

The original comment I was replying to said:

"A minor gripe I have about LW, and EA by extension, is that words with a specific meaning in philosophy are misused and therefore take on a different meaning – take "epistemic status", which has grown out of its original intended meaning of how confident one is in one's claim and is now used more to describe someone's background and raise general caveats and flags for where someone might have blind spots."

If the claim is that rationalists are misusing the word "epistemic", not some specific unfamiliar-to-me new piece of jargon ("epistemic status"), then the claim is based on a misunderstanding of the word "epistemic". Epistemic in philosophy means "pertaining to knowledge (belief justification, reliability, accuracy, reasonableness, warrant, etc.)", not "pertaining to confidence level".

Someone's "epistemic status" includes what they believe and how strongly they believe it, but it also includes anything that's relevant to how justified, reasonable, supported, based-on-reliable-processes, etc. your beliefs are. Like, "epistemic status: I wrote this while hungry, which often makes people irritable and causes them to have more brain farts, which reduces the expected reliability and justifiedness of the stuff I wrote" is totally legit. And if people have the background knowledge to understand why you might want to flag that you were hungry, it's completely fine to write "epistemic status: written while hungry" as a shorthand.

(I do think rationalists sometimes put other stuff under "epistemic status" as a joke, but "rationalists joke too much" is a different criticism than "rationalists have their own nonstandard meaning for the word 'epistemic'".)

Language is a mess of exaptations built upon ever more exaptations until you find a reference to something physical at the bottom of it all. Consider what "the mouth of the river" means if you believe you live in a world where everything is/has a spirit. Definition discussions are useful for communication, but adjusting definitions to make progress is deeply necessary, because new ideas need to build upon old foundations, and coining a wholly new word creates more confusion than it is worth.

I think you are 100% correct and would be interested in helping you with a post about this if you wanted.

I want to note that both Philosophy Tube's and Sabine Hossenfelder's scepticism about AI risk stems from AGI's reliance on extraordinary hardware capacity. They both believe it will be very difficult for an AGI to copy itself because there won't be suitable hardware in the world; therefore, AGI will be physically bound, limited in number, and easier to deal with. I think introductory resources should address this more often. For example, there isn't a mention of this criticism in 80,000 Hours' problem profile on the topic.

The main point I took from the video was that Abigail is essentially asking: "How can a movement that wants to change the world be so apolitical?" This is also a criticism I have of many EA structures and people. I have even come across people who view EA and themselves as not political, even as they argue for longtermism. The video also highlights this.

When you quantify something, you don't become objective all of a sudden. You cannot quantify everything, so you have to choose what you want to quantify. And this is a political choice. There is no objective source of truth that tells you that, for example, quality-adjusted life years are the best objective measure. People choose what makes the most sense to them given their background. But you could easily switch it to something else. There is only your subjective choice of what you want to focus on. And I would really appreciate it if this were highlighted more in EA.

Right now the vibe is often "We have objectively compared such and such, and therefore the obvious choice is this intervention or cause." But this just frames personal preferences about what is important as an objective truth about the world. It would be great if this subjectivity were acknowledged more.

And one final point the video also hints at: in EA, basically all modern philosophy outside of consequentialism is ignored, even though much of that philosophy was explicitly developed to criticise pure reason and consequentialism. If you read EA material, you get the impression that the only notable philosophers of the 20th century are Peter Singer and Derek Parfit.

The main point I took from the video was that Abigail is essentially asking: "How can a movement that wants to change the world be so apolitical?" This is also a criticism I have of many EA structures and people.

I think it's surprising that EA is so apolitical, but I'm not convinced it's wrong to make some effort to avoid issues that are politically hot. Three reasons to avoid such things: 1) they're often not the areas where the most impact can be had, even ignoring constraints imposed by them being hot political topics 2) being hot political topics makes it even harder to make significant progress on these issues and 3) if EAs routinely took strong stands on such things, I'm confident it would lead to significant fragmentation of the community.

EA does take some political stances, although they're often not on standard hot topics: they're strongly in favour of animal rights and animal welfare, and were involved in lobbying for a very substantial piece of legislation recently introduced in Europe. Also, a reasonable number of EAs are becoming substantially more "political" on the question of how quickly the frontier of AI capabilities should be advanced.

It seems to me that we are talking about different definitions of what "political" means. I agree that in some situations it can make sense not to weigh in on political discussions, to avoid being pushed to one side. I also see that there are some political issues where EA has taken a stance, like animal welfare. However, when I say political, I mean: what are the reasons for us doing things, and how do we convince other people of them? In EA there are often arguments that something is not political because there has been an "objective" calculation of value. However, there is almost never a justification for why something was deemed important, even though, when you want to change the world in a different way, this is the important part. Or, on a more practical level: why are QALYs seen as the best way to measure outcomes in many cases? Using this and not another measure is a choice which has to be justified.

Alternatives to QALYs (such as WELLBYs) have been put forward from within the EA movement. But if we’re trying to help others, it seems plausible that we should do it in ways that they care about. Most people care about their quality of life or well-being, as well as the amount of time they’ll have to experience or realise that well-being.

I’m sure there are people who would say they are most effectively helping others by “saving their souls” or promoting their “natural rights”. They’re free to act as they wish. But the reason that EAs (and not just EAs, because QALYs are widely used in health economics and resource allocation) have settled on quality of life and length of life is frankly because they’re the most plausible (or least implausible) ways of measuring the extent to which we’ve helped others.
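(To make this concrete: here's a toy sketch of the QALY calculation, with numbers I've invented purely for illustration, showing the substantive choice the metric builds in.)

```python
# A toy sketch of the QALY calculation with invented numbers. QALYs weight
# length of life by a 0-to-1 quality-of-life score, so the metric embeds a
# substantive choice: that quality and length of life are what count.

def qalys(years, quality_weight):
    """Quality-adjusted life years: years lived x quality weight in [0, 1]."""
    return years * quality_weight

# Hypothetical intervention: restoring sight raises someone's quality weight
# from 0.6 to 0.9 for their remaining 30 years.
gain = qalys(30, 0.9) - qalys(30, 0.6)  # 9 QALYs gained
```

As I understand it, a WELLBY-style metric would instead weight years by self-reported life satisfaction; the arithmetic is similar, but what it rewards differs, which is exactly the kind of choice being debated here.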

I'd like to add a thought on the last point:

EA appears to largely ignore the developments of modern and post-modern philosophy, making EA appear like a genuinely new idea/movement. Which it is not. That means there is a lot to learn from past instances of EA-like movements ("EA-like" meaning Western rich people trying to do good with Rationality). 20th-century philosophy is brimming with very valid critiques of Rationality, but somehow EA seems to jump from Bentham/Mill to Singer/Parfit without batting an eye.

Abigail leaves open how we should do good, whether we want to pursue systemic change or work within the system, or even how we should define what "good" is. I am sure this is intentionally put at the end of the video. She warns people who consider joining EA to do so with open eyes. I deeply agree with this. If you are thinking about making EA your political movement of choice, be very careful, as with any political movement. EA claims to be open to different moral standpoints, but it is most certainly not. There are unchecked power dynamics at play, demographic bias, "thought leaders", the primacy of Rationality. If I had any advice for anyone in EA, I would recommend they go and spend a year or more learning about all the philosophy that came AFTER utilitarianism*. Otherwise, EA will be lacking context and could even appear as The Truth. You will be tempted to buy into the opinion of a small number of apparently smart people saying apparently smart things, and by that, hand over your moral decisions to them.

* (for a start, Philosophize This is a nice podcast that deals at length with a lot of these topics)

EA is a movement that aims to use reason and evidence to do the most good, so the centrality of “rationality” (broadly speaking) shouldn’t be too surprising. Many EAs are also deeply familiar with alternatives to utilitarianism. While most (according to the surveys) are utilitarians, some are non-utilitarian consequentialists or pluralists.

I suspect that the movement is dominated by utilitarians and utilitarian-leaning people because while all effective altruists shouldn’t necessarily be utilitarians, all utilitarians should be effective altruists. In contrast, it’s hard to see why a pure deontologist or virtue ethicist should, as a matter of philosophical consistency, be an effective altruist. It’s also difficult to see how a pure deontologist or virtue ethicist could engage in cause prioritisation decisions without ultimately appealing to consequences.

I want to clarify that I do specifically mean philosophical movements like existentialism, structuralism, post-structuralism, the ethics behind communism and fascism -- which all were influential in the 20th century. I would also argue that the grouping into consequentialism/virtue ethics/deontology does not capture the perspectives brought up in the aforementioned movements. I would love to see EAs engage with more modern ideas about ethics because they specifically shed light on the flexibility and impermanence of the terms 'reason' and 'evidence' over the decades. 

Sure, you have to choose some model at some point to act, or else you'll be paralyzed. But I really wish that people who make significant life changes based on reason and evidence would take a close look at how these terms are defined within their political movement, and by whom.

I don’t quite see how existentialism, structuralism, post-structuralism and fascism are going to help us be more effectively altruistic, or how they’re going to help us prioritise causes. Communism is a different case as in some formats it’s a potential altruistic cause area that people may choose to prioritise.

I also don’t think that these ideas are more “modern” than utilitarianism, or that their supposed novelty is a point in their favour. Fascism, just to take one of these movements, has been thoroughly discredited and is pretty much the antithesis of altruism. These movements are movements in their own right, and I don’t think they’d want EAs to turn them into something they’re not. The same is true in the opposite direction.

By all means, make an argument in favour of these movements or their relevance to EA. But claiming that EAs haven’t considered these movements (I have, and think they’re false) isn’t likely to change much.

Surely they are more modern than utilitarianism: utilitarianism was developed in the 19th century, while all the others mentioned are from the 20th. And it is not their "novelty" that is interesting, but that they are a direct follow-up to, and criticism of, things like utilitarianism. Also, I don't think the post above was an endorsement of fascism, but rather a call to understand why people even started with fascism in the first place.

The main contribution of the above-mentioned fields to EA is that they highlight that reason is not as strong a tool as many EAs think it is. You can easily bring yourself into a bad situation, even if you follow reason all the way. Reason is not something objective; it is born from your standpoint in the world and the culture you grew up in.

And if EA (or you) have considered things like existentialism, structuralism, and post-structuralism, I'd love to see the arguments for why they are not important to EA. I've never seen anything in this regard.

I think reason is as close to an objective tool as we’re likely to get and often isn’t born from our standpoint in the world or the culture we grow up in. That’s why people from many different cultures have often reached similar conclusions, and why almost everyone (regardless of their background) can recognise logical and mathematical truths. It’s also why most people agree that the sun will rise the next morning and that attempting to leave your house from your upper floor window is a bad idea.

I think the onus is on advocates of these movements to explain their relevance to “doing the most good”. As for the various 20th Century criticisms of utilitarianism, my sense is that they’ve been parried rather successfully by other philosophers. Finally, my point about utilitarianism being just as modern is that it hasn’t in any way been superseded by these other movements — it’s still practiced and used today.

I think it's fairly unsurprising that EA is mostly consequentialists or utilitarians. But often it goes way beyond that, into very specific niches that are not at all a requirement for trying to "do good effectively".

For example, a disproportionate number of people here are capital-R "Rationalists", referring to the subculture built around fans of the "Sequences" blog posts on LessWrong written by Yudkowsky. I think this subgroup in particular suffers from "not invented here" syndrome, where philosophical ideas that haven't been translated into rationalist jargon are not engaged with seriously.

I think the note on Not Invented Here syndrome is actually amazing and I'm very happy you introduced that concept into this discussion.

"There is no objective source of truth that tells you that, for example, quality-adjusted life years are the best objective measure."

There's no objective source of truth telling humans to value what we value; on some level it's just a brute fact that we have certain values. But given a set of values, some metrics will do better vs. worse at describing the values.

Or in other words: Facts about how much people prefer one thing relative to other things are "subjective" in the weak sense that all psychological facts are subjective: they're about subjects / minds. But psychology facts aren't "subjective" in a sense like "there are no facts of the matter about minds". Minds are just as real a part of the world as chairs, electrons, and zebras.

Consider, for example, a measure that says "a sunburn is 1/2 as bad as a migraine" versus one that says "a sunburn is a billion times as bad as a migraine". We can decompose this into a factual claim about the relative preferences of some group of agents, plus a normative claim that calls the things that group dislikes "bad".

For practical purposes, the important contribution of welfare metrics isn't "telling us that the things we dislike are bad"; realists are already happy to run with this, and anti-realists are happy to play along with the basic behavioral take-aways in practice.

Instead, the important contribution is the factual claim about what a group prefers, which is as objective/subjective as any other psych claim. Viewed through that lens, even if neither of the claims above is perfectly accurate, it seems clear that the "1/2 as bad" claim is a lot closer to the psychological truth.

I agree that the choices we make are in some sense political. But they’re not political in the sense that they involve party or partisan politics. Perhaps it would be good for EAs to get involved in that kind of politics (and we sometimes do, usually in an individual capacity), but I personally don’t think it would be fruitful at an institutional level and it’s a position that has to be argued for.

Many EAs would also disagree with your assumption that there aren’t any objective moral truths. And many EAs who don’t endorse moral realism would agree that we shouldn’t make the mistake of assuming that all choices are equally valid, and that the only reason anyone makes decisions is due to our personal background.

Without wishing to be too self-congratulatory, when you look at the beings that most EAs consider to be potential moral patients (nonhuman animals including shrimp and insects, potential future people, digital beings), it’s hard to argue that EAs haven’t made more of an effort than most to escape their personal biases.

I agree that the choices we make are in some sense political. But they’re not political in the sense that they involve party or partisan politics.


I disagree. Counter-examples: Sam Bankman-Fried was one of the largest donors to Joe Biden's presidential campaign. Voting and electoral reform has often been a topic on the EA Forum and has appeared on the 80,000 Hours podcast. I know several EAs who are or have been actively involved in party politics in Germany. The All-Party Parliamentary Group in the UK says on its website that it "aims to create space for cross-party dialogue". I would put these people and organizations squarely in the EA space. The choices these people and organizations made directly involve political parties*.

* or their abolition, in the case of some proposed electoral reforms, I believe.

My comment mainly referred to the causes we’ve generally decided to prioritise. When we engage in cause prioritisation decisions, we don’t ask ourselves whether they’re a “leftist” or “rightist” cause area.

I did say that EAs may engage in party politics in an individual or group capacity. But they’re still often doing so in order to advocate for causes that EAs care about, and which people from various standard political ideologies can get on board with. Bankman-Fried also donated to Republican candidates who he thought were good on EA issues, for example. And the name of the “all-party” parliamentary group clearly distinguishes it from just advocating for a standard political ideology or party.

This was an interesting video, with many important points made in an entertaining manner.

I don't strongly agree with all the points mentioned, but I agree that
The Precipice > What We Owe the Future
(they are both great books nonetheless)

Thanks for the summary. I hope to make it through the video. I like Thorn and fully expect her to be one of EA's higher-quality outside critics.

I'm going to briefly jot down an answer to a (rhetorical?) question of hers. (epistemic status: far left for about 7 years) 

Whose vision of the future gets listened to?

It's a great question, and as far as I know EAs outperform any overly prioritarian standpoint theorist at facing it. I think an old Arbital article (probably by Eliezer?) did the best job of distilling it and walking you through the exercise of generalising cosmopolitanism. But maybe Soares' version is a little more to the point, and do also see my shortform about how I think negative longtermism dominates positive longtermism. At the same time, Critch has been trying to get the alignment community to pay attention to social choice theory. I'm feeling a little "yeah, we thought of that", and the lack of enthusiasm for something like Doing EA Better's "indigenous ways of knowing" remark is a feature, not a bug.

It's a problem that terrifies me, and I fear its intractability, but at least EAs will share the terror with me and understand where I'm coming from. Leftists (or, more precisely, prioritarian standpoint theorists) tend to be extremely confident about everything: that we'd all see how right they were if we just gave them power, etc. I don't see any reasonable way of expecting them to be more trustworthy than us about "whose vision of the future gets listened to?"

I think this question is more centred on elitism and EA being mostly Western, educated, industrialized, rich and democratic (WEIRD) than on the culture war between left and right.

I’m sure Thorn does do this (I haven’t watched the video in full yet), but it seems more productive to criticise the “EA vision of the future” than to ask where it comes from (and there were EA-like ideas in China, India, Ancient Greece and the Islamic world long before Bentham).

MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values. Clearly, some people don’t like what they think is the “EA vision of the future” and want their vision to prevail instead. The question seems to imply, though, that EAs are the only ones who are excluding others’ visions of the future from their thinking. Actually, everyone is doing that, otherwise they wouldn’t have a specific vision.

Just regarding this bit: "MacAskill, Ord and others seem to me to have advocated a highly pluralistic future in which humanity is able to reflect on its values."

I have posited, multiple times, in different EA spaces, that EAs should learn more languages in order to be better able to think, better able to understand perspectives further removed from that which they were raised in, healthier (trilinguals are massively protected against dementia, Alzheimer's, etc), etc. 

And the response I have received has been broadly "eh" or at best "this is interesting but I don't know if it's worth EAs' time".

I have not seen any EA "world literature" circles based around trying to expand their horizons to perspectives as far from their own as possible. I have not seen any EA language learning groups. I have not seen any effort put towards using the EA community (that is so important to build!) in order to enable individual EAs to become better at understanding radically different perspectives, etc.

So like... Iunno, I don't buy the "it's not a problem we're mostly wealthy white guys" argument. It seems to me like a lot of EAs don't know what they don't know, and don't realize the axes along which they could not-know-things on top. They don't behave the way people who are genuinely invested in a more pluralistic vision of the future would behave. And they don't react positively to proposals that aim to improve that.

Thanks for your reply! Firstly, there will be many EAs (particularly from the non-Anglosphere West and from non-Western countries) who do understand multiple languages. I imagine there are also many EAs who have read world literature.

When we say that EAs “mostly” have a certain demographic background, we should remember that this still means there are hundreds of EAs that don’t fit that background at all and they shouldn’t be forgotten. Relatedly, I (somewhat ironically) think critics of EA could do with studying world history because it would show them that EA-like ideas haven’t just popped up in the West by any means.

I also don’t think one needs to understand radically different perspectives to want a world in which those perspectives can survive and flourish into the future. There are so many worldviews out there that you have to ultimately draw a line somewhere, and many of those perspectives will just be diametrically opposed to core EA principles, so it would be odd to promote them at the community level. Should people try to expand their intellectual horizons as a personal project? Possibly!

I think you might have misunderstood my comment. 

I, as someone who is at least trying to be an EA, and who can speak two languages fluently and survive in 3 more, would "count" as an EA who is not from "Anglosphere West", and who has read world literature. So yes, I know I exist.

My point is that EA, as a community, should encourage that kind of thing among its members. And it really doesn't. Yes, people can do it as a personal project, but I think EA generally puts a lot of stock on people doing what are ultimately fairly difficult things (like, self-directed study of AI) without providing a consistent community with accountability that would help them achieve those things. And I think that the WEIRD / Anglosphere West / etc. demographic bias of EA is part of the reason why this seems to be the case. 

Yes, it is possible to want a perspective to survive in the future without being particularly well-versed in it. I theoretically would not want Hinduism to go extinct in 50 years and can want that without knowing a whole lot about Hinduism. 

That said, in order to know what will allow certain worldviews and certain populations to thrive, you need to understand them at least a little. The same goes if you're going to try to maximize the good you do for people, a LOT of whom are not from the Anglosphere West. If I genuinely thought that Hinduism was under threat of extinction and wanted to do something about it, trying to do that without learning anything about Hinduism would be really short-sighted of me. 

Given that most human beings for most of history have not been WEIRD in the Henrich sense, and that a lot of currently WEIRD people are becoming less so (increasing antidemocratic sentiment, the affordability crisis, rising inequality), it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD. And if you want to do what is best for that population, there should be more effort put into ensuring they will be WEIRD in some fashion[1] or into ensuring that EA interventions will help non-WEIRD people a meaningful amount in ways that they will value. Which is more than just malaria nets.  

And like... I haven't seen that conversation. 

I've seen allusions to it. But I haven't really seen it. Nor have I seen EA engage particularly well with the "a bunch of philosophers and computer scientists got together and determined that the most important thing you can be is a philosopher or computer scientist" critique, nor with the question of lowering the barriers to entry. (I also received a fairly unhelpful response when I posited that one, which boiled down to "well, you understand all of the EA projects that you're not involved in and create lower barriers of entry for all of them", which again comes back to the problem that EA creates a community and then doesn't seem to actually use it to do the things communities are good for.) 

So I think it's kind of a copout to just say "well, you can care in this theoretical way about perspectives you don't understand", given that part of the plan of EA, and its success condition, is to affect those people's lives meaningfully.

Not to mention the question of "promoting" vs "understanding". 

Should EA promote, iunno, fascism, on a community level? Obviously not. 

Should EA seek to understand fascism, and authoritarianism more broadly, as a concerning potential threat that has arisen multiple times and could arise yet again with greater technological and military force in the future? Fucking definitely. 

  1. ^

    The closest thing to this is the "liberal norms" political career path as far as I'm aware, but I think both paths should be taken concurrently and that OR is inclusive, yet the second is largely neglected.

Great comment, thanks for clarifying your position. To be clear, I’m not particularly concerned about the survival of most particular worldviews as long as they decline organically. I just want to ensure that there’s a marketplace in which different worldviews can compete, rather than some kind of irreversible ‘lock-in’ scenario.

I have some issues with the entire ‘WEIRD’ concept and certainly wouldn’t want humanity to lock in ‘WEIRD’ values (which are typically speciesist). Within that marketplace, I do want to promote moral circle expansion and a broadly utilitarian outlook as a whole. I wouldn’t say this is as neglected as you claim it is — MacAskill discusses the value of the future (not just whether there is a future) extensively in his recent book, and there are EA organisations devoted to moral values spreading. It’s also partly why “philosopher” is recommended as a career in some cases, too.

If we want to spread those values, I agree with you that learning about competitor philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.

It could also be useful to explore whether there are interventions in cultures that we’re less familiar with that could improve people’s well-being even more than the typical global health interventions that are currently recommended. Perhaps there’s something about a particular culture which, if promoted more effectively, would really improve people’s lives. But maybe not: children dying of malaria is really, really bad, and that’s not a culture-specific phenomenon.

Needless to say, none of the above applies to the vast majority of moral patients on the planet, whether they’re factory farmed land animals, fishes or shrimps. (Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)

If we want to spread those values, I agree with you that learning about competitor philosophies, ideologies, cultures and perspectives (I personally spend a fair bit of time on this) would be important, and that lowering language barriers could be helpful.

Wonderful! What specific actions could we take to make that easier for you (and others like you for whom this would be a worthwhile pursuit)?

Maybe a reading group that meets every week (or month). Or an asynchronous thread in which people provide reviews of philosophical articles or world literature. Or a group of Duolingo "friends" (or some other language-learning app of people's choice, I have a variety of thoughts on which languages should be prioritized, but starting with something would be good, and Spanish-language EAs seem to be growing in number and organization). 

It could also be useful to explore whether there are interventions in cultures that we’re less familiar with that could improve people’s well-being even more than the typical global health interventions that are currently recommended. Perhaps there’s something about a particular culture which, if promoted more effectively, would really improve people’s lives. 

Bhutan's notion of Gross National Happiness, Denmark's "hygge", whatever it is that makes certain people with schizophrenia from Africa get the voices to say nice things to them, indigenous practices of farming and sustainable hunting, and maybe the practice of "insulting the meat", just off the top of my head, would probably be good things to make more broadly understood and build into certain institutions. Not to mention knowledge of cultural features that need to be avoided or handled with care (for example, overly strict beauty standards, which harm people in a variety of different cultures). 

(Though if we want to improve, say, shrimp welfare in Asia, learning local languages could help us work and recruit more effectively as well as spread values.)

And, very importantly, it could allow you to discover new things to value, new frameworks, new ways of approaching a problem. Every language you learn comes with new intuition pumps, new frames upon which you can hang your thoughts. 

Even if you think the vast majority of moral patients are non-human and our priorities should reflect that, there are ways of thinking about animals and their welfare that have been cultivated for centuries by less WEIRD populations that could prove illuminating to you. I don't know about them, because I have my own areas of ignorance. But that's the kind of thing that EA could benefit from aggregating somewhere. 

I would be very interested in working on a project like that, of aggregating non-EA perspectives in various packages for the convenience of individual EAs who may want to learn about perspectives that are underrepresented in the community and may offer interesting insights. 

it is reasonable to believe that the future people EA is so concerned with will not be particularly WEIRD 

There's a half-joking take that some people in longtermism bring up sometimes that roughly looks like

according to demographic models, there's not a sense in which longtermism isn't just a flavor of afrofuturism

(i.e. predictions that most new humans will be born in africa) 

TBH I think that half-joking take should probably be engaged with more seriously (maybe say, pursuing more translations of EA works into Igbo or something), and I'm glad to hear it. 

Sort of related to this, I started to design an easier dialect of English because I think English is too hard and that (1) it would be easier to learn it in stages and (2) two people who have learned the easier dialect could speak it among themselves. This would be nice in reverse; I married a Filipino but found it difficult to learn Tagalog because of the lack of available Tagalog courses and the fact that my wife doesn't understand and cannot explain the grammar of her language. I wish I could learn an intentionally designed pidgin/simplified version of the language before tackling the whole thing. Hearing the language spoken in the house for several years hasn't helped.

It would be good for EAs to learn other languages, but it's hard. I studied Spanish in my free time for four years, but I remain terrible at it: my vocabulary is still small and I usually can't understand what Spanish speakers are saying. If I moved to Mexico I'm sure I would learn better. But I have various reasons not to.

Excellent reply. I cheaply agree-voted but am not agree-voting in a costly manner because that would require me backing up my cosmopolitan values by learning a language. 

Skeptical that language learning is actually the most pivotal part of learning about wild (to you) perspectives, but it's not obviously wrong. 

Thank you! I don't think it's necessarily the most pivotal part [1], but it is one part that has recently begun having its barrier to entry lowered [2]. Additionally, while reading broadly [3] could also help, the reason language learning looks so good in my eyes is the stones-to-birds ratio. 

If you read very broadly and travel a lot, you may gain more "learning about wild (to you) perspectives" benefits. But if you learn a language [4], you are: 

1) benefitting your brain, 

2) increasing the number of people in the world you can talk to, and whose work you can learn from, 

3) absorbing new ideas you may not otherwise have been able to absorb, and 

4) acquiring new intuitions [5]

You can separately do things that will fulfill all four of those things (and even fulfill some of the other benefits that language learning can provide for you) without learning another language. But I am very bad at executive skills, and juggling 4+ different habits, so I generally don't find the idea of say... 

  • doing 2 crosswords, 2 4x4x4 sudoku a day, and other brain teasers + 
  • taking dance classes or learning a new instrument + 
  • taking communications classes and reading books about public speaking and active listening + 
  • engaging in comparative-translation reading +
  • ingratiating myself to radically different communities in order to cultivate those modes of thought [6] 

...to be less onerous than learning a new language. Especially since language learning can complement, and be done concurrently with, these alternatives [7].

Language learning is also something that can help with community bonding, which would probably be helpful to the substantial-seeming portion of EAs who are kind of lonely and depressed.  It can also help you remember what it is like to suck at something, which I think a lot of people in Rationalist spaces would benefit from more broadly, since so many of them were gifted kids who now have anxiety, and becoming comfortable with failure and iteration is also good for you and your ability to do things in general. 

  1. ^

    Travelling broadly will probably provide better results to most people, but it also costs a lot of money, even more if you need to hire a translator.

  2. ^

    Especially with Duolingo offering endangered languages now.

  3. ^

    Say, reading a national award-winning book from every nation in the world.

  4. ^

    Or, preferably, if you learn 2, given that the greatest benefits are found in trilinguals+.

  5. ^

    I find that personally, I am more socially conservative in Spanish and more progressive in English, which has allowed me to test ideas against my own brain in a way that most monolinguals I talk to seem to find somewhat alien and much more effortful. Conversely, in French, I am not very capable, and I find that quite useful because it allows me to force myself to simplify my ideas on the grounds that I am literally unable to express the complex version. 

  6. ^

     + [whatever else I haven't thought of yet that would help obtain these benefits]

  7. ^

    Music terminology is often in French or Italian, learning languages will just broaden your vocabulary for crossword puzzles, knowing another language is a gateway to communities that were previously closed to you, and you can engage in reading different translations of something more easily if you can also just read it in the original language.  

Just regarding your last sentence: I disagree that it has any bearing whatsoever whether everyone else is excluding others' visions of the future or not.
No matter whether everyone else is great or terrible, I want EA to be as good as it possibly can be, and if it fails on some metric it should be criticised and changed in that regard, regardless of whether everyone else fails on the same metric too.

Thanks for your reply! I’m not saying that EA should be able to exclude others’ visions because others are doing so. I’m claiming that it’s impossible not to exclude others’ visions of the future. Let’s take the pluralistic vision of the future that appeals to MacAskill and Ord. There will be many people in the world (fascists, Islamists, evangelical Christians) who disagree with such a vision. MacAskill and Ord are thus excluding those visions of the future. Is this a bad thing? I will let the reader decide.

What are the beliefs of prioritarian standpoint theorists?

Prioritarianism is a flavor of utilitarianism that gives extra weight to improving the welfare of the worst off (the oppressed or unprivileged).

Standpoint theory or standpoint epistemology is about advantages and disadvantages to gaining knowledge based on demographic membership. 

Leftist culture is deeply exposed to both of these views, occasionally to the point of them being invisible/commonsensical assumptions. 

My internal gpt completion / simulation of someone like Thorn assumed that her rhetorical question was gesturing toward "these EA folks seem to be underrating at least one of prioritarianism or standpoint epistemology" 

measurability bias

Side note from main discussion: I really really dislike this phrase. It seems to crop up whenever anyone in the EA/rationality-adjacent space wants to handwave that their pet cause area is underappreciated but can't provide any good reason for the claim - which is exactly what you might imagine a priori that such a phrase should get used for. 

EA and ~The System~ is a perfect case in point. Leftists think EA should aim to change the system to be more left, rightists think EA should change it to be more right, or at least actively resist leftist change, and Scott Alexander observes (correctly, IMO) that 'if everyone gave 10% of their income to effective charity, it would be more than enough to end world poverty, cure several major diseases, and start a cultural and scientific renaissance. If everyone became very interested in systemic change, we would probably have a civil war.'

Putting the word 'bias' after a concept is not a reasonable way of criticising that concept.

I personally don't see how Thorn's analysis of effective altruism was 'fair' when she dedicates nearly a third of the video to longtermism and Sam Bankman-Fried, misleadingly mentions Elon Musk, while not discussing effective altruist animal advocacy or key concepts like the ITN framework and cause neutrality.

She paints EA in a slightly simplistic light (can’t expect much more from a 40-min video on a huge movement that’s over a decade old)

Actually, I do think you could expect a bit more nuance in a video that is nearly half as long as a college lecture. Thorn touches on the (leftist) critique of philanthropy, but doesn't elaborate on it - a missed opportunity in my opinion.

Thorn says: “MacAskill and Ord write a lot about progress and humanity’s potential, but they say almost nothing about who gets to define those concepts. Who gets seen as an expert? Who decides what counts as evidence? Whose vision of the future gets listened to? In my opinion, those aren’t side-questions to hide in the footnotes. They’re core to the whole project.”

I think this was one of the best parts of the video. Unfortunately, again, Thorn doesn't really go much in-depth here. Overall I think this video is not bad but it could have been far better if Thorn had put more effort into research and balanced representation of views and debates.
