This is a wonderful critique - I agreed with it much more than I thought I would.
Fundamentally, EA is about two things. The first is a belief in utilitarianism or a utilitarian-esque moral system: the idea that there exists an optimal world we should aspire to. This is a belief I think is pretty universal, whether people want to admit it or not.
The second part of EA is the belief that we should try to do as much good as possible. Emphasis on “try”: there is a subtle distinction between “hoping to do the most good” (the previous paragraph) and “actively trying to do the most good”. This piece points out many ways in which the latter does not actually lead to the former: the focus on quantifying impact leads to a disproportionately male and white community, to a reliance on nonprofits that tend to be less sustainable, to the outsourcing of intellectual work to individual decision-makers, and so on.
But the question of “does trying to optimize impact actually lead to optimal outcomes?” is ultimately an epistemic one. The critiques mentioned are counter-arguments, and there are numerous arguments in favor that many others have made. But this is a question on which we have some actual evidence, and I feel this piece understates the substantial work EA has already done. We have very good evidence that GiveWell charities have an order of magnitude more impact than the average charity. We are supporting animal welfare policy that has won some major victories in state referenda. We have good reason to believe AI safety is a horribly neglected issue that we need to work on.
This isn’t just a theoretical debate. We know we are doing better work than the average altruistic person outside the community. Effective Altruism is working.
It's kind of funny for me to hear about people arguing that weirdness is a necessary part of EA. To me, EA concepts are so blindingly straightforward ("we should try to do as much good with donations as possible", "long-term impacts are more important than short-term impacts", "even things that have a small probability of happening are worth tackling if they are impactful enough") that you have to actively modify your rhetoric to make them seem weird.
Strongly agree with all of the points you brought up - especially on AI Safety. I was quite skeptical for a while until someone gave me an example of AI risk that didn't sound like it was exaggerated for effect, to which my immediate reaction was "Yeah, that seems... really scarily plausible".
I think something like 0.1% of the population is a more accurate figure for the strictest category as you coded it, and 0.3% for the share I would consider to have actually heard of the movement. These are the figures I would have given before seeing the study, anyway.
It's hard for me to point to specific numbers that have shaped my thinking, but I'll lay out a bit of my thought process. Of the people I know in person through non-EA means, I'm pretty sure no more than a low-single-digit percent know about EA, and this is a demographic that is far more likely to have heard of EA than the general public. Additionally, as someone who looks at a lot of political polls, I am constantly shocked at how little the public knows about pretty much everything. Given that e.g. EA Forum participation numbers are measured in the thousands, I highly doubt 6 million Americans have heard of EA.
Really good write-up!
I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.
Moreover, given that I expect EA knowledge to be extremely low in the general population, I’m not sure what the point of doing these surveys is. It seems to me you’re always fighting against various forms of survey bias that are going to dwarf any real data. Doing surveys of specific populations seems a more productive way of measuring knowledge.
I’ll update my priors a bit, but I remain skeptical.
I think we can use the EA/Rationality divide to create a home for the philosophy-oriented people in Rationality without letting them dominate EA culture. Rationality used to totally dominate EA; I think that has become less true over time, even if it's still pretty prevalent. Having separate rationality events that people know about, while still ensuring that people devoted to EA have strong rationalist fundamentals (which is a big concern!), seems like the way to go for building a thriving community.
Shouldn’t we know better than to update in retrospect based on one highly uncertain datapoint?
We have a number of political data people in EA (e.g. David Shor) who thought donating to Flynn was a good investment early in the campaign cycle (later on, I was hearing they thought it was no longer worth it). There was also good reason to believe Flynn could be high-impact if elected. Let’s not overthink this.
If you want to get a lot of money for your project, EA grants are not the way to do it. Because of the strong philosophical principles of the EA community, we are more skeptical and rigorous than just about any funding source out there. Granted, I don't actually know much about the nonprofit grant space as a whole: if it comes to the point that EA grants are basically the only game in town for nonprofit funding, then maybe that could become an issue. But if that ever happens, I think we would be in a very good position overall, and I believe we could come up with some solutions.
I do think that it's very important not to make animal welfare a partisan issue, so if you do bring it up, be careful. The same probably goes for a lot of these other issues, I just know animal welfare in particular is able to make a lot of headway in public referenda because it is relatively nonpartisan.
For me, mental health is a notable topic because it is one of the few downsides of modernization. I have a pretty grim view of humanity, and I've talked to a lot of people about how I think the median human living on $5 a day probably has a terrible life.
The response is always something along these lines: they've never experienced anything else, so for them it's really not that bad of a life. That is, people have some underlying intuition that there is always hedonic adaptation to a new "quality of life", and that someone's perspective on their own life matters maybe even more than the life itself.
In rich countries, this looks like mental health issues. People get so used to their physical needs being taken care of that any emotional struggles in their life feel amplified, leading to anxiety and depression.
So I think it is accurate that the most important issues in this world are global health, poverty, etc., simply because so much of the world is underdeveloped. However, if we want to get to a really great world, a world approaching perfection, we will have to tackle mental health issues.
In that case I'm going to blame Google for defining volition as "the faculty or power of using one's will." Or maybe that does mean "endorse"? Honestly I'm very confused, feel free to ignore my original comment.