
YouGov recently reported the results of a survey (n=1000) suggesting that about “one in five (22%) Americans are familiar with effective altruism.”[1]


We think these results are exceptionally unlikely to be true. Their 22% figure is very similar to the proportion of Americans who claimed to have heard of effective altruism (19%) in our earlier survey (n=6130). But, after conducting appropriate checks, we estimated that much lower percentages are likely to have genuinely heard of EA[2] (2.6% after the most stringent checks, which we speculate is still likely to be somewhat inflated[3]).


Is it possible that these numbers have simply dramatically increased following the FTX scandal?

Fortunately, we have tested this with multiple follow-up surveys explicitly designed with this possibility in mind.[4]

In our most recent survey (conducted October 6th[5]), we estimated that approximately 16% (13.0%-20.4%) of US adults would claim to have heard of EA. Yet when we add further checks to assess whether people appear to have really heard of the term, or have a basic understanding of what it means, this estimate drops to 3% (1.7% to 4.4%), and to approximately 1% with a more stringent level of assessment.[6]
 


These results are roughly in line with our earlier polling in May 2022, as well as additional polling we conducted between May 2022 and October 2023, and do not suggest any dramatic increase in awareness of effective altruism, although assessing small changes when base rates are already low is challenging.

We plan to continue to conduct additional surveys, which will allow us to assess possible changes from just before the trial of Sam Bankman-Fried to after the trial.

Attitudes towards EA

YouGov also report that respondents are, even post-FTX, overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA.

Fortunately, this positive view is broadly in line with our own findings (across different ways of breaking down who has heard of EA and different levels of stringency), which we aim to report on separately at a later date. However, our earlier work did find that awareness of FTX was associated with more negative attitudes towards EA.

Conclusions

The point of this post is not to criticise YouGov in particular. However, we do think it’s worth highlighting that even highly reputable polling organizations should not be assumed to employ all the additional checks that may be required to properly interpret responses to a particular question. This may apply especially to niche topics like effective altruism, or more technical topics like AI, where additional nuance and checks may be required to assess understanding.


 

  1. ^

    Also see this quick take.

  2. ^

    There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). Some people also appear to conflate the term with others - for example, some amalgamation of inclusive fitness/reciprocal altruism appears quite common. 

    For reference, in another check we included, over 12% of people claim to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quote, and which is not recognised as a term by GPT (a large language model trained on a massive corpus of public and private data). “Globally neutral advocacy” represents something of a canary for illegitimate claims of having heard of EA, in that it is composed of terms people are likely to know, and whose combination they might reasonably think they can infer the meaning of, or even simply mistakenly believe they have encountered.

  3. ^

    For example, it is hard to prevent a motivated respondent from googling “effective altruism” in order to provide a reasonable open comment explanation of what effective altruism means. However, we have now implemented additional checks to guard against this.

  4. ^

    The results of some of these have been reported earlier here. Some of these are part of our Pulse survey program.

  5. ^

    n=1300 respondents overall, but respondents were randomly assigned to receive one of two different question formats to assess their awareness of EA. Results were post-stratified to be representative of US adults. This is a smaller sample size than we typically recommend for a nationally representative sample, as this was an intermediate, 'pre-test' survey, and hence the error bars around these estimates are wider than they otherwise would be (see the sketch after the footnotes for a rough illustration of how sample size affects interval width). A larger N would be especially useful for more robustly determining the rates of low-incidence outcomes (such as awareness of niche topics).

  6. ^

    As an additional check, we also assessed EA awareness using an alternative approach, in which a different subset of the respondents were shown the term and its definition, then asked if they knew the term only, the term and associated ideas, only the ideas, or neither the term nor the ideas. Using this design, approximately 15% claimed knowledge either of the term alone or both the term and ideas, while only 5% claimed knowledge of the term and the ideas.
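
The following is a minimal sketch, in Python, of the point in footnote 5 about sample size and error bars. It assumes simple random sampling, a normal (Wald) approximation, and a hypothetical ~650 respondents per question format (half of the 1,300); the survey's actual estimates are post-stratified, so the published intervals (e.g. 13.0%-20.4% around 16%) are wider and not symmetric, but the basic relationship between sample size and precision is the same.

```python
# Minimal sketch (not our actual estimation pipeline): Wald 95% intervals for a
# proportion under simple random sampling, to show how interval width scales with n.
import math

def wald_interval(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a sample proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Hypothetical: ~650 respondents per question format (1,300 split across two formats)
print(wald_interval(0.16, 650))   # roughly (0.13, 0.19) -- relatively wide
# A larger hypothetical sample of 2,500 tightens the interval noticeably
print(wald_interval(0.16, 2500))  # roughly (0.15, 0.17)
```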

Comments (8)



Strongly agree. Given the question design ("Are you familiar with effective altruism?"), there's clear risk of acquiescence bias - on top of the fundamental social desirability bias of wanting to not appear ignorant to your interviewer.

For sure, and just misunderstanding error could account for a lot of positive responses too - people thinking they know it when they don't.

Agreed. As we note in footnote 2:

There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). 

For reference, in another check we included, over 12% of people claim to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quote, and which is not recognised as a term by GPT (a large language model trained on a massive corpus of public and private data). “Globally neutral advocacy” represents something of a canary for illegitimate claims of having heard of EA, in that it is composed of terms people are likely to know, and whose combination they might reasonably think they can infer the meaning of, or even simply mistakenly believe they have encountered.

I think this is one reason why "effective altruism" gets higher levels of claimed awareness than other fake or low incidence terms (which people would be very unlikely to have encountered).

Outsider here! I dropped out of grad school years ago and was never really involved in the "elite" academic or professional scene to which most EA members belong. The term "effective altruism" was familiar to me since my student days in the early '10s, but I didn't really know much about it until very recently (the whole OpenAI scandal brought it to my attention, and I decided to explore the philosophical roots of it all over the holiday).

 

What are the stringent and permissive criteria for judging that someone has heard of EA?

The full process is described in our earlier post, and included a variety of other checks as well. 

But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to mean. Under the stringent criterion, a response had to display clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with effective altruism (e.g. by referring to using evidence and reason to maximise the amount of good done with your donations or career, or by referring to specific EA figures, books, orgs, events etc.). Under the permissive criterion, it only had to be probable, based on their comment, that they had heard of effective altruism (e.g. because the response was vaguer or less specific).

This is a very helpful post, thanks! 

You write "YouGov also report that respondents are ... overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA. Fortunately, this positive view is broadly in line with our own findings ... which we aim to report on separately at a later date". 

Could you give an ETA for that? Or could you provide further details? Even if you haven't got data for the Netherlands it'd help us make estimates, which will then inform our strategy.  

Thanks!

We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post on changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.

Feel free to reach out privately if there are specific things it would be helpful to know for EA Netherlands.
