
YouGov recently reported the results of a survey (n=1000) suggesting that about “one in five (22%) Americans are familiar with effective altruism.”[1]


We think these results are exceptionally unlikely to be true. Their 22% figure is very similar to the proportion of Americans (19%) who claimed to have heard of effective altruism in our earlier survey (n=6130). But, after conducting appropriate checks, we estimated that a much lower percentage is likely to have genuinely heard of EA[2] (2.6% after the most stringent checks, which we speculate is still likely to be somewhat inflated[3]).


Is it possible that these numbers have simply dramatically increased following the FTX scandal?

Fortunately, we have tested this with multiple follow-up surveys explicitly designed with this possibility in mind.[4]

In our most recent survey (conducted October 6th[5]), we estimated that approximately 16% (13.0% to 20.4%) of US adults would claim to have heard of EA. Yet, when we add checks to assess whether people appear to have really heard of the term, or have a basic understanding of what it means, this estimate drops to approximately 3% (1.7% to 4.4%), and to approximately 1% with a more stringent level of assessment.[6]
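As a rough illustration of where interval estimates like these come from (not the exact procedure used for the weighted estimates above), a Wilson score interval under simple random sampling, assuming roughly 650 respondents per question format (n = 1,300 split across two formats, per footnote 5), gives ranges of broadly similar width; the published intervals reflect the weighted design, so they differ somewhat.

```python
import math

def wilson_interval(p_hat: float, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a sample proportion."""
    denom = 1 + z ** 2 / n
    centre = (p_hat + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return centre - half_width, centre + half_width

# Roughly 650 respondents saw each question format (n = 1,300 split across two formats).
for label, p in [("claims to have heard of EA", 0.16),
                 ("passes a basic understanding check", 0.03)]:
    low, high = wilson_interval(p, 650)
    print(f"{label}: {p:.0%} (approx. {low:.1%} to {high:.1%} under simple random sampling)")
```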
 


These results are roughly in line with our earlier polling in May 2022, as well as additional polling we conducted between May 2022 and October 2023, and do not suggest any dramatic increase in awareness of effective altruism. That said, detecting small changes is difficult when base rates are already this low.
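To make that last point concrete, here is a textbook two-proportion power calculation, offered purely as an illustration rather than a description of our survey planning: detecting a shift in genuine awareness from around 2% to 3% would require several thousand respondents per wave.

```python
import math

def n_per_wave(p1: float, p2: float) -> int:
    """Approximate respondents needed per survey wave to detect a shift from p1 to p2
    using a two-sided two-proportion z-test (alpha = 0.05, 80% power, normal approximation)."""
    z_alpha = 1.96   # critical value for two-sided alpha = 0.05
    z_beta = 0.8416  # critical value for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# A one-point rise at a ~2% base rate needs a far larger sample than the same
# relative change at a higher base rate.
print(n_per_wave(0.02, 0.03))  # roughly 3,800 respondents per wave
print(n_per_wave(0.20, 0.30))  # roughly 300 respondents per wave
```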

We plan to continue to conduct additional surveys, which will allow us to assess possible changes from just before the trial of Sam Bankman-Fried to after the trial.

Attitudes towards EA

YouGov also report that respondents are, even post-FTX, overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA.

Fortunately, this positive view is broadly in line with our own findings (across different ways of breaking down who has heard of EA and different levels of stringency), which we aim to report on separately at a later date. However, our earlier work did find that awareness of FTX was associated with more negative attitudes towards EA.

Conclusions

The point of this post is not to criticise YouGov in particular. However, we do think it’s worth highlighting that even highly reputable polling organizations should not be assumed to be employing all the additional checks that may be required to interpret responses to a particular question. This applies especially to niche topics like effective altruism, or more technical topics like AI, where additional nuance and checks may be needed to assess genuine understanding.


 

  1. ^

    Also see this quick take.

  2. ^

    There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). Some people also appear to conflate the term with others: for example, some amalgamation of inclusive fitness/reciprocal altruism appears quite common.

    For reference, in another check we included, over 12% of people claimed to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quoted phrase, and which is not recognised as a term by GPT, a large language model trained on a massive corpus of public and private data. “Globally neutral advocacy” serves as something of a canary for illegitimate claims of having heard of EA: it is composed of words people are likely to know, whose combination they might reasonably think they can infer the meaning of, or even simply mistakenly believe they have encountered.

  3. ^

    For example, it is hard to prevent a motivated respondent from googling “effective altruism” in order to provide a reasonable open comment explanation of what effective altruism means. However, we have now implemented additional checks to guard against this.

  4. ^

    The results of some of these surveys have been reported earlier here; some are part of our Pulse survey program.

  5. ^

    n=1300 respondents overall, but respondents were randomly assigned to receive one of two different question formats to assess their awareness of EA. Results were post-stratified to be representative of US adults (see the illustrative sketch at the end of these footnotes). This is a smaller sample size than we typically recommend for a nationally representative sample, as this was an intermediate 'pre-test' survey, and hence the error bars around these estimates are wider than they otherwise would be. A larger N would be especially useful for more robustly estimating the rates of low-incidence outcomes (such as awareness of niche topics).

  6. ^

    As an additional check, we also assessed EA awareness using an alternative approach, in which a different subset of respondents were shown the term and its definition, then asked whether they knew the term only, the term and the associated ideas, only the ideas, or neither the term nor the ideas. Using this design, approximately 15% claimed knowledge of either the term alone or both the term and ideas, while only 5% claimed knowledge of both the term and the ideas.
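As a minimal sketch of the post-stratification mentioned in footnote 5 (with invented cell counts and a single age dimension only, not the weighting variables actually used), each demographic cell's response rate is reweighted by its known share of the US adult population:

```python
# Invented illustrative numbers: respondents per age cell and how many in each cell
# claim to have heard of EA. The real survey weights on more than one variable.
sample = {
    "18-29": (220, 55),
    "30-49": (420, 70),
    "50-64": (380, 45),
    "65+":   (280, 25),
}
# Approximate population shares each cell should carry in the weighted estimate.
population_share = {"18-29": 0.21, "30-49": 0.33, "50-64": 0.24, "65+": 0.22}

n_total = sum(n for n, _ in sample.values())
unweighted = sum(hits for _, hits in sample.values()) / n_total

# Post-stratified estimate: weight each cell's rate by its population share.
weighted = sum(population_share[cell] * hits / n for cell, (n, hits) in sample.items())

print(f"unweighted estimate: {unweighted:.1%}, post-stratified estimate: {weighted:.1%}")
```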

Comments (8)



Strongly agree. Given the question design ("Are you familiar with effective altruism?"), there's a clear risk of acquiescence bias, on top of the fundamental social desirability bias of not wanting to appear ignorant to your interviewer.

For sure, and just misunderstanding error could account for a lot of positive responses too: people thinking they know it when they don't.

Agreed. As we note in footnote 2:

There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). 

For reference, in another check we included, over 12% of people claimed to have heard of the specific term “Globally neutral advocacy”: a term that our research team invented, which returns no Google results as a quoted phrase, and which is not recognised as a term by GPT, a large language model trained on a massive corpus of public and private data. “Globally neutral advocacy” serves as something of a canary for illegitimate claims of having heard of EA: it is composed of words people are likely to know, whose combination they might reasonably think they can infer the meaning of, or even simply mistakenly believe they have encountered.

I think this is one reason why "effective altruism" gets higher levels of claimed awareness than other fake or low incidence terms (which people would be very unlikely to have encountered).

Outsider here! I dropped out of grad school years ago and was never really involved in the "elite" academic or professional scene to which most EA members belong. The term "effective altruism" was familiar to me since my student days in the early '10s, but I didn't really know much about it until very recently (the whole OpenAI scandal brought it to my attention, and I decided to explore the philosophical roots of it all over the holiday).

 

What are the stringent and permissive criteria for judging that someone has heard of EA?

The full process is described in our earlier post, and included a variety of other checks as well. 

But, in brief, the "stringent" and "permissive" criteria refer to how we coded respondents' open comment explanations of what they understand "effective altruism" to mean. Under the stringent criterion, a response had to display clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with it (e.g. by referring to using evidence and reason to maximise the amount of good done with your donations or career, or by referring to specific EA figures, books, orgs, events etc.). Under the permissive criterion, it merely had to be probable, based on the comment, that the respondent had heard of effective altruism (e.g. because the response was more vague or less specific).

This is a very helpful post, thanks! 

You write "YouGov also report that respondents are ... overwhelmingly positive towards EA, with 81% of those who (claim to) have heard of EA approving or strongly approving of EA. Fortunately, this positive view is broadly in line with our own findings ... which we aim to report on separately at a later date". 

Could you give an ETA for that? Or could you provide further details? Even if you haven't got data for the Netherlands it'd help us make estimates, which will then inform our strategy.  

Thanks!

We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post on changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.

Feel free to reach out privately if there are specific things it would be helpful to know for EA Netherlands.
