
Summary

  • In this report, we explore the differences between respondents who self-identify as effective altruists and those who say they broadly subscribe to effective altruism but do not self-identify. Because levels of involvement in the effective altruism movement vary, we were interested in assessing people who fall outside the scope of the typical analysis.
  • Past reports in the EA Survey Series have reported only on respondents who are aware of effective altruism, subscribe to effective altruism, and describe themselves as effective altruists.

To perform this analysis, we used three questions* to classify people into two segments – “subscribers” and “identifiers.”

  • Subscribers are defined as respondents who are aware of effective altruism and broadly subscribe to its ideals, but do not identify as effective altruists.
  • Identifiers are defined as respondents who are aware of effective altruism, broadly subscribe to its ideals, and identify as effective altruists.

After analyzing the data, we found:

  • Subscribers are demographically similar to identifiers, but on average have been involved in EA for less time.
  • The scope of subscriber involvement is fairly limited – they donate less money, volunteer less, and are less likely to be a part of an effective altruism group.
  • As this is the first year we have asked the question in this way, we do not yet have the longitudinal data needed to see how, or whether, subscribers convert to identifiers. However, the demographic similarities between the two populations, as well as their use of similar resources, suggest that subscribers may deepen their involvement over time and later become identifiers.

Insights

Demographics

Within the sample of valid EA Survey respondents, 2,582 (90%) were identifiers and 302 (10%) were subscribers. These populations were broadly similar with respect to age, gender, ethnicity, religion, employment, and marital status.
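As a quick arithmetic sketch (not part of the original analysis), the reported shares follow directly from the two segment counts:

```python
# Segment counts reported above
identifiers, subscribers = 2582, 302
total = identifiers + subscribers

print(total)                             # total valid respondents in the two segments
print(round(100 * identifiers / total))  # identifier share, %
print(round(100 * subscribers / total))  # subscriber share, %
```

The exact shares are about 89.5% and 10.5%, which the report rounds to 90% and 10%.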

Getting Into EA

As might be expected, the amount of time the two populations have spent in the community differs significantly. Subscribers are largely newer to the community: only 9% report involvement in EA prior to 2014, compared to 21% of identifiers.

The graph below shows the distribution of when the two groups report having first started being involved in effective altruism.

It may be that as subscribers spend more time in the effective altruism community, they convert to identifying as effective altruists. Or it may be that people who subscribe to the ideals but do not identify are a growing subgroup in the community, given expanding awareness of effective altruism as a concept. Further research will be needed to explore how people relate to the ideals of effective altruism over time. The graph below displays the top sources from which subscribers and identifiers first heard of effective altruism.

Looking at the table of important factors for involvement below, the source both identifiers and subscribers most often report is 80,000 Hours. Identifiers then report that GiveWell, other books and articles, and personal contact were the most important factors for involvement.

Cause Prioritization

Among all respondents in the survey, global poverty is the single cause most often selected as a top priority. Subscribers and identifiers both agree with that selection, but differ in their distribution of cause preferences. Climate change is a higher priority for subscribers, with 19% of them seeing it as the top priority compared to 13% of identifiers. After global poverty, a large portion of identifiers (16%) think that reducing risk from artificial intelligence is the most important cause, while only 9% of subscribers agree.

Community Involvement

The majority of subscribers (56%) are not involved in any effective altruism groups (here, groups include the EA Facebook group, LessWrong, the EA Forum, and local EA groups). This appears to be partially due to a lack of knowledge about where to access resources: 59% of subscribers state that they do not know of a local EA group, and only 7% are involved in one, compared to 33% of identifiers. Subscribers generally feel unsure about whether they want to be involved in local effective altruism communities (45%). Identifiers are also twice as likely to volunteer for an effective altruism cause (32% vs. 14%).

The graph below shows the distribution of involvement with different effective altruism groups.

It appears that many subscribers do not know how they feel about the community at large. Subscribers are most likely (36%) to answer “I don’t know” when asked how welcoming they find the community, likely due to their relative lack of involvement.

Career Path

While most identifiers (78%) feel that learning about effective altruism has shifted or will shift their career path, 48% of subscribers do not feel that learning about effective altruism has changed or will change their career. A plurality of subscribers do not plan to pursue direct charity work, earning to give, or research.

Donations

Subscribers and identifiers also vary on behavioral markers of involvement, such as plans to donate money and desire to be involved in effective altruism in the future. 22% of subscribers report that they do not plan to donate money in the coming year, compared to only 12% of identifiers. Additionally, only 5% of subscribers have taken the Giving What We Can pledge, while 33% of identifiers have done so.

Other Findings

In future surveys, it would be interesting to explore whether people convert from subscribers to identifiers and which inflection points may be correlated with that mindset shift.

Methodology

*Questions used for segmentation (Answers are Yes/No):

  • Are you aware of effective altruism?
  • Do you broadly subscribe to the basic ideas behind effective altruism?
  • Could you, however loosely, be described as "an Effective Altruist"?
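The segmentation described above can be expressed as a simple classification rule. This is a minimal sketch; the function name and labels are illustrative, not the survey's actual variable names:

```python
def classify(aware, subscribes, identifies):
    """Segment a respondent by the three yes/no questions above.

    Returns None for respondents outside the scope of this analysis
    (not aware of effective altruism, or aware but not broadly
    subscribing to its ideals).
    """
    if not (aware and subscribes):
        return None
    return "identifier" if identifies else "subscriber"

print(classify(True, True, True))    # prints "identifier"
print(classify(True, True, False))   # prints "subscriber"
print(classify(True, False, False))  # prints "None"
```

Only the third question distinguishes the two segments; the first two define who is included in the analysis at all.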

Credits

This post was written by Lauren Whetstone.

Thanks to Tee Barnett and David Moss for editing.

The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.

Supporting Documents

Other articles in the 2018 EA Survey Series:

I - Community Demographics & Characteristics

II - Distribution & Analysis Methodology

III - How do people get involved in EA?

V - Donation Data

VI - Cause Selection

VII- EA Group Membership

VIII- Where People First Hear About EA and Higher Levels of Involvement

IX- Geographic Differences in EA

X- Welcomingness- How Welcoming is EA?

XI- How Long Do EAs Stay in EA?

XII- Do EA Survey Takers Keep Their GWWC Pledge?

Prior EA Surveys conducted by Rethink Charity

Comments (13)



Thanks for this analysis. If there's time for more, I'd be keen to see something more focused on 'level of contribution' rather than subscriber vs. identifier. I'm not too concerned about whether someone identifies with EA, but rather with how much impact they're able to have. It would be useful to know which sources are most responsible for the people who are most contributing.

I'm not sure what proxies you have for this in the survey data, but I'm thinking ideally of concrete achievements, like working full-time in EA; or donating over $5,000 per year.

You could also look at how dedicated to social impact they say they are combined with things like academic credentials, but these proxies are much more noisy.

One potential source of proxies is how involved someone says they are in EA, but again I don't care about that so much compared to what they're actually contributing.

Agreed. As per my reply to you here, we're still going to talk about the influence of different levels of involvement with regard to cause selection, and in a post addressing your question about levels of involvement and the different routes by which people get involved in EA.

Great! I was wondering if this might be it.

This is a bit of a nitpick, but I wonder if it's a bit strong to label people who said 'yes' to all three questions as "identifiers" and to say that they "identify as an EA."

I can imagine quite a few people would say yes when asked "Could you, however loosely, be described as 'an Effective Altruist'?" but would say no when asked "do you identify as an EA" or even "do you, however loosely, identify as an EA?"

"Could you be described" seems to focus on how other people would describe you and seems to set the bar pretty low - around "would it be unreasonable for somebody to describe you as X" or "does anybody at all describe you as X."

I'd usually only say that someone identifies as 'X' if they describe themselves as X or they think X is a good description of them or they prefer to be described as X.

This may be a very minor point and there might not be all that many people who say they "could, however loosely, be described as an EA" but who do not identify, however loosely, as an EA. For what it's worth, I think there were a couple of years where I myself fell into this category, but it wouldn't really surprise me if I wasn't representative.

Agreed. When I was reading the article, I thought, "Oh yes! I'm a subscriber. What a clever way of describing me."

When I read the actual questions, I realized there's no way I would have been counted as a subscriber. Because I regularly volunteer, other people would definitely describe me as EA, even if I'm a bit on the fence.

It seems like your causality might go this way as well. Rather than "subscribers volunteer less," the story might actually be "people who volunteer for the community know that others describe them as EA, so volunteers are usually identifiers."

Hi there, just a quick thought on the cause groupings in case you use them in future posts.

Currently, the post notes that global poverty is the cause most often selected as the top priority, but it should add that this is sensitive to how the causes are grouped, and there's no single clear way to do this.

The most common division we have is probably these 4 categories: global poverty, GCRs, meta and animal welfare.

If we used this grouping, then the identifiers would report:

GCRs: 28%

Global poverty: 27%

Meta: 27%

Animal welfare: 10%

(Plus climate change: 13%; mental health: 4%)

So, basically the top 3 areas are about the same. If climate change were grouped into GCRs, then GCRs would go up to 41% and be the clear leader.

Global poverty is a huge area that receives hundreds of billions of dollars of investment, and could arguably be divided into health, economic empowerment (e.g. cash transfers), education, policy-change etc. That could also be an option for the next version of the survey.

I'm glad we have the finer grained divisions in the survey, but we have to be careful about how to present the results.

Thanks Ben. I totally agree and we're going to go into this a lot more in the Cause Preference post.

It's interesting that you think of grouping Climate Change with GCRs. I normally think of it as an aspect of medium-term global poverty, because my impression is that climate change will displace and kill millions in the developing world, but doesn't present an existential risk. I'm open to changing my mind though.

I think in practice people work on it for both reasons depending on their values.

I'd find it really helpful if you included the exact wording of each question when you describe these results, or alternatively a prominent link to all of the questions. For example, how exactly did the survey ask about "top cause"?

Hi Howie! The cause preference post that is upcoming will go into the phrasing of top cause in more detail. In the meantime, you can find the entirety of the survey text here: https://docs.google.com/document/d/1LGxLFDTbjhPnme3ruOnwt_ZLumdwTs7ylkHx0-qJl6I/edit

[anonymous]

Thanks for taking the time to look into this.

I think this highlights the issues with the nomenclature of effective altruism. I find the question "do you identify as an effective altruist" to be akin to "do you identify as a good person." No matter how much I donate to EA causes or how much good I do with my career, I would not answer yes because it comes off to me as a bit presumptuous and arrogant, and it insinuates that people outside this community are not as effective and altruistic. To be clear, I think the community as a whole does a ton of good and I'm grateful it exists--my concern (which I know others have raised as well) is only with the title.

I agree that looking at more concrete metrics of contribution (e.g. percentage of income donated) might be more informative.
