
This post is part of a series of posts I am working on to propose a sociological model of Effective Altruism that could help understand and address coordination challenges.

Summary

Effective altruism is a social movement, defined as "a group of actors trying to achieve a set of shared goals through collective action". Further:

  • Effective altruism is not a purely intellectual movement. (More)
  • Mass movements are part of the social movement reference class. (More)

Introduction

I have sometimes noticed a lack of clarity in movement building strategy discussions. These discussions can span a variety of topics, such as coordination, personal experiences with EA, EA as a philosophy, and the different "domains" of EA. There is often a lack of clarity around which reference class should be used for EA, which can lead to miscommunication.

The purpose of this post is to clarify the reference class I use to think about the EA movement when developing models of movement building, in particular when considering constraints within the EA movement such as coordination. I hope this clarification helps create a common language for future work in this space and can be used as a reference document.

Definition of social movements

I define a social movement as:

A group of actors trying to achieve a set of shared goals through collective action.[1]

where

  • An actor may be an individual or an organization associated with the movement in some capacity. Actors can take on many different roles within a movement, depending on the function they serve. Common roles in a social movement are leaders, who typically coordinate the movement, and members, who typically carry out the actions the leaders coordinate.
  • Shared goals are the common outcomes all actors within the movement wish for the movement to achieve. These are very broad, overarching, goals, which are then broken down into more specific actions. Most actions can be justified by demonstrating how they achieve the broader shared goals (even if some actors may not pursue those actions themselves).
  • Collective action is defined as an act that can only be accomplished by two or more actors coordinating with each other. This could include tasks that require several actors performing the same task, like voting, protesting, or lobbying. It could also include actions that require multiple actors performing differentiated tasks based on their skills, interests, social position, and more.

Clarifications

Effective altruism is not just an intellectual movement

EA might also be observed to share many features with intellectual or philosophical movements like Structuralism or Stoicism. The key difference is that EA by definition requires action in order to achieve its goals, and there is an explicit distinction between EA as an intellectual movement and EA as a social movement. See, for example, Will MacAskill’s tentative definition, which explicitly separates the intellectual task of figuring out the most effective things to do from actually doing them.[2] His definition is broadly accepted by the community, and used in CEA’s Introduction to Effective Altruism and amongst local community building efforts. Thus, although parts of the effective altruism movement could be compared to intellectual movements, the movement as a whole should not be. In other words, effective altruism is not just a question.

Mass movements are part of the social movement reference class

Social movements are an umbrella category, and mass movements are a very common type of social movement. As per the definition of a social movement, actors associated with a mass movement like feminism often have a set of shared goals they want to achieve (e.g. women’s rights), which require particular collective action to be achieved (e.g. mass protests, petitions, political advocacy groups, direct work organisations). Mass movements have already been of interest to many popular causes within EA, such as animal advocacy and climate change.

It seems valuable to consider mass movements as a relevant reference class for EA because they often comprise many smaller movements, which may be of value to EA. For example, groups focused on supporting and advocating for survivors of sexual assault may have benefited (increased donations, destigmatization, policy change) as a result of cultural change influenced by the #MeToo movement. Studying smaller groups within mass movements could be very valuable for EA as a whole.

Thanks to Arjun Khandelwal, Neha Georgie, and Nathan Lee Heath for feedback.


    1. This definition comes from my work over the course of my BA thesis. It is not necessarily representative of the whole field of social movement theory, but probably comes close to the consensus in the resource mobilisation literature. I did not attempt an in-depth dive into competing movement definitions, instead choosing a simple definition that highlights what I see as the key differentiating features of a social movement. ↩︎

    2. MacAskill’s definition of effective altruism is “(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.” Note that global priorities research does count as “direct work”; however, the movement has always been focused on making most of this research actionable to the community. This action-orientation is what separates EA from a purely intellectual movement. ↩︎
