Epistemic status: I mostly want to provide a starting point for discussion, not make any claims with high confidence.
Introduction and summary
It’s 2024. The effective altruism movement no longer exists, or is no longer doing productive work, for reasons our current selves wouldn’t endorse. What happened, and what could we have done about it in 2019?
I’m concerned that I don’t hear this question discussed more often (though CEA briefly speculates on it here). It’s a prudent topic for a movement to be thinking about at any stage of its life cycle, but our small, young, rapidly changing community should be taking it especially seriously—it’s very hard to say right now where we’ll be in five years. I want to spur thinking on this issue by describing four plausible ways the movement could collapse or lose much of its potential for impact. This is not meant to be an exhaustive list of scenarios, nor is it an attempt to predict the future with any sort of confidence—it’s just an exploration of some of the possibilities, and what could logically lead to what.
- Sequestration: The EAs closest to leadership become isolated from the rest of the community. They lose a source of outside feedback and a check on their epistemics, putting them at a higher risk of forming an echo chamber. Meanwhile, the rest of the movement largely dissolves.
- Attrition: Value drift, burnout, and lifestyle changes cause EAs to drift away from the movement one by one, faster than they can be replaced. The impact of EA tapers, though some aspects of it may be preserved.
- Dilution: The movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and “effective altruism” becomes a meaningless term, making the original ideas impossible to communicate.
- Distraction: The community becomes engrossed in concerns tangential to impact, loses sight of the object level, and veers off track of its goals. Resources are misdirected and the best talent goes elsewhere.
Below, I explore each scenario in greater detail.
Sequestration
To quote CEA’s three-factor model of community building:
Some people are likely to have a much greater impact than others. We certainly don’t think individuals with more resources matter any more as people, but we do think that helping direct their resources well has a higher expected value in terms of moving towards CEA’s ultimate goals.
good community building is about inclusion, whereas good prioritization is about exclusion
It might be difficult in practice for us to be elitist about the value someone provides whilst being egalitarian about the value they have, even if the theoretical distinction is clear.
I don’t want to be seen as arguing for any position in the debate about whether and how much to prioritize those who appear most talented—a sufficiently nuanced writeup of my thoughts would distract from my main point here. However, I do want to highlight a possible risk of too much elitism that I haven’t really seen talked about. The terms “core” and “middle” are commonly used here, but I generally find their use conflates level of involvement or commitment with level of prominence or authority. In this post I’ll be using the following definitions:
- Group 1 EAs are interested in effective altruism and may give effectively or attend the occasional meetup, but don’t spend much time thinking about EA or consider it a crucial part of their identities and their lives.
- Group 2 EAs are highly dedicated to the community and its project of making the world a better place; they devour EA content online and/or regularly attend meetups. However, they are not in frequent contact with EA decision-makers.
- Group 3 EAs are well-known community members, or those who have been identified as potentially high-impact and have prominent EAs or orgs like 80K investing in their development as effective altruists.
A sequestration collapse would occur if EA leadership stopped paying much attention to Groups 1 and 2, or became so tone-deaf about putting Group 3 first that everyone else would feel alienated and leave the movement. Without direction and support, most of Group 1 and some of Group 2 would likely give up on the idea of doing good effectively. The others might try to go it alone, or even try to found a parallel movement—but without the shared resources, coordination ability, and established networks of the original community, they would be unlikely to recapture all the impact lost in the split. Meanwhile, Group 3 would be left with little to no recruitment ability, since most Group 3 EAs pass through Groups 1 and 2 first.
Finally, Group 2 and especially Group 1 act as a bridge between Group 3 and the rest of the world, and the more grounded, less radical perspective they bring may help prevent groupthink, group polarization, and similar dynamics. Without it, Group 3 would be left dangerously isolated and more prone to epistemic errors. Overall, losing Groups 1 and 2 would curtail EA’s available resources and threaten the efficiency with which we used them—possibly forever.
Again, prioritization of promising members is a very hard needle to thread. However, EA leadership should put a great deal of thought and effort into welcoming, inclusive communication and try hard to avoid implying that certain people aren’t valuable. They should also keep an eye on the status and health of the community: if decision-makers get out of touch with the perspectives, circumstances, and problems of the majority of EAs, their best efforts at inclusivity are unlikely to succeed. Prominent EAs should strive to be accessible to members of Groups 1 and 2 and to hear out non-experts’ thoughts on important issues, especially ones concerning community health. Local group organizers should create newcomer-friendly, nonjudgmental spaces and respond to uninformed opinions with patience and respect. We can all work to uphold a culture of basic friendliness and openness to feedback.
Attrition
Over time, some EAs will inevitably lose their sense of moral urgency or stop feeling personally compelled to act against suffering. Some will overwork themselves, experience burnout, and retreat from the community. Some will find that as they grow older and move into new life stages, an altruism-focused lifestyle is no longer practical or sustainable. Some will decide they disagree with the movement’s ideals or the direction it seems to be moving in, or be drawn by certain factors but repelled by others and find that over time their aversion wins out. Each person’s path to leaving the movement will be unique and highly personal. But these one-offs will pose a serious danger to the movement if they accumulate faster than we can bring new people in.
In an attrition collapse scenario, the movement’s impact would taper slowly as people dropped out one by one. EA’s ideas might continue to influence ex-members’ thinking over their lifetimes, and some people might continue to donate substantially to high-impact charities without following the latest research or making the community a part of their lives. Some highly active percentage of EAs would continue to pursue effective altruist goals as people bled away around them, possibly keeping some of the movement’s institutions on life support. If we managed to retain a billionaire or two, we could even continue work like the Open Philanthropy Project’s. But even if we did, our capacity would be greatly reduced and our fundamental ideas and aspirations would die out.
Whether and when to leave the movement is something we should each decide for ourselves, so we shouldn’t fight attrition on the level of individuals. Instead, we should shore up the movement as a whole. EA leadership should keep an eye on the size of the community and devote enough resources to recruitment to keep EAs off the endangered species list. Local group organizers can create welcoming environments for newcomers and foster a warm, supportive community that people enjoy engaging with. We should all work hard to be that community, online and in person.
Dilution
From CEA’s fidelity model:
A common concern about spreading EA ideas is that the ideas will get "diluted" over time and will come to represent something much weaker than they do currently. For example, right now when we talk about which cause areas are high impact, we mean that the area has strong arguments or evidence to support it, has a large scope, is relatively neglected, and is potentially solvable.
Over time we might imagine that the idea of a high impact cause comes to mean that the area has some evidence behind it and has some plausible interventions that one could perform. Thus, in the future, adherence to EA ideas might imply relatively little difference from the status quo.
I'm uncertain about whether this is a serious worry. Yet, if it is, spreading messages about EA with low fidelity would significantly exacerbate the problem. As the depth and breadth of ideas gets stripped away, we should expect the ideas around EA to weaken over time, which would eventually cause them to assume a form that is closer to the mainstream.
In a dilution scenario, the movement becomes flooded with newcomers who don’t understand EA’s core concepts and misapply or politicize the movement’s ideas. Discussion quality degrades and so many things fall under the banner of “effective altruism” that it becomes meaningless to talk about. “It’s effective!” starts to look like “It’s healthy!” or “It’s environmentally friendly!”: often poorly thought out or misleading. It becomes much harder to distinguish the signal from the noise. CEA uses the possibility of this scenario as an argument against “low-fidelity” outreach strategies like mass media.
I think it’s possible that EA becoming more mainstream would result in a two-way transfer of ideas. Depending on the scale and specifics of this process, the benefits of slightly improving decision-making in large swaths of society may completely swamp the effects from damage to the original movement. This seems plausible, though not necessarily probable, for global poverty reduction and animal welfare. It seems very unlikely for x-risk reduction, which may succeed or fail based on the quality of ideas of a relatively small number of people.
Could we just shrug and sneak away from the confusion to quietly pursue our original goals? Probably not. People we needed to communicate with would often misunderstand us, interpreting what we said through the lens of mainstream not-quite-EA. It would also be difficult to separate our new brand from the polluted old one, meaning the problem would likely follow us wherever we went.
Assuming we decide a dilution scenario is bad, what can we do to avoid it? As CEA emphasizes, we should communicate about the movement in high-fidelity ways, taking care with how we present EA and resisting the temptation to misrepresent it just to make it easier to explain. Experienced EAs should try to be available and approachable for newcomers, correcting misconceptions and explaining ideas in greater depth. Outreach should focus on long-form, high-bandwidth communication like one-on-ones, and we should grow the movement carefully and intentionally, giving each newcomer the chance to absorb EA ideas correctly before they go off and spread them to others.
Distraction
In this collapse scenario, EA remains an active, thriving community, but fails to direct its efforts toward actually producing impact. We’ve followed our instrumental objectives on tangents away from our terminal ones, until we’ve forgotten what we came here to do in the first place.
I’m not talking about the risk that we’ll get caught up in a promising-looking but ultimately useless project, as long as it’s a legitimate attempt to do the most good. Avoiding that is just a question of doing our jobs well. Instead, I’m pointing at something sort of like unintentionally Goodharting EA: optimizing not for the actual goal of impact, but for everything else that has built up around it—the community, the lifestyle, the vaguely related interests. Compare meta traps #2 and #4.
Here are a few examples of how a distraction collapse could manifest:
- EA busywork:
- We get so wrapped up in our theories that we forget to check whether work on them will ever affect reality.
- We chase topics rather than goals. For example, after someone suggests that a hypothetical technology could be EA-relevant, we spend resources investigating whether it could work without really evaluating its importance if it did.
- We focus so much on our current cause areas that we forget to reevaluate them and keep an eye out for better ones, missing opportunities to do the most good.
- We do things because they’re the kinds of things EAs do without having actual routes to value in mind, and run projects that don’t have mechanisms to affect the things we say we want to change.
- Fun shiny things:
- Gossip about community dynamics, along with philosophical debate irrelevant to our decisions, crowds out discussion of things like study results and crucial considerations.
- We let the hard work of having an impact slide in favor of the social and cultural aspects of the movement, while still feeling virtuous for doing EA activities.
Distraction is, of course, a matter of degree: we’re almost certainly wasting effort on all sorts of pointless things right now. A collapse scenario would occur only if useless activities crowded out useful ones so much that we lost our potential to be a serious force driving the world toward better outcomes.
In this possible future, impact would taper gradually and subtly as more and more person-hours and funding streams were diverted to useless work. Some people would recognize the dynamic and take their talent elsewhere, worsening the problem through evaporative cooling. The version of EA that remained would still accomplish some good: I don’t think we’d completely abandon bed nets in this scenario. But the important work would be happening elsewhere, or else not happening at all.
A distraction scenario is hard to recognize and avoid. Work several steps removed from the problem is often necessary and valuable, but it can be hard to tell the useless and the useful apart: you can make up plausible indirect impact mechanisms for anything. We may want to spend more time explicitly mapping out our altruistic projects’ routes to impact. It’s probably a good habit of mind to constantly ask ourselves about the ultimate purpose of our current EA-motivated task: does it bottom out in impact, or does it not?
Conclusion
EA is carrying precious cargo: a unique, bold, and rigorous set of ideas for improving the world. I want us to pass this delicate inheritance to our future selves, our children, and their children, so they can iterate and improve on it and create the world we dream of. And I want to save a whole lot of kids from malaria as we sail along.
If the ship sinks, its cargo is lost. Social movements sail through murky waters: strategic uncertainties, scandals and infighting, and a changing zeitgeist, with unknown unknowns looming in the distance. I want EA to be robust against those challenges.
Part of this is simply movement best practices: thinking decisions through carefully, being kind to each other, creating a healthy intellectual climate. It’s also crucial to consider collapse scenarios in advance, so we can safely steer away from them.
Having considered these scenarios, I have one major recommendation: beware ideological isolation. This is a risk factor for both the sequestration and distraction scenarios, as well as a barrier to good truthseeking in general. Though the community tends to appreciate the value of criticism, we still seem very much at risk of becoming an echo chamber—and to some degree certainly are one already. We tend to attract people with similar backgrounds and thinking styles, limiting the diversity of perspectives in discussions. Our ideas are complex and counterintuitive enough that anyone who takes the time to understand them probably thinks we’re onto something, meaning much of the outside criticism we receive is uninformed and shallow. It’s vital that we pursue our ideas in all the unconventional directions they take us, but at each step the movement becomes more niche and inferential distance grows.
I don’t know what to do about this problem, but I don’t think being passively open to criticism is enough to keep us safe: if we want high-quality analysis from alternate viewpoints, we have to actively seek it out.
Thanks to Vaidehi Agarwalla for the conversation that inspired this post, and to Vaidehi, Taymon Beal, Sammy Fries, lexande, Joy O’Halloran, and Peter Park for providing feedback. All of you are wonderful and amazing people and I appreciate it.
If you’d like to suggest additions to this list, please seriously consider whether talking about your collapse scenario will make it more likely to happen.