Acknowledgements
A big thank you to Bruce Tsai, Shakeel Hashim, Ines, and Nathan Young for their insightful notes and additions (though they do not necessarily agree with/endorse anything in this post).
Important Note
This post is a quick exploration of the question, 'is EA just longtermism?' (I come to the conclusion that EA is not). This post is not a comprehensive overview of EA priorities nor does it dive into the question from every angle - it is mostly just my brief thoughts. As such, there are quite a few things missing from this post (many of the comments do a great job of filling in some gaps). In the future, maybe I'll have the chance to write a better post on this topic (or perhaps someone else will; please let me know if you do so I can link to it here).
Also, I've changed the title from 'Is EA just longtermism now?' so my main point is clear right off the bat.
Preface
In this post, I address the question: is Effective Altruism (EA) just longtermism? I then ask, among other questions: what factors contribute to this perception, and what are the implications?
1. Introduction
Recently, I’ve heard a few criticisms of Effective Altruism (EA) that hinge on the following: “EA only cares about longtermism.” I’d like to explore this perspective a bit more and the questions that naturally follow, namely: How true is it? Where does it come from? Is it bad? Should it be true?
2. Is EA just longtermism?
In 2021, around 60% of funds deployed by the Effective Altruism movement came from Open Philanthropy (1). Thus, we can use its grant data to explore EA funding priorities. The following graph (from Effective Altruism Data) shows Open Philanthropy’s total spending, by cause area, since 2012:

Overall, Global Health & Development accounts for the majority of funds deployed. How has that changed in recent years, as AI Safety concerns have grown? We can look at this uglier graph (bear with me) showing Open Philanthropy grants deployed from January 2021 to present (data from the Open Philanthropy Grants Database):

We see that Global Health & Development is still the leading fund recipient; however, Risks from Advanced AI is now a closer second. We can also note that the third and fourth most funded areas, Criminal Justice Reform and Farm Animal Welfare, are not primarily driven by a goal to influence the long-term future.
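For anyone who wants to reproduce this kind of breakdown, here is a minimal sketch of the aggregation, assuming a local CSV export of the Open Philanthropy Grants Database. This is not the exact method used for the graphs above, and the file name and the "Focus Area", "Amount", and "Date" column names are assumptions that may need adjusting to match the real export.

```python
# Rough sketch: total Open Philanthropy grant spending by cause area since January 2021,
# assuming a local CSV export of the Grants Database (column names are assumptions).
import pandas as pd

grants = pd.read_csv("open_philanthropy_grants.csv")  # hypothetical local export

# Parse dates and clean dollar amounts like "$1,500,000" into numbers.
grants["Date"] = pd.to_datetime(grants["Date"], errors="coerce")
grants["Amount"] = pd.to_numeric(
    grants["Amount"].astype(str).str.replace(r"[$,]", "", regex=True),
    errors="coerce",
)

# Restrict to grants made since January 2021, then total by cause area.
recent = grants[grants["Date"] >= "2021-01-01"]
by_area = recent.groupby("Focus Area")["Amount"].sum().sort_values(ascending=False)
print(by_area.head(10))
```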
With this data, I feel pretty confident that EA is not just longtermism. However, it is also true (and well known) that funding for longtermist issues, particularly AI Safety, has increased. Additionally, the above data doesn't provide a full picture of the EA funding landscape or of community priorities. This raises a few more questions:
2.1 Funding has indeed increased, but what exactly is contributing to the view that EA essentially is longtermism/AI Safety?
(Note: this list is just an exploration; it is not meant to claim whether the things below are good or bad, or whether they are true)
- William MacAskill’s upcoming book, What We Owe the Future, has generated considerable promotion and discussion. Following Toby Ord’s The Precipice, published in March 2020, I imagine this has contributed to the outside perception that EA is becoming synonymous with longtermism.
- The longtermist approach to philanthropy is different from mainstream, traditional philanthropy. When trying to describe a concept like Effective Altruism, sometimes the thing that most differentiates it is what stands out, consequently becoming its defining feature.
- Of the longtermist causes, AI Safety receives the most funding and, furthermore, has a unique ‘weirdness’ factor that generates interest and discussion. For example, some of the popular thought experiments used to explain Alignment concerns can feel unrealistic, or like something out of a sci-fi movie. I think this can serve both to: 1. draw in onlookers whose intuition is to scoff, and 2. give AI-related discussions the advantage of being particularly interesting/compelling, leading to more attention.
- AI Alignment is an ill-defined problem with no clear solution and tons of uncertainties: What counts as AGI? What does it mean for an AI system to be fair or aligned? What are the best approaches to Alignment research? With so many fundamental questions unanswered, it’s easy to generate ample AI Safety discussion in highly visible places (e.g. forums, social media, etc.) to the point that it can appear to dominate EA discourse.
- AI Alignment is a growing concern within the EA movement, so it's been highlighted recently by EA-aligned orgs (for example, AI Safety technical research is listed as the top recommended career path by 80,000 Hours).
- Within the AI Safety space, there is crossover between EA and other groups, namely tech and rationalism. Those who learn about EA through these groups may only interact with EA spaces that focus on AI Safety or that cross over into those other groups; I imagine this shapes their understanding of EA as a whole.
- For some, the recent announcement of the FTX Future Fund seemed to solidify the idea that EA is now essentially billionaires distributing money to protect the long-term future.
- [Edit: There are many more factors to consider that others have outlined in the comments below :)]
2.2 Is this view a bad thing? If so, what can we do?
Is it actually a problem that some people feel EA is “just longtermism”? I would say yes, insofar as it is better to have an accurate picture of an idea/movement than an inaccurate one. Beyond that, such a perception may turn away people who disagree with longtermist arguments but could be convinced to work on cause areas unrelated to longtermism, like farmed animal welfare. If this group is large enough, then it seems important to try to promote a clearer outside understanding of EA, allowing the movement to grow in various directions and find its different target audiences, rather than having its pieces eclipsed by one cause area or worldview.
What can we do?
I’m not sure; there are likely a few strategies (e.g. Shakeel Hashim suggested we could put some effort into promoting older EA content, such as Doing Good Better, or organizations associated with causes like Global Health and Farmed Animal Welfare).
2.3 So EA isn’t “just longtermism,” but maybe it’s “a lot of longtermism”? And maybe it’s moving towards becoming “just longtermism”?
I have no clue if this is true, but if so, then the relevant questions are:
2.4 What if EA was just longtermism? Would that be bad? Should EA just be longtermism?
I’m not sure. I think it’s true that EA being “just longtermism” leads to worse optics (though this is just a notable downside, not an argument against shifting towards longtermism). We see particularly charged critiques like:
Longtermism is an excuse to ignore the global poor and minority groups suffering today. It allows the privileged to justify mistreating others in the name of countless future lives, when in actuality, they’re obsessed with pursuing profitable technologies that result in their version of ‘utopia’ (AGI, colonizing Mars, emulated minds), things only other privileged people would be able to access anyway.
I personally disagree with this. As a counter-argument:
Longtermism, as a worldview, does not want present-day people to suffer; instead, it wants to work towards a future with as much flourishing as possible, for everyone. This idea is not as unusual as it is sometimes framed: we hear something very similar in climate change advocacy (i.e. “We need climate interventions to protect the future of our planet. Future generations could suffer immensely from poor environmental conditions brought about by our choices”). An individual, or an elite few, could twist longtermist arguments to justify poor behavior, but this is true of all philanthropy.
Finally, there are many conclusions one can draw from longtermist arguments, but the ones worth pursuing will be well thought out. Critiques often highlight niche tech rather than the prominent concerns held by the longtermist community at large: risks from advanced Artificial Intelligence, pandemic preparedness, and global catastrophic risks. Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already-present issues, like racial or gender bias in today’s systems).
But back to the optics: longtermism can be less intuitively digestible, and it can be framed in a highly negative way. Does that matter? If there is a strong case for longtermism, should we not shift our priorities towards it? In which case, the real question is: does the case for longtermism hold?
This leads me to the conclusion: if EA were to become "just longtermism," that’s fine, conditional on the arguments being incredibly strong. And if there are strong arguments against longtermism, the EA community (in my experience) is very keen to hear them.
Conclusion
Overall, I hope this post generates some useful discussion around EA and longtermism. I posed quite a few questions, and offered some of my personal thoughts; however, I hold all these ideas loosely and would be very happy to hear other perspectives.
Citations

Thanks for sharing this history and your perspective, Aaron.
I agree that 1) the problems with the 3rd edition were less severe than those with the 2nd edition (though I’d say that’s a very low bar to clear) and 2) the 3rd edition looks more representative if you weigh the “more to explore” sections equally with “the essentials” (though IMO it’s pretty clear that the curriculum places way more weight on the content it frames as “essential” than on content linked to at the bottom of the “further reading” section).
I disagree with your characterization of "The Effectiveness Mindset", "Differences in Impact", and "Expanding Our Compassion" as neartermist content in a way that’s comparable to how subsequent sections are longtermist content. The early sections include some content that is clearly neartermist (e.g. “The case against speciesism” and “The moral imperative toward cost-effectiveness in global health”). But much, maybe most, of the "essential" reading in the first three sections isn’t really about neartermist (or longtermist) causes. For instance, “We are in triage every second of every day” is about… triage. I’d also put “On Fringe Ideas”, “Moral Progress and Cause X”, “Can one person make a difference?”, “Radical Empathy”, and “Prospecting for Gold” in this bucket.
By contrast, the essential reading in the “Longtermism”, “Existential Risk”, and “Emerging technologies” sections is all highly focused on longtermist causes/worldview; it’s all stuff like “Reducing global catastrophic biological risks”, “The case for reducing existential risk”, and “The case for strong longtermism”.
I also disagree that the “What we may be missing?” section places much emphasis on longtermist critiques (outside of the “more to explore” section, which I don’t think carries much weight, as mentioned earlier). “Pascal’s mugging” is relevant to, but not specific to, longtermism, and “The case of the missing cause prioritization research” doesn’t criticize longtermist ideas per se; rather, it argues that the shift toward prioritizing longtermism hasn’t been informed by significant amounts of relevant research. I find it telling that “Objections to EA” (framed as a bit of a laundry list) doesn’t include anything about longtermism, and that, as far as I can tell, no content in this whole section addresses the most frequent and intuitive criticism of longtermism I’ve heard (that it’s really, really hard to influence the far future, so we should be skeptical of our ability to do so).
Process-wise, I don’t think the use of test readers was an effective way of making sure the handbook was representative. Each test reader only saw a fraction of the content, so they’d be in no position to comment on the handbook as a whole. While I’m glad you approached members of the animal and global development communities for feedback, I think the fact that they didn’t respond is itself a form of (negative) feedback (which I would guess reflects the skepticism Michael expressed that his feedback would be incorporated). I’d feel better about the process if, for example, you’d posted in poverty- and animal-focused Facebook groups and offered to pay people (as the test readers were paid) to weigh in on whether the handbook represented their cause appropriately.