Hm interesting. One reaction I have is that in-person communities have different functions, and it might be worth specifying more precisely what function you envision in-person EA communities having in 2026 (and how this has maybe changed?). Here are some different models I could see; 2 and 3 seem more promising to me than 1 or 4:
I didn’t mean this to be that deep; I meant (1) the average college student EA (i.e., many EAs should still pursue other kinds of careers) and (2) AI safety broadly construed (to include issues related to biorisk, policy, and many issues unrelated to x-risk). I don’t know much about how competitive jobs are throughout this space, but at least in some spheres (e.g., academic philosophy) there is growing interest in AI, so much so that it’d be prudent for a philosophy PhD student to work on AI-related issues solely to get a job (i.e., bracketing any interest in EA or in having a socially valuable career). I assume that’s true in at least some other spheres as well (policy?), and while I could see that changing, it feels like the entire job market will shift a lot in the next few years, such that I doubt “don’t go into AI safety because it’s oversaturated; do X instead” will be reliable advice for most X.
At the core of my project is the idea that people can disagree and still cooperate.
I think this is the crux of the issue. A lot of EAs working on non-AI projects don't even disagree that AI is the most important issue of our time. The problem is relevance/interest. Many non-AI-safety people have already spent years building careers in other areas. We have PhDs, social networks, and jobs that aren't oriented around AI safety—in many cases because we were explicitly following EA principles/ideas/advice from a decade ago—and don't plan to start from scratch. (I agree that the average college student encountering EA today should focus on issues related to AI safety.)
It used to be the case that engaging with the EA community was a good way to facilitate doing important work in non-AI areas, but this is becoming less true; if the average EA has the bandwidth/resources to go to two conferences a year, the global health EA may find it higher yield to go to global health conferences, the GPR EA may find it higher yield to go to philosophy/econ conferences, and so on. Non-EA global health people (etc.) are also thinking about how to do the most good within the field of global health, and global health EAs may benefit more professionally (and socially?) from engaging directly with them. It doesn't seem like a great use of our limited time (for us, or probably for the AI safety people) to professionally network with EAs working on things totally unrelated to what we're working on.
Anyways, I'm not sure I fully understand what your proposal is, but I'm just trying to articulate what I see as a fundamental barrier to getting those of us who don't do AI safety work to be more actively engaged in EA: many of us agree that AI is the most important issue of our time, but that doesn't mean it makes sense for us to re-focus our careers on AI, given our existing backgrounds and skills. Correspondingly, it makes less and less sense for us to spend our professional time engaging with a community that is focused on AI safety. (I think there's perhaps a better case to be made that it would be socially fulfilling to do this; I don't feel socially compatible with the average AI safety EA, but maybe others do.)
I don't think it's possible to create a stable EA composed of people who (1) "believe in the tools more than the conclusions" and (2) treat AI safety as a settled conclusion.
I think the evolution of Forethought is perhaps emblematic of some of the problems with 2026 EA. Before Forethought became "a research nonprofit focused on how to navigate the transition to a world with superintelligent AI systems," it was the Forethought Foundation for Global Priorities Research, which aimed "to promote academic work that addresses the question of how to use our scarce resources to improve the world by as much as possible." FFGPR ran an annual fellowship in which dozens of doctoral students from a range of fields participated, funding them to spend a month together in Oxford thinking through their GPR projects.
New Forethought seems to: (1) focus on a narrower range of projects (those related to navigating the transition to a world with superintelligent AI systems) and (2) not explicitly fund or prioritize community building (e.g., the research supported by Forethought seems to come from Forethought employees, many of whom are based at Oxford, rather than from academics based around the world).
These changes parallel an issue that affects the EA community more generally: talented people who don't work on issues related to AI—the very people most well-equipped to help EA course-correct—are less and less likely to be brought in. Several things have contributed to this: funding for non-AI projects has dried up; for non-AI-safety EAs, EAGs increasingly consist of conversations about one's work with people who can rarely help with it (or worse, look down on it); and it is difficult to build connections/friendships with people whose fundamental beliefs diverge more and more from one's own. In short, the average non-AI-safety EA gains less—professionally, socially, and otherwise—from being actively engaged in the EA community in 2026 than they did in 2022. As one of these people, I want to be clear that my values and goals haven't changed—I still want to use evidence and reason to do a lot of good—but it has ceased to feel like being part of the EA community facilitates this.
EA is a philosophy trying to find the most effective ways to help others, and a social movement that aims to put those ideas into practice; the social movement follows from the philosophy. So if EA has answered the question of how to most effectively help others—work on issues related to AI safety—then why should those of us who don't focus on these issues be involved in this social movement?
When people pose the theoretical question of whether we still need EA (if the answer to EA is AI safety, and an AI safety community already exists), they tend to point to all of the non-AI-safety EA things that still exist ("look how much money is still going to GiveWell," or "Coefficient Giving devotes a lot of resources to animal welfare," and so on). But this isn't an answer. And on a personal level, the question has already de facto been answered: as EA orgs like Forethought (and 80k, and others) increasingly shift their focus from GPR to AI safety, the fact that CG devotes resources to animal welfare legislation isn't very relevant to the experience that I have when I go to EAGs, or read the Forum, or try to have conversations with AI safety researchers, or peruse 80k's recent episodes, or cease to see grant opportunities relevant to my projects.
The orientation you want is: “what kind of person do I want to be?” I suspect that to even ask the question immediately pushes you towards some tentative answers. You want to do good. You want to help others. You want to be fair minded about that, maybe even impartial. You want to do more good rather than less. You want to believe true things and understand the world.
All of these things are still true about me. But EA doesn't just aim to do good; it aims to do the most good. And EA is no longer agnostic about the answer to the question of how to do the most good. The loss of that agnosticism has been—perhaps rightly—accompanied by a change in the structure of the EA community. This seems like a bullet new EA may just have to bite.
As you note, there are different ways to be cool. And I think the way in which EA is sometimes cool is that impressive people sympathetic to EA ideas do impactful things. Like, most of the general public probably wouldn’t say Ezra Klein is cool, but a lot of EAs (and potential EAs) probably would. I think EA’s path to greater coolness—to the extent we think this matters, which I’m not convinced it does—is probably via finding and supporting more smart, likable people who publicly buy into EA ideas while doing high-profile, impactful work.
I think you’re right that some of the abundance ideas aren’t exactly new to EA folks, but I also think it’s true that: (1) packaging a diverse set of ideas/policies (re: housing, science, transportation) under the heading of abundance is smart and innovative, (2) there is newfound momentum around designing and implementing an abundance-related agenda (e.g.), and (3) the implementation of this agenda will create opportunities for further academic research (enabling people to, for instance, study some of those cruxes). All of this is to say: if I were a smart, ambitious, EA-oriented grad student, I think I would find the intellectual opportunities in this space exciting and appealing to work on.
Currently, the online EA ecosystem doesn’t feel like a place full of exciting new ideas, in a way that’s attractive to smart and ambitious people.
I think one thing that has happened is that as EA has grown/professionalized, an increasing share of EA writing/discourse is occurring in more formal outlets (e.g., Works in Progress, Asterisk, the Ezra Klein podcast, academic journals, and so on). As an academic, it's a better use of my time—both from the perspective of direct impact and my own professional advancement—to publish something in one of these venues than to write on the Forum. Practically speaking, this means that some of the people thinking most seriously about EA are spending less of their time engaging with online communities. While there are certainly tradeoffs here, I'm inclined to think this is overall a good thing: it subjects EA ideas to a higher level of scrutiny (since we now have editors, in addition to people weighing in on Twitter/the Forum/etc. about the merits of various articles) and it broadens exposure to EA ideas.
I also don't really buy that the ideas being discussed in these more formal venues aren't exciting or new; as just two recent examples, I think (1) the discourse/opportunities around abundance are exciting and new, as is (2) much of the discourse happening in The Argument. (While neither of these examples is explicitly EA-branded, they are both pretty EA-coded, and lots of EAs are working on/funding/engaging with them.)
Thanks for writing this. It feels like the implicit messaging, ideas, and infrastructure of the EA community have historically been targeted towards people in their 20s (i.e., people who can focus primarily on maximizing their impact). A lot of the EA writing (and EAs) I first encountered pushed for a level of commitment to EA that made more sense for people who had few competing obligations (like kids or aging parents). This resonated with me a decade ago—it made EA feel like an urgent mission—but today feels more unrealistic, and sometimes even alienating.
Given that the average age of the EA community is increasing, I wonder if it’d be good to rethink this messaging/set of ideas/infrastructure; to create a gentler, less hardheaded EA—one that takes more seriously the non-EA commitments we take on as we age, and provides us with a framework for reconciling them with our commitment to EA. (I get the sense that some orgs—like OP, which seems to employ older EAs on average—do a great job of this through, e.g., their generous parental leave policies, but I’d like to see the implicit philosophy connoted by these policies become part of EA’s explicit belief system and messaging to a greater extent.)
I watched this with a non-EA (omnivore) friend, and we both found it compelling, informative, and not preachy. Nice job!! With respect to the advocacy ask you make at the end: we would benefit from further guidance on how exactly to do this, and what practical steps (aside from changing diets) people who care about these issues should take. For instance, I don’t have a great sense of how to talk about factory farming, because it’s hard to broach these issues without implicitly condemning someone’s behavior (which often feels both socially inappropriate and counterproductive). It would be easier to broach this, I think, if there were specific positive actions I could recommend, or at least some concrete guidance on what to say versus not say when this comes up. I have been a vegetarian for many years, so this kind of conversation has come up organically many times (often over meals), and my natural inclination is always to quickly explain why I don’t eat meat and then change the subject, so I don’t make whoever asked uncomfortable (since often they’re eating meat). But presumably there’s a better way to approach this, so I’m curious if you/others have thoughts, or if there’s research on this.
Yes, I overstated this a bit (“has dried up”), but I kind of think we’re both right. On a large scale, orgs like GiveWell are still getting a lot of funding. But on an individual level, the funding environment feels really different to me than it did five years ago, when there were more fellowship and grant and award opportunities than I could possibly apply to. It does not feel like that today.