People have been reacting to my last post with #notallEAs, which, totally, but… have you considered that if you have to distinguish yourself from other EAs then maybe the label doesn’t describe you well?
I fought for EA to mean something simple: someone who 1. figures out the best way to improve the world and 2. does it. But I lost. EA became focused on careers and technical AI Safety, and in the process it largely stopped being a thing everyone could participate in in their own way. I’m beyond thrilled that Giving What We Can has gotten more confident in itself again, but for a while there even giving money was treated as deprecated in the core community. If you’re not going to have an EA career, you can no longer be a real insider.
Again, this is not how I wanted it. Don’t be mad at me for just describing what happened. I wanted EA to nurture fractional participation at every level through teachings and community support, more focused on the middle of the funnel. It started out much more that way. But tastes changed, and circa 2017 CEA officially changed its recruitment model to focus on making “core EAs”, and EA messaging started being more about recruiting people into a handful of careers. It’s in the second edition of the EA handbook. It was openly discussed, roughly coincident with the switch to longtermism.
Whether everyone reading knows it or not, there is a core community that calls the shots about EA. Even if you run your own group outside of an EA hub, these people tell you what’s effective and worth doing by providing the materials, controlling the money, and setting the trends. In early EA, lots of people did their own research and compared notes. Now that’s less common, and there are think tanks (like Rethink Priorities, where I used to work) where Open Phil dictates what research gets done and whether it can be shared. (Trying to please OP was a huge concern at RP, and it exerted a psychic influence even on me, affecting how clearly I could think for myself.)
So what I’m saying is, if you’re not at the top calling the shots, maybe you just shouldn’t cast your lot with them. Because they are the ones controlling what “EA” means, with all its perks and liabilities. If you’re all one people when it comes time to get benefits, how can everyone be distinct when it comes time to share responsibility for EA problems? Every time I engage on this, I come upon a bailey of smiling people loving to identify as the same thing, only to have them crawl up into the motte of “not ALL” by the time they’ve finished my post. If you don’t accept the critique and claim you don’t recognize it, maybe you also don’t really need or accept the label.
You value the friends? You can just be friends. If that doesn’t work without adopting the label, they aren’t good friends.
You like the online conversation? You can just talk.
You want an intellectual community? You don’t have to be in communion with them.
You want funding? This one’s tough, but you can’t let it dictate your identity to you. Accepting money creates a bond, and you need to take responsibility for it. If you can’t, maybe it’s not worth getting EA money.
You want the EA community to be what you wish it was? Yeah, I did too. But you have to take a clear-eyed look at what it is. And if you take part in it and bear the name, you have to accept the good and the bad of how it actually is.
The last thing I wanted to do was leave EA. I wanted it to be the community it was at the beginning, and I had a lot of influence, but I couldn’t dictate what EA “really” meant in the face of the actual people and choices making up the community. I stuck around for a long time arguing that my version of EA was how it should be, and insisting that that’s how it was for me even if others were doing it differently. When I was forced to leave EA to pursue PauseAI, I could finally admit to myself that I had been co-signing the bad stuff by being there and lending my name and work, and that it was shitty of me to think I could shirk responsibility just because I wanted EA to be something else.
So, idk, if you think I’m wrong because you’re an EA and I’m not describing you— in what sense are you an EA? You can always leave.

Hey, I'm wondering what you mean by "leave EA" exactly here? First, it's not clear to me what you mean practically by "leave". Second, FWIW, I call myself an Effective Altruist and I don't feel like I need to sign up to the extent/standards you do to carry that label.
I call myself an EA because I'm committed to "Finding the best way to help others" and "Turning good intentions into impact" (love these from the CEA website). In addition, I've been impressed by the character and heart of EAs I have met who do Global Development things, and I appreciate the development discourse on the forum (although there is less material year on year).
I feel like people will have diverse reasons for identifying as an "EA" from your nice list, whether that's community, the mindset, the online discourse, or a combination of them all. Some might have vaguer reasons, which is all good too.
Also, I suspect I'm just in far less deep than you were here, so it's harder for me to identify with your experience. I can also imagine the AI/GCR community and disagreements within it are more fraught than within GHD.
Holly --
I think the frustrating thing here, for you and me, is that EA did so much soul-searching after the Sam Bankman-Fried/FTX fraud in 2022, compared to how little it has done for its AI safety fiascos. We took the SBF/FTX debacle seriously as a failure of EA people, principles, judgment, mentorship, etc. We acknowledged that it hurt EA's public reputation, and we tried to identify ways to avoid making the same catastrophic mistakes again.
But as far as I've seen, EA has done very little soul-searching for its complicity in helping to launch OpenAI, and then in helping to launch Anthropic -- both of which have proven to be far, far less committed to serious AI safety ethics than they'd promised, and far less than we'd hoped.
In my view, accelerating the development of AGI, by giving the EA seal of approval first to OpenAI and then to Anthropic, has done far, far more damage to humanity's likelihood of survival than the FTX fiasco ever did. But of course, with so many EAs going on to get lucrative jobs at OpenAI and Anthropic, and with 80,000 Hours delighted to host such job ads, EA as a career-advancement movement is locked into the belief that 'technical AI safety research' within 'frontier AI labs' is a far more valuable use of bright young people's talents than merely promoting grass-roots AI safety advocacy.
Let me know if that captures any of your frustration. It might help EAs understand why this double standard -- taking huge responsibility for SBF/FTX turning reckless and evil, but taking virtually no responsibility for OpenAI/Anthropic turning reckless and evil -- is so grating to you (and me).
I thought EA was too eager to accept fault for a few people committing financial crimes out of their sight. The average EA actually is complicit in the safetywashing of OpenAI and Anthropic! Maybe that’s why they don’t want to think about it…
So I think the problem (?) is that nobody donates to EA infrastructure for the purpose of cultivating a nice community. They donate to EA infrastructure almost exclusively for the purpose of cultivating impactful actions (specifically, the actions they want to see).
I mean, I sure would like it if people donated to cultivate a nice community. However, I don't think I'm owed that from an explicitly EA funding pot. Why should EA-aligned donors spend cash on me and not on e.g. malaria prevention? Heck, I'm an EA-aligned donor, and I spend cash on malaria prevention that could have been spent on me.
For what it is worth, this is not how I feel in my local EA community. There are people leading effective giving organisations and others who just go on with their usual lives with trial pledges, and I feel we are fairly non-judgemental.
Why use the EA name? There is a leadership, and they’re telling people where to donate their money and how to think. You have some responsibility for that.
I don’t think we see much top-down leadership. There’s GiveWell, for example, which I take seriously, but sometimes prioritising between different broad cause areas is very hard, and my understanding is that people in my local community feel the same way and are broadly supportive of diverse points of view.
So will you join me in denouncing the horrible AI Safety mistakes, like working with the labs, that the people actually in control of the name EA have made?
I think there are good arguments for why those actions might have indeed been horrible mistakes. But I’m also quite uncertain about what would have been the best course of action at the time. For example, there’s a reasonable case that the best we might hope for is steering the development of AI. I unfortunately don’t know.
Let me give a non-AI example: I find it reasonable that some EAs try to steer how factory farming works (most animal advocacy), even though I would prefer that no animal died or was tortured for food.
But on the other hand I believe people at leadership positions failed to detect and flag the FTX scandal ahead of time. And that’s a shame.
There’s no need for a group like yours to be implicated in AI company wheeling and dealing. Being connected to EA’s decisions has probably made the issue much more confusing for you than it should be. PauseAI is suited for local groups and only involves talking about the danger and giving grassroots support to AI Safety bills. That should obviously have been the sort of thing local EA groups did for AI Safety, but the AI Safety part of EA has always been this weird elitist conspiracy to have a stake in the Singularity.