This is a Forum Team crosspost from Substack.
Matt would like to add: "Epistemic status = incomplete speculation; posted here at the Forum team's request"
When you ask prominent Effective Altruists about Effective Altruism, you often get responses like these:
For context, Will MacAskill and Holden Karnofsky are arguably, literally the number one and two most prominent Effective Altruists on the planet. Other evidence of their ~spouses’ personal involvement abounds, especially Amanda’s. Now, perhaps they’ve had changes of heart in recent months or years – and they’re certainly entitled to have those – but being evasive and implicitly disclaiming mere knowledge of EA is comically misleading and non-transparent. Calling these statements lies seems within bounds for most.[1]
This kind of evasiveness around one’s EA associations has been common since the collapse of FTX in 2022 (which, for yet more context, was a major EA funder that year, and whose founder, the now-convicted felon Sam Bankman-Fried, was personally a proud Effective Altruist). As may already be apparent, this evasiveness is massively counterproductive. It’s bad enough to have shared an ideology and community with a notorious crypto fraudster. Subsequently telling very easily detectable lies about that association does not exactly make things better.
To be honest, I feel like there’s not much more to say here. It seems obvious that the mature, responsible, respectable way to deal with a potentially negative association, act, or deed is to speak plainly, say what you know and where you stand – apologize if you have something to apologize for and maybe explain the extent to which you’ve changed your mind. A summary version of this can be done in a few sentences that most reasonable people would regard as adequate. Here are some examples of how Amanda or Daniela might reasonably handle questions about their associations with EA:
“I was involved with EA and EA-related projects for several years and have a lot of sympathy for the core ideas, though I see our work at Anthropic as quite distinct from those ideas despite some overlapping concerns around potential risks from advanced AI.”
“I try to avoid taking on ideological labels personally, but I’m certainly familiar with EA and I’m happy to have some colleagues who identify more strongly with EA alongside many others.”
“My husband is quite prominent in EA circles, but I personally limit my involvement – to the extent you want to call it involvement – to donating a portion of my income to effective charities. Beyond that, I’m really just focused on exactly what we say here at Anthropic: developing safe and beneficial AI, as those ideas might be understood from many perspectives.”
These suggestions stop short of full candor and retain a good amount of distance and guardedness, but in my view, they at least pass the laugh test. They aren’t counterproductive the way the actual answers Daniela and Amanda gave were. I think great answers would be more forthcoming and positive on EA, but given the low stakes of this question (more below), suggestions like mine should easily pass without comment.
Why can’t EAs talk about EA like normal humans (or even normal executives)?
As I alluded to, virtually all of this evasive language about EA from EAs happened in the wake of the FTX collapse. It spawned the only-very-slightly-broader concept of being ‘EA adjacent,’ wherein people who would have happily declared themselves EA prior to November 2022 took to calling themselves “EA adjacent,” if not reaching for some more mealy-mouthed dodge like those above.
So the answer is simple: the thing you once associated with now has a worse reputation and you selfishly (or strategically) want to get distance from those bad associations.
Okay, not the most endearing motivation. Especially when you haven’t changed your mind about the core ideas or your opinion of 99% of your fellow travelers.[2] Things would be different if you stopped working on e.g. AI safety and opened a cigar shop, but you didn’t do that and now it’s harder to get your distance.
Full-throated disavowal and repudiation of EA would make the self-servingness all too clear given the timing, and would be pretty hard to square with proceeding apace on your AI safety projects. So you try to slip out the back. Get off the EA Forum and never mention the term; talk about AI safety in secular terms. I actually think both of these moves are okay. You’re not obliged to stan forever for a brand you once stanned for,[3] and it’s always nice to broaden the tent on important issues.
The trouble only really arises when someone catches you slipping out the back and asks you about it directly. In that situation, it just seems wildly counterproductive to be evasive and shifty. The person asking the question knows enough about your EA background to be asking the question in the first place; you really shouldn’t expect to be able to pull one over on them. This is classic “the coverup is worse than the crime” territory. And it’s especially counterproductive when – in my view at least – the “crime” is just so, so not-a-crime.[4]
If you buy my basic setup here and consider both that the EA question is important to people like Daniela and Amanda, and that Daniela and Amanda are exceptionally smart and could figure all this out, why do they and similarly-positioned people keep getting caught out like this?
Here are some speculative theories of mine building up to the one I think is doing most of the work:
Coming of age during the Great Awokening
I think people born roughly between 1985 and 2000 just way overrate and fear this guilt-by-association stuff. They also might regard it as particularly unpredictable and hard to manage as a consequence of being highly educated and going through higher education when recriminations about very subtle forms of racism and sexism were the social currency of the day. Importantly here, it’s not *just* racism and sexism, but any connection to known racists or sexists, however loose. Grant that there were a bunch of other less prominent “isms” on the chopping block in these years, and one might develop a reflexive fear that the slightest criticism could quickly spiral into one’s becoming a social pariah.
Here, it was also hard to manage allegations levied against you. Any questions asked or explicit defenses raised would often get perceived as doubling down, digging deeper, or otherwise giving your critics more ammunition. Hit back too hard and even regular people might somewhat-fairly see you as a zealot or hothead. Classically, straight-up apologies were often seen as insufficient by critics and as weakness/surrender/retreat by others. The culture wars are everyone’s favorite topic, so I won’t spill more ink here, but the worry about landing yourself in a no-win situation through no great fault of your own seemed real to me.
Bad Comms Advice
Maybe closely related to the awokening point, my sense is that some of the EAs involved might have a simple world model that is too trusting of experts, especially in areas where verifying success is hard. “Hard scientists, mathematicians, and engineers have all made very-legibly great advances in their fields. Surely there’s some equivalent expert I can hire to help me navigate how to talk about EA now that it’s found itself subject to criticism.”
So they hire someone with X years of experience as a “communications lead” at some okay-sounding company or think tank and get wishy-washy, cover-your-ass advice that aims not to push too hard in any one direction lest it fall prey to predictable criticisms about being too apologetic or too defiant. The predictable consequence *of that* is that everyone sees you being weak, weaselly, scared, and trying to be all things to all people.
Best to pick a lane in my view.
Not understanding how words work (coupled with motivated reasoning)
Another form of naïveté that might be at work is willful ignorance about language. Here, people genuinely think or feel – albeit in a quite shallow way – that they can have their own private definition of EA that is fully valid for them when they answer a question about EA, even if the question-asker has something different in mind.
Here, the relatively honest approach is just getting yourself King of the Hill memed:
The less honest approach is disclaiming any knowledge or association outright by making EA sound like some alien thing you might be aware of, but feel totally disconnected from and even quite critical of, and *justifying this in your head* by saying “to me, EAs are all the hardcore, overconfident, utterly risk-neutral Benthamite utilitarians who refuse to consider any perspective other than their own and only want to grow their own power and influence. I may care about welfare and efficiency, but I’m not one of them.”
This is less honest because it’s probably not close to how the person who asked you about EA would define it. Most likely, they had only the most surface-level notion in mind, something like: “those folks who go to EA conferences and write on the thing called the EA Forum, whoever they are.” Implicitly taking a lot of definitional liberty with “whoever they are” in order to achieve your selfish, strategic goal of distancing yourself works for no one but you. It also quickly opens you up to the kind of lampoonable statement-biography contrasts that set up this post, because observers will not immediately intuit your own personal niche, esoteric definition of EA; they will just think of it (quite reasonably) as “the people who went to the conferences.”
Speculatively, I think this might also be a great awokening thing? People have battled hard over a transgender woman’s right to answer the question “are you a woman?” with a simple “yes” in large part because the public meaning of the word woman has long been tightly bound to biological sex at birth. Maybe some EAs (again, self-servingly) interpreted this cultural moment as implying that any time someone asks about “identity,” it’s the person doing the identifying who gets to define the exact contours of the identity. I think this ignores that the trans discourse was a battle, and a still-not-entirely-conclusive one at that. There are just very, very few terms where everyday people are going to accept that you, the speaker, can define the term any way you please without any obligation to explain what you mean if you’re using the term in a non-standard way. You do just have to explain yourself in those cases to avoid fair allegations of being dishonest.
Trauma
There’s a natural thing happening here where the more EA you are, the more ridiculous your EA distance-making looks.[5] However, I also think that the more EA you are, the more likely you are to believe that EA distance-making is strategically necessary, not just for you, but for anyone. My explanation is that EAs are engaged in a kind of trauma-projection.
The common thread running through all of the theories above is the fallout from FTX. It was the bad thing that might have triggered culture war-type fears of cancellation, inspired you to redefine terms, or led you to desperately seek out the nearest so-so comms person to bail you out. As I’ve laid out here, I think all these reactions are silly and counterproductive, and the mystery is why such smart people reacted so unproductively to a setback they could have handled so much better.
My answer is trauma. Often when smart people make mistakes of any kind, it’s because they're at least a bit overwhelmed by one or another emotion or general mental state like being rushed, anxious, or even just tired. I think the fall of FTX emotionally scarred EAs to an extent where they have trouble relating to or just talking about their own beliefs. This scarring has been intense and enduring in a way far out of proportion to any responsibility, involvement, or even perceived-involvement that EA had in the FTX scandal, and I think the reason has a lot to do with the rise of FTX.
Think about Amanda for example. You’ve lived to see your undergrad philosophy club explode into a global movement with tens of thousands of excited, ambitious, well-educated participants in just a few years. Within a decade, you’re endowed with more than $40 billion and, as an early-adopter, you have an enormous influence over how that money and talent gets deployed to most improve the world by your lights. And of course, if this is what growth in the first ten years has looked like, there’s likely more where that came from – plenty more billionaires and talented young people willing to help you change the world. The sky is the limit and you’ve barely just begun.
Then, in just 2-3 days, you lose more than half your endowment and your most recognizable figurehead is maligned around the world as a criminal mastermind. No more billionaire donors want to touch this – you might even lose the other one you had. Tons of people who showed up more recently run for the exits. The charismatic founder of your student group all those years ago goes silent and falls into depression.
Availability bias has been summed up as the experience where “nothing seems as important as what you’re thinking about while you’re thinking about it.” When you’ve built your life, identity, professional pursuits, and source of meaning around a hybrid idea-question-community, and that idea-question-community becomes embroiled in a global scandal, it’s hard not to take it hard. This is especially so when you’ve seen it grow from nothing and you’ve only just started to really believe it will succeed beyond your wildest expectations. One might catastrophize and think the project is doomed. Why is the project doomed? Well, maybe the scandal is all the project's fault, or at least everyone will think that – after all, the project was the center of the universe until just now.
The problem of course, is that EA was not and is not the center of anyone’s universe except a very small number of EAs. The community at large – and certainly specific EAs trying to distance themselves now – couldn’t have done anything to prevent FTX. They think they could have, and they think others see them as responsible, but this is only because EA was the center of their universe.
In reality, no one has done more to indict and accuse EA of wrongdoing and general suspiciousness than EAs themselves. There are large elements of self-importance and attendant guilt driving this, but overall I think it’s the shock of having your world turned upside down, however briefly, from a truly great height. One thinks of a parent who loses a child in a faultless car accident. They slump into depression and incoherence, imagining every small decision they could have made differently and, in every encounter, knowing that their interlocutor is quietly pitying them, if not blaming them for what happened.
In reality, the outside world is doing neither of these things to EAs. They barely know EA exists. They hardly remember FTX existed anymore, and even in the moment, they were vastly more interested in the business itself, SBF’s personal lifestyle, and SBF’s political donations. Maybe, somewhere in the distant periphery, this “EA” thing came up too.
But trauma is trauma and prominent EAs basically started running through the stages of grief from the word go on FTX, which is where I think all the bad strategies started. Of course, when other EAs saw these initial reactions, rationalizations mapping onto the theories I outlined above set in.
“No, no, the savvy thing is rebranding as AI people – every perspective surely sees the importance of avoiding catastrophes and AI is obviously a big deal.”
“We’ve got to avoid reputational contagion, so we can just be a professional network”
“The EA brand is toxic now, so instrumentally we need to disassociate”
This all seems wise when high status people within the EA community start doing and saying it, right up until you realize that the rest of the world isn’t populated by bowling pins. You’re still the same individuals working on the same problems for the same reasons. People can piece this together.
So it all culminates in the great irony I shared at the top. It has become a cultural tic of EA to deny and distance oneself from EA. It is as silly as it looks, and there are many softer, more reasonable, and indeed more effective ways to communicate one's associations in this regard. I suspect it’s all born of trauma, so I sympathize, but I’d kindly ask that my friends and fellow travelers please stop doing it.
- ^
I should note that I’m not calling these lies here and don’t endorse others doing so. At a minimum, there’s a chance they’re being adversarially quoted here, though I find it hard to picture them saying these particular things in a more forthcoming context.
More broadly, I’m just using Daniela and Amanda’s words as examples of a broader trend I see in EA, rather than trying to call them out as especially bad actors here. They actually make this easier by being powerful, accomplished people who are more likely to take this in stride or otherwise not even notice my musings amid their substantial and important work.
- ^
All except one Mr. Bankman-Fried, for example.
- ^
Though I really should add that there’s a serious free rider problem here. To the extent most AI safety people are (or were) legibly EA in one way or another, it’s pretty important that some of them stick with the brand if only to soften the blow to others who would benefit more from their EA affiliation being seen as not-so-bad. If everyone abandons ship, the guilt-by-association hits all of them harder. See Alix Pham’s excellent piece here.
- ^
Like, really: “You believe in taking a scientific mindset towards maximizing global welfare subject to side-constraints?” is just not the dunk you think it is. It is reasonable and good and it’d be great if more people thought and acted this way. “But did you know that one guy who claimed to be doing this ignored the side constraints?” does not change that.
- ^
I recall a now-deleted Ben Todd tweet where he spoke of EA and EAs getting distance from EA in a very third-person manner that reeked of irony.
Can I suggest that anyone who wants to dig into whether Amanda/Daniela were lying here head over to this related post, and that comments on this post stay focused on the general idea of EA Adjacency as FTX trauma?
Fully endorse that I think EA is getting a lot of bad comms advice. I think a good comms person would have prepared Anthropic folks way better, assuming those quotes weren't taken aggressively out of context or something.
That said, I am not sure I agree that EA adjacency is mostly ascribable to FTX trauma in the personal PR "project will fail" sense, because I think there are two other explanations of EA adjacency. One, which could be related to FTX trauma, is leadership betrayal. The other is brand confusion.
Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values. I then talked to a bunch of EA insiders about it and found the response extremely weak ("I know Scott personally and he's a nice guy," as though people who are nice to their friends can't also be racists
and weirdly into monarchy[1]). Whether you love Scott Alexander or not, what I'm trying to point out is that there is another cause of "EA Adjacency" besides personal brand protection, and it might be leadership betrayal. I had been EA since 2012 in a low-key way when I found out about Scott, and I actively told people I was into EA, and even referenced it in career-related things I was doing as something that was shaping my goals and career choices. I wasn't working in the space, but I hoped to eventually, and I was pretty passionate about it! I tried to promote it to a lot of people! I stopped doing this after talking to CEA's community health team and several other prominent EAs, because of feeling like EA leadership was massively not walking the walk and that this thing I thought was the only community whose values I had ever trusted had sort of betrayed my trust. I went through some serious soul searching after this; it was very emotionally taxing, and I decoupled EA from my identity pretty substantially as a result. Probably healthy, tbh.
I am not sure what share of post-FTX adjacency might be attributed to brand protection and what share to leadership betrayal, but I suspect both could be at play, because many people could have felt betrayed by the fact that EA leadership was well aware of FTX sketchiness and didn't say anything (or weren't aware, but then maybe you'd feel betrayed by their incompetence).
Brand confusion: After brand embarrassment and leadership betrayal, I think a 3rd potential explanation for EA adjacency is a sort of brand confusion problem. Here, I think EA is sort of like Christianity -- there's an underlying set of beliefs that almost everyone in the general Christian community agrees with, but different factions can be WILDLY different culturally and ideologically. Unfortunately, only EA insiders are familiar with these distinctions. So right now, acknowledging being an EA is like acknowledging you're a Christian to someone who only knows about Mormons. If you're actually a liberal Episcopalian and don't want to be seen as being a Mormon, maybe you don't have time to get into the fact that yes, technically you are a Christian but not that kind of Christian. I wonder if EA-adjacent folks would be more comfortable acknowledging EA connection if they could identify a connection with only one part or faction of EA, and there was greater clarity in the public eye about the fact that EA is not a monolith.
In terms of what behavior I'd like to see from other EAs and EA-adjacents: If I'm talking to an EA insider, I still say I have issues with parts of EA while acknowledging that I have a ton of shared values and work in the space. If someone is mocking EA as an outsider, I am actually MORE likely to admit connection and shared values with EA, because I usually think they are focusing on the wrong problems.
struck for accuracy, see comments
(For what it's worth, I don't think you're irrational, you're just mistaken about Scott being racist and what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don't do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don't do anything about it, they made the right call and you'd be upset due to the mistaken beliefs, but conditional on those beliefs, it wasn't irrational to be upset.)