
This is a Forum Team crosspost from Substack
Matt would like to add: "Epistemic status = incomplete speculation; posted here at the Forum team's request"


When you ask prominent Effective Altruists about Effective Altruism, you often get responses like these:

For context, Will MacAskill and Holden Karnofsky are arguably, literally the number one and two most prominent Effective Altruists on the planet. Other evidence of their ~spouses’ personal involvement abounds, especially Amanda’s. Now, perhaps they’ve had changes of heart in recent months or years – and they’re certainly entitled to have those – but being evasive and implicitly disclaiming mere knowledge of EA is comically misleading and non-transparent. Calling these statements lies seems within bounds for most.[1]

This kind of evasiveness around one’s EA associations has been common since the collapse of FTX in 2022 (which, for yet more context, was a major EA funder that year, and whose founder, now-convicted felon Sam Bankman-Fried, was personally a proud Effective Altruist). As may already be apparent, this evasiveness is massively counterproductive. It’s bad enough to have shared an ideology and community with a notorious crypto fraudster. Subsequently very-easily-detectably lying about that association does not exactly make things better.

To be honest, I feel like there’s not much more to say here. It seems obvious that the mature, responsible, respectable way to deal with a potentially negative association, act, or deed is to speak plainly, say what you know and where you stand – apologize if you have something to apologize for and maybe explain the extent to which you’ve changed your mind. A summary version of this can be done in a few sentences that most reasonable people would regard as adequate. Here are some examples of how Amanda or Daniela might reasonably handle questions about their associations with EA:

“I was involved with EA and EA-related projects for several years and have a lot of sympathy for the core ideas, though I see our work at Anthropic as quite distinct from those ideas despite some overlapping concerns around potential risks from advanced AI.”

“I try to avoid taking on ideological labels personally, but I’m certainly familiar with EA and I’m happy to have some colleagues who identify more strongly with EA alongside many others.”

“My husband is quite prominent in EA circles, but I personally limit my involvement – to the extent you want to call it involvement – to donating a portion of my income to effective charities. Beyond that, I’m really just focused on exactly what we say here at Anthropic: developing safe and beneficial AI, as those ideas might be understood from many perspectives.”

These suggestions stop short of full candor and retain a good amount of distance and guardedness, but in my view, they at least pass the laugh test. They aren’t counterproductive the way the actual answers Daniela and Amanda gave were. I think great answers would be more forthcoming and positive on EA, but given the low stakes of this question (more below), suggestions like mine should easily pass without comment.

Why can’t EAs talk about EA like normal humans (or even normal executives)?

As I alluded to, virtually all of this evasive language about EA from EAs happened in the wake of the FTX collapse. It spawned the only-very-slightly-broader concept of being ‘EA adjacent,’ wherein people who would have happily declared themselves EA prior to November 2022 took to calling themselves “EA adjacent,” if not reaching for some more mealy-mouthed dodge like those above.

So the answer is simple: the thing you once associated with now has a worse reputation and you selfishly (or strategically) want to get distance from those bad associations.

Okay, not the most endearing motivation. Especially when you haven’t changed your mind about the core ideas or your opinion of 99% of your fellow travelers.[2] Things would be different if you stopped working on e.g. AI safety and opened a cigar shop, but you didn’t do that and now it’s harder to get your distance.

Full-throated disavowal and repudiation of EA would make the self-servingness all too clear given the timing and be pretty hard to square with proceeding apace on your AI safety projects. So you try to slip out the back. Get off the EA Forum and never mention the term; talk about AI safety in secular terms. I actually think both of these moves are okay. You’re not obliged to stan for the brand you stanned for once for all time[3] and it’s always nice to broaden the tent on important issues.

The trouble only really arises when someone catches you slipping out the back and asks you about it directly. In that situation, it just seems wildly counterproductive to be evasive and shifty. The person asking the question knows enough about your EA background to be asking the question in the first place; you really shouldn’t expect to be able to pull one over on them. This is classic “the coverup is worse than the crime” territory. And it’s especially counter-productive when – in my view at least – the “crime” is just so, so not-a-crime.[4]

If you buy my basic setup here and consider both that the EA question is important to people like Daniela and Amanda, and that Daniela and Amanda are exceptionally smart and could figure all this out, why do they and similarly-positioned people keep getting caught out like this?

Here are some speculative theories of mine building up to the one I think is doing most of the work:

Coming of age during the Great Awokening

I think people born roughly between 1985 and 2000 just way overrate and fear this guilt-by-association stuff. They also might regard it as particularly unpredictable and hard to manage as a consequence of being highly educated and passing through higher education at a time when recriminations about very subtle forms of racism and sexism were the social currency of the day. Importantly here, it’s not *just* racism and sexism, but any connection to known racists or sexists, however loose. Grant that there were a bunch of other less prominent “isms” on the chopping block in these years, and one might develop a reflexive fear that the slightest criticism could quickly spiral into making you a social pariah.

Here, it was also hard to manage allegations levied against you. Any questions asked or explicit defenses raised would often get perceived as doubling down, digging deeper, or otherwise giving your critics more ammunition. Hit back too hard and even regular people might somewhat-fairly see you as a zealot or hothead. Classically, straight up apologies were often seen as insufficient by critics and weakness/surrender/retreat by others. The culture wars are everyone’s favorite topic, so I won’t spill more ink here, but the worry about landing yourself in a no-win situation through no great fault of your own seemed real to me.

Bad Comms Advice

Maybe closely related to the awokening point, my sense is that some of the EAs involved might have a simple world model that is too trusting of experts, especially in areas where verifying success is hard. “Hard scientists, mathematicians, and engineers have all made very-legibly great advances in their fields. Surely there’s some equivalent expert I can hire to help me navigate how to talk about EA now that it’s found itself subject to criticism.”

So they hire someone with X years of experience as a “communications lead” at some okay-sounding company or think tank and get wishy-washy, cover-your-ass advice that aims not to push too hard in any one direction lest it fall prey to predictable criticisms about being too apologetic or too defiant. The predictable consequence *of that* is that everyone sees you being weak, weaselly, scared, and trying to be all things to all people.

Best to pick a lane in my view.

Not understanding how words work (coupled with motivated reasoning)

Another form of naïvety that might be at work is willful ignorance about language. Here, people genuinely think or feel – albeit in a quite shallow way – that they can have their own private definition of EA that is fully valid for them when they answer a question about EA, even if the question-asker has something different in mind.

Here, the relatively honest approach is just getting yourself King of the Hill memed:

The less honest approach is disclaiming any knowledge or association outright by making EA sound like some alien thing you might be aware of, but feel totally disconnected to and even quite critical of and *justifying this in your head* by saying “to me, EAs are all the hardcore, overconfident, utterly risk-neutral Benthamite utilitarians who refuse to consider any perspective other than their own and only want to grow their own power and influence. I may care about welfare and efficiency, but I’m not one of them.”

This is less honest because it’s probably not close to how the person who asked you about EA would define it. Most likely, they had only the most surface-level notion in mind, something like: “those folks who go to EA conferences and write on the thing called the EA Forum, whoever they are.” Implicitly taking a lot of definitional liberty with “whoever they are” in order to achieve your selfish, strategic goal of distancing yourself works for no one but you. It also quickly opens you up to the kind of lampoonable statement-biography contrasts that set up this post, because observers do not immediately intuit your own personal, niche, esoteric definition of EA; they just think of it (quite reasonably) as “the people who went to the conferences.”

Speculatively, I think this might also be a great awokening thing? People have battled hard over a transgender woman’s right to answer the question “are you a woman?” with a simple “yes,” in large part because the public meaning of the word woman has long been tightly bound to biological sex at birth. Maybe some EAs (again, self-servingly) interpreted this cultural moment as implying that any time someone asks about “identity,” it’s the person doing the identifying who gets to define the exact contours of the identity. I think this ignores that the trans discourse was a battle, and a still-not-entirely-conclusive one at that. There are just very, very few terms where everyday people are going to accept that you, the speaker, can define the term any way you please. If you’re using a term in a non-standard way, you do have an obligation to explain what you mean in order to avoid fair allegations of dishonesty.

Trauma

There’s a natural thing happening here where the more EA you are, the more ridiculous your EA distance-making looks.[5] However, I also think that the more EA you are, the more likely you are to believe that EA distance-making is strategically necessary, not just for you, but for anyone. My explanation is that EAs are engaged in a kind of trauma-projection.

The common thread running through all of the theories above is the fallout from FTX. It was the bad thing that might have triggered culture war-type fears of cancellation, inspired you to redefine terms, or led you to desperately seek out the nearest so-so comms person to bail you out. As I’ve laid out here, I think all these reactions are silly and counterproductive, and the mystery is why such smart people reacted so unproductively to a setback they could have handled so much better.

My answer is trauma. Often when smart people make mistakes of any kind it’s because they're at least a bit overwhelmed by one or another emotion or general mental state like being rushed, anxious or even just tired. I think the fall of FTX emotionally scarred EAs to an extent where they have trouble relating to or just talking about their own beliefs. This scarring has been intense and enduring in a way far out of proportion to any responsibility, involvement, or even perceived-involvement that EA had in the FTX scandal and I think the reason has a lot to do with the rise of FTX.

Think about Amanda for example. You’ve lived to see your undergrad philosophy club explode into a global movement with tens of thousands of excited, ambitious, well-educated participants in just a few years. Within a decade, you’re endowed with more than $40 billion and, as an early-adopter, you have an enormous influence over how that money and talent gets deployed to most improve the world by your lights. And of course, if this is what growth in the first ten years has looked like, there’s likely more where that came from – plenty more billionaires and talented young people willing to help you change the world. The sky is the limit and you’ve barely just begun.

Then, in just 2-3 days, you lose more than half your endowment and your most recognizable figurehead is maligned around the world as a criminal mastermind. No more billionaire donors want to touch this – you might even lose the other one you had. Tons of people who showed up more recently run for the exits. The charismatic founder of your student group all those years ago goes silent and falls into depression.

Availability bias has been summed up as the experience where “nothing seems as important as what you’re thinking about while you’re thinking about it.” When you’ve built your life, identity, professional pursuits, and source of meaning around a hybrid idea-question-community, and that idea-question-community becomes embroiled in a global scandal, it’s hard not to take it hard. This is especially so when you’ve seen it grow from nothing and you’ve only just started to really believe it will succeed beyond your wildest expectations. One might catastrophize and think the project is doomed. Why is the project doomed? Well, maybe the scandal is all the project's fault, or at least everyone will think that – after all, the project was the center of the universe until just now.

The problem of course, is that EA was not and is not the center of anyone’s universe except a very small number of EAs. The community at large – and certainly specific EAs trying to distance themselves now – couldn’t have done anything to prevent FTX. They think they could have, and they think others see them as responsible, but this is only because EA was the center of their universe.

In reality, no one has done more to indict and accuse EA of wrongdoing and general suspiciousness than EAs themselves. There are large elements of self-importance and attendant guilt driving this, but overall I think it’s the shock of having your world turned upside down, however briefly, from a truly great height. One thinks of a parent who loses a child in a faultless car accident. They slump into depression and incoherence, imagining every small decision they could have made differently and, in every encounter, knowing that their interlocutor is quietly pitying them, if not blaming them for what happened.

In reality, the outside world is doing neither of these things to EAs. They barely know EA exists. They hardly remember FTX existed anymore and even in the moment, they were vastly more interested in the business itself, SBF’s personal lifestyle, and SBF’s political donations. Maybe, somewhere in the distant periphery, this “EA” thing came up too.

But trauma is trauma and prominent EAs basically started running through the stages of grief from the word go on FTX, which is where I think all the bad strategies started. Of course, when other EAs saw these initial reactions, rationalizations mapping onto the theories I outlined above set in.

“No, no, the savvy thing is rebranding as AI people – every perspective surely sees the importance of avoiding catastrophes and AI is obviously a big deal.”

“We’ve got to avoid reputational contagion, so we can just be a professional network”

“The EA brand is toxic now, so instrumentally we need to disassociate”

This all seems wise when high status people within the EA community start doing and saying it, right up until you realize that the rest of the world isn’t populated by bowling pins. You’re still the same individuals working on the same problems for the same reasons. People can piece this together.

So it all culminates in the great irony I shared at the top. It has become a cultural tic of EA to deny and distance oneself from EA. It is as silly as it looks, and there are many softer, more reasonable, and indeed more effective ways to communicate one's associations in this regard. I suspect it’s all born of trauma, so I sympathize, but I’d kindly ask that my friends and fellow travelers please stop doing it.

  1. ^

    I should note that I’m not calling these lies here and don’t endorse others doing so. At a minimum, there’s a chance they’re being adversarially quoted here, though I find it hard to picture them saying these particular things in a more forthcoming context.

    More broadly, I’m just using Daniela and Amanda’s words as examples of a broader trend I see in EA, rather than trying to call them out as especially bad actors here. They actually make this easier by being powerful, accomplished people who are more likely to take this in stride or otherwise not even notice my musings amid their substantial and important work.

  2. ^

    All except one Mr. Bankman-Fried, for example.

  3. ^

    Though I really should add that there’s a serious free rider problem here. To the extent most AI safety people are (or were) legibly EA in one way or another, it’s pretty important that some of them stick with the brand if only to soften the blow to others who would benefit more from their EA affiliation being seen as not-so-bad. If everyone abandons ship, the guilt-by-association hits all of them harder. See Alix Pham’s excellent piece here.

  4. ^

    Like really “You believe in taking a scientific mindset towards maximizing global welfare subject to side-constraints?” is just not the dunk you think it is. It is reasonable and good and it’d be great if more people thought and acted this way. “But did you know that one guy who claimed to be doing this ignored the side constraints?” does not change that.

  5. ^

    I recall a now-deleted Ben Todd tweet where he spoke of EA and EAs getting distance from EA in a very third-person manner that reeked of irony.

Comments (26)



Can I suggest that anyone who wants to dig into whether Amanda/Daniela were lying here head over to this related post, and that comments on this post stay focused on the general idea of EA Adjacency as FTX trauma? 

Fully endorse the point that EA is getting a lot of bad comms advice. I think a good comms person would have prepared Anthropic folks way better, assuming those quotes weren't taken aggressively out of context or something. 

That said, I am not sure I agree that EA adjacency is mostly ascribable to FTX trauma in the personal PR "project will fail" sense, because I think there are two other explanations of EA adjacency. One, which could be related to FTX trauma, is leadership betrayal. The other is brand confusion. 

Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values. I then talked to a bunch of EA insiders about it and found the response extremely weak ("I know Scott personally and he's a nice guy," as though people who are nice to their friends can't also be racists and weirdly into monarchy[1]). 


Whether you love Scott Alexander or not, what I'm trying to point out is that there is another cause of "EA Adjacency" besides personal brand protection, and it might be leadership betrayal. I had been EA since 2012 in a low-key way when I found out about Scott, and I actively told people I was into EA, and even referenced it in career-related things I was doing as something that was shaping my goals and career choices. I wasn't working in the space, but I hoped to eventually, and I was pretty passionate about it! I tried to promote it to a lot of people! I stopped doing this after talking to CEA's community health team and several other prominent EAs, because of feeling like EA leadership was massively not walking the walk and that this thing I thought was the only community whose values I had ever trusted had sort of betrayed my trust. I went through some serious soul searching after this; it was very emotionally taxing, and I decoupled EA from my identity pretty substantially as a result. Probably healthy, tbh.

I am not sure the extent to which the post-FTX adjacency might be attributed to brand protection and what percent is toward leadership betrayal, but I suspect both could be at play, because many people could have felt betrayed by the fact that EA leadership was well aware of FTX sketchiness and didn't say anything (or weren't aware, but then maybe you'd be betrayed by their incompetence). 

Brand confusion: After brand embarrassment and leadership betrayal, I think a 3rd potential explanation for EA adjacency is a sort of brand confusion problem. Here, I think EA is sort of like Christianity -- there's an underlying set of beliefs that almost everyone in the general Christian community agrees with, but different factions can be WILDLY different culturally and ideologically. Unfortunately, only EA insiders are familiar with these distinctions. So right now, acknowledging being an EA is like acknowledging you're a Christian to someone who only knows about Mormons. If you're actually a liberal Episcopalian and don't want to be seen as being a Mormon, maybe you don't have time to get into the fact that yes, technically you are a Christian but not that kind of Christian. I wonder if EA-adjacent folks would be more comfortable acknowledging EA connection if they could identify a connection with only one part or faction of EA, and there was greater clarity in the public eye about the fact that EA is not a monolith.

In terms of what behavior I'd like to see from other EAs and EA-adjacents: If I'm talking to an EA insider, I still say I have issues with parts of EA while acknowledging that I have a ton of shared values and work in the space. If someone is mocking EA as an outsider, I am actually MORE likely to admit connection and shared values with EA, because I usually think they are focusing on the wrong problems.

  1. ^

    struck for accuracy, see comments

Another, very obvious reason is just that more EA people are near real power now than in 2018, and with serious involvement in power and politics come tactical incentives to avoid saying what you actually think. I think that is probably a lot of what is going on with Anthropic people playing down their EA connections. 

Is that a separate reason from the one OP names and the ones in my comment? If EA had an excellent brand, tactical incentives would encourage naming yourself as an EA, not discourage it. It's the intersection of bad brand and/or brand confusion with tactics that leads to this result, not tactics alone, right?

I think the most likely explanation, particularly for people working at Anthropic is that EA has a lot of "takes" on AI, many of which they (for good or bad reasons) very strongly disagree with. This might fall into "brand confusion", but I think some of it's simply a point of disagreement. It's probably accurate to characterise the AI safety wing of EA as generally regarding it as very important to debate whether AGI is safe to attempt to develop. Anthropic and their backers have obviously picked a side on that already.

I think that's probably more important for them to disassociate from than FTX or individuals being problematic in other ways.

Personally, I think disagreements like this fit under the definition of brand confusion, at least as I intended it - if everyone understood that there were EAs who are still debating whether AGI is safe to develop, and others who have already made a decision on that, then someone who spent a lot of time reading about EA/talking about EA/being married to EA leadership wouldn't feel as bad saying "Yeah, I'm EA" just because they disagreed with some other EAs. 

Not saying something in this realm is what's happening here, but in terms of common causes of people identifying as EA adjacent, I think there are two potential kinds of brand confusion one may want to avoid:

  1. Associations with a particular brand (what you describe)
  2. Associations with brands in general:

I think EAs often want to be seen as relatively objective evaluators of the world, and this is especially true about the issues they care about. The second you identify as being part of a team/movement/brand, people stop seeing you as an objective arbiter of issues associated with that team/movement/brand. In other words, they discount your view because they see you as more biased. If you tell someone you're a fan of the New York Yankees and then predict they're going to win the World Series, they'll discount your view relative to if you just said you follow baseball but aren't on the Yankees bandwagon in particular. I suspect some people identify as politically independent for this same reason: they want to and/or want to seem like they're appraising issues objectively. My guess is this second kind of brand confusion concern is the primary thing leading many EAs to identify as EA adjacent; whether or not that's reasonable is a separate question, but I think you could definitely make the case that it is.

Also, I don't like Scott Alexander's politics at all, but in the interests of strict accuracy I don't think he is a monarchist, or particularly sympathetic to monarchism (except insofar as he finds some individuals with far-right views who like monarchy kind of endearing). If anything, I had the impression that whilst Scott has certainly been influenced by and promoted the far right in many ways, a view that monarchism is just really, really silly was one of the things that genuinely kept him from regarding himself as fully in sympathy with the neo-reactionaries. 

I have mixed feelings about this but since the point isn't critical to the intention of my comment I'm happy to strike it. 

Leadership betrayal: My reasoning is anecdotal, because I went through EA adjacency before it was cool. Personally, I became "EA Adjacent" when Scott Alexander's followers attacked a journalist for daring to scare him a little -- that prompted me to look into him a bit, at which point I found a lot of weird race IQ, Nazis-on-reddit, and neo-reactionary BS that went against my values.

  1. Scott Alexander isn't in EA leadership
  2. This is also extremely factually inaccurate - every clause in the part of your comment I've italicized is at least half false.

I downvoted this comment because it's not relevant to the purpose of this conversation. I shared my personal opinion to illustrate a psychological dynamic that can occur; the fact that you disagree with me about Scott does not invalidate the pattern I was trying to illustrate (and in fact, you missed the point that I was referring to CEA staff and others I spoke with afterwards as EA leadership, not Scott). 

If you think for some reason our disagreement about Scott Alexander is relevant to potential explanations for people refusing to acknowledge their relationship to EA, please explain that and I will revise my comment here. 

I will acknowledge that my description is at least a little glib, but I didn't take that much time to perfect how I was describing my feelings about Scott because it wasn't relevant to my point. 

Thanks, that's a great reason to downvote my comment and I appreciate you explaining why you did it (though it has gotten some upvotes so I wouldn't have noticed anyone downvoted except that you mentioned it). And yes, I misread whom your paragraph was referring to; thanks for the clarification.

However, you're incorrect that those factual errors aren't relevant. Your feelings toward EA leadership are based on a false factual premise, and we shouldn't be making decisions about branding with the goal of appealing to people who are offended based on their own misunderstanding.

Cool, I adjusted my vote, thanks for addressing. 

I think there's something to what you're saying about factual errors, but not at the level of diagnosing the problem. Instead, I'd argue that whether or not my opinion is based on factual errors[1] is more relevant to the treatment than the diagnosis. 

Let's say for argument's sake that I'm totally wrong: I got freaked out by an EA influencer, I approached EA leaders, they gave me a great response, and yet here I am complaining on the EA Forum about it. My claim, though, isn't that EA leaders doing something wrong leads to EA-adjacency. It's that people feeling like EA leaders have done wrong leads to EA-adjacency. 

Given that what I was trying to emphasize is the cause of the behavior, whether someone having a sense of being betrayed by leadership is based on reality or a hallucination is irrelevant - it's still the explanation for why they are not acknowledging their EA connections (I am positing). 

However, you are definitely correct that when strategizing how to address EA adjacency/brand issues, if that's something you want to try to do, it helps to know whether the feelings people are having are based on facts or some kind of myth. In the case of the FTX trauma, @Mjreard is pointing out that there may be a myth of some sort at play in the minds of the people doing the denying. In the case of brand confusion, I think the root cause is something in lack of clarity around how EA factions relate to each other. In the case of leadership betrayal, I'd argue it's because the people I spoke with genuinely let me down, and you might argue it's because I'm totally irrational or something :) But nevertheless, identifying the feeling I'm having is still useful to begin the conversation. 

  1. ^

    Obviously, I don't think my opinion is based on factual errors, but that's neither here nor there. 

(For what it's worth, I don't think you're irrational, you're just mistaken about Scott being racist and what happened with the Cade Metz article. If someone in EA is really racist, and you complain to EA leadership and they don't do anything about it, you could reasonably be angry with them. If the person in question is not in fact racist, and you complain about them to CEA and they don't do anything about it, they made the right call and you'd be upset due to the mistaken beliefs, but conditional on those beliefs, it wasn't irrational to be upset.)

I don't think it's absolutely clear from the one-sentence quote alone that Amanda was claiming a personal lack of knowledge of EA (which would obviously be deceptive if she was), though I agree that is one reasonable reading. She has her GWWC membership fairly prominently displayed on her personal website, so if she's trying to hide being or having been EA, she's not doing so very strongly. 

Yeah one thing I failed to articulate is how not-deliberate most of this behavior is. There's just a norm/trend of "be scared/cagey/distant" or "try [too] hard to manage perceptions about your relationship to EA" when you're asked about EA in any quasi-public setting. 

It's genuinely hard for me to understand what's going on here. Like, judged from their current professional vantage point, there are vastly worse ~student groups people have been a part of that don't induce this much panic. It seems like an EA cultural tic. 

Part of it may be that before FTX there was already a strong norm for people to not identify as EA (EA as a question). And that has only got stronger since. At least in the UK a lot of people working in EA areas wouldn't call themselves EA including myself, pre 2020.

An implicit claim I'm making here is that "I don't do labels" is kind of a bullshit non-response in a world where some labels are more or less descriptively useful and speakers have the freedom to qualify the extent to which the label applies. 

Like I notice no one responds to the question "what's your relationship to Nazism?" with "I don't do labels." People are rightly suspicious when people give that answer and there just doesn't seem to be a need for it. You can just defer to the question asker a tiny bit and give an answer that reflects your knowledge of the label if nothing else.  

I think EA and Nazism are quite different (in many ways). EA doesn't have a membership policy, and EA has a very wide range of philosophies, including opposing views, that people can believe in whilst still doing EA related work (positive vs negative utilitarianism, virtue ethics, deontology, consequentialism, some people care about animals some don't, a very large range of time discounts, etc).

As in the original article about EA as a question, it makes less sense philosophically and practically to have EA as an identity.

Maybe what you're noticing is people who haven't been asked about their 'EA' status before, giving the answer they would have always given.

I think Godwinning the debate actually strengthens the case for "I don't do labels" as a position. True, most people won't hesitate to say that the label "Nazi" doesn't apply to them, whether they say they don't do labels or have social media profiles which read like a menu of ideologies.[1] On the other hand, many who wouldn't hesitate to say that they think Nazis and fascists are horrible and agree that they should be voted against and maybe even fought against would hesitate to label themselves as "antifascist", with its connotations of ongoing participation in activism and/or membership of self-styled antifascist groups whose other positions they may not agree with. 

  1. ^

    and from this, we can perhaps infer that figures at Anthropic don't think EA is as bad as Nazism, if that was ever in doubt ;-)

I've been interested in EA & adjacent stuff for about a year now (i.e. well after FTX). The first time I attended an in-person meetup, I said I will "never call myself an EA or rationalist publicly", but I'm not sure that this post completely addresses why, so I want to give my reasoning.

In my view, effective altruism, and other idea-communities like religions or political ideologies, consist of both the ideas and methods, and the broader culture of the community. For example, writing long blog posts isn't an intrinsically EA thing to do, and publishing video essays on YouTube isn't an intrinsically leftist thing to do, but EAs culturally like blog posts while leftists culturally like video essays. I think conflictaverse's notion of brand confusion is related to this.

To outsiders, I think the cultural aspects of a community come to mind much more than the ideas. This relates to the section on not understanding how words work, in that defining effective altruism as [The things done and believed by] “those folks who go to EA conferences and write on the thing called the EA Forum, whoever they are.” includes a lot of stuff that isn't technically EA.

Finally then, despite agreeing with many of the core principles of EA and other idea-communities, I refuse to identify with any of these labels as I don't associate with the broader community. If I say "I am an effective altruist", I expect people to think "this guy reads a lot of blog posts and probably works in tech or finance" and not "this guy uses decision-theoretic approaches to decide how to allocate charitable resources". Since that's not what I meant, it's not effective communication, and so I avoid it.

The community at large – and certainly specific EAs trying to distance themselves now – couldn’t have done anything to prevent FTX. They think they could have, and they think others see them as responsible, but this is only because EA was the center of their universe.


Ding! ding! ding!

You’re saying it nicely. I think it was irrationality and cowardice. I felt traumatized by the hysterical reaction and the abandoning of our shared work and community. I also felt angry about how my friends helped enemies of EA to destroy EA’s reputation for their own gain. 

I’ve stopped identifying with EA as much bc PauseAI is big tent and doesn’t hold all the EA precepts (that and much of the community is hostile to advocacy and way too corrupted by the AI industry…), but I always explain the connection and say that I endorse EA principles when I’m asked about it. It’s important to me to defend the values! 

I think @conflictaverse is on to something with his notion of "brand confusion". I wish there was an easy, widely understood shorthand I could use that would indicate I am inspired by the "early" EA project although I am now concerned that the movement's centre of gravity seems to have drifted both organizationally and philosophically in directions I am not comfortable with. I am reluctant to abandon the EA label altogether, leaving it to that "wing" of the movement.

One thing that strikes me as interesting when I think about my own experience and my impression of the people around me is that it can be hard to tell what my own reasons are when I might distance myself from EA. I might describe myself as EA adjacent and this could be some combination of:

  1. Seeing the 'typical' EA as someone who is much more hardcore and believes in all of it.
  2. Some part of my brain is always unconsciously tracking status.
  3. I am worried about the impact it will have on my ability to get jobs in the future.
  4. I might be more persuasive or likeable to the person in front of me if I distance myself from EA.   

And as humans often do, I might just tell myself a story that is more flattering than what is actually happening. I might tell myself that this is a very strategic choice to persuade this person to care about AI Safety, or for my long-term career prospects, or to protect my organisation from future scandals, and EA being a low(ish) status in some circles right now might be doing the heavy lifting. 

As someone out of the loop in terms of the contextual specifics of said people/organizations, I think there's a much simpler explanation than those statements being strategic lies. Firstly, those statements resemble expressions of boundaries within a conversation. To oversimplify, they basically translate to "I don't want to talk about EA". This is a difference between literal speak (preferred by autistic people for example) and neurotypical speak, where something that would be bizarre/false if interpreted factually is understood as a contextual boundary which is not about the facts and therefore isn't considered lying.

However, even in a more literal sense, I don't think those statements are necessarily false, if you take into account the fact that EA is "everything" to some EAs and "just a speck" to others. If I think maths is the most important thing in the world and I belong to a community with some degree of agreement, then it's easy for me to start accusing people adjacent to or on the borders of the community of downplaying the importance of maths. Every time someone mentions something math-adjacent but not in a maths-worshipping tone, they're being disingenuous. But this would be a fallacy where my equating lack of worship with lying says more about my worldview than anything else.

A personal example: I identified as an EA for a few years and now I would consider myself "post-EA", if such a term existed. Both things are possible, that I invested a lot into EA and was inspired significantly by it, and simultaneously that I find relatively few tools moving forwards to be worth attributing to EA conversationally or philosophically. EA isn't "one consistent thing" and it's certainly not everything. For example, ranking charities is very EA, but it also exists outside EA, so even if my exposure was through EA, it doesn't necessarily make sense for me to acknowledge EA in a conversation about charity efficiency. The EA-ness of it doesn't mean anything to non-EAs, and it barely means anything to me having integrated the perspectives I want to keep vs discard.

I found a bunch of this really interesting, thanks for sharing this! I'd be pretty curious whether this resonates with anyone who now thinks of themselves as "EA adjacent" in part because of FTX fallout :) (although maybe those folks aren't, y'know, on the EA Forum 🤷🏻‍♀️)

I think a lot of the folks whose opinions I'm interested in are definitely on the EA Forum, at least as lurkers -- the folks who are active in EA (and so likely to read stuff here) but still call themselves EA adjacent in polite company :)
