

  • I have some concerns about the "effective altruism" branding of the community.
  • I recently posted them as a comment, and some people encouraged me to share them as a full post instead, which I'm now doing.
  • I think this conversation is most likely not particularly useful or important to have right now, but there's some small chance it could be pretty valuable.
  • This post is based on my personal intuition and anecdotal evidence. I would put more trust in well-run surveys of the right kinds of people or other more reliable sources of evidence.


"Effective Altruism" sounds self-congratulatory and arrogant to some people:

  • Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me.
  • Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals in relevant areas. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.


"Effective altruism" sounds like a strong identity:

  • Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as a moral commitment, a set of ideas, and a community all at once. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
  • Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may play a role for several individuals.
  • In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas.


Some further, less important points:

  • "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
  • A lot of people don't know what "altruism" means.
  • "Effective altruism" often sounds pretty awkward when translated to other languages. That said, this issue also affects a lot of the alternatives.
  • We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term.
  • Taking action on EA issues doesn't strictly require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if they don’t strongly self-identify as altruists. Some may be mostly interested in contributing to the intellectual aspects without making large personal sacrifices.
  • The name of CEA was determined through a careful process. However, the adoption of the EA label for the entire community happened organically and wasn’t really a deliberate decision.


Some thoughts on potential implications:

  • The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc., and the EA brand might plausibly be unattractive for some of these people. If that's true, the EA brand might act as a cap on EA's long-term growth potential, so we should perhaps aim to de-emphasize it. Or at least do some marketing research on whether this is indeed an issue.
  • EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.
  • I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.


Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.


I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, it would be a costly change, and it may not even be feasible anymore to make the change at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.

This sounds very right to me. 

Another way of putting this argument is that a "global priorities (GP)" community is both more likable and more appropriate than an "effective altruism (EA)" community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense.

More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: "how ought one to decide what to work on?", or "what are the big probl…

I really liked this comment, thanks!

The current discussion in the comments seems quite centered on "effective altruism vs. global priorities". I just wanted to highlight that I spent, like, 3 minutes in total thinking about alternative naming options, and feel pretty confident that there are probably quite a few options that work better than "global priorities". In fact, when renaming CLR, we only came up with the new name after brainstorming many options. So I would really like us to generate a list of >10 great alternatives (i.e. actually viable alternatives) before starting to compare them.

This seems like a really good point.

Off the top of my head, I think how we[1] should proceed is something like:

  • Generate a long list of possible labels
  • Generate a set of goals we have / criteria for evaluating the labels
  • Generate a set of broader approaches we could take, such as having different labels that we use for different audiences, or different labels for different segments of the community, or 
  • Then evaluate the labels and approaches (or combinations thereof) against the goals / criteria we came up with

I think the first three actions can/should be done roughly in parallel, and that the fourth should mostly wait till we've done the first three. Or we might iterate through "first three actions, then fourth action, then first three actions again ..." a few times.

And I'd say this is best done through one or more well-run surveys, as you suggest. Maybe there could first be surveys that ask EAs to generate ideas for labels, goals/criteria, and broader approaches, then ask them to rate given ideas and approaches against given goals/criteria (or maybe that should be split into a followup survey). And then there could be surveys of non-EAs that just skip to that last step (since I imagine it'd be hard for them to come up with useful ideas without context first). 

[1] I'm not sure who the relevant "we" is. 

I think a name change might be good, but am not very excited about the "Global Priorities" name. I expect it would attract mostly people interested in seeking power and "having lots of influence" and I would generally expect a community with that name to be very focused on achieving political aims, which I think would be quite catastrophic for the community.

I actually considered this specific name in 2015 while I was working at CEA US as a potential alternative name for the community, but we decided against it at the time for reasons in this space (and because changing names seems hard).

While I'm not sure we're using terms like "political" and "power" in the same way, as far as I can tell this worry makes a lot of sense to me.

However, I think there is an opposite failure mode: mistakenly believing that because of one's noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.

A key assumption from my perspective is that political and power dynamics aren't something one can just opt out of. There is a reason why thinkers from Plato over Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I'm saying this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I'm not sure if Plato says that, and I'm confused why I included him originally. In a sense he may suggest the opposite view, since he sometimes compares the state to the individual.]

Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital. There certainly are different views with respect to how this capital is b…

I agree that changing names is hard and costly (you can't do it often), something that definitely should be taken into account.

I'm noticing I don't fully understand the way in which you think "Global Priorities" would attract power-seekers, or what you mean by that. Like, I have a vague sense that you're probably right, but I don't see the direct connection yet. Would be very interested in more elaboration on this.

I mean, I just imagine what kind of person would be interested, and it would mostly be the kind of person who is ambitious, though not necessarily competent, and would seek out whatever opportunities or clubs are associated with the biggest influence over the world, sound the highest-status, have the most prestige, or sound like they would be filled with the most powerful people. I have met many such people, and a large fraction of high-status opportunities that don't also strongly select for merit seem filled with them.

Currently both EA and Rationality are weird in a way that is not immediately interesting to people who follow that algorithm, which strikes me as quite good. In universities, when I've gone to things that sounded like "Global Priorities" seminars, I mostly met lots of people with political science degrees or MBAs, really focusing on how they can acquire more power, with the whole conversation being very status-oriented.

Jonas V
Thanks, I find that helpful, and agree that's a dangerous dynamic, and could be exacerbated by such a name change.
Ben Pace
—HPMOR, Chapter 70, Self-Actualization (part 5)

Added: The following is DEFINITELY NOT a strong argument, but just kind of an associative point. I think that Voldemort (both the real one from JK Rowling and also the one in HPMOR) would be much more likely to decide that he and his Death Eaters should have "Global Priorities" meetings than "Effective Altruist" meetings. ("We're too focused on taking over the British Ministry of Magic, we need to also focus on our Global Priorities.") In that way I think the former phrase has a more general connotation of "taking power and changing the world" in a way the latter does not.

I think this is a good point. That said, I imagine it's quite hard to really tell. 

Empirical data could be really useful to get here. Online experimentation in simple cases, or maybe we could even have some university chapters try out different names and see if we can infer any substantial differences.


1) I'm convinced that a "GP" community would attract somewhat more power-seeking people. But they might be more likely to follow (good) social norms than the current consequentialist crowd. Moreover, we would be heading toward a more action-oriented and less communal group, which could reduce the attraction to manipulative people. And today's community is older and more BS-resistant with some legibly-trustworthy leaders. But you seem to think there would be a big and harmful net effect - can you explain?

2) Assuming that "GP" is too intrinsically political, can you think of any alternatives that have some of the advantages of "GP" without that disadvantage?

Ben Pace
I don't expect a brand change to "Global Priorities" to bring in more action-oriented people. I expect, for instance, that fewer people would donate money themselves: they would see it as cute but obviously not having any "global" impact, and therefore below them. (I think it was my inner Quirrell / inner cynic that wrote some of this comment, but I stand by it as honestly describing a real effect that I anticipate.)
I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path.

You might have different models of what attracts manipulative people. My model is that doing visibly power-seeking and high-status work is one of the most common attractors. I think we have overall become substantially less BS-resistant as we have grown and have drastically increased the surface area of the community, though it depends a bit on the details.

Yep, I would be up for doing that, but alas won't have time for it this week. It seemed better to leave a comment voicing my concerns at all, even if I don't have time to explain them in-depth, though I do apologize for not having the time to explain them in full.

I don't understand this. We would be trending towards seeking more power, which would further attract power-seekers. We have already substantially gone down this path. You might have different models of what attracts manipulative people. My model is doing visibly power-seeking and high-status work is one of the most common attractors.

I'm concerned about people seeking power in order to mistreat, mislead, or manipulate others (cult-like stuff), as seems more likely in a social community, and less likely in a group of people who share interests in actually doing things in the world. I'm in favour of people gaining influence, all things equal!

Alas, I think that isn't actually what tends to attract the most competent manipulative people. Random social communities might attract incompetent or average-competence manipulative people, but those are much less of a risk than the competent ones. In general, professional communities, in particular ones aiming for relatively unconditional power, strike me as having a much higher density of manipulative people than random social communities. I also think when I go into my models here, the term "manipulative" feels somewhat misleading, but it would take me a while longer to explain alternative phrasings. 
TBC, this feels like a bit of a straw man of my actual view, which is that power and communality jointly contribute to risks of cultishness and manipulativeness.
*nods* My concerns have very little to do with cultishness, so my guess is we are talking about very different concerns here.

I think the "global priorities" label fails to escape several of the problems that Jonas argued the EA brand has. In particular, it sounds arrogant for someone to say that they're trying to figure out global priorities. If I heard of a global priorities forum or conference, I'd expect it to have pretty strong links with the people actually responsible for implementing global decisions; if it were actually just organised by a bunch of students, then they'd seem pretty self-aggrandizing.

The "priorities" part may also suggest to others that they're not a priority. I expect "the global priorities movement has decided that X is not a priority" seems just as unpleasant to people pursuing X as "the effective altruism movement has decided that X is not effective".

Lastly, "effective altruism" to me suggests both figuring out what to do, and then doing it. Whereas "global priorities" only has connotations of the former.

What kinds of names do you think would convey the notion of prioritised action while being less self-aggrandising?

Well, my default opinion is that we should keep things as they are;  I don't find the arguments against "effective altruism" particularly persuasive, and name changes at this scale are pretty costly.

Insofar as people want to keep their identities small, there are already a bunch of other terms they can use - like longtermist, or environmentalist, or animal rights advocate. So it seems like the point of having a term like EA on top of that is to identify a community. And saying "I'm part of the effective altruism community" softens the term a bit.

around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"

This seems like the most important point to think about; relatedly, I remember being surprised when I interned at FHI and learned how many people there don't identify as effective altruists. It seems indicative of some problem, which seems worth pursuing directly. As a first step, it'd be good to hear more from people who have reservations about identifying as an effective altruist. I've just made a top-level question about it, plus an anonymous version - if that describes you, I'd be interested to see your responses!

Great comment. To these points I would also add (or maybe just summarize some of the points you made) that "global priorities" seems to have more empirical/world-focused connotations to me, whereas "effective altruism" sounds a lot more philosophical/ideological to me.

E.g. I agree that "global priorities" suggests questions like "what are the big challenges of our time?", which I like a lot more than e.g. "how altruistic should we be?", "is there something like 'true altruism'?" or whichever other thing "effective altruism" makes people first think of.

Of course, I agree that ultimately the project of doing as much good as we can involves both empirical and philosophical questions. But relative to today, I think we'd be better equipped to execute that project well with a stronger emphasis on empirical and practical questions and less emphasis on abstract philosophy. (Though to be fair to the EA label, the status quo is more due to founder effects rather than due to the name differentially attracting philosophers.)

The fact that prioritisation arises more frequently in EA org names than any phrase except for "EA" itself might be telling us something important. Consider: "Rethink Priorities", "Global Priorities Project", "Global Priorities Institute", "Priority Wiki", "Cause Prioritisation Wiki".

It seems worth noting that all of those orgs/wikis are focused on producing or collecting research, not on more directly acting on the world. This is of course a key part of EA, but not the whole of it. 

In line with that, I think that "global priorities", "global priorities community", or similar terms sound like they're mostly about working out what the global priorities are and less about actually acting on those answers. EA is already often perceived as too research-focused (though I'm not saying I agree with those perceptions myself), so it might be good to avoid things that would exacerbate that.

I like this style of thinking, but I don't think it pushes in the direction that you suggest. EA entities with "priorities" in the name disproportionately work on surveys and policy, whereas those with "EA" in the name tend to be communal or meta, e.g. EA Forum, EA Global, EA Handbook, and CEA. Groups that act in the world tend to have neither, like GWWC, AMF, OpenAI.

On balance, I think "global priorities" connotes more concreteness and action-orientation than "EA", which is more virtue- and identity- oriented. If I was wrong on this, it would partly convince me.

I guess I intended my comment above to make three claims:

  • It is empirically true that those orgs/wikis you noted as having "priorities" in their names are focused on producing or collecting research, not on more directly acting on the world.
  • Separately, to me, "global priorities" does seem to have connotations of working out what the global priorities are and less about actually acting on those answers.
  • Claim 1 seems to be in line with claim 2. But I think claim 1 wasn't the basis for claim 2; I already felt those connotations before you named those orgs, though of course I had already heard of the orgs.

But I don't see these claims as super important, because:

  • We can just run a bunch of surveys and see what connotations other people perceive.
  • Action-oriented vs. research-oriented is just one of many relevant dimensions.
  • "Global priorities" is just one alternative name.

I guess I see the small value of my comment as quickly highlighting small reasons to doubt your initial views and therefore additional reasons to gather more options, consider our goals/criteria/desiderata more (I like that your comment lists some general goals for names), and run a bunch of surveys.
OK, what names would we expect to promote action-orientation if "GP" wouldn't?
Ben Pace
I do not know. Let me try generating names for a minute. Sorry. These will be bad. "Marginal World Improvers", "Civilizational Engineers", "Black Swan Farmers", "Ethical Optimizers", "Heavy-Tail People". Okay I will stop now.

A friend's "names guy" once suggested calling the EA movement "Unfuck the world"...

We can begin here.

EA popsci would be fun!

  §1. The past was totally fucked.
  §2. Bioweapons are fucked.
  §3. AI looks pretty fucked.
  §4. Are we fucked?
  §5. Unfuck the world!
I will resist the temptation to further expand that list.
Ben Pace
“Hello, I’m an Effective Altruist.” “Hello, I’m a world-unfucker.” Honestly, I think the second one might be more action-oriented. And less likely to attract status-seekers. Alright, I’m convinced, let’s do it :)

I was just reflecting on the term 'global priorities'. I think to me it sounds like it's asking "what should the world do", in contrast to "what should I do". The former is far mode, the latter is near. I think that staying near mode while thinking about improving the world is pretty tough.

I think when people fail, they end up making recommendations that could only work in principle if everyone coordinated at the same time, and as a result they shape their speech to focus on signaling to achieve these ends, and often walk off a cliff of abstraction. I think when people stay in near mode, they focus on opportunities that do not require coordination, opportunities they can personally achieve.

I think that EAs caring very much about whether they actually helped someone with their donation has been one of the healthier epistemic things for the community. Though I do not mean to argue it should be held as a sacred value.

For example, I think the question "what should the global priority be on helping developing countries" is naturally answered by talking broadly about the West helping Africa build a thriving economy, talk about political revolution to remove corruption in governments, …

But it seems like GP is harder to extend to agents specifically? Currently, I can say "I'm an [EA / effective altruist / aspiring EA]". That sounds a bit arrogant, but probably less so than saying "I'm a global priority" :P

Obviously that's not the label we'd use for individuals, but I'm not sure what the alternative is. Some ideas that seem bad:

  • "Global prioritist"
  • "GP" (obviously that acronym is already taken, and in any case it'd just expand out to things like "I'm a global priority" or "we're global priorities")
  • "Member of the global priorities community" (way too long)

(In any case, as Jonas notes, our focus for now should probably be on brainstorming ideas rather than pitting them against each other. So this comment may not be very important.)

I kinda think that "I'm an EA/he's an EA/etc." is mega-cringey (a bad combo of arrogant + opaque acronym + tribal), and that deprecating it is a feature, rather than a bug.

Though you can just say "I'm interested in / I work on global priorities / I'm in the prioritisation community", or anything that you would say about the AI safety community, for example.

I kinda think that "I'm an EA/he's an EA/etc" is mega-cringey (a bad combo of arrogant + opaque acryonym + tribal)

It sounds like you think it’s bad that people have identified their lives with trying to help people as much as they can? Like, people like Julia Wise and Toby Ord shouldn’t have made it part of their life identity to do the most good they can do. They shouldn’t have said “I’m that sort of person” but they should have said “This is one of my interests”.

Neel Nanda
I also find that a bit cringy. To me, the issue is saying "I have SUCCEEDED at being effective at altruism", which feels like a high bar and somewhat arrogant to explicitly admit to
But:

  • By a similar token, one could replace "I'm/He's an EA" with "I'm/He's interested in effective altruism", which would at least somewhat reduce the problems you note.
  • People usually don't do this, which I think is because we naturally gravitate towards shorter phrases. I guess this could be seen as a downside of the fact that the current phrase can be conveniently shortened. But, of course, the ability to shorten also has an upside (saving time and space).
  • I often say/write and hear/read things like "EAs are often interested in ...", "One mistake some EAs make is...", etc. This is more common than me referring to myself as an EA, and somewhat less at risk of seeming arrogant (though it still can). I think expanding all such uses of "EAs" to "people interested in global priorities" would be a hassle (though not necessarily net negative).
  • "I'm interested in global priorities" and "I work on global priorities" also seem kind-of arrogant, bland, and/or weirdly vague to me. Maybe like a parody of vacuous business speak. Not sure how common this perception would be - we should run a survey.

(Though I feel I should emphasise that I just see these as small reasons to doubt your views, which therefore pushes in favour of gathering more options, considering our goals/criteria/desiderata more, and running a bunch of surveys. My intention isn't really to definitively argue against "global priorities".)

ETA: I just saw that Will Bradshaw already said things quite similar to what I said here, but a bit more concisely...

Yeah, I'm much more sympathetic to concerns with "effective altruist" than with "effective altruism", and it doesn't seem like GP does any better in that regard – all the solutions you could apply here ("I'm a member of the global priorities community", "I'm interested in global priorities") also apply to EA.

Maybe the fact that the short forms are so awkward for GP is part of the idea? Like, EA has this very attractive but somewhat problematic personalised form ("effective altruist"); GP's personalised forms are all unattractive, so you avoid the problematic attractor?

But it still seems that, if personalised forms are a big part of the concern (which I think they are), this is a good argument in favour of keeping looking. Which was Jonas's proposal anyway.

(Or, of course, we could cut the arrogance down by just saying "I'm an early-career aspiring global priority.")

I asked my team about this, and Sky provided the following information. This quarter CEA did a small brand test, with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”

Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good: effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5.) There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)

Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. Maybe "EA" puts off a small-but-important subsection of the audience we tested on (e.g. unusually critical/free-thinking people).

I don't think this is dispositive - I think that testing other brands might still be a good idea. We're currently considering trying to hire someone to test and develop the EA brand, and help field media enquiries. I'm grateful for the work that Rethink and Sky Mayhew have been doing on this.

I wonder if there would be a strong difference between "What do you think of a group/concept called 'effective altruism'", "Would you join a group called 'effective altruism'", "What would you think of someone who calls themselves an 'effective altruist'", "Would you call yourself an 'effective altruist'".

I wonder which of these questions is most important in selecting a name.

Thanks for sharing that info, Max. It was an interesting first pass at some of these questions. 

I agree there are a lot of things that are nonideal about the term, especially the connotations of arrogance and superiority.

However, I want to defend it a little:

  • It seems like it's been pretty successful? EA has grown a lot under the term, including attracting some great people, and despite having some very controversial ideas hasn't faced that big of a backlash yet. Hard to know what the counterfactual would be, but it seems non-obvious it would be better.
  • It actually sounds non-'ideological' to me, if what that means is being committed to certain ideas of what we should do and how we should think: it sounds like it's saying 'hey, we want to do the effective and altruistic thing. We're not saying what that is.' It sounds more open, more like 'a question' than many -isms.

Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as both a moral commitment, a set of ideas, and a community.

I feel less sure this is more true of EA than of other terms, at least with respect to the community aspect. I think the reason some terms don't seem to imply a community is that there isn't [much of] one. But insofar as we want to keep the EA community (and I think it's very valuable and that we should), changing the term won't shrink the identity associated with it along that dimension. I guess what I'm saying is: I'd guess the largeness of the identity associated with EA is not that related to the term.

I think these are good points. Readers of these comments may also be interested in the post Effective Altruism is a Question (not an ideology). (I assume you've already read the post and had it somewhat in mind, but also that some readers wouldn't know the post.)

Empirical research on people's responses to the term (and alternative terms) certainly seems valuable, and important to do before any potential rebrand.

Anecdotally, I find that people hate reference to "priorities" or "prioritising" as much or more than they hate "effective altruism." Referring to specific "global priorities" quite overtly implies that other things are not priorities. And terminology aside, I find that many people outright oppose "prioritisation" in the field of philanthropic or pro-social endeavours for roughly this reason: it's rude/inappropriate to imply that certain good things that people care about are more important than others. (The use of the word "global" just makes this even worse: this implies that you don't even just think that they are local or otherwise particular priorities, but rather that they are the priorities for everyone!)

To some extent, I think that what those who dislike effective altruism dislike isn't that term, but rather the set of ideas it expresses. As such, replacing it with another term that's supposed to express broadly the same set of ideas (like "priorities" or "global priorities") might make less of a difference than one might think at first glance (though it likely makes some difference).

What might make a greater difference, for better or worse, is choosing a term that expresses a quite different set of ideas. E.g. I think that people have substantially different reactions to the term "longtermism".

+1. A short version of my thoughts here is that I’d be interested in changing the EA name if we can find a better alternative, because it does have some downsides, but this particular alternative seems worse from a strict persuasion perspective.

Most of the pushback I feel when talking to otherwise-promising people about EA is not really as much about content as it is about framing: it’s people feeling EA is too cold, too uncaring, too Spock-like, too thoughtless about the impact it might have on those causes deemed ineffective, too naive to realise the impact living this way will have on the people who dive into it. I think you can see this in many critiques.

(Obviously, this isn’t universal; some people embrace the Spock-like-mindset and the quantification. I do, to some extent, or I wouldn’t be here. But I’ve been steadily more convinced over the years that it’s a small minority.)

You can fight this by framing your ideas in warmer terms, but it does seem like starting at ‘Global Priorities community’ makes the battle more uphill. And I find losing this group sad, because I think the actual EA community is relatively warm, but first impressions are tough to overcome.

Low confidence on all of the above, would be happy to see data.

I still think "effective altruism" sounds a bit more like we've already found the correct answer to "what should we prioritize" rather than just being interested in the question, but I agree these are some good points.

It seems like EA could benefit from a dedicated, evidence-based messaging consultancy that served all EA orgs.

Rethink Priorities is pretty close to this! We've done message testing now for many orgs across cause areas... Centre for Effective Altruism, Will MacAskill, Open Phil, the Centre for the Study of Existential Risk, the Humane Society of the United States, The Humane League, Mercy for Animals, and various EA-aligned lobbyists. We have a lot of skills and resources to do this well and already have a well-built pipeline for producing this kind of work.

We'd be happy to consider doing more work for other people in EA and the EA movement as a whole!

Amazing. I knew RP did a lot of great work in this space, but didn’t realize how systematized you’d gotten. Great stuff :-)
This is great! Can you summarize your findings across these tests?
James Özden
I’ve been thinking about this! I really have no sense of whether anyone involved in building the EA movement/EA orgs has sat down and meticulously thought about narratives, audiences, framing, and other elements of building a strong message. Does anyone know if this is being done? If not, this seems like a potentially really exciting piece of work.

If we just look at organisations that had a strong “meme”/message, whether it’s McDonald’s or Fridays for Future, it can really help an org reach its desired outcome. For us this might not be exponential growth in the general public (if we’re concerned about keeping strong community values) but exponential growth in certain social groups, e.g. donors or talented individuals in specific fields. The consensus on messaging says that emotional narratives work far better than facts, and I think that could be an area where EA messaging hasn't been optimal, as my impression is that we're far more likely to speak about statistics than emotional stories of those we're helping.

One piece of work could be focus groups with the various audiences, high-net-worth donors as an example, to figure out what message resonates the most; then we try to align wider EA orgs that do fundraising around this message. The same could go for recruiting people involved in technical AI safety etc. I get the impression it could be quite high-leverage: having been involved in crowdfunding, the strength of your messaging can make a huge (10x) difference to your results. This is a field where you can be quite rigorous with building narratives based on evidence, so it seems like a no-brainer for EA-aligned folks. Would love to hear if any of this work is already being done, as I definitely see it as a need in the EA-meta ecosystem. I could see it fitting in with CEA potentially or, like you said, an external consultancy or non-profit.

Reading this thread, I sort of get the impression that the crux here is between people who want EA to be more institutional (for which purpose the current name is kind of a problem) and people who want it to be more grassroots (for which purpose the current name works pretty okay).

There are other issues with the current name, like the thing where it opens us up to accusations of hypocrisy every time we fail to outperform anyone on anything, but I'm not sure that that's really what's driving the disagreement here. Partly, this is because people have tried to come up with better names over the years (though not always with a view towards driving serious adoption of them; often just as an intellectual exercise), and I don't think any of the candidates have produced widespread reactions of "oh yeah I wish we'd thought of that in 2012", even among people who see problems with the current name. So coming up with a name that's better than "effective altruism", by the lights of what the community currently is, seems like a pretty hard problem. (Obviously this is skewed somewhat by the inertia behind the current name, but I don't think that fully explains what's going on here.) When people ... (read more)

This is a discussion that has happened a few times. I do think that 'global priorities' has already grown as a brand enough to be seriously considered for wider use, and perhaps even as the main term for the movement.

I'd still be reluctant to ditch 'effective altruism' entirely. There is an important part of the original message of the movement (cf pond analogy) that's about asking people to step up and give more (whether money or time) - questioning personal priorities/altruism. I think we've probably developed a healthier sense of how to balance that ('altruism/life balance') but it feels like 'global priorities' wouldn't cover it.

This is an excellent point. I "joined" EA because of the pond idea. I found the idea of helping a lot of people with the limited funds I could spare really appealing, and it made me feel like I could make a real difference. I didn't get into EA because of its focus on global prioritization research.

Of course, what I happened to join EA because of is not super important, but I wonder how others feel. Like EA as a "donate more to AMF and other effective charities" is a really different message than EA as "research and philosophize about what issues are really important/neglected."

I'm not sure which EA is anymore, and changing the name to "global priorities" might change the movement from the Doing Good Better movement to the "Case for Strong Longtermism" movement, and those are very different. But I'm very uncertain about which one EA will/should end up as.

Jonas V
I want to push back against the idea that a name change would implicitly change the movement in a more longtermist direction (not sure you meant to suggest that, but I read that between the lines). I think a name change could quite plausibly also be very good for the global health and development and animal welfare causes. It could shift the focus from personal life choices to institutional change, which I think people aren't thinking about enough. The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

The EA community would probably greatly increase its impact if it focused a bit less on personal donations and a bit more on spending ODA budgets more wisely, improving developing-world health policy, funding growth diagnostics research, vastly increasing government funding for clean meat research, etc.

I think I disagree with this given what the community currently looks like. (This might not be the best place to get into this argument, since it's pretty far from the original points you were trying to make, but here we go.)

Two points of disagreement:

i) The EA Survey shows that current donation rates by EAs are extremely low. From this I conclude that there is way too little focus on personal donations within the EA community. That said, if we get some of the many EAs which are donating very little to work on the suggestions you mention, that is plausibly a net improvement, as the donation rates are so low anyway.

Relatedly, personal donations are one of the few things that everyone can do. In the post, you write that "The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc.", but as I understand the terms you... (read more)

Edit: I think my below comment kind of misses the point – my main response is simply: Some people could probably do a huge amount of good by, e.g., helping increase meat alternatives R&D budgets, this seems a much bigger opportunity than increasing donations and similarly tractable, so we should focus more on that (while continuing to also increase donations).


Some quick thoughts:

  • I personally think the EA community could plausibly grow 1000-fold compared to its current size, i.e. to 2 million people, which would correspond to ~0.1% of the Western population. I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community. We should decide whether we want to grow more than 1000-fold once we've grown 100-fold and have more information.
  • Low donation rates indeed feel concerning. To me, the l
... (read more)

I think EA is unlikely to be able to attract >1% of the (Western and non-Western) population primarily because understanding EA ideas (and being into them) typically requires a scientific and prosocial/altruistic mindset, advanced education, and the right age (no younger than ~16, not old enough to be too busy with lots of other life goals). Trying to attract >1% of the population would in my view likely lead to a harmful dilution of the EA community.

Thanks for stating your view on this as I would guess this will be a crux for some.

FWIW, I'm not sure if I agree with this. I certainly agree that there is a real risk from 'dilution' and other risks from both too rapid growth and a too large total community size.

However, I'm most concerned about these risks if I imagine a community that's kind of "one big blob" without much structure. But that's not the only strategy on the table. There could also be a strategy where the total community is quite large but there is structure and diversity within the community regarding what exactly 'being an EA' means for people, who interacts with whom, who commands how many resources, etc.

I feel like many other professional, academic, or politi... (read more)

Jonas V
Yeah, these are great points. I agree that with enough structure, larger-scale growth seems possible. Basically, I agree with everything you said. I'd perhaps add that in such a world, "EA" would have a quite different meaning from how we use the term now. I also don't quite buy the point about Ramanujan – I think "spreading the ideas widely" is different from "making the community huge". (Small meta nitpick: I find it confusing to call a community of 2 million people "small" – really wish we were using "very large" for 2 million and "insanely huge" for 1% of the population, or similar. Like, if someone said "Jonas wants to keep EA small", I would feel like they were misrepresenting my opinion.)
Yeah, I think that's an important insight I also agree with. In an ideal world, the best thing to do would be to expose everyone to some kind of "screening device" (e.g. a pitch or piece of content with a call to action at the end) which draws them into the EA community if and only if they'd make a net valuable contribution.

In the actual world there is no such screening device, but I suspect we could still do more to expand the reach of "exposure to the initial ideas / basic framework of EA" while relying on self-selection and existing gatekeeping mechanisms for reducing the risk of dilution etc. My main concern with such a strategy would actually not be that it risks dilution but that it would be more valuable once we have more of a "task Y", i.e. something a lot of people can do. (Or some other change that would allow us to better utilize more talent.)
I meant this slightly differently than you interpreted it, I think. My best guess is that less than 10% of the Western population are capable of entering potentially high impact career paths, and we already have plenty of people in the EA community for whom this is not possible. This can be for a variety of reasons: they are not hard-working enough, not smart enough, do not have sufficient educational credentials, are chronically ill, etc. But maybe you think that most people in the current EA community are very well qualified to enter high impact career paths and our crux is there?

While I agree that government jobs are easier to get into than other career paths lauded as high impact in the EA community (at least this seems to be true for the UK civil service), my impression is that I am a lot more skeptical than other EAs that government careers are a credible high impact career path. I say this as someone who has a government job. I have written a bit about this here, but my thinking on the matter is currently very much a work in progress and the linked post does not include most reasons why I feel skeptical. To me it seems like a solid argument in favour has just not been made.

I completely agree with this (and I think I have mentioned this to you before)! I'm afraid I only have wild guesses why donation rates are low. More generally, I'd be excited about more qualitative research into understanding what EA community members think their bottlenecks to achieving more impact are.
Jonas V
Thanks for clarifying – I basically agree with all of this. I particularly agree that the "government job" idea needs a lot more careful thinking and may not turn out to be as great as one might think.

I think our main disagreement might be that I think that donating large amounts effectively requires an understanding of EA ideas and altruistic dedication that only a small number of people are ever likely to develop, so I don't see the "impact through donations" route as an unusually strong argument for doing EA messaging in a particular direction or having a very large movement. And I consider the fact that some people can have very impactful careers a pretty strong argument for emphasizing the careers angle a bit more than the donation angle (though we should keep communicating both). (Disclaimer: Written very quickly.)

I also edited my original comment (added a paragraph at the top) to make this clearer; I think my previous comment kind of missed the point.
While we're empirically investigating things, it seems like investigating what proportion of the population could potentially be aligned with EA might also be a high-priority thing to do.
Though I was surprised when I read the results of the first EA survey because I was expecting the majority of non-student EAs would donate 10% of their pretax income, I don't think that saying that EA donations are extremely low is quite fair. The mean donation of EAs in the 2019 survey was 7.5% of pretax income; the mean donation of Americans is about 3.6%. However, given that a significant number of EAs outside of the US give less, that many EAs are students, and that I think the EA mean is by person rather than weighted by donation (as the US average number is), I would guess EAs donate about 3-5 times as much as the same demographic that is not an EA. I do think that we could do better, and a lot of good could come from more donations.
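The comparison above can be sketched as a back-of-envelope calculation. The 7.5% and 3.6% figures come from the comment itself; the demographic adjustments (non-US giving, students, per-person vs. dollar-weighted means) are only gestured at qualitatively, so no numbers are invented for them here:

```python
# Illustrative check of the donation comparison above (not survey analysis).
# Percentages are those quoted in the comment; everything else is a caveat.

ea_mean_donation_pct = 7.5   # mean % of pretax income donated (2019 EA Survey, per the comment)
us_mean_donation_pct = 3.6   # mean % of pretax income donated (US average, per the comment)

naive_ratio = ea_mean_donation_pct / us_mean_donation_pct
print(f"Naive ratio: {naive_ratio:.2f}x")  # roughly 2.08x

# The comment argues the demographically adjusted ratio is higher (~3-5x),
# because non-US EAs face lower baseline giving norms, many EAs are students,
# and the US figure is dollar-weighted while the EA figure is per-person.
```

This makes the structure of the claim explicit: the raw means give about a 2x gap, and the 3-5x estimate rests entirely on the unquantified adjustments.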
I'm all for focusing on the power of policy, but I'm not sure giving up any of our positions on personal donations will help get us there.
I think I more or less agree with you. However, I think my point wasn't about longtermism, but rather just the difference between the project that DGB was engaging in and the later work by MacAskill on cause prioritization. Like, one was saying, "Hey! evidence can be really helpful in doing good, and we should care about how effective the charities are that we donate to," and the other work was a really cerebral, unintuitive piece about what we should care about, and contribute to, because of expected value reasons. And just that these are two very different projects, and it's not obvious to me which one EA is at the moment. To use a cliche, EA has an identity crisis, maybe, and the classic EA pitch of Peter Singer and DGB and AMF is a very distinct pitch from the global prioritization one. And whichever EA decides on, it should acknowledge that these are different, regardless of which one is more or less impactful. 

A small and simple change that CEA could do is to un-bold the 'Effective' in their 'Effective Altruism' logo which is used on https://www.effectivealtruism.org/ and EAG t-shirts

I find the bold comes across as unnecessarily smug emphasis in Effective Altruism.

"Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.

It's not just that it has developed in that direction, it has developed in many directions. Could the solution then be to use different brands in different contexts? "Global priorities community" might work better than "Effective Altruism community" when doing research and policy advocacy, but as an organizer of a university group, I feel like "Effective Altruism" is quite good when trying to help (particularly smart and ambitious) individuals do good effectively. For example, I don't think a "Global priorities fellowship" sounds like something that is supposed to be directly useful for making more altruistic life choices.

Outreach efforts focused on donations and aimed at a wider audience could use yet another brand. In practice it seems like Giving What We Can and One for the World already play this role.

Jonas V
I think it might actually be pretty good if EA groups called themselves Global Priorities groups, as this shifts the implicit focus from questions like "how do we best collect donations for charity?" to questions like "how can I contribute to [whichever cause you care about] in a systematic way over the course of a lifetime?", and I think the latter question is >10x more impactful to think about. (I generally agree if there are different brands for different groups, and I think it's great that e.g. Giving What We Can has such an altruism-oriented name. I'm unconvinced that we should have multiple labels for the community itself.)
Barry Grimes
I agree that the community should only have one label, but the community has multiple goals and is seeking to influence very different target audiences. In each case, we need to use language that appeals to the target audience. Perhaps the effective altruism brand should be more like the Unilever brand, with marketing segmented into multiple ‘product brands’. This could include existing brands like 80,000 Hours and Giving What We Can, whilst the academic project becomes “global priorities research” rather than “effective altruism”. The right name for groups will depend on the target audience and what the message testing reveals. I expect something like “High Impact Careers” may be more attractive to a wider audience than “effective altruism”.
I think it's true that introductions to EA and initial perceptions of EA often focus on increasing regular individual people's donations to charity (as well as better allocating such donations) to an extent that's disproportionate both to the significance of those topics and to how much of a focus those topics actually are in EA.

But I'm not confident that the label "effective altruism" makes that issue worse than the label "global priorities" would. We already aren't using "charity" in the name, and my guess is that "altruism" isn't very strongly associated with "individual charity donations" in most people's minds (I'd guess the term "altruism" is similarly or more strongly associated with "heroic sacrifices"). I'd guess that this problem is more just a result of earlier EA messaging, plus local groups often choosing to lead with a focus on individual donations. (Of course, survey research could provide better answers on this question than our guesses would.)

Thanks for writing this, Jonas.

For what it's worth:

  1. I share the concerns you mentioned.
  2. I personally find the name "effective altruism" somewhat cringe and off-putting. I've become used to it over the years but I still hear it and feel embarrassed every now and then.
  3. I find the label "effective altruist" several notches worse: that elicits a slight cringe reaction most of the time I encounter it.
  4. The names "Global priorities" and "Progress studies" don't trigger a cringe reaction for me.
  5. I have a couple of EA-inclined acquaintances who have told me they were put off by the name "effective altruism".
  6. While I don't like the name, the thought that it might be driving large and net positive selection effects does not seem crazy to me.
  7. I would be glad if someone gave this topic further thought, plausibly to the extent of conducting surveys and speaking to relevant experts.

While I think this post was useful to have shared and this is a topic that is worth discussing, I want to throw out a potential challenge that seems at least worth considering: perhaps the name "effective altruism" is not the true underlying issue here? 

My (subjective, anecdotal) experience is that topics like this crop up every so often. Topics "like this" refer to things like:

  • concerns about the name of the movement/community/set of ideas,
  • concerns about respected people adjacent to the movement not wanting to associate with "effective altruism" in some way, and
  • discussions of potential other movements (for example, having a separate long-term-focused movement) and names (see comments about "Global Priorities" instead)

I wonder if some of what is underpinning these discussions is less the accuracy or branding issues of particular names and more the difficulty of coordinating a growing community?

As the number of people interested in the ideas associated with effective altruism grows, more people enter the space with different values and interpretations of the various ideas. It becomes harder for everyone to get what they wanted from the community and less likely th... (read more)

Thanks, I think antipathy effects towards the name “Effective Altruism”, or worse, “I’m an effective altruist”, are difficult to overstate.

Also, somewhat related to what you write, I happened to think to myself just today: “I am (and most of us are) just as much an effective egoist as an effective altruist”. After all, even the holiest of us probably cannot always help putting a significantly higher weight on our own welfare than on that of average strangers.

Nevertheless, some potential upsides of the current term (equally, I'm not sure they matter much at all, but I attribute a small chance to them being really important): If some people are kept away by the name's somewhat geeky, partly unfashionable connotation, maybe these are exactly the people who would anyway have been mostly distractors. I think the somewhat narrow EA community has an extraordinary vibe along a few really important dimensions, and that seems invaluable (in that sense, while RyanCarey mentions we may not attract the core audience with different names, I find the problem might be more the other way round: we might simply dilute the core).

Maybe I’m completely overestimating this, and maybe it’s not outweighing at all the downside of attracting/appealing to fewer. But in a world where the lack of fruitful communication threatens entire social systems, maybe having a particularly strong core in that regard is highly valuable.

Agree that selection effects can be desirable and that dilution effects may matter if we choose a name that is too likable. But if we hold likability fixed, and switch to a name that is more appropriate (i.e. more descriptive), then it should select people more apt for the movement, leading to a stronger core.
Aditya Vaze
Strongly agree. The potential benefits of selection effects are underrated in these discussions.

at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists"

Small note that this could also be counter-evidence: these are folks that are doing a good job of 'keeping their identity small' yet are also interested in gathering under the 'effective altruism' banner. (Edit: never mind, seems like they identified with other -isms.)

Somehow the EA brand is threading the needle of being a banner and also not mind-killing people ... I think.

Would EA be much worse if we removed the 'banner' aspect of it? I don't know... it feels like we're running an experiment of whether it's possible to nurture and grow global prioritist qualities in the world (in people who might not have otherwise done much global prioritism, without a banner/community to help them get started). It's not clear if we're done with that experiment - if anything, initial results look promising from where I'm sitting. So my initial thought is that I don't quite want to remove the banner variable yet (but then again maybe Global Priorities could keep that variable)

Jonas V
I specifically wrote: For further clarification, see also the comment I just left here.
Ah whoops, thanks for the clarification. I'm glad that delineation was made during the session!  Hmm so maybe some weaker point:  perhaps banners like 'atheism' and 'feminism' have the property 'blend me with your identity or consequences', whereas EA doesn't as much, and maybe that's better. ¯\_(ツ)_/¯  Anyway, thanks for the post Jonas, I agree with many points and have had similar experiences.

A friend (edit: Ruairi Donnelly) raised the following point, which rings true to me:

If you mention EA in a conversation with people who don't know about it yet, it often derails the conversation in unfruitful ways, such as discussing the person's favorite pet theory/project for changing the world, or discussing whether it's possible to be truly altruistic. It seems 'effective altruism' causes people to ask the wrong questions.

In contrast, concepts like 'consequentialism', 'utilitarianism', 'global priorities', or 'longtermism' seem to lead to more fruitful conversations, and the complexity feels more baked into the framing.

"Effective Altruism" sounds self-congratulatory and arrogant to some people:

Your comments in this section suggest to me there might be something going on where EA is only appealing within some particular social context. Maybe it's appealing within WEIRD culture, and the further you get from peak WEIRD the more objections there are. Alternatively maybe there's something specific to northern European or even just Anglo culture that makes it work there and not work as well elsewhere, translation issues aside.

I think I'd expect US culture to be most ok with self-congratulation, and basically everywhere else (including UK) to be more allergic to it? But most of the people who voted on the name in the first place were British.

EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.


This makes a lot of sense to me if there's a cap on donations due to branding, especially for the neartermist funds and, if you create a legible LTF fund, for that as well.

How big of a priority is it for the EA Funds plan to grow the donor base to non-EA donors, and on what time scale?

Jonas V
Right now, reaching non-EA donors is not a big priority, and the rebrand is correspondingly pretty far down on my priority list. This may change on a horizon of 1-3 years, though. (Rebranding has some benefits other than reaching non-EA donors, such as reducing reputational risk for the community from making very weird grants.) 

Great post.

Has this debate evolved? Did someone try to give the 10 names?

I like "efficient altruism"; it drops the smugness a bit.

Neoutilitarianism could also make sense. But maybe someone who understands EA better than me can point out the differences between what EA has been and utilitarianism.

Changing now, after 10 years, could be really difficult, but the best time is as soon as possible. It is also difficult because EA is not a single organization or an exact philosophy with one person behind it.

I usually say "I admire/follow the Effective Altruism community" rather than saying I am an Effective Altruist.
