Introduction
- I have some concerns about the "effective altruism" branding of the community.
- I recently posted them as a comment, and some people encouraged me to share them as a full post instead, which I'm now doing.
- I think this conversation is most likely not particularly useful or important to have right now, but there's some small chance it could be pretty valuable.
- This post is based on my personal intuition and anecdotal evidence. I would put more trust in well-run surveys of the right kinds of people or other more reliable sources of evidence.
"Effective Altruism" sounds self-congratulatory and arrogant to some people:
- Calling yourself an "altruist" is basically claiming moral superiority, and anecdotally, my parents and some of my friends didn't like it for that reason. People tend to dislike it if others are very public with their altruism, perhaps because they perceive them as a threat to their own status (see this article, or do-gooder derogation against vegetarians). Other communities and philosophies, e.g., environmentalism, feminism, consequentialism, atheism, neoliberalism, longtermism don't sound as arrogant in this way to me.
- Similarly, calling yourself "effective" also has an arrogant vibe, perhaps especially among professionals in relevant areas. E.g., during the Zurich ballot initiative, officials at the city of Zurich asked me, unprompted, why I consider them "ineffective", indicating that the EA label basically implied to them that they were doing a bad job. I've also heard other professionals in different contexts react similarly. Sometimes I also get sarcastic "aaaah, you're the effective ones, you figured it all out, I see" reactions.
"Effective altruism" sounds like a strong identity:
- Many people want to keep their identity small, but EA sounds like a particularly strong identity: It's usually perceived as a moral commitment, a set of ideas, and a community all at once. By contrast, terms like "longtermism" are somewhat weaker and more about the ideas per se.
- Perhaps partly because of this, at the Leaders Forum 2019, around half of the participants (including key figures in EA) said that they don’t self-identify as "effective altruists", despite self-identifying, e.g., as feminists, utilitarians, or atheists. I don't think the terminology was the primary concern for everyone, but it may have played a role for several individuals.
- In general, it feels weirdly difficult to separate agreement with EA ideas from the EA identity. The way we use the term, being an EA or not is often framed as a binary choice, and it's often unclear whether one identifies as part of the community or agrees with its ideas.
Some further, less important points:
- "Effective altruism" sounds more like a social movement and less like a research/policy project. The community has changed a lot over the past decade, from "a few nerds discussing philosophy on the internet" with a focus on individual action to larger and respected institutions focusing on large-scale policy change, but the name still feels reminiscent of the former.
- A lot of people don't know what "altruism" means.
- "Effective altruism" often sounds pretty awkward when translated to other languages. That said, this issue also affects a lot of the alternatives.
- We actually care about cost-effectiveness or efficiency (i.e., impact per unit of resource input), not just about effectiveness (i.e., whether impact is non-zero). This sometimes leads to confusion among people who first hear about the term.
- Taking action on EA issues doesn't strictly require altruism. While I think it’s important that key decisions in EA are made by people with a strong moral motivation, involvement in EA should be open to a lot of people, even if they don’t strongly self-identify as altruists. Some may be mostly interested in contributing to the intellectual aspects without making large personal sacrifices.
- The name of CEA was determined through a careful process. However, the adoption of the "effective altruism" label for the entire community happened organically and wasn’t really a deliberate decision.
Some thoughts on potential implications:
- The longer-term goal is for the EA community to attract highly skilled students, academics, professionals, policy-makers, etc., and the EA brand might plausibly be unattractive for some of these people. If that's true, the EA brand might act as a cap on EA's long-term growth potential, so we should perhaps aim to de-emphasize it. Or at least do some marketing research on whether this is indeed an issue.
- EA organizations that have "effective altruism" in their name or make it a key part of their messaging might want to consider de-emphasizing the EA brand, and instead emphasize the specific ideas and causes more. I personally feel interested in rebranding "EA Funds" (which I run) to some other name partly for these reasons.
- I personally would feel excited about rebranding "effective altruism" to a less ideological and more ideas-oriented brand (e.g., "global priorities community", or simply "priorities community"), but I realize that others probably wouldn't agree with me on this, that it would be a costly change, and that it may no longer even be feasible at this point. OTOH, given that the community might grow much bigger than it currently is, it's perhaps worth making the change now? I'd love to be proven wrong, of course.
Thanks to Stefan Torges and Tobias Pulver for prompting some of the above thoughts and helping me think about them in more detail.
While I'm not sure we're using terms like "political" and "power" in the same way, this worry makes a lot of sense to me.
However, I think there is an opposite failure mode: mistakenly believing that because of one's noble goals and attitudes one is immune to the vices of power, and can safely ignore the art of how to navigate a world that contains conflicting interests.
A key assumption from my perspective is that political and power dynamics aren't something one can just opt out of. There is a reason why thinkers from Plato through Machiavelli to Carl Schmitt have insisted that politics is a separate domain that merits special attention (and I say this as someone who is not particularly sympathetic to any of these three on the object level). [ETA: Actually I'm not sure Plato says that, and I'm confused about why I included him originally. In a sense he may suggest the opposite view, since he sometimes compares the state to the individual.]
Internally, community members with influence over more financial or social capital have power over those whose projects depend on such capital. There certainly are different views on how this capital is best allocated, and at least for practical purposes I don't think these are purely empirical disagreements; they also involve 'brute differences in interests'.
Externally, EAs have power over beneficiaries when they choose to help some but not others. And a lot of EA projects are relevant to the interests of EA-external actors that form a complex network of partly different and partly aligned interests and different amounts of power over each other. Perhaps most drastically, a lot of EA thought around AI risk is about how to best influence how essentially the whole world will be reshaped (if not an outright plan for how to essentially take over the world).
Therefore, I think we will need to deal with 'politics' anyway, and we will attract people who are motivated by seeking power anyway. Non-EA political structures and practice contain a lot of accumulated wisdom on how to navigate conflicting interests while limiting damage from negative-sum interactions, on how to keep the power of individual actors in check, and on how to shape incentives in such a way that power-seeking individuals make prosocial contributions in their pursuit of power. (E.g. my prior is that any head of government in a democracy is at least partly motivated by pursuing power.)
To be clear, I think there are significant problems with these non-EA practices. (Perhaps most notably negative x-risk externalities from international competition.) And if EA can contribute technological or other innovations that help with reducing these problems, I'm all for it.
Yet overall I feel like I more often see EAs make the mistake of naively thinking they can ignore their externally imposed entanglement in political and power dynamics, and that there is nothing to be learned from established ways to rein in and shape these dynamics (perhaps because they view established practice and institutions largely as a morass of corruption and incompetence one had better steer clear of). E.g., some significant problems I've seen at EA orgs could have been avoided by sticking more closely to standard advice, such as having a functional board that provides accountability to org leadership.
My best guess is that, on the margin, it would be good to attract more people with a common-sense perspective on politics and power-seeking, as opposed to people who lack the ability or willingness to understand how power operates in the world and how best to navigate it. If rebranding to "Global Priorities" would have that effect (which I'm less confident in than you are), then I'd count that as a reason for rebranding (though I doubt it would be among the top 5 most important reasons pro or con).