
Lots of people have an angsty, complicated, or fraught relationship with the EA community. When I was thinking through some of my own complicated feelings, I realised that there are lots of elements of EA that I strongly believe in, identify with, and am part of… but lots of others that I’m sceptical about, alienated from, or excluded from. 

This generates a feeling of internal conflict, where EA-identification doesn’t always feel right or fitting, but at the same time, something meaningful would clearly be lost if I “left” EA, or completely disavowed the community. I thought my reflections might be helpful to others who have similarly ambivalent feelings.

When we’re in a community but feel like we're fitting in awkwardly, we can do one of three things:

(1) ignore it (‘you can still be EA even if you don’t donate/aren’t utilitarian/don’t prioritise longtermism/etc’)
(2) try to fix it (change the community to fit us better, 'Doing EA better')
(3) leave ('It’s ok to leave EA', 'Don’t be bycatch'). 

I want to suggest a fourth option: like the parts you like, dislike the parts you don’t, and be aware of it and own it. Not ‘keep your identity small’ or ‘hold your identity lightly’ — though those metaphors can be useful too — but make your identity bespoke, a tailor-made, unique garment designed to fit you, and only you, perfectly.

By way of epistemic status/caveat, know that I came up with this idea literally this morning, so I’m not yet taking it too seriously. It might help to read this as advice to myself.
 

Elements of EA

So, what are some of the threads, colours, cuts, and styles that might go into making your perfect EA-identity coat? I suggest:


Philosophy and theory

‘Doing the most good possible’ is almost tautologically simple as a principle, but obviously, EAs approach this goal using a host of specific philosophical and theoretical ideas and approaches. Some are held by most EAs, others are disputed. Things like heavy-tailed-ness, expected value, longtermism, randomised controlled trials, utilitarianism, population ethics, rationality, Bayes’ theorem, and hits-based giving fall into this category (to name just a few). You might agree with some of these but not others; or, you might disagree with most EA philosophy but still have some EA identification because of the other elements.


Moral obligation

Many EAs hold themselves to moral obligations: for example, to donate a proportion of their income, or to plan their career with positive impact in mind. You can clearly feel these moral obligations without subscribing to the rest of EA: lots of people tithe, and lots of people devote their lives to a cause. Maybe, then, these principles aren't unique enough to ‘count’ as central EA elements. But if you add in a commitment to impartiality and effectiveness, I think this does give these moral obligations a distinct flavour; and, importantly, you can aspire to work toward the impartial good, effectively, without agreeing with (most) underlying EA theory, or agreeing with EA cause prioritization.


The four central cause areas

EAs prioritise lots of causes, but four central areas are often used for the purposes of analysis: global health and development, x-risk prevention, animal welfare, and meta-EA. Obviously, you don’t need to subscribe to EA theory or EA’s ideas about moral obligation to work on nuclear risk prevention, corporate animal welfare campaigns, or curing malaria. Similarly, you might consider yourself EA, but think that the most pressing cause does not fall into any of these categories, or (more commonly) is de-prioritized within the category (for example, mental health, or wild animal welfare, which are 'niche-r' interests within the wider causes of global health and animal welfare respectively). Or, you might think that one major cause area is clearly the highest priority, and feel alienated that many EAs prioritize the others. 

 

The professional community

This comprises people who plan their careers according to EA principles, whether working directly or earning to give. You can be part of the EA professional community without subscribing much to the philosophical side — for example, you might work with EA colleagues at an EA-influenced animal charity just because you care about animals and you think they are doing good work, even if you don’t subscribe to utilitarianism or EA ideas about donating.


The social community

EA is a social community as well as a professional community. You can be part of the social community without being part of the professional community — for example, if you go to local group events and are close friends with EAs, but you’re not willing or able to get a highly impactful job. What’s more, EA attracts a certain type of person — kind, nerdy, open-minded, inclined to take ideas seriously. If you have those traits, you might really enjoy the vibe of EA social spaces even if you disagree with pretty much everything about the philosophy.

All these elements are clearly related. There’s an idealised picture of becoming an EA in which all five of these elements fit seamlessly together, mutually reinforcing one another. You hear about EA philosophy, and through it you develop a sense of moral obligation to have a positive impact; or maybe you start with a sense of moral obligation and that leads you to discover the philosophy. You join a local or university group, which plugs you into the social community. You read more EA content, talk to your new EA friends, and this helps you decide which cause to prioritize. (This is likely among the central four cause areas, though some will go for something more niche.) You then plan your career with that cause in mind, joining the professional community.

I think a bunch of EAs had a journey like this, maybe with a few more twists and turns. But I hypothesise that for others, one or more of these elements are present, while one or more others are missing. This creates an angsty dynamic where they are at once drawn to the community and alienated or repelled by it. I think this might be behind a lot of internal EA criticism — that is, criticism from EAs, EA-adjacents, or ‘post’-EAs (people who used to identify as EA but no longer do).

This 'ambivalent identification' dynamic might also be why so many people self-label as ‘EA-adjacent’ even when they are pretty engaged in EA, by most metrics. 

Warm vs cool EA

Another framework is to divide EA into ‘warm’ and ‘cool’ elements, like so:

| warm | cool |
| --- | --- |
| altruism | effectiveness |
| fuzzies[1] | utilons |
| got here through Giving What We Can | got here through LessWrong |
| London/Berlin | Bay Area |
| global poverty, animal welfare | AI safety, longtermism, meta |
| more feminine-coded/more women? | more masculine-coded/more men? |
| more common sense ideas | weirder ideas |

 

I suspect the items within each column are correlated with each other, and that ‘warm’ EAs are most likely to be alienated by the ‘coolest’ poles of the movement, and vice versa. But obviously many people are a mix; for example, I’m mostly in the warm column, but I draw ‘weirder ideas’ from the cool column.
 

So, what to do with these frameworks? If done right, I think these differences between us could be exciting and generative tensions, rather than things we need to split up or go to war over. When forming relationships, I’m looking for people who share decent amounts of common ground, but I’m not looking for carbon copies of myself. The same is true of intellectual comrades. EAs in whom different elements dominate can very easily collaborate in ways that achieve both of their goals; they don’t need to become each other.

[1] I'm not saying 'warm' EAs are purely fuzzies-motivated — if that were true, they’d just be average altruists — but they are more likely to be motivated by illegible emotional considerations, to take fuzzy feelings more seriously, or to think that fuzzies are very important for the good life even if not a good proxy for effectiveness, or something to that effect.

Comments

I resonate a lot with this. I have been thinking about writing an EAF post titled "EA is not special, nor should it be". I have been thinking about this because a couple of months ago I was walking with my best friend, explaining to him for the first time what EA was. After my 5-minute intro, he said: "But does not everyone do that? I mean, they should think about effectiveness and all that! It is the only sensible thing to do."

I think we can bring in a lot more people by reframing EA as being more about "doing good together" rather than the current, somewhat esoteric amalgamation of unusual ideas. To me, what is truly special about EA is that we figure out together how each one of us can have the most impact. My non-EA friends who want to make the world better (of which there are many!) do not collaborate much or seek advice on how they can have the most impact. I think this, at the core, is what sets EA apart. Neglectedness, tractability, and importance are criteria that should not be weird and that most people will subscribe to. Perhaps counterfactual impact is also a bit unique, but I do not think we should lead so heavily with that. In fact, had my friends collaborated more when thinking about how to do good, I am pretty sure they too would have identified counterfactual impact.

EA, to me, is more like a "community career service".

"But does not everyone do that? I mean, they should think about effectiveness and all that! It is the only sensible thing to do."

If only! From the inside (and, it seems, to some people on the outside), EA can seem "obvious", at least in its core fundamentals. But I think most philanthropy is still not done like this, even among people who spend a lot of time and effort on it.

For example, the ideas that we should compare between different cause areas, or that we should be neutral between helping those in our own country vs. those abroad, still seem like minority positions to me.

I get this on a personal level, even if I don’t entirely agree.

Ideally I’d be involved in all five, but I live pretty far from EA hubs (so not much chance to form relationships). My career is related to EA’s mission but as a small contractor it’s hard to choose effective charities (even though I would ideally love to focus on getting animal welfare initiatives funded), and I’m not sure a career change at this point in my life is practical for me.

So do I count as actually EA? In my own mind, probably not, although I definitely want to be—I just don’t think I do enough to count. This is probably just imposter syndrome on my part, though.

The one thing that I think is essential* to EA is the philosophy. Everything else (the social and professional communities, the effective methods, the moral obligation, etc.) flows from universalist utilitarianism, usually of the hedonic / suffering minimization kind (although the specific forms can differ slightly). If that’s not there, and if you’re coming from a totally different philosophical perspective (e.g., virtue ethics, some form of deontology not rooted in observable reality, etc.), to me it’s hard to call this EA. It seems like a house with no foundation. This was the main reason I didn’t join EA back in college, because I wasn’t ready to accept utilitarianism.

I don’t know. Maybe I’m too idealistic?

*in the sense of “if this isn’t there, you’re no longer doing EA”