
I learned a lot about EA from my local group before reading much by myself. So when I finally started actively reading EA material, I was quite confused about EA as a global social movement and about who the people behind all those texts were. Here is a list that would have helped me put EA texts in the correct context; maybe it will be helpful for someone else.

There is a social EA movement

  • EA is not just the way some people happen to think. There is also an active global EA movement that runs the EA Forum, organizes events, and facilitates collaboration between EA individuals and organizations.
  • There are a lot of real professionals in EA, and those people are influencing things in the real world – EA is by no means just a philosophy discussion club, even if your local EA club is one (and it does not have to be one forever!)
  • It is even possible to work in “movement building”, for example helping EAs have more positive impact, communicating about EA to new people and supporting local volunteer groups. There is funding for this kind of work because a lot of people believe that right now, it is even more important to establish a good EA movement than to work directly on EA cause areas.
  • EA-related work might already be influencing the world around you. (I was really surprised to learn that the Finnish Ministry for Foreign Affairs had funded a research project on x-risk in 2017. It made EA seem a lot more like something in the real world and a lot less like just a bunch of well-meaning people on the internet.)

Size of EA

  • Even if EA has relevant real-world influence, the EA movement has far fewer people than I initially thought. There are currently (March 2022) fewer than 21,000 members in the Effective Altruism Facebook group, and the 2020 EA Survey collected 2,166 responses, so the number of engaged EAs is probably somewhere between those figures. (To understand what that number means, I remind myself that my favorite role-playing convention, Ropecon, had almost 5,000 attendees in 2019. You should substitute something you can easily picture.)
  • There are a lot of “famous EA figures” whose names most engaged EAs would recognize, at least on the level of “this is an important person in the movement”. Also, a lot of EAs know each other personally because they have worked together, run EA groups together or even live together. (Understanding these two things suddenly made it a lot easier to read the EA Forum: knowing that users might have more context than just the contents of a certain post or comment helped me make sense of some interactions.)

EA material

  • When you first get into EA, it feels like there is an EA text about everything. That is not true: in fact, the number of texts commenting on issues from an EA perspective is limited. (Personally, I would love for someone to write about balancing moral impartiality against the desire to help locally when one lives in a low-to-middle-income country. Being from Finland, I find it relatively easy to let our Nordic welfare state take care of local issues, but this is not the case everywhere.)
  • In particular, there is no secret EA database of effectiveness estimates for every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, a natural reaction is to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows”, and it is easy to sound dismissive by adding “and we are not going to find out anytime soon, since it is obvious that the thing you wanted to know about is not going to be the most effective thing anyway”. (Personally, I try to admit that I don’t know the answer and ask whether the person would like to discuss possible effective ways of dealing with the problem they mentioned. In the best case, I learn something more about the problem and the person learns something about estimating the effectiveness of interventions.)
  • If you ask EAs about a certain topic, they might direct you to a forum or blog post about it. There are big differences in how much research the writer has put into a given post. Some posts aim for the rigor of scientific papers and are written by professionals in the field, and some are just a collection of thoughts by a random person (like this post).

EA cause areas

  • EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about which causes are particularly effective right now. (I guess some people would say there is not that much consensus, but I was surprised by how confident many EAs are about having already found a lot of very effective things to work on.)

EA “brand” and EA language

  • EA is something like a brand, and you should be somewhat careful about how you communicate about EA so as not to harm the brand. (Thinking like this helped me understand why it was not a universally accepted truth that everybody should tell everybody about EA by all means and immediately. I think a lot of people new to EA go through this “tell everyone” phase, because EA can be super exciting.)
  • Not everything that is reasonably altruistically motivated counts as “EA” to EAs: EAs want to keep the focus on the most effective ways to do good. (But it can be hard to know where to draw the line.)
  • “EA job” often means “a job in an EA organization”, not just any potentially impactful job (but EAs can choose to have many kinds of jobs for EA reasons),
  • and “EA organization” often means “an organization that was founded because of EA”, not just any effectively altruistic organization (but EAs can choose to work at or collaborate with many kinds of organizations for EA reasons).
  • There are ways to say “EA” without using the words “effective altruism”: these include “aiming for positive impact”, “being ambitious about doing good” and “ensuring a good future for humanity”. (It made a difference for me when I started to be able to recognize that someone has an interest in EA even when they are not stating it explicitly.)
  • A lot of EA-related material is written by EAs, even if this is not explicitly stated (for example, Vox’s Future Perfect is not called Vox Effective Altruism). I had somehow assumed that most materials in the EA intro program were EA-related things “found in the wild”, not produced by people who were already part of the EA movement.
  • It can be a bit hard to understand some EA texts because of EA/rationalist slang. Don’t worry, you’ll get used to it (and this might help).

EA and other movements

  • There is something called the rationalist movement, and a lot of EAs are part of it. However, there are also a lot of EAs who are not part of that movement even if they like thinking rationally about their altruistic choices, and of course, not all rationalists have altruistic motivations.
  • There is also something called longtermism, which is not a separate movement but an idea developed within EA; this is why people sometimes talk, for example, about longtermist movement building. A lot of EAs are also longtermists, but not all (and the most common antonym for longtermism is neartermism, not shorttermism).
  • And there is an ethical theory called utilitarianism that a lot of people in EA like, but not everyone in EA is a utilitarian.

Also, there are still a lot of things I don’t know about EA as a social movement – for example, I’m quite out of the loop about which leading EA figures founded which organizations, or what the relationship is between EA and animal advocacy movements. (I know they are related, but I don’t know the details.)

Is there anything that surprised you about EA or the EA movement?

Comments

Really enjoyed this post, and will likely be sharing this with newer EAs in the future! A lot of this is what I'd call "landscape knowledge" which only becomes clear once you start interacting with the community itself. Thanks for writing it!

When you first get into EA, it feels like there is an EA text about everything.

EA also stands for Endless Articles!

Really? I thought it stood for Easy Answers

I thought it was Endless Arguments

Thanks for writing this post! This is a great resource, especially for newcomers 😀 

Great post, thank you! This is useful as a guide to what to try to add to intro fellowships, in particular:

There are a lot of real professionals in EA, and those people are influencing things in the real world – EA is by no means just a philosophy discussion club, even if your local EA club is one (and it does not have to be one forever!)

I think this is a really important realisation to have as someone doing an intro fellowship/getting into EA. My guess is that realising this makes it a lot easier to think seriously about making career choices based on ideas/methods from EA. 

So, how can we help new people realise this sooner?

A quick brainstorm:

  • Include some readings/podcasts in intro fellowships where people talk in the first person about their EA-aligned work
  • Encourage new members to attend EAG(x)
  • Have talks/Q&As with people currently doing EA-aligned work
  • Include a few bios of individuals and their stories of getting into this kind of work
  • Chat to new members about what previous members of your group have gone on to do (if your group is mature enough)

I think it'd be ideal if people understood, from when they first learn about it, that EA is not just a philosophy discussion group but a thing they could shape their career around.

With EA career stories, I think it is important to keep in mind that new members might not read them the same way as more engaged EAs who already know which organizations are considered cool and effective within EA. When I started attending local EA meetups I met a person who worked at OpenPhil (maybe as a contractor? I can't remember the details), but I did not find it particularly impressive because I did not know what Open Philanthropy was and assumed the "phil" stood for "philosophy".

I was going to suggest the last point, but you're way ahead of me! In the next couple of years, the first batch of St Andrews EAs will have fully entered the world of work/advanced study, and keeping some record of what the alumni are doing would be meaningful. 
[As highlighted in the thread post, we are two EAs who know each other outside the forum.]

Re: "In particular, there is no secret EA database of estimates of effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows”"

Yeees, this is such a common first reaction among people being introduced to Effective Altruism for the first time. I always really want to give at least the beginning of an answer, but I feel self-conscious that I can't give an honest best guess from what I know without sort of betraying the movement's usual standards of rigor and misrepresenting its usual scope.

Well, no one has the "real" answers to any of these questions, not even the most EA of all EAs. The important thing is to be asking good questions in the first place. I think it's both most truthful and most interpersonally effective to say something like "gee, I've never thought about that before. But here's a question I would ask to get started. What do you think?"

As EA grew from humble, small, and highly specific beginnings (like, but not limited to, high-impact philanthropy), it became increasingly big tent.

In becoming big tent, it has become tolerant of ideas or notions that would previously have been heavily censured or criticized in EA meetings.

Namely, this is in large part because early EA was more data-driven, with less of a focus on hypotheticals, speculation, and non-quantifiable metrics. That's not to say current EA isn't data-driven; data is just relatively less stressed compared to 5-10 years ago.

In practice, this means today's EA is more willing to consider altruistic impact that can't be easily or accurately measured or quantified, especially with (some) longtermist interests. I find this to be a rather damning weakness, although one could make the case that it is also a strength.

This also extends to outreach.

For example, I wouldn’t be surprised if an EA gave a dollar to or volunteered for a seeing-eye-dog organization [or any other ineffective organization] under the justification that this is “community building” and that, like the Borg, we will someday be able to assimilate them and make them more effective, or recruit new people into EA.

To me and other old-guard EAs, it’s wishful thinking, because it makes EA less true to its epistemic roots, especially over time as non-EA ideas enter the fold and influence the group. One example is how DEI initiatives are wholeheartedly welcomed by EA organizations, whereas in fact there is little evidence that the DEI/progressive way of hiring personnel and staff results in better performance outcomes than normal hiring that doesn’t factor in or give an advantage to a candidate based on their ethnicity, gender, race, or sexual orientation.

But this extends even more to cause prioritization. In the past, I felt it was very difficult to have your championed or preferred cause considered even remotely effective. In fact, the null hypothesis was that your cause wasn’t effective... and that most causes weren’t.

Now it’s more like any and all causes are assumed effective or potentially effective off the get-go and then are supported by some marginal amount of evidence. A less elitist and stringent approach, but inevitable once you become big tent. Some people feel this has made EA a friendlier place. Let’s just say that today you’d be less likely to be kicked out of an EA meeting for being naively optimistic without a care for figures and numbers, and more likely to be kicked out for being overtly critical (or even mean), even though that critical strictness was the attitude of early EA meetings that turned a lot of people off from EA (including me, when I first heard about it; I later came around to appreciate and welcome that sort of attitude and its relative rarity in the world. Strict and robust epistemics are underappreciated).

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past, the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?

Unlike Disney, with its iron grip on its brands and properties, it’s much easier nowadays to call oneself an EA or identify as part of the EA sphere… because, well, anything and everything can be EA. The EA brand, whilst once tightly controlled and small, has now grown, and it can be difficult to tell the fake Gucci bags from the real deal when both are sold at the same market.

My greatest fear is that EA will over time become just A, without the E, and lose its initial ruthlessly data- and results-driven form of moral concern.

I don't think core EA is more "big tent" now than it used to be. Relatively more intellectual effort is devoted to longtermism now than to global health and development, which represents a shift in focus more than a widening of focus.

What you might be seeing is an influx of money across the board, which results at least partially in lowering the funding bar for more speculative interventions.

Also, many people now believe that the ROI of movement building is incredibly high, which I think was less true even a few years ago. So net-positive but not very exciting movement-building interventions -- both things that look more like traditional "community building" and things that look like "support specific promising young EAs" -- are much more likely to be funded than before. In the "support specific promising young EAs" case, this might be true even if those EAs say dumb things or are currently pursuing lower-impact cause areas, as long as the community-building case for it is sufficiently strong (above some funding multiplier, and reasonably likely to be net positive).

[This comment is no longer endorsed by its author]

I think I no longer endorse this comment. Not sure but it does seem like there's a much broader set of things that people research, fund, and work on (e.g. I don't think there was that much active work on biosecurity 5 years ago).

I actually don't relate to much of what you're saying here. 

For example: if you think studying jellyfish is the most effective way to spend your life and career, draft an argument or forum post explaining the potentially beneficial future consequences of your research and bam, you are now an EA. In the past, the response would have been: why study jellyfish when you could use your talents to accomplish X or Y, something greater, and follow a proven career path that is less risky and more profitable (intellectually and fiscally) than jellyfish study?

I know jellyfish is a fictional example.  Can you give a real example of this happening? I'm not sure what you mean by "bam, you are now an EA". What is the metric for this?

I wrote a post about two years ago arguing that the promotion of philosophy education in schools could be a credible longtermist intervention. The reception was fairly lukewarm, and it is clear that my suggestion has not been adopted as a longtermist priority by the community. One or two positive comments and OK-ish karma don't mean much: no one has acted on it. It seems to me that it's a similar story for most new cause suggestions.

Now it’s more like any and all causes are assumed effective or potentially effective off the get-go and then are supported by some marginal amount of evidence.

This doesn't seem true to me, but I'm not an "old guard EA".  I'd be curious to know what examples of this you have in mind.

Strongly upvoted, but this should be its own top-level post.

For me, the big revelation was that EA was not just about causes that are supported by RCTs/empirical evidence. It has this whole element of hits-based giving. In fact, the first time I realized this, I ended up creating a question on the forum about the misleading definition.

EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about which causes are particularly effective right now.

Actually, notice that the consensus might be based more on internal culture, because founder effects are still quite strong. That being said, I think the community puts effort into remaining cause neutral, and that's good.

I don't think a consensus on what cause is most effective is incompatible with cause-neutrality as it's usually conceived (which I called cause-impartiality here).

Makes sense, and I agree

This is great. I remember a similar post from maybe a year or two ago, but I am unable to find it. Something along the lines of "things that it took me way too long to figure out about EA". Anyone else remember this?

Regarding the first four points about EA as a movement: when I first read about EA on Wikipedia, I basically thought "oh, cool!" and then went on with my life, because I couldn't imagine there was an actual movement rather than just a philosophical school. Only a few months later, a local group showed up and I got to know about it.

Regarding the confusing jargon - I disagree, you won't get used to it, and people should stop writing like that.

I was really surprised by the focus on AI safety when I first looked at 80,000 Hours. Then, over time, I became convinced it actually was important.
