I learned a lot about EA from my local group before reading much on my own. So when I finally started actively reading EA material, I was quite confused about EA as a global social movement and about who the people behind all those texts were. Here is a list that would have helped me put EA texts in the correct context; maybe it will be helpful for someone else.
There is a social EA movement
- EA is not just the way some people happen to think. There is also an active global EA movement that runs the EA forum, organizes events and facilitates collaboration between EA individuals and organizations.
- There are a lot of professionals working in EA, and those people are influencing things in the real world. EA is by no means just a philosophy discussion club, even if your local EA club is one (and it does not have to stay one forever!).
- It is even possible to work in “movement building”, for example helping EAs have more positive impact, communicating about EA to new people and supporting local volunteer groups. There is funding for this kind of work because a lot of people believe that right now, it is even more important to establish a good EA movement than to work directly on EA cause areas.
- EA related work might already influence the world around you. (I was really surprised to learn that the Finnish Ministry for Foreign Affairs had funded a research project on x-risk in 2017. It made EA seem a lot like something in the real world and less like just a bunch of well-meaning people on the internet.)
Size of EA
- Even if EA has relevant real-world influence, the EA movement has far fewer people than I initially thought. There are currently (March 2022) fewer than 21 000 members in the Effective Altruism FB group, and the 2020 EA Survey collected 2 166 responses, so the number of engaged EAs is probably somewhere between those figures. (To get a sense of what that number means, I remind myself that my favorite role-playing event, Ropecon, had almost 5 000 attendees in 2019. You could compare it to something you can easily picture instead.)
- There are a lot of “famous EA figures” whose names most engaged EAs would recognize at least on the level of “this is an important person in the movement”. Also, a lot of EAs know each other personally, because they have worked together, run EA groups together or even live together. (Understanding these two things made it suddenly a lot easier to read the EA forum: knowing that users might have other information than just the contents of a certain post or comment helped to make sense of some interactions.)
- When you first get into EA, it feels like there is an EA text about everything. That is not true: the amount of writing commenting on issues from an EA perspective is actually limited. (Personally, I would love for someone to write about balancing moral impartiality against wanting to help locally because they live in a middle-to-low-income country. Being from Finland myself, it is relatively easy for me to let our Nordic welfare state take care of local issues, but this is not the case everywhere.)
- In particular, there is no secret EA database of estimates of effectiveness of every possible action (sadly). When you tell people effective altruism is about finding effective, research-based ways of doing good, it is a natural reaction to ask: “so, what are some good ways of reducing pollution in the Baltic Sea / getting more girls into competitive programming / helping people affected by [current crisis that is on the news]” or “so, what does EA think of the effectiveness of [my favorite charity]”. Here, the honest answer is often “nobody in EA knows”, and it is easy to sound dismissive by adding “and we are not going to find out anytime soon, since it is obvious that the thing you wanted to know about is not going to be the most effective thing anyway”. (Personally, I try to admit that I don’t know the answer and ask if the person would want to try discussing possible effective ways of dealing with the problem they mentioned. In the best case, I can learn something more about the problem and the person might learn something about estimating the effectiveness of interventions.)
- If you ask EAs about a certain topic, they might direct you to a forum/blog post about it. There are big differences in how much research the writer of said post has put in writing it. Some posts aim for the scrutiny level of scientific papers and are written by professionals in the field, and some are just a collection of thoughts by a random person (like this post).
EA cause areas
- EA aims to be cause neutral, but there is actually quite a lot of consensus in the EA movement about what causes are particularly effective right now. (I guess some people would say there is not that much of a consensus, but I was surprised how confident many EAs are about having already found a lot of very effective things to work on.)
EA “brand” and EA language
- EA is something like a brand, and you should be somewhat careful about how you communicate about EA in order not to harm the brand. (Thinking like this helped me understand why it was not a universally accepted truth that everybody should tell everybody about EA by all means and immediately. I think a lot of new people in EA go through this “tell everyone” phase, because EA can be super exciting.)
- Not all somewhat reasonably altruistically motivated things are considered “EA” by EAs: EAs want to keep the focus on the most effective ways to do good. (But it can be hard to know where to draw the line.)
- “EA job” often means “a job in an EA organization”, not just any potentially impactful job (but EAs can choose to have many kinds of jobs for EA reasons).
- And “EA organization” often means “an organization that was founded because of EA”, not any effectively altruistic organization (but EAs can choose to work at or collaborate with many kinds of organizations for EA reasons).
- There are some ways to say “EA” when you don’t want to use the words “effective altruism”: these include “aiming for positive impact”, “being ambitious about doing good” and “ensuring a good future for humanity”. (It made a difference for me when I started to be able to recognize that someone has an interest in EA even if they are not explicitly stating it.)
- A lot of EA-related material is written by EAs, even if this is not explicitly stated (for example, Vox’s Future Perfect is not called Vox Effective Altruism). I somehow previously thought that most materials in the EA intro program were EA-related things “found in the wild”, rather than texts produced by people who were already part of the EA movement.
- It can be a bit hard to understand some EA texts because of EA/rationalist slang. Don’t worry, you’ll get used to it (and this might help).
EA and other movements
- There is something called the rationalist movement, and a lot of EAs are part of it. However, there are also a lot of EAs who are not part of the movement even if they like thinking about their altruistic choices rationally, and of course, not all rationalists have altruistic motivations.
- There is also something called longtermism, which is not a separate movement but an idea developed within EA; this is why people sometimes talk, for example, about longtermist movement building. A lot of EAs are also longtermists, but not all (and the most commonly used antonym for longtermism is neartermism, not short-termism).
- And there is an ethical theory called utilitarianism that a lot of people in EA like, but not everyone in EA is a utilitarian.
Also, there are still a lot of things I don’t know about EA as a social movement – for example, I’m quite out of the loop about which leading EA figures founded which organizations, or what the relation is between EA and animal advocacy movements. (I know they are related, but I don’t know the details.)
Is there anything that surprised you about EA or the EA movement?