An issue many EAs have, but which I think few have formalized in writing, is a concern about cause representation within the EA movement.
The basic idea behind representation is that the materials created for, or aimed at being part of, the public-facing view of EA should be in line with what a suitably broad selection of EAs actually think. To take an obvious example, if someone were very pro-Fair Trade and started saying EA was all about Fair Trade, this would not really be representative of what most EAs think (even if this person were convinced that Fair Trade was the best cause within the EA framework). Naturally, in a movement as large as EA, there remains a diversity of viewpoints, but I nonetheless think it's fairly easy for experienced EAs to have a sense of what is a common EA view and what is not (although people can definitely be affected by peer pressure and city selection into forming a bubble). There have been many implicit and explicit conflicts around this issue, and there are several possible ways of dealing with it.
First, some examples of clear problems with representativeness.
- The EA Handbook
- EA Global
- Funding gaps
- EA chapter building
The EA Handbook
The EA Handbook was one of the most public examples of the problem at hand: you can look at the Facebook comments and EA Forum comments to get a sense of what people have said. If I had to summarize the group's concerns, they would have to do with representativeness. As one commenter put it:
“I don't feel like this handbook represents EA as I understand it. By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that the community consensus is that AI risk is clearly the most important thing to focus on.”
I personally think people would not have reacted so strongly to the Handbook if it had not seemed to be part of a bigger trend, one I hope to crystallize in this blog post.
EA Global
EA Global is an area that is fairly public and fairly measurable numerically. If you tally all the talks given in 2018 by cause area, they come out to roughly 3 hours of global poverty talks, 4.5 hours of animal welfare talks, and 11.5 hours of x-risk talks. This does not count the meta talks that could have been about any cause area but were often effectively x-risk-related (e.g. 80,000 Hours’ career advice). It does count the “far future animal welfare” talks as normal animal welfare talks. You can also split the talks into near-future cause areas, with 5.5 hours dedicated to them, and far-future cause areas, with 13.5 hours.
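To make the tally concrete, here is a minimal sketch of the arithmetic in Python. The hour figures are the approximate ones above; the two-hour “far-future animal welfare” figure is back-calculated from the 5.5/13.5 split and is my own assumption, not an official breakdown.

```python
# Approximate EAG 2018 talk hours as described above. The split of animal
# welfare hours into near/far future (2.5h vs 2.0h) is an inferred assumption.
talks = [
    {"cause": "global poverty", "hours": 3.0,  "far_future": False},
    {"cause": "animal welfare", "hours": 2.5,  "far_future": False},
    {"cause": "animal welfare", "hours": 2.0,  "far_future": True},
    {"cause": "x-risk",         "hours": 11.5, "far_future": True},
]

# Total hours per cause area (far-future animal talks count as animal welfare).
by_cause = {}
for t in talks:
    by_cause[t["cause"]] = by_cause.get(t["cause"], 0.0) + t["hours"]

# Alternative split: near-future vs far-future cause areas.
near = sum(t["hours"] for t in talks if not t["far_future"])
far = sum(t["hours"] for t in talks if t["far_future"])

print(by_cause)   # {'global poverty': 3.0, 'animal welfare': 4.5, 'x-risk': 11.5}
print(near, far)  # 5.5 13.5
```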
This pattern has held for the last few EAGs, and it’s getting more noticeable over time. Part of the reason I only go to every second EAG, and why many of the people I would describe as leaders in EA poverty do not go at all, is the lack of representation, and thus the lack of draw for EAs who want to talk about other causes. This is a self-perpetuating problem as well: if fewer such EAs go, the events become intrinsically less and less friendly towards EAs in that cause area. After a couple of years, you could even run a survey and say “well, the average EAG attendee thinks X cause is of the highest impact”, but that would only be true because everyone with different views dropped out over time due to frustration and a feeling of disconnection. This is another issue I have talked about with many involved EAs, and it is part of the reason there is interest in a different EA-related conference.
Funding Gaps
Details on funding gaps can be found here. Generally, however, claiming that “the EA movement is not largely funding constrained” is another example of the general trend of implying that what is representative of particular groups of EAs represents the movement as a whole.
Saying “the far future is funding-filled, and thus, if you care about it, you should be less keen on earning-to-give” is more honest and accurate than claims along the lines of “the whole EA movement is funding-filled”.
EA Chapter Building
The final example is harder to quantify, but it’s also one I have heard about from quite a few different sources. EA chapter building is currently fairly tightly controlled and focused heavily on creating far-future- and AI-focused EAs. Again, if an organization is open about this, that is one thing, but I suspect the average EA (unless they have had direct experience with trying to run a chapter) would guess that groups generally discuss all cause areas and get similar support regardless of focus.
While these are not the only examples, I feel they are, sadly, enough to point to a more overarching trend.
I would also like to include some areas where I feel this has not happened. Some good examples:
- EA Forum
- EA Facebook jobs
- EA Wikipedia
- Doing Good Better
The EA Forum is surprisingly diverse, and the current karma system does not seem to consistently favour one cause area or EA organization over another. As stated in this post, frequent forum users tend to have a diversity of views. This could change in the future, given the upcoming changes, but currently I see this medium as one of the less controlled systems within the EA movement.
The EA Facebook jobs group has helped a lot of people (including many of the staff currently working at EA organisations) find jobs at a wide range of EA-related organizations. If you take a sample of the job ads, they tend to be dispersed across, and thus more representative of, the different cause areas.
The EA Wikipedia page currently presents all three causes, along with concepts that most EAs would broadly agree are core to the movement and representative of those within it.
Doing Good Better, much like the Wikipedia page, does not push an aggressively single-cause focus throughout the book. Instead, it covers classic EA concepts and issues that almost all EAs would agree with.
How do we know what is representative?
Representativeness is defined as being “typical of a class, group, or body of opinion”. So the representativeness of the EA movement would be expressed via what is typical of many EAs within the movement. This would ideally be determined via a random sample that reaches a large percentage of the EA movement, for example through the EA survey, or by gathering the perspectives of everyone who has signed up to the EA Forum. Both of these would reach a very large percentage of the EA movement relative to more informal measures.
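As a minimal sketch of what simple random sampling would look like (the population list here is a hypothetical stand-in for EA Forum signups or EA survey respondents; the names and sample size are illustrative assumptions):

```python
import random

# Placeholder population: stands in for a real list of EA Forum accounts
# or EA survey respondents.
forum_signups = [f"user_{i}" for i in range(10_000)]

# Simple random sample without replacement: every member has an equal
# chance of selection, unlike informal samples drawn from one city or
# one organization.
sample = random.sample(forum_signups, k=500)
print(len(sample), sample[:3])
```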
What is representative of EA leaders?
One response against using a representative sample is that perhaps some EAs are better informed than others. To take a more objective criterion, perhaps the average EA who has been involved in the movement for 5 years or more is more informed than the average EA who has been involved for 5 days. I think there are ways to go about determining this from more aggregate data (for example, duration of involvement or the percentage of income donated might both correlate with being a more involved EA). Perhaps one could even run a survey that makes sure to sample every organization that over 50% of the broader EA community considers an “EA organization”.
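Here is a hedged sketch of what an involvement-weighted aggregate could look like. The respondents, the proxy formula, and its weighting constants are all illustrative assumptions, not a concrete proposal:

```python
# Hypothetical survey responses; "years" and "pct_donated" are the two
# involvement proxies mentioned above.
respondents = [
    {"top_cause": "global poverty", "years": 6, "pct_donated": 0.10},
    {"top_cause": "x-risk",         "years": 1, "pct_donated": 0.02},
    {"top_cause": "animal welfare", "years": 4, "pct_donated": 0.05},
]

def involvement_weight(r):
    # Assumed proxy formula: years of involvement plus donation percentage,
    # scaled so the two terms are roughly comparable.
    return r["years"] + 10 * r["pct_donated"]

# Sum the involvement weight behind each top cause.
totals = {}
for r in respondents:
    totals[r["top_cause"]] = totals.get(r["top_cause"], 0.0) + involvement_weight(r)

total_weight = sum(totals.values())
for cause, w in sorted(totals.items()):
    print(f"{cause}: {w / total_weight:.1%} of involvement-weighted responses")
```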
While this post does not aim to determine the “perfect” way to sample EAs or EA leaders, it does aim to point in the right direction given the numerous issues with sampling EAs. Clearly, a survey of only the EA leaders within my city (or any other specific location) would be critically biased, as would one with a disproportionate focus on a particular organization. Another unrepresentative sample would be one drawn from “EAG leaders”, as those leaders are chosen by a single organization and generally hold that organization’s cause as salient. This issue is worth another post altogether.
Possible solutions
Have a low but consistent bar for representativeness, allowing multiple groups to put forward competing presentations of EA. For example, anyone can make an EA handbook that’s heavily focused on a single cause area and call it an EA handbook.
Pros - This solution is fairly easy to implement, and it allows a wide variety of ideas to co-exist and flourish. Materials that represent EAs better will naturally become more popular, as they will be shared more widely throughout the movement.
Cons - Leaves the movement quite vulnerable to co-option and misrepresentation (e.g. an EA Fair Trade handbook), which could harm movement building and newer people’s views of EA.
Have a high and consistent bar for representativeness. For example, if something is branded in a way that suggests it is representative of EA, it should devote at least 20% of its content to each major cause area (x-risk, animal rights, poverty) and should not clearly pitch or favour a single organization or approach (a minimal sketch of such a check follows this solution’s pros and cons). Alternatively, some kind of more formal system, based on objective measures from the community, could be instituted.
Pros - Does not make EA easy to co-opt, and ensures that the most visible EA content gives appropriate representation to different ideas.
Cons - Exact ratios and numbers would be hard to calculate and agree on. They would also change over time (e.g. if a new cause area were added).
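As referenced above, here is a minimal sketch of what the 20% threshold check could look like. The page counts, the function name, and the three cause buckets are illustrative assumptions:

```python
# Assumed cause buckets and the 20%-per-cause threshold floated above.
CAUSES = ("x-risk", "animal welfare", "global poverty")
MIN_SHARE = 0.20

def is_representative(pages_by_cause):
    """Check whether each cause gets at least MIN_SHARE of the total pages."""
    total = sum(pages_by_cause.get(c, 0) for c in CAUSES)
    if total == 0:
        return False
    return all(pages_by_cause.get(c, 0) / total >= MIN_SHARE for c in CAUSES)

# Made-up page counts, loosely echoing the Handbook complaint quoted earlier.
handbook = {"x-risk": 45, "animal welfare": 12, "global poverty": 15}
print(is_representative(handbook))  # False: animal welfare falls under 20%
```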
Community-building organizations could strive for cause indifference. For example, current EA is built via a few different movement-building organizations. A case could be made that organizations focused specifically on movement-building should strive to be representative or cause indifferent. One of the ways they could do this is through cross-organization consultation before hosting events or publishing materials meant to represent the movement as a whole.
Pros - Reduces the odds of duplicating movement outreach work (e.g. separate AI-focused and poverty-focused EA chapters). Increases the odds that, in the long term, the EA movement will be cause diverse, raising the odds of finding Cause X: a cause better than the currently existing cause areas that we simply haven’t discovered yet.
Cons - Many of the most established EA organisations have a cause focus of some sort, so this would be hard to enforce, but it could nonetheless be an ideal worth striving towards.
One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.