EA is elitist. Should it stay that way?

by Diogenes · 20th Jan 2016 · 35 comments



Preliminary note: for most readers of this forum, this post will be preaching to the choir. However, I decided to write it for two reasons:

  1. To create common knowledge in EA around the concept I will discuss. (This excellent post from Scott Aaronson provided some motivation.)
  2. I have noticed a recent stream of forum posts and movement-growth efforts that do not seem to take the concept I will discuss into account.

On prioritization

While it is the case that anyone can contribute to the EA movement, it is also important to remember that one of EA's most important concepts is prioritization: it is possible to save and improve many more lives if you prioritize where you direct your money and efforts.

There is a skewed distribution of the effectiveness of interventions, such that by prioritizing the most effective interventions, you can have many, many times the impact. Given limited resources, if you care about massively improving the world, you should focus most of your attention on that small percentage of highly effective interventions. GiveWell only promotes a small percentage of charities based on this principle.

If the distribution of the effectiveness of people is similarly skewed, then EA should take seriously the idea of prioritizing outreach toward the most effective people. Is this distribution similarly skewed?

Yes. We live in a world where there is a skewed distribution for the amounts of good various people can do with their resources. The richest person in America, Bill Gates, has roughly $79,000,000,000 in assets. The median net worth of an American is $44,900. You would need to recruit over 1,000,000 Americans to match what your impact could potentially be by recruiting Bill Gates. If your goal is to have as much money as possible be donated in the best ways possible, then you should seriously consider whether the expected value of recruiting Bill Gates or other billionaires is higher than the expected value of recruiting as many people as possible. For example, it is likely that GiveWell's recruitment of Cari Tuna and Dustin Moskovitz was higher impact than all other EA donor-recruitment efforts combined.
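The back-of-the-envelope arithmetic above can be checked in a few lines. The figures are the rough 2016 estimates quoted in the text, used for illustration only:

```python
# Rough 2016 figures quoted above (USD); illustrative, not precise.
gates_assets = 79_000_000_000   # approximate Bill Gates net worth
median_net_worth = 44_900       # approximate median American net worth

# How many median Americans' entire net worth equals Gates's assets?
equivalent_recruits = gates_assets / median_net_worth
print(f"{equivalent_recruits:,.0f} median Americans")  # ≈ 1,759,465
```

So "over 1,000,000 Americans" is, if anything, an understatement: the ratio comes out to roughly 1.76 million.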

Likewise, the difference in influence between Hillary Clinton and the average American is likely at least an order of magnitude. Similarly, the difference in productive output between an Elon Musk and the average American is likely at least an order of magnitude.

In a world where everyone's ability to save and improve lives is equal, you might prefer mass-movement strategies and not worry much about who your outreach is directed toward. If, however, we live in a world in which there is a skewed distribution (likely even a power law distribution) of wealth, talent, and influence, you might prioritize strategies which try to recruit people for whom there is evidence of outsized effectiveness. We live in the latter world.
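To illustrate why the distribution's shape matters, the sketch below samples a heavy-tailed (Pareto) "effectiveness" distribution and a uniform one, then compares how much of the total the top 1% accounts for. The α parameter and sample size are arbitrary choices for illustration, not claims about the real-world distribution:

```python
import random

random.seed(0)  # for reproducibility
n = 100_000

def top_1pct_share(samples):
    """Fraction of the total held by the top 1% of a sample."""
    ordered = sorted(samples, reverse=True)
    return sum(ordered[: n // 100]) / sum(ordered)

# Heavy-tailed world: Pareto with alpha ≈ 1.16 (the classic "80/20" shape).
heavy = [random.paretovariate(1.16) for _ in range(n)]
# Equal-ish world: everyone's effectiveness drawn uniformly from (0, 1).
flat = [random.uniform(0.0, 1.0) for _ in range(n)]

print(f"heavy-tailed: top 1% hold {top_1pct_share(heavy):.0%} of the total")
print(f"uniform:      top 1% hold {top_1pct_share(flat):.0%} of the total")
```

Under the heavy-tailed draw the top 1% typically account for a large fraction of the total, while under the uniform draw they account for only about 2%. Which of these worlds we live in determines whether targeted or mass outreach dominates.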

On the implications of prioritization

This can be a difficult conclusion for an effective altruist to come to. Our lives are based on compassion for others. So to prioritize some people over others based on their effectiveness can be an emotionally difficult idea. However, it is worth noting that every self-identifying EA already engages in this behavior. Behind every intervention or charity that we choose to deprioritize in favor of others which do more good, there are people. Similarly, behind every EA organization are decisions to hire the most effective people. In doing so, EA organizations are also choosing to prioritize certain people over others. Many or most of these people - for both cases above - have praiseworthy intentions and identities strongly associated with doing lots of good. Does deprioritizing certain people make EA inhumane?

Clearly, the answer is no! An EA chooses to prioritize for the most humane reasons possible, almost by definition.

So far I have described a fact about the world (the skewed distribution of personal effectiveness) and the consequences for an EA (prioritizing recruitment of the most effective people). What are more specific implications?

  • Organizations like Giving What We Can, The Life You Can Save, and the Centre for Effective Altruism might focus less on the number of people recruited and more on the effectiveness of people recruited. For instance, recruiting one Mark Zuckerberg could move more money than the cumulative money moved from all GWWC and TLYCS pledges to date. Likewise, 100 hours spent recruiting one Angela Merkel would likely be higher impact than 100 hours spent recruiting 100 of the usual types of people who are attracted to EA. (I deliberately chose examples that I believe could be within the EA movement's grasp given the current set of connections that I am aware of.)
  • Welcomingness should continue to be promoted, but not at the cost of lowering community standards. For instance, you would really not want to learn that your nation's medical schools promote low barriers to entry at all costs. If they prioritized welcomingness over effectiveness when you or someone you know is on the operating table, you would probably be upset. You would also not want the system for generating qualified scientists and engineers to drop their many-tiered filters - unless you want bridges and buildings to fall. In our case, the stakes for finding well-qualified people are much, much higher. (It's important to note here, however, that welcomingness is a very different concept than diversity. EA will need highly effective people from many different types of backgrounds to tackle problems of extreme complexity. Strategies to increase the diversity of qualified candidates will help satisfy this need; strategies which lower effectiveness in favor of welcomingness will not necessarily help this need, and will occasionally harm it.)
  • Researchers in the EA community might investigate evidence from psychology, from the business literature, and from interviews with top hiring managers and recruiters on which attributes predict effectiveness. After this evidence is synthesized, EA movement-builders might try to figure out the most cost-effective ways to find people with these attributes.
  • Chapters and movement-builders might prefer one-on-one outreach and niche marketing to mass-marketing strategies.
  • If it is possible for current EAs to dramatically self-improve, then they should figure out how to do so. While there may be some genetic component to personal effectiveness, there is growing evidence that personal ability may be much less fixed than previously assumed. (Indeed, to some extent this seems to be the hypothesis that CFAR, and possibly Leverage, is testing.)
In general, one implication could be that EA should not try to be a mass movement, like Occupy Wall Street. Instead, it might look more like the scientific revolution, or the process that went into founding America, where a relatively small set of people were able to have a gigantic impact.

This all said, the accusation of elitism, even if it's accurate, can feel hurtful. Nevertheless there is an important thought experiment to run: In the hypothetical world where elitism is in fact the best strategy for saving and improving the most lives (even after accounting for reputational risk), how many happy lives am I willing to sacrifice in order to not be accused of elitism? Thankfully, for most of us - and for those whose fulfilling lives depend on our successful efforts - the answer is clear: zero.

That said, I'd be very interested to hear alternative arguments and change my thoughts on this topic. (Especially since my motivational system would be quite satisfied to hear that everything written above is false!)


[Important note: much of this content is not original - it has been based on a series of conversations with several members of the EA movement who have asked to stay anonymous. Parts of this have even been copy-and-pasted from those conversations with permission.]