Introduction
Elitism often gets a bad rap. Its role in EA is complicated, and though it sometimes leaves a bad taste in our mouths, we think elitism is better understood in shades of gray rather than black and white. In this post, we’ll look at when elitism can be useful in EA and when it can be detrimental. Hopefully, a more nuanced understanding of elitism and its benefits and drawbacks can lead to a more productive conversation around its place in community building.
A closer look
Elitism in EA usually manifests as a strong preference for hiring and funding people from top universities, companies, and other institutions where social power, competence, and wealth tend to concentrate. Although elitism can take many other forms, for our purposes, we’ll be using this definition moving forward.
We’ve categorized several traits in the following table by whether they’ll be selected for during prestigious recruiting/hiring processes, or whether they’ll be independent/selected against. Feel free to propose edits or other traits in the comments! We’ve found this table useful for thinking through situations when elitism may or may not be appropriate.
| Traits that elitism tends to select for | Traits that elitism tends to select against (or is neutral on) |
| --- | --- |
| Ambition/desire for power | Altruism/desire to help others |
| Problem-solving | Agency/agentic-ness |
| Self-motivation and self-regulation | Critical thinking |
| Academic/intellectual competence | Risk-taking/rebelliousness (elite paths favor safer career options like finance, medicine, Big Tech) |
| Possession of social power | |
| Access to resources | |
Pros of elitist selection
1. Talent
Prestigious programs select for a baseline of traits EA generally looks for. Making something more elite in an academic context will draw in competent and ambitious talent. Given that impact is heavy-tailed, there is often an orders-of-magnitude difference between the expected impact of median and top-percentile talent.[1]
2. Class and social selection
Elite selection (e.g. at top universities) will often select for people who have a baseline of financial stability. EA careers aren’t as stable as alternative career pathways for most people (e.g. teacher, doctor, researcher), and financial stability is an important prerequisite to getting more involved. It’s far easier to consider earning to give if you’re making $100k+ a year.
Edit: It’s challenging for students and workers from even middle-class families to devote several hours a week to preventing a far-off risk while struggling to pay off $50,000+ in student loans and support their families.
Cons of elitist selection
1. Optics and Demotivation
As EA becomes more mainstream, we should be careful with how EA’s image grows. An elitist reputation may kneecap recruitment efforts and the movement’s impact. Additionally, internal perceptions of EA may turn sour as conferences and organizations start being viewed as “only for the elite” vs. “open-to-all.” It can be incredibly demotivating to be told that your potential for impact is far less than that of a select few.
2. Epistemics and Homogeneity
Recruiting from the same 10-20 universities, which all have similar demographics, makes groupthink more likely. This is problematic because novel and creative solutions are in high demand. Lack of diversity is also already a problem EA struggles with, and elitist recruitment promotes a self-perpetuating cycle.
3. Altruism
Prestige doesn’t select for people who want to do the most good. This can be counteracted by recruitment processes that select more heavily for altruism and by the self-selection effects of EA as a movement, but given the importance of strong value-alignment within EA, it is potentially damaging in the long term.
4. Practicality
Elitist selection will miss great people who haven’t had access to these elite institutions and environments. It might pass over the weird and non-traditional candidates, the ones who might be able to make the largest impact. It doesn’t really select for traits that we might want, such as risk-taking, contrarianism, and agency. At the same time, no selection process is perfect, and it all depends on the specific situation and the traits we want to select for.
When is elitism appropriate?
There are many situations in which elitist recruitment processes can be instrumentally useful. Here are a few examples:
Senior-level positions
In these cases, elitism can be beneficial because specialized competence and leadership are key to being successful in senior roles (e.g. CEOs, senior AI engineers, research leads) and these are traits highly correlated with elite environments.
Cofounder searches
Many of the points in the previous example also apply here. The initial team of an organization often makes or breaks it, so this team should be especially competent, cohesive, self-driven, and well-financed. Having a strong network of elite connections and access to resources is also crucial for ambitious ventures.
On the other hand, things are more complicated when we consider the type of organization that is hiring and the traits they are looking for. For new AI research labs, selecting for prestige probably correlates with finding good cofounders. If the organization is pursuing a riskier approach, where technical expertise is less necessary, elitism might select against the kind of people you’re looking for.
University groups at early stages
Early-stage university groups seem to follow a principle of recruiting every new member to become an exec. Though useful in the short term, this potentially sacrifices the group’s effectiveness in the long term. Disagreements may crop up, and it’s much harder to remove discordant team members after placing them in positions of power.
The initial team also benefits greatly from being highly competent and self-motivated.
“The original Mac team taught me that A-plus players like to work together, and they don't like it if you tolerate B-grade work.” — Steve Jobs
High-level conferences
Field-specific conferences—such as an AI safety or a biosecurity conference—benefit from restricting the conference to those with expertise. This ensures that everyone in attendance can contribute to the conversations or otherwise will benefit greatly from being exposed to the content.
When is elitism unhelpful?
On the flip side, in many instances being elitist isn’t the best tool for the job, and won’t select for the desired traits:
Project funding and entrepreneurship
Groundbreaking entrepreneurship usually requires a good amount of altruistic instinct, risk-taking, and the ability to think outside the status quo of what already exists or is likely to be successful. These traits don’t correlate strongly with elite environments, and funding regular people within EA also gives them access to resources they are less likely to have than people in elite environments.
Entry-level employees
For entry-level positions (e.g. research interns, junior engineers), competence differences between candidates from elite and non-elite backgrounds matter less. Favoring non-elites at this level also gives them an opportunity to gain experience, which is generally easier for elites to obtain.
EAGx conferences and some EAGs
General conferences are a great place for newcomers to gain connections and opportunities within EA, and they probably provide the most benefit to people who would otherwise not have opportunities to network with working professionals.
Generalist/operations positions
Skills that make people good generalists, managers, or assistants are not strongly correlated with elite environments. For example, traits such as critical thinking and sharp intuition are useful for generalists.
Conclusion
Discussions about elitism in EA have tended to treat it as a trait that should either be built into or phased out of EA culture. This creates an all-or-nothing understanding of elitism as a concept. We think that instead framing elitism as a tool that can be useful in some circumstances and counterproductive in others is a more accurate and productive way of tracking how it fits into EA culture and best practices. Sparking more discussion about elitism and the attitudes that community-builders have towards it would be really valuable for mapping the current and future landscape of community building, so any and all comments are much appreciated!
- ^ In EA, there’s a pretty solid correlation between people who have started big and impactful projects and their origins in elite environments (Sam Bankman-Fried, Will MacAskill, Holden Karnofsky, etc.). Some of the most successful companies in the world (e.g. Google, Apple, PayPal) have historically also been quite selective and operate within a sphere of prestige.
What do you mean by competence? Is it the skills, knowledge, connections, and presentation that advance these institutions? Does the advancement include EA-related innovation? Is this competence generalizable to EA-related projects?
Is social power the influence over acceptable norms due to representing that institution or having an identity that motivates others to make a mental shortcut for such 'deference to authority'? Could social power be gained without appealing to traditional power-related biases?
Critical thinking in solving problems related to achieving the institutions' objectives is supported, while critical engagement with these objectives may be selected against. This also implies that no one thinks about the objectives, which can be boring/make people feel a lack of meaning: companies could be glad to entertain conversations about the various possible objectives.
Effective altruism - the desire to help others the most while valuing everyone, even those outside one's immediate circles, more equally. Elite decisionmaking is, to an extent, based on favors and dynamics among friends and colleagues.
I'd say acceptance/internalization of the specific traditional hierarchical structure and understanding oneself as competent to progress within this structure.
I am assuming that you are assuming the 'eliteness' metric as a sum of school name, parents' income, and Western background? Please reduce my bias.
Is the correlation apparent? For example, imagine that instead of (elite) Rob Mather gaining billions for a bednet charity a (non-elite) thoughtful person with high school education and $5/day started organizing their (also non-elite) friends talking about cost-effective solutions to all issues in sub-Saharan Africa in 2004 and was gaining the billions since, as solutions were developed. Maybe, many more problems would have been solved better.
Counter-examples (people who started big and impactful projects from a non-elite background) may include Karolina Sarek, William Foege (Wiki), and Jack Rafferty. It can be interesting to see this percentage in the context of the share of elite vs. non-elite people in EA: (% who started impactful projects from elite backgrounds / % elite in EA) / (% who started impactful projects from non-elite backgrounds / % non-elite in EA). Further insights on the relative success of top vs. median elite talent can be gained by controlling for equal opportunities (which can currently be assumed if funding is awarded on the basis of competence).
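A minimal sketch of the ratio proposed above, with made-up numbers purely to illustrate the arithmetic (no real data on EA founders is assumed):

```python
def selection_ratio(founders_elite, elite_share, founders_nonelite, nonelite_share):
    """Relative per-capita rate at which elites vs. non-elites start impactful projects.

    founders_elite / founders_nonelite: the fraction of impactful-project
    founders drawn from each group. elite_share / nonelite_share: each
    group's fraction of the EA population. A ratio of 1 means neither group
    is over-represented among founders relative to its population share.
    """
    elite_rate = founders_elite / elite_share
    nonelite_rate = founders_nonelite / nonelite_share
    return elite_rate / nonelite_rate

# Hypothetical example: 80% of founders are elite while elites are 60% of EA;
# 20% of founders are non-elite while non-elites are 40% of EA.
ratio = selection_ratio(0.80, 0.60, 0.20, 0.40)
print(round(ratio, 2))  # 2.67: elites found projects ~2.7x as often per capita
```

With equal per-capita rates (e.g. a group supplying founders in proportion to its population share), the ratio comes out to exactly 1, which is the baseline the comment's comparison is measuring against.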
So, while EA was funding constrained, it used to make sense to attract elites. Now, this argument applies to a lesser extent.
Unless it is true, such as if impact is interpreted as representing an institution that aspires for normative change, in which case you realize that speaking with elite people in an elite way is not really for you anyway and do something else, such as running projects or developing ideas. This is an equal dynamic where potential for impact is a phrase.
Thinking diversity norms can be more influential in having vs. not having issues with groupthink than the composition of the group, considering that people interact with others. For example, if the norm is prototyping solutions with intended beneficiaries, engaging them in solving the issues and stating their priorities in a way which mitigates experimenter bias and motivates thoughtful sincerity, and considering a maximally expanded moral circle, then the quality of solutions should not be reduced if people from only 10-20 schools are involved. On the other hand, if the norm is, for instance, that everyone reads the same material and is somewhat motivated to donate to GiveWell and spread the word, then even a diverse group engages in groupthink.
Prestige selects for people of whom the highest share wants to do the most good when being offered reasoning and evidence on opportunities, at least if prestige is interpreted as such. Imagine, for instance, a catering professional being presented with evidence on doing the most good by vegan lunches. Their normative background may not much allow for impact consideration if that would mean forgone profit, unless it does. If EA should keep value by altruistic rather than other (e. g. financial) motivation, then recruitment should attract altruistic people who want to be effective and discourage others.
So, it depends on the senior-level positions. If you want to make changes in an authoritarian government, an (elite) insider will be very helpful. Similarly, a (non-elite) insider would be helpful if they need to develop solutions within a non-elite context, such as solving priorities in Ghana under $100m. It does not matter if normative solution developers (such as AI strategy researchers) are elite or not, as long as they understand and equally weigh everyone's interests. Positive discrimination for roles that elites may have better background in (e.g. due to specialized school programs), such as technical AI safety research, may be counterproductive to the success of the area, because less competent people would lead the organizations; and since the limited number of applicants from non-elite backgrounds is not caused by unwelcomingness but by limited opportunities to develop background skills, positive discrimination would not further increase diversity.
Complementarity can be considered. For example, someone who can find the >$100m priorities in Ghana and someone who can raise the amount needed. However, funding from one's own network can also prevent that network from funding a much better project in the future, so not all elite people should be supported in advancing their own projects, since there are relatively many elites and few elite networks - unless funding a relatively less unusual project first enables the support of a more unusual (and impactful) project later. If the project objective is well-defined and people receive training, then anyone who can understand the training and will make sure it gets done can qualify.
You are grading 'playing with Macs.' I think Bill Gates dropped out of college. And, just based on these two examples - if you compare their philanthropy ... This means that whoever is not cool cannot participate? Also, if students get used to upskilling others (and tolerating or benefiting from that), then EA can get less skills-constrained later and create more valuable opportunities for the engagement of people who score around the 70th(95th) percentile on standardized exams.
While a biosecurity conference should probably only 'benefit' people who are 'vetted' by elite (if so defined) institutions that they will not actually think about making pathogens since biosecurity is currently relatively limited, an AI safety conference can be somewhat more inclusive in including 'possibly risky' people. This assumes that making an unaligned superintelligence is much more difficult than creating a pathogen.
AI safety conferences should exclude people who would make the field non-prestigious/without the spirit of 'the solution to a great risk,' for example, seem like an appeal of online media users for the platforms to reduce biases in the algorithms because they are affecting them negatively. Perhaps even more so than one's elite background, the ability to keep up that spirit can be correlated with traditionally empowered personal identity (such as gender and race) and internalization of these norms of power (rather than critical thinking about them). Not everyone with that ability of 'upholding a unique solution narrative' must be from that demographic and not everyone has to have this ability in that group (only a critical mass has to). This applies as long as people negatively affected by traditional power structures perceive a negative emotion which would prevent them from presenting objective evidence and reasoning to decisionmakers.
So everything except community building and entry-level employment? Should there be community building in non-elite contexts (while elites (in some way) within or beyond these contexts may or may not be preferred)? A counterargument is similar to the AI safety 'spirit' one above: people would be considered suffering by disempowerment and thus appeal less effectively and to your standards one: people who would slack with Bs in impact would just be ok with some problems unresolved. Arguments for include epistemic, problem awareness, and solution-relevant insights diversity and facilitating mutually beneficial cooperation (e. g. elites gain the wellbeing of people who have more time for developing non-strategic relationships and non-elites gain the standards of perfecting solutions), in EA and as project outcomes.
It may depend on the org. Some orgs (e.g. high-profile fundraising) that generally prefer people from elite backgrounds can prefer them also for entry-level positions. This can be accounting for the 'As are disgraced by Bs and would not do a favor for them since they do not gain acknowledgement from other As but can be perceived as weak or socially uncompetitive' argument of the 'target audiences' of these orgs.
If doing nothing and waiting for social norms to change is appropriate, non-elites should be excluded from these entry-level roles. The org can actively change the norms by training non-elites to resemble elites (which can be suboptimal because it exhibits acceptance of the elite standard, which is thus exclusive) or by accepting anyone who can make the target-audience elites realize that their standard is not absolute. In that case, the eliteness of one's background should not contribute to hiring decisions.
Depending on the attitude of the key decisionmakers at EAGs/EAGxs, such as large funders, eliteness should be preferred, not a selection criterion, or dis-preferred. It is possible that anyone who demonstrates willingness and potential to make high impact can be considered elite in this context.
Is it that elites have less sharp intuition than non-elites? An argument for is that elites are in their positions because they reflect the values of their institution without emotional issues, which requires the reduction of one's intuitive reasoning. If an institution values critical thinking, gaining information from a diversity of sources, and forming opinions without considerations of one's acceptance in traditional hierarchies, then elites can develop intuition.