
EA loves genius. 


I understand that, all else being equal, you probably want smarter people working for you. When it comes to generating new ideas and changing the world, sometimes quantity cannot replace quality.

But what is the justification for being so elitist that we significantly reduce the number of people on the team? Why would we filter for the top 1% instead of the top 10%? Or, more accurately, the top 0.1% instead of the top 1%?

I’d appreciate any posts, academic papers or case studies that support the argument that EA should be extra elitist. 

Full disclosure: I’m trying to steelman the case for elitism so that I can critique it (unless the evidence changes my mind!).


 

10 Answers

Presumably it just depends upon how much greater impact the very top candidates have over the merely good? In many fields, I'd expect the top expert in the world to have vastly more impact than ten people who are at the 90th percentile of ability (i.e. just making the top 10%). And the world's richest person has much more wealth than ten people at the 90th percentile of wealth, etc.

"How Much Does Performance Differ Between People?" by Max Daniel and Benjamin Todd goes into this.

Also there’s a post on being “vetting-constrained” that I can’t recall off the top of my head. The gist is that funders are risk-averse (not in the moral sense, but in the sense of relying on elite signals) because Program Officers don’t have as much time / knowledge as they’d like for evaluating grant opportunities. So they rely on credentials more than would be ideal.

Stan Pinsent
6mo
Thanks, this is the kind of source I'm excited about!

I agree that this is the key question. It's not clear to me that "effectiveness" scales superlinearly with "expertness". For traits where aptitude is distributed according to a normal curve (maybe intelligence), I suspect the top 0.1% are not adding much more value than the top 1% in general.
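A quick way to see the intuition (my own illustrative sketch, not from this thread; the normal distribution and IQ-style scale are assumptions) is to compare the average aptitude of the top 0.1% with that of the top 1%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption: "aptitude" is normally distributed on an IQ-like scale.
aptitude = np.sort(rng.normal(loc=100, scale=15, size=1_000_000))

top_1_percent = aptitude[-10_000:]  # top 1% of a million draws
top_01_percent = aptitude[-1_000:]  # top 0.1%

print(f"mean of top 1%:   {top_1_percent.mean():.1f}")   # ~140
print(f"mean of top 0.1%: {top_01_percent.mean():.1f}")  # ~150
```

If impact were roughly linear in aptitude, the top 0.1% here would average only about 8% higher than the top 1% - so the elitist case has to rest on impact scaling far more steeply than aptitude does.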

There are probably niche cases where having the top 0.1% really, really matters. For example, in situations where you are competing with other top people, like football leagues or TV stations paying millions for famous anchors.

But when I think about mainstream EA jobs... (read more)

Richard Y Chappell
6mo
For research, at least, it probably depends on the nature of the problem: whether you can just "brute force" it with a sufficient amount of normal science, or if you need rare new insights (which are perhaps unlikely to occur for any given researcher, but are vastly more likely to be found by the very best). Certainly within philosophy, I think quality trumps quantity by a mile. Median research has very little value. It's the rare breakthroughs that matter.  Presumably funders think the same is true of, e.g., AI safety research.

One way to think about it is that the aim of EA is to benefit the beneficiaries - the poorest people in the world, animals, future beings.

We should choose strategies that help the beneficiaries the most rather than strategies that help people that happen to be interested in EA (unless that also helps the beneficiaries - things like not burning out).

It makes sense to me that we should ask those who have had the most privilege to give back the most: if you have more money, you should give more of it away. If you have a stronger safety net and access to influence, you should use more of that to help others rather than helping yourself.


I think that, with the salaries, most people could probably earn more in other sectors if they cared only about monetary gain rather than including impact in their career choice. If you're coming from the charity/public-service sector, they may seem higher; if you're coming from a private-sector career, they seem lower.

Interesting. Are there any examples of EA jobs which are more poorly-paid than their private-sector counterparts?

It's not necessarily that the "EA" jobs are more poorly paid, just that the people who take these roles could realistically earn much more elsewhere.

The most comparable example I can think of are software engineers, where EA positions generally compare poorly to tech. Even Lightcone admits it isn't paying market salaries.

Will Bradshaw
6mo
I think this is the great majority of EA jobs that aren't in operations. In our case (as an EA-adjacent biosecurity org), it's simultaneously the case that (a) most of our staff are well-paid relative to academic and nonprofit benchmarks, and (b) most of our staff could make much more money working in the private sector. Several of our best (and best-compensated) performers took dramatic pay cuts to work for us. I think this is the norm for EA-adjacent organizations, and is roughly the correct norm to be pursuing.

In discussions of EA compensation, it's very common to equivocate between "EA jobs are well-paid relative to nonprofit/academic benchmarks", "EA jobs are well-paid relative to the average person" and "EA jobs are well-paid relative to for-profit benchmarks". I think only the last of these is actually cause for concern, and it is quite rarely true. That said, I have seen EA operations roles (primarily at EV orgs) that I think were significantly overpaid, so I'm not going to claim this never happens.
calebp
6mo
(I agree with the above.) One thing worth noting is that some people 1. might not have clear, well-paying counterfactual salaries (e.g. in tech) but could fairly quickly transition into those roles, or 2. decided not to pursue those roles and instead pursued lower-paying, altruistically motivated work, but could have earned a lot of money if they had made different choices early on. I am pretty confused about how much you "should" pay this kind of person - particularly in the second case. It seems like many people can claim that they "could" be earning more money doing x, even if x wasn't really an option for them. At the same time, I don't want to punish people for making altruistic sacrifices early in their careers.
Will Bradshaw
6mo
Yeah, I agree this is a real and hard case. Similarly, I think there are roles where the only readily available benchmarks are in academia or the nonprofit sector - in these cases we can assume that those benchmarks are too low, but we don't know by how much, so determining fair compensation is hard. Community building plausibly falls into this bucket.
DavidNash
6mo
There are a lot of private-sector community roles, some with salaries up to $180k - here are some examples from a community manager job board.
Will Bradshaw
6mo
TIL! I think this strengthens my confidence in my original comment re: nearly all EA roles being paid under market rate.

Stan - this is a legitimate and interesting question. I don't know of good, representative, quantitative data that's directly relevant.

However, I can share some experiences from teaching EA content that might be illuminating and semi-relevant. I've taught my 'Psychology of Effective Altruism' course (syllabus here) four times at a large American state university where the students show a very broad range of cognitive ability. This is an upper-level undergraduate seminar restricted mostly to juniors and seniors. I'd estimate the IQ range of the students taking the course to be about 100-140, with a mean around 115.

In my experience, the vast majority of the students really struggle with central EA concepts and rationality concepts like scope-sensitivity, neglectedness, tractability, steelmanning, recognizing and avoiding cognitive biases, and decoupling in general. 

I try very hard to find readings and videos that explain all of these concepts as simply and clearly as possible. Many students kinda sorta get some glimpses into what it's like to see the world through EA eyes. But very few of them can really master EA thinking to a level that would allow them to contribute significantly to the EA mission. 

I would estimate that out of the 80 or so students who have taken my EA classes, only about 3-5 of them would really be competitive for EA research jobs or good at doing EA public outreach. Most of those students probably have IQs above about 135. So this is mostly a matter of raw general intelligence (IQ), partly a matter of personality traits such as Openness and Conscientiousness, and partly a matter of capacity for Aspy-style hyper-rationality and decoupling.

So, my impression from years of teaching EA to a wide distribution of students is that EA concepts are just intrinsically really, really difficult for ordinary human minds to understand, and that only a small percentage of people have the ability to really master them in an EA-useful way. So, cognitive elitism is mostly warranted for EA.

Having said that, I do think that EAs may under-estimate how many really bright people are out there in non-elitist institutions, jobs, and cities. The really elite universities are incredibly tiny in terms of student numbers. There might be more really smart people at large, high-quality state universities like U. Texas Austin (41,000 undergrads) or U. Michigan (33,000 undergrads) than there are at Harvard (7,000 undergrads) or Columbia (9,000 undergrads). Similar reasoning might apply in other countries. So, it would seem reasonable for EAs to consider broadening our search for EA-capable talent beyond super-elite institutions and 'cool' cities and tech careers, into other places where very smart people might be found.

Thanks, Geoffrey!

You seem surprisingly confident that you know the "raw general intelligence" of your classes in general and the subgroup of those who would compete for EA jobs in particular. Isn't there a danger that you are conflating "aptitude for EA ideas" with intelligence? Or even that the aspy intelligence that is associated with EA fluency might be misconstrued as very high general intelligence?

I'm more open to the idea that "EA orthodoxy" is a quality that is very unevenly distributed, and in many jobs would have an outsize impact on effectiveness. Less convinced that general intelligence is one of those things. 

Geoffrey Miller
6mo
Stan - those are legitimate concerns: there might be some circularity in judging general intelligence in relation to understanding EA concepts in a classroom context. I do have a pretty good sense of my university undergrads' overall intelligence distribution from teaching many other classes on many topics over the last 23 years, and from knowing the SAT and ACT distributions of the undergrads. Within each class, I guess I'm judging overall intelligence mostly from participation in class discussions and online discussion forums, and from term paper proposals, revisions, and final drafts. As I mentioned, it would be nice to have some more quantitative, representative data on how IQ predicts capacity to understand EA concepts - and on whether certain other traits (e.g. Aspy-style thinking, Openness, etc.) might add some predictive validity over and above IQ.

I agree with your point about broadening beyond elite institutions, and there's also an interesting argument that a focus on elite institutions could select for undesirable qualities as well as intelligence -- e.g. a preoccupation with jumping through well-defined hoops in order to achieve social status, and general disregard for "little people". For example, in 2014 a Yale prof wrote:

Our system of elite education manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity a

... (read more)

PS I should add that, when I taught EA concepts to my undergrads at Chinese University of Hong Kong - Shenzhen (CUHK-SZ) (c. 2020-2021), which is a much more cognitively selective university than the American state university where I usually teach, the Chinese undergrads had a much easier time understanding the EA ideas, despite having much lower familiarity with other aspects of Euro-American culture, the charity system, the Rationalist subculture, etc.

So I take this as (weak but suggestive) evidence that cognitive ability is a major driver of ability to unde... (read more)

This doesn't help steelman, because I'm generally sympathetic to the concern that EA is too elitist. The case for elitism, I think, broadly rests on the idea that (a) elitism helps select for more intelligent/more able talent; and (b) this increase in intelligence/ability amongst the talent you recruit outweighs the overall smaller talent pool.

I'm especially sceptical of (a), and I say that as someone who comes from a country where elitism is government policy; where meritocracy is the law of the land and intelligence the measure of a man. Given global demographics, a lot of the smartest and most able people (in a vacuum) will just be random people in lower- and middle-income countries, and yet (i) poverty and a lack of access to education mean they don't get to develop to their full potential; and (ii) the limited scope of existing selection systems (e.g. using top universities as a proxy, EA being a rich-world and indeed Anglosphere-focused phenomenon) means we don't get access to these people.

For (b), the only thing I will say is that it certainly doesn't hold in a lot of cases where we're looking to scale - where greater ability obviously helps, but doubling personnel doubles output in a way that doubling salary to increase the quality of personnel doesn't (mass outreach is a good example of this).

Perhaps more concerning - and this is the deeper problem with EA - is that if you can't build popular support, and hence a mass movement, out of your ideas, your ability to gain and hold political power is limited, and political power is ultimately where the most impactful things can be done.

(Disclosure: Stan and I are colleagues, though we haven't discussed this issue before).

Feelings of scarcity are bad for people on many levels that affect impact. They make people more stressed out and less moral, especially when morality involves challenging the people who pay them.

Except at the very top, EA jobs are much less stable. Government and academic positions are bad comparisons because those come with a lot of security. 

EA salaries are generally well above critical thresholds of scarcity, aren't they?

I take the point about less stability. That would lead me to think that EA ought to move away from "contractor" contracts, not necessarily to try to compensate for instability with salary.

There is no justification for it. EA was intended to be more of a mass movement at the outset, and that is the way for it to reach its true potential.

Hi Stan!

Some of this has been discussed before, maybe a good starting point would be this post (https://forum.effectivealtruism.org/posts/Rts8vKvbxkngPbFh7/should-ea-shift-away-a-bit-from-elite-universities) or this one (https://forum.effectivealtruism.org/posts/LCfQCvtFyAEnxCnMf/a-slightly-i-think-different-slant-on-why-ea-elitism-bias).

Both pieces take a more critical view of "elitism" so might not be what you are looking for in terms of steelmanning but hope it helps nonetheless! :)

Thank you very much!!

I shared this idea pretty strongly a few years back, but have changed my mind due to personal experience of running an organisation with lots of people. I think a community has enough parallels for it to be a useful comparison.

Here's what changed my mind:

1. The number of one-to-one relationships grows with the square of the number of people in a group: n people have n(n-1)/2 possible pairs (see the sketch after this list). Shit gets complicated and it creates a breeding ground for bad behaviour; you can behave horribly and then move on to a new group of people without facing ramifications, because it's unlikely that those two groups are talking to one another.

2. At least within an organisation, having a lot of people necessitates hierarchy, which itself has major drawbacks, e.g. it increases the links in the Chinese-whispers chain that is organisational communication, affecting information flow both on the way up and on the way down.

3. You'd think that the productivity of a team would be n * average productivity, but it's more like n * productivity of the worst person; mediocre people lower the standards people hold themselves to, and higher performers leave in disgust. New hires then conform to the ever-lowering standard.

4. A lot of processes are at least somewhat serialised. You can't make them go faster by increasing the number of people; only increasing the quality will move the needle (e.g. a team of 100,000 sprinters will still reach the finish line slower than Usain Bolt).

5. A person is smart. People are stupid. When you have large groups, reputation starts to become decoupled from reality. Rumor becomes the dominant mode of information transfer. People start having very strong opinions about people they've barely met or interacted with.

6. The importance of work is power-law distributed. The top priority is usually more important than the tenth through to the nth combined. What really matters is nailing the most important stuff.
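A minimal sketch of point 1 (my own illustration, not part of the original comment), showing how quickly pairwise relationships multiply as a group grows:

```python
def pairwise_relationships(n: int) -> int:
    """Distinct one-to-one relationships among n people: C(n, 2) = n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 50, 150, 1000):
    print(f"{n:>5} people -> {pairwise_relationships(n):>7} relationships")
# 10 -> 45, 50 -> 1225, 150 -> 11175, 1000 -> 499500
```

Even at modest group sizes, the number of relationships far outstrips what any one member can actually track.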


 

That being said, here are some changes I'd like to see in who we hire / invite:

- Someone who went to Oxford despite being born poor is likely much smarter than their richer peers; familial socioeconomic status should be considered.
- Greater focus on people who've sacrificed more. It's a costly signal of value-alignment.
- Prioritise people who cannot have achieved their position through office politics / bullshitting / being carried by their teams, e.g. solopreneurs >> management consultants.

Thanks for sharing!

I think a hypothesis or framework that you might want to try for examining the question is "What are the characteristics of the market for EA-based philanthropy?" or, in less elitist language: "Follow the money!"

A large fraction of the money for the EA movement comes from very wealthy people. Founders Pledge consists almost exclusively of wealthy entrepreneurs. Open Philanthropy is funded mostly by very wealthy people. Very wealthy people tend to take the approach of paying a higher price for a premium product or service. That is only logical if you have a lot more money than other people in the market.

According to the Federal Reserve, the top 0.1% of households own $18.6 trillion, the next 0.9% of households own $27.2 trillion, and the next 9% own $54.8 trillion. The remaining 90% own $44.3 trillion in assets. So the top 10% of households own $100.6 trillion, which is more than twice as much as everyone else combined. So naturally the EA movement preferentially serves elite donors, and as a result it has many of the characteristics of an elitist movement: a focus on elite universities, lack of diversity, etc.
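As a quick sanity check on those figures (my own arithmetic, not part of the original answer):

```python
# Quoted Federal Reserve household-wealth figures, in trillions of USD.
wealth = {"top 0.1%": 18.6, "next 0.9%": 27.2, "next 9%": 54.8, "bottom 90%": 44.3}

top_10_percent = wealth["top 0.1%"] + wealth["next 0.9%"] + wealth["next 9%"]
print(f"top 10% hold: ${top_10_percent:.1f}T")                               # $100.6T
print(f"ratio to bottom 90%: {top_10_percent / wealth['bottom 90%']:.2f}x")  # ~2.27x
```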

What is the alternative? Does the EA movement want to bear the cost of resisting its incentives to preferentially serve elitist donors, and of perhaps unconsciously taking on elitist characteristics?

The easiest path for most of the EA movement is to simply say that the key focus is on maximizing the impact of each particular organization. In this case, each organization will maximize funding over the short term by focusing on serving morally aligned donors who have the most to give and who provide the greatest short-term donation potential. This will keep the EA movement dependent on richer donors.

But the resulting elitism will probably have an adverse PR impact over the long term. Giving less attention to the 90% of donors who have less money than the elite donors will probably alienate a majority of the philanthropic public and prevent the movement from reaching its full growth potential over time.

Keeping this in mind, it might be useful for some of EA's current elite donors to invest in less elitist grassroots EA outreach in order to minimize long-term PR damage to the movement. While this is probably not revenue-maximizing over the short term, it may make greater inroads into the donations from the $44.3 trillion in assets (and some portion of income) held by the 90% majority of people in richer countries, who might eventually support EA and increase the movement's impact in ways that complement the donations they give.

Interesting take, thanks for sharing!

My intuition is that what might be easier than "invest[ing] in less elitist grassroots EA outreach in order to minimize long-term EA movement PR damage" is simply projecting a different image of EA. 

It seems like standard practice for large organizations / movements is to project an image that is substantially more diverse or more inclusive than the reality. It's dishonest, but probably does broaden the base of people who engage with the brand. Eventually, reality starts to resemble the image.

People in the 90%-98% range are still about as instinctively driven to pursue social status as people in the 98%-100% range, and are also predisposed to misevaluating themselves as being in the 98%-100% range (since a completely accurate self-evaluation is a specific state, and departures/mistakes in either direction are reasonably likely).

Even people in the 98%-99% range are enough of a risk, since they grew up in an environment that got them used to being the smartest person in the room most of the time. Also, smarter people often got treated like crap growing up, due to intelligence causing them to do something or other differently and therefore stand out and fail at conformity.

It causes all kinds of weird personality issues, some of which suddenly manifest at unexpected times. As a result, fewer people mean fewer liabilities.

Theoretical Moloch-preventing civilizations like dath ilan have evaluated this problem and implemented solutions, but our real world has barely grappled with it.

4 Comments

I wish you distinguished between elitist, smart, and well-paying. "Overvaluing prestigious universities and ignoring fantastic people because they went to a state school" is a very different problem from "paying too much (including to state-school employees)".

Perhaps I could have done a better job at that. The way I see it, EA places a high premium on getting the "best of the best", even when it means getting substantially fewer people on board. This premium often comes in the form of high pay.

Smartness is not central to my question. Although, separately, I am perplexed by some things which seem to indicate a belief that people lie along a "general awesomeness" continuum, on which the best people are in a class of their own. $50k scholarships for high-schoolers, for example, indicate to me a very strong faith that the 99th percentile in high school are a lot more valuable than the 98th percentile.

I wouldn’t be surprised if the rise of AI safety has played a role in this focus.

Let’s suppose your main focus is global charity. Well, you need high quality analysis, but you don’t need that many analysts. GiveWell is small.

On the other hand, AI safety has a huge demand for talent, and it's only recently that some of its research directions, like interpretability, became more scalable.

This is a comment because it's not actually a justification for EA elitism. 

There are some okay-ish ways to quantify where students interested in Effective Altruism might end up. If we assume that, for a student to be interested in effective altruism, they need to have independently pursued some kind of extracurricular activity involving a skill of the kind that Effective Altruism might discuss, we can look at where the top competitors for those kinds of extracurriculars are.

One thing to beware is confounding factors. People who would be good for EA might be too busy to participate in these activities (either because they have busy class schedules or are involved in research or because they work outside of school). People might also be doing activities because they are superficially impressive, which probably isn't a good sign for thinking in a very EA way.

Here are some brief summaries of where top competitors in different American extracurriculars come from:

Ethics Bowl (https://en.wikipedia.org/wiki/Intercollegiate_Ethics_Bowl) - no clear pattern among the universities

Bioethics Bowl (https://en.wikipedia.org/wiki/Bioethics_Bowl) - similar to above

National Debate Tournament (https://en.wikipedia.org/wiki/List_of_National_Debate_Tournament_winners) - often but by no means exclusively prestigious US schools; seems to lean towards private schools a bit also, but I'm just eyeballing it

US Universities Debating Championship (https://en.wikipedia.org/wiki/US_Universities_Debating_Championship) - mostly Ivy League or similarly prestigious schools

Putnam Exam (https://en.wikipedia.org/wiki/William_Lowell_Putnam_Mathematical_Competition) - strongly dominated by MIT

College Model UN (https://bestdelegate.com/2022-2023-north-american-college-model-u-n-final-rankings-world-division/) - no clear pattern besides DC-based schools tending to do well

I'm sure other people can add more to this list.

If you think that Putnam results are a strong predictor of Effective Altruism, that could justify more elitism. Personally, I doubt that.
