Thanks to Alexander Gordon-Brown, Amy Labenz, Ben Todd, Jenna Peters, Joan Gass, Julia Wise, Rob Wiblin, Sky Mayhew, and Will MacAskill for assisting in various parts of this project, from finalizing survey questions to providing feedback on the final post.
Clarification on pronouns: “We” refers to the group of people who worked on the survey and helped with the writeup. “I” refers to me; I use it to note some specific decisions I made about presenting the data and my observations from attending the event.
This post is the second in a series of posts where we aim to share summaries of the feedback we have received about our own work and about the effective altruism community more generally. The first can be found here.
Each year, the EA Leaders Forum, organized by CEA, brings together executives, researchers, and other experienced staffers from a variety of EA-aligned organizations. At the event, they share ideas and discuss the present state (and possible futures) of effective altruism.
This year (with responses collected around early July), invitees were asked to complete a “Priorities for Effective Altruism” survey, compiled by CEA and 80,000 Hours, which covered the following broad topics:
- The resources and talents most needed by the community
- How EA’s resources should be allocated between different cause areas
- Bottlenecks on the community’s progress and impact
- Problems the community is facing, and mistakes we could be making now
This post is a summary of the survey’s findings (N = 33; 56 people received the survey).
Here’s a list of organizations respondents worked for, with the number of respondents from each organization in parentheses. Respondents included both leadership and other staff (an organization appearing on this list doesn’t mean that the org’s leader responded).
- 80,000 Hours (3)
- Animal Charity Evaluators (1)
- Center for Applied Rationality (1)
- Centre for Effective Altruism (3)
- Centre for the Study of Existential Risk (1)
- DeepMind (1)
- Effective Altruism Foundation (2)
- Effective Giving (1)
- Future of Humanity Institute (4)
- Global Priorities Institute (2)
- Good Food Institute (1)
- Machine Intelligence Research Institute (1)
- Open Philanthropy Project (6)
Three respondents work at organizations small enough that naming the organizations would be likely to de-anonymize the respondents. Three respondents don’t work at an EA-aligned organization, but are large donors and/or advisors to one or more such organizations.
What this data does and does not represent
This is a snapshot of some views held by a small group of people (albeit people with broad networks and a lot of experience with EA) as of July 2019. We’re sharing it as a conversation-starter, and because we felt that some people might be interested in seeing the data.
These results shouldn’t be taken as an authoritative or consensus view of effective altruism as a whole. They don’t represent everyone in EA, or even every leader of an EA organization. If you’re interested in seeing data that comes closer to this kind of representativeness, consider the 2018 EA Survey Series, which compiles responses from thousands of people.
What types of talent do you currently think [your organization // EA as a whole] will need more of over the next 5 years? (Pick up to 6)
This question was identical to one asked of Leaders Forum participants in 2018 (see 80,000 Hours’ summary of the 2018 Talent Gaps survey for more).
Here’s a graph showing how the most common responses from 2019 compare to the same categories in the 2018 talent needs survey from 80,000 Hours, for EA as a whole:
And for the respondent’s organization:
The following table contains data on every category (you can see sortable raw data here):
- Two categories in the 2019 survey were not present in the 2018 survey; these cells were left blank in the 2018 column. (These are "Personal background..." and "High level of knowledge and enthusiasm...")
- Because of differences between the groups sampled, I made two corrections to the 2018 data:
- The 2018 survey had 38 respondents, compared to 33 respondents in 2019. I multiplied all 2018 figures by 33/38 and rounded them to provide better comparisons.
- After this correction, the 2018 responses summed to 308, compared to 351 for 2019. This may indicate a difference in how many skills participants thought were important in each year, but it also led to some confusing numbers (e.g. a 2019 category having more responses than its 2018 counterpart, but a smaller fraction of the total responses). To compensate, I multiplied all 2018 figures by 351/308 and rounded them.
- These corrections nearly cancelled out, reducing the 2018 sums by roughly 1%, but I opted to include and mention them anyway. Such is the life of a data cleaner.
- While the groups of respondents in 2018 and 2019 overlapped substantially, there were some new survey-takers this year; shifts in perceived talent needs could partly reflect differences in the views of new respondents, rather than only a shift in the views of people who responded in both years.
- Some skills were named as important more often in 2019 than in 2018. Those that saw the greatest increase (summing responses for EA as a whole and for the respondent’s organization):
- Economists and other quantitative social scientists (+8)
- One-on-one social skills and emotional intelligence (+8)
- The ability to figure out what matters most / set the right priorities (+6)
- Movement building (e.g. public speakers, “faces” of EA) (+6)
- The skills that saw the greatest total decrease:
- Operations (-16)
- Other math, quant, or stats experts (-6)
- Administrators / assistants / office managers (-5)
- Web development (-5)
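The two-step rescaling described in the notes above can be sketched in Python. This is a simplified illustration: it rounds once at the end, whereas the writeup rounded after each correction, and the category count passed in below is hypothetical.

```python
# Rescale a 2018 category count so it can be compared with 2019 figures.
RESPONDENTS_2018, RESPONDENTS_2019 = 38, 33
SUM_2018, SUM_2019 = 308, 351  # response totals (2018 total is post-first-correction)

def rescale_2018(count: float) -> int:
    """Apply both corrections to a 2018 category count."""
    adjusted = count * (RESPONDENTS_2019 / RESPONDENTS_2018)  # respondent-count correction
    adjusted *= SUM_2019 / SUM_2018                           # total-responses correction
    return round(adjusted)

# The two factors nearly cancel: their product is ~0.99,
# so the 2018 sums shrink by about 1% overall.
net_multiplier = (RESPONDENTS_2019 / RESPONDENTS_2018) * (SUM_2019 / SUM_2018)
```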
Other comments on talent needs
- “Some combination of humility (willing to do trivial-seeming things) plus taking oneself seriously.”
- “More executors; more people with different skills/abilities to what we already have a lot of; more people willing to take weird, high-variance paths, and more people who can communicate effectively with non-EAs.”
- “I think management capacity is particularly neglected, and relates strongly to our ability to bring in talent in all areas.”
The 2019 results were very similar to those of 2018, with a few exceptions. Demand remains high for people with skills in management, prioritization, and research, as well as for experts on government and policy.
Differences between responses for 2018 and 2019:
- Operations, the area of most need in the 2018 survey, is seen as a less pressing need this year (though it still ranked 6th). This could indicate that we’ve begun to succeed at closing the operations skill bottleneck.
- However, more respondents perceived a need for operations talent at their own organizations than for EA as a whole. Respondents may believe the gap has closed more for other organizations than it actually has.
- This year saw an increase in perceived need for movement-building skills and for “one-on-one skills and emotional intelligence”. Taken together, these categories seem to indicate a greater focus on interpersonal skills.
This year, we asked a question about how to ideally allocate resources across cause areas. (We asked a similar question last year, but with categories that were different enough that comparing the two years doesn’t seem productive.)
The question was as follows:
What (rough) percentage of resources should the EA community devote to the following areas over the next five years? Think of the resources of the community as something like some fraction of Open Phil's funding, possible donations from other large donors, and the human capital and influence of the ~1000 most engaged people.
This table shows the same data as above, with median and quartile data in addition to means. (If you ordered responses from least to greatest, the “lower quartile” number would be one-fourth of the way through the list [the 25th percentile], and the “upper quartile” number would be three-fourths of the way through the list [the 75th percentile].)
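As an illustration of these summary statistics, here is how they could be computed with Python’s standard library. The responses below are made-up numbers, and quartile interpolation conventions vary slightly between tools, so results may differ marginally from the table’s.

```python
import statistics

# Hypothetical allocation responses (percentages) for one cause area
responses = [5, 10, 10, 15, 20, 20, 25, 30, 40, 50]

mean = statistics.mean(responses)                 # arithmetic average
median = statistics.median(responses)             # 50th percentile
q1, _, q3 = statistics.quantiles(responses, n=4)  # 25th and 75th percentiles
```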
Other comments (known causes)
- “7% split between narrow long-termist work on non-GCR issues (e.g. S-risks), 7% to other short-termist work like scientific research”
- “3% to reducing suffering risk in carrying out our other work”
- “15% to explore various other cause areas; 7% on global development and economic growth (as opposed to global *health*); 3% on mental health.”
Our commentary (known causes)
Though many cause areas are not strictly focused on either the short-term or long-term future, one could group each of the specified priorities into one of three categories:
Short-term future: Global health, farm animal welfare, wild animal welfare
Long-term future: Positively shaping AI (shorter or longer timelines), biosecurity and pandemic preparedness, broad longtermist work, other extinction risk mitigation
Meta work: Building the EA community, research on cause prioritization
With these categories, we can sum the mean allocations within each category to estimate the fraction of EA resources respondents think should go to each type of work:
- Short-term: 23.5% of resources
- Long-term: 54.3%
- Meta: 20.3%
(Because respondents had the option to suggest additional priorities, these answers don’t add up to 100%.)
While long-term work was generally ranked as a higher priority than short-term or meta work, almost every respondent supported allocating resources to all three areas.
What do you estimate is the probability (in %) that there exists a cause which ought to receive over 20% of EA resources (time, money, etc.), but currently receives little attention?
Of 25 total responses:
- Mean: 42.6% probability
- Median: 36.5%
- Lower quartile: 20%
- Upper quartile: 70%
Other comments (Cause X):
- "I'll interpret the question as follows: 'What is the probability that, in 20 years, we will think that we should have focused 20% of resources on cause X over the years 2020-2024?'" (Respondent’s answer was 33%)
- “The probability that we find the cause within the next five years: 2%” (Respondent’s answer to the original question was 5% that the cause existed at all)
- "~100% if we allow narrow bets like 'technology X will turn out to pay off soon.' With more restriction for foreseeability from our current epistemic standpoint 70% (examples could be political activity, creating long-term EA investment funds at scale, certain techs, etc). Some issues with what counts as 'little' attention." (We logged this as 70% in the aggregated data)
- "10%, but that's mostly because I think it's unlikely we could be sure enough about something being best to devote over 20% of resources to it, not because I don't think we'll find new effective causes.”
- “Depends how granularly you define cause area. I think within any big overarching cause such as "making AI go well" we are likely (>70%) to discover new angles that could be their own fields. I think it's fairly unlikely (<25%) that we discover another cause as large / expansive as our top few.” (Because this answer could have been interpreted as any of several numbers, we didn’t include it in the average)
- “I object to calling this ‘cause X’, so I’m not answering.”
Finally, since no cause in the resource-allocation question received a mean of 20% or more, 20% seems to have been too high a bar for “Cause X”: meeting it would make Cause X a higher overall priority for respondents than any existing option. If we ask this question again next year, we’ll consider lowering that bar.
Overall, how funding-constrained is your organization?
(1 = how much things cost is never a practical limiting factor for you; 5 = you are considering shrinking to avoid running out of money)
Overall, how talent-constrained is your organization?
(1 = you could hire many outstanding candidates who want to work at your org if you chose that approach, or had the capacity to absorb them, or had the money; 5 = you can't get any of the people you need to grow, or you are losing the good people you have)
Note: Responses from 2018 were collected on a 0-4 scale, so to make them comparable I added 1 to all 2018 scores, shifting them onto the 1-5 scale used this year.
Other constraints noted by respondents
Including the 1-5 score if the respondent shared one:
- “Constraints are mainly internal governance and university bureaucracy.” (4)
- “Bureaucracy from our university, and wider academia; management and leadership constraints.” (3)
- “Research management constrained. We would be able to hire more researchers if we were able to offer better supervision and guidance on research priorities.” (4)
- “Constrained on some kinds of organizational capacity.” (4)
- “Constraints on time, management, and onboarding capacity make it hard to find and effectively use new people.” (4)
- “Need more mentoring capacity.” (3)
- “Management capacity.” (5)
- “Limited ability to absorb new people (3), difficulty getting public attention to our work (3), and limited ability for our cause area in general to absorb new resources (2); the last of these is related to constraints on managerial talent.”
- “We’re doing open-ended work for which it is hard to find the right path forward, regardless of the talent or money available.”
- “We’re currently extremely limited by the number of people who can figure out what to do on a high level and contribute to our overall strategic direction.”
- “Not wanting to overwhelm new managers. Wanting to preserve our culture.”
- “Limited management capacity and scoped work.”
- “Management-constrained, and it’s difficult to onboard people to do our less well-scoped work.”
- “Lack of a permanent CEO, meaning a hiring and strategy freeze.”
- “We are bottlenecked by learning how to do new types of work and training up people to do that work much more than the availability of good candidates.”
- “Onboarding capacity is low (especially for research mentorship)”
- “Institutional, bureaucratic and growth/maturation constraints (2.5)”
Respondents’ views of funding and talent constraints have changed very little over the last year. This may indicate that established organizations have been able to roughly keep up with their own growth (finding new funding and people at the pace their expansion requires). We would expect these constraints to differ for newer and smaller organizations, so the scores here may not reflect how EA organizations as a whole are constrained on funding and talent.
Management and onboarding capacity are by far the most frequently noted constraints in the “other” category. The two seem to overlap, given how many respondents mentioned them together.
Bottlenecks to EA impact
What are the most pressing bottlenecks that are reducing the impact of the EA community right now?
These options are meant to refer to different stages in a “funnel” model of engagement. Each represents movement from one stage to the next. For example, “grabbing the interest of people who we reach” implies a bottleneck in getting people who have heard of effective altruism to continue following the movement in some way. (It’s not clear that the options were always interpreted in this way.)
These are the options respondents could choose from:
- Reaching more people of the right kind (note: this term was left undefined on the survey; in the future, we’d want to phrase this as something like “reaching more people aligned with EA’s values”)
- Grabbing the interest of people who we reach, so that they come back (i.e. not bouncing the right people)
- More people taking moderate action (e.g. making a moderate career change, taking the GWWC pledge, convincing a friend, learning a lot about a cause) converted from interested people due to better intro engagement (e.g. better-written content, ease in making initial connections)
- More dedicated people (e.g. people working at EA orgs, researching AI safety/biosecurity/economics, giving over $1m/year) converted from moderate engagement due to better advanced engagement (e.g. more in-depth discussions about the pros and cons of AI) (note: in the future, we’ll probably avoid giving specific cause areas in our examples)
- Increase the impact of existing dedicated people (e.g. better research, coordination, decision-making)
Other notes on bottlenecks:
- “It feels like a lot of the thinking around EA is very centralized.”
- “I think ‘reaching more people’ and ‘not bouncing people of the right kind’ would look somewhat qualitatively different from the status quo.”
- “I’m very tempted to say ‘reaching the right people’, but I generally think we should try to make sure the bottom of the funnel is fixed up before we do more of that.”
- “Hypothesis: As EA subfields are becoming increasingly deep and specialized, it's becoming difficult to find people who aren't intimidated by all the understanding required to develop the ambition to become experts themselves.”
- “I think poor communications and lack of management capacity turn off a lot of people who probably are value-aligned and could contribute a lot. I think those two factors contribute to EAs looking weirder than we really are, and pose a high barrier to entry for a lot of outsiders.”
- “A more natural breakdown of these bottlenecks for me would be about the engagement/endorsement of certain types of people: e.g. experts/prestigious, rank and file contributors, fans/laypeople. In this breakdown, I think the most pressing bottleneck is the first category (experts/prestigious) and I think it's less important whether those people are slightly involved or heavily involved.”
Problems with the EA community/movement
Before getting into these results, I’ll note that we collected almost all survey responses before the event began; many sessions and conversations during the event, inspired by this survey, covered ways to strengthen effective altruism. It also seemed to me, subjectively, as though many attendees were cheered by the community’s recent progress, and generally optimistic about the future of EA. (I was onsite for the event and participated in many conversations, but I didn’t attend most sessions and I didn’t take the survey.)
CEA’s Ben West interviewed some of this survey’s respondents — as well as other employees of EA organizations — in more detail. His writeup includes thoughts from his interviewees on the most exciting and promising aspects of EA, and we’d recommend reading that alongside this data (since questions about problems will naturally lead to answers that skew negative).
Here are some specific problems people often mention. Which of them do you think are most significant? (Choose up to 3)
What do you think is the most pressing problem facing the EA community right now?
- “I think the cluster around vetting and training is significant. Ditto demographic diversity.”
- “I think a lot of social factors (many of which are listed in your next question: we are a very young, white, male, elitist, socially awkward, and in my opinion often overconfident community) turn people off who would be value aligned and able to contribute in significant ways to our important cause areas.”
- “People interested in EA being risk averse in what they work on, and therefore wanting to work on things that are pretty mapped out and already thought well of in the community (e.g. working at an EA org, EtG), rather than trying to map out new effective roles (e.g. learning about some specific area of government which seems like it might be high leverage but about which the EA community doesn't yet know much).”
- “Things for longtermists to do other than AI and bio.”
- “Giving productive and win-generating work to the EAs who want jobs and opportunities for impact.”
- “Failure to reach people who, if we find them, would be very highly aligned and engaged. Especially overseas (China, India, Arab world, Spanish-speaking world, etc). “
- “Hard to say. I think it's plausibly something related to the (lack of) accessibility of existing networks, vetting constraints, and mentorship constraints. Or perhaps something related to inflexibility of organizations to change course and throw all their weight into certain problem areas or specific strategies that could have an outsized impact.”
- “Relationship between EA and longtermism, and how it influences movement strategy.”
- “Perception of insularity within EA by relevant and useful experts outside of EA.”
- “Not reaching the best people well.”
- Answers from one respondent:
- (1) EA community is too centralized (leading to groupthink)
- (2) the community has some unhealthy and non-inclusive norms around ruthless utility maximization (leading to burnout and exclusion of people, especially women, who want to have kids)
- (3) disproportionate focus on AI (leading to overfunding in that space and a lot of people getting frustrated because they have trouble contributing in that space)
- (4) too tightly coupled with the Bay Area rationalist community, which has a bad reputation in some circles
What personally most bothers you about engaging with the EA community?
- “I dislike a lot of online amateurism.”
- “Abrasive people, especially online.”
- “Using rationalist vocabulary.”
- “The social skills of some folks could be improved.”
- “Insularity, lack of diversity.”
- “Too buzzword-y (not literally that, but the thing behind it).”
- “Perceived hostility towards suffering-focused views.”
- “People aren't maximizing enough; they're too quick to settle for 'pretty good'.”
- “Being associated with "ends justify the means" type thinking.”
- “Hubris; arrogance without sufficient understanding of others' wisdom.”
- “Time-consuming and offputting for online interaction, e.g. the EA Forum.”
- “Awkward blurring of personal and professional. In-person events mainly feel like work.”
- “People saying crazy stuff online in the name of EA makes it harder to appeal to the people we want.”
- “Obnoxious, intellectually arrogant and/or unwelcoming people - I can't take interested but normie friends to participate, [because EA meetups and social events] cause alienation with them.”
- “That the part of the community I'm a part of feels so focused on talking about EA topics, and less on getting to know people, having fun, etc.”
- “Tension between the gatekeeping functions involved in community building work and not wanting to disappoint people; people criticizing my org for not providing all the things they want.”
- “To me, the community feels a bit young and overconfident: it seems like sometimes being "weird" is overvalued and common sense is undervalued. I think this is related to us being a younger community who haven't learned some of life's lessons yet.”
- “People being judgmental on lots of different axes: some expectation that everyone do all the good things all the time, so I feel judged about what I eat, how close I am with my coworkers (e.g. people thinking I shouldn't live with colleagues), etc.”
- “Some aspects of LessWrong culture (especially the norm that saying potentially true things tactlessly tends to reliably get more upvotes than complaints about tact). By this, I *don't* mean complaints about any group of people's actual opinions. I just don't like cultures where it's socially commendable to signal harshness when it's possible to make the same points more empathetically.”
Most responses (both those above, and those which respondents asked we not share) included one or more of the following four “themes”:
- People in EA, or the movement as a whole, seeming arrogant/overconfident
- People in EA engaging in rude/socially awkward behavior
- The EA community and its organizations not being sufficiently professional, or failing to set good standards for work/life balance
- Weird ideas taking up too much of EA’s energy, being too visible, etc.
Below, we’ve charted the number of times we identified each theme:
What are some mistakes you're worried the EA community might be making? If in five years we really regret something we're doing today, what is it most likely to be?
- “The risk of catastrophic negative PR/scandal based on non-work aspects of individual/community behavior.”
- “Restricting movement growth to focus too closely on the inner circle.”
- “Overfunding or focusing too closely on AI work.”
- “Making too big a bet on AI and having it turn out to be a damp squib (which I think is likely). Being shorttermists in movement growth — pushing people into direct work rather than building skills or doing movement growth. Not paying enough attention to PR or other X-risks to the EA movement.”
- “Not figuring out how to translate our worldview into dominant cultural regimes.”
- “Focusing too much on a narrow set of career paths.”
- “Still being a community (for which EA is ill-suited) versus a professional association or similar.”
- “Not being ambitious enough, and not being critical enough about some of the assumptions we're making about what maximizes long-term value.”
- “Not focusing way more on student groups; not making it easier for leaders to communicate (e.g. via a group Slack that's actually used); focusing so much on the UK community.”
- “Not having an answer for what people without elitist credentials can do.”
- “Not working hard enough on diversity, or engagement with outside perspectives and expertise.”
- “Not finding more / better public faces for the movement. It would be great to find one or two people who would make great public intellectual types, and who would like to do it, and get them consistently speaking / writing.”
- “Careless outreach, especially in politically risky countries or areas such as policy; ill-thought-out publications, including online.”
- “Not thinking arguments through carefully enough, and therefore being wrong.”
- “I'm very uncertain about the current meme that EA should only be spread through high-fidelity 1-on-1 conversations. I think this is likely to lead to a demographic problem and ultimately to groupthink. I think we might be too quick to dismiss other forms of outreach.”
- “I think a lot of the problems I see could be natural growing pains, but some possibilities:
- (a) we are overconfident in a particular Bayesian-utilitarian intellectual framework
- (b) we are too insular and not making enough of an effort to hear and weigh the views of others
- (c) we are not working hard enough to find ways of holding each other and ourselves accountable for doing great work.”
The most common theme in these answers seems to be the desire for EA to be more inclusive and welcoming. Respondents saw a lot of room for improvement on intellectual diversity, humility, and outreach, whether to distinct groups with different views or to the general population.
The second-most common theme concerned standards for EA research and strategy. Respondents wanted to see more work on important problems and a focus on thinking carefully without drawing conclusions too quickly. If I had to sum up these responses, I’d say something like: “Let’s hold ourselves to high standards for the work we produce.”
Overall, respondents generally agreed that EA should:
- Improve the quality of its intellectual work, largely by engaging in more self-criticism and challenging some of its prior assumptions (and by promoting norms around these practices).
- Be more diverse in many ways — in the people who make up the community, the intellectual views they hold, and the causes and careers they care about.
Having read these answers, my impression is that participants hoped the community would continue to foster the kindness, humility, and openness to new ideas that people associate with the best parts of EA, and would make changes when that isn’t happening. (This spirit of inquiry and humility was quite prevalent at the event; I heard many variations on “I wish I’d been thinking about this more, and I plan to do so once the Forum is over.”)
Once again, we’d like to emphasize that these results are not meant to be representative of the entire EA movement, or even the views of, say, the thousand people who are most involved. They reflect a small group of participants at a single event.
Some weaknesses of the survey:
- Many respondents likely answered these questions quickly, without doing serious analysis. Some responses will thus represent gut reactions, though others likely represent deeply considered views (for example, if a respondent had been thinking for years about issues related to a particular question).
- The survey included 33 people from a range of organizations, but not all respondents answered each question. The average number of answers across multiple-choice or quantitative questions was 30. (All qualitative responses have been listed, save for responses from two participants who asked that their answers not be shared.)
- Some questions were open to multiple interpretations or misunderstandings. We think this is especially likely for “bottleneck” questions, as we did not explicitly state that each question was meant to refer to a stage in the “funnel” model.