- EAs rate a wide variety of causes as requiring “significant resources”.
- Global Poverty remains the most popular single cause in our sample as a whole.
- There are substantial differences in cause prioritisation across groups. On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and AI and Long-Term Future causes more.
Respondents were asked to indicate how far they prioritised different causes from the following options:
- This cause should be the top priority (Please choose one)
- This cause should be a near-top priority
- This cause deserves significant resources but less than the top priorities
- I do not think this is a priority, but should receive some resources
- I do not think any resources should be devoted to this cause
- Not considered / Not sure
2455 out of 2601 (94%) self-identified EAs in our sample gave a response regarding at least one cause.
The simplest (though crudest) way to analyse these responses is to look at the number of ‘top priority’ responses for each cause.
As in previous years, Global Poverty was the largest single cause out of the options presented.
See the graph from 2017 (with slightly different categories) below for a rough comparison.
Note: images can be viewed in full size if opened in a new tab.
These analyses do not, however, correspond precisely to the traditional broader division between EA cause areas (Global Poverty, Existential Risk/Long-Term Future etc.), because the inclusion of multiple, more fine-grained categories within the Long-Term Future area in essence ‘splits the vote.’
In principle, one could account for this vote-splitting by combining the responses for all ‘Existential Risk/Catastrophic Risk/Long-Term Future’ causes. As a significant number of respondents (403, 16%) selected multiple causes as ‘top priority’, it would also be necessary to count each respondent who selected at least one cause in the general category as a single response for that category.
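The combining-and-deduplicating step described above can be sketched as follows. The cause names and respondent data here are purely illustrative, not actual survey data:

```python
# Hypothetical sketch: when several fine-grained causes are combined into one
# broader category, a respondent who marked more than one of them as "top
# priority" should count only once for the combined category.

LONG_TERM_FUTURE = {"AI", "Biosecurity", "Nuclear Security", "Existential Risk (other)"}

# Each respondent's set of "top priority" selections (illustrative).
respondents = [
    {"top_priorities": {"AI", "Biosecurity"}},              # counts once, not twice
    {"top_priorities": {"Global Poverty"}},
    {"top_priorities": {"Nuclear Security", "Global Poverty"}},
]

# One vote per respondent if ANY member cause of the category was chosen.
combined_votes = sum(
    1 for r in respondents if r["top_priorities"] & LONG_TERM_FUTURE
)
print(combined_votes)  # prints 2: two respondents chose at least one LTF cause
```

The set intersection makes the de-duplication explicit: multiple selections within the combined category still yield a single vote.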
A further complication is that the boundaries of the relevant broader clusters, and which responses fit within them, are unclear. AI, Biosecurity, Nuclear Security and Existential Risk (other) seem fairly uncontroversial fits for a rough, combined “Long-Term Future/Existential (or Catastrophic) Risk” cluster, though plausibly other “top priority” responses (e.g. improving decision-making, cause prioritisation, climate change or animal welfare) might also be motivated by Long-Term Future (if not existential risk) considerations. Conversely, at least some cause prioritisations which are nominally motivated by concern for the long-term future would perhaps not intuitively fit this category as usually understood by EAs (for example, prioritisations of moral circle expansion or wild-animal suffering), suggesting the need for more fine-grained cause options. To get better insight into these issues in future iterations of the EA Survey, we are considering adding a separate question about Long-Term Future cause prioritisation as a whole, alongside more specific cause options.
In the graph below, we combine AI, Biosecurity, Nuclear Security and Existential Risk (other) into one Long-Term Future category. When we do so, the gap between Global Poverty and (combined) Long-Term Future is substantially reduced, though it remains sizable.
Full Range of Responses
Looking at the full range of responses (rather than just single “top cause” selection) suggests a more even distribution of interest in causes across EAs.
In the likert graph, causes are listed in order of the percentage of respondents rating that cause as a near-top or top priority. Most listed causes received substantial support, with every cause being rated as deserving at least “significant resources” by more than 50% of respondents. Most causes were judged by 31-48% of respondents to be the “top priority” or “near the top priority” (the exceptions being Nuclear Security and Mental Health, each rated that highly by only 22% of respondents, and Global Poverty, which 65% of respondents rated as a top or near-top priority).
If we convert each of these options into a numerical point on a five-point scale (ranging from (1) ‘I do not think any resources should be devoted to this cause’ to (5) ‘This cause should be the top priority’), the mean ratings for all but two of the causes (Mental Health and Nuclear Security) fall between 3 and 4. Notably, the median rating for every cause was 3, except for Global Poverty, which was rated 4.
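The scoring just described amounts to a simple mapping from response options to numbers. A minimal sketch, using invented example responses rather than real survey data:

```python
# Map each response option onto the 1-5 scale described above.
from statistics import mean, median

SCALE = {
    "I do not think any resources should be devoted to this cause": 1,
    "I do not think this is a priority, but should receive some resources": 2,
    "This cause deserves significant resources but less than the top priorities": 3,
    "This cause should be a near-top priority": 4,
    "This cause should be the top priority": 5,
}
# "Not considered / Not sure" responses are excluded rather than scored.

# Illustrative responses for a single hypothetical cause.
example_responses = [
    "This cause should be the top priority",
    "This cause should be a near-top priority",
    "This cause deserves significant resources but less than the top priorities",
    "This cause deserves significant resources but less than the top priorities",
    "I do not think this is a priority, but should receive some resources",
]
scores = [SCALE[r] for r in example_responses]
print(mean(scores), median(scores))  # prints: 3.4 3
```

Treating an ordinal scale as numeric in this way is a simplification (it assumes equal spacing between options), which is why the ordinal regressions reported later are also worth examining.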
Predictors of cause preference
Identifying predictors of individual cause prioritisation was substantially more difficult than identifying influences on donation data. In this section we present the results of both multinomial and ordinal regressions as well as Multiple Correspondence Analysis. None of our models manage to account for more than 17% of the variance in cause preference data. They do, however, all appear to point to a similar set of factors being influential. Any interpretation of these findings should be highly tentative.
We also present simple descriptive analyses of the causes prioritised by different groups. Of course, these analyses especially cannot be read as suggesting a causal influence on cause selection, but they do show what causes different groups (such as EA Forum members vs non-members, within our sample) prioritise.
We examined differences in cause prioritisation across various groups within our sample.
Given that the EA Survey appears to receive responses from a fairly large number of people who are fairly new to the movement and/or not involved in many core EA groups or activities, we thought it would be interesting to examine the responses of a narrower population who were plausibly more actively engaged in and informed about core EA discussions. We initially chose to look at members of the EA Forum, who comprised about 20% of the total 2018 sample, at 523 people (we are informed that the number of active members of the EA Forum is in the ballpark of ~500, though there are considerably more inactive members).
There were substantial differences in “top priority” cause selections between Forum members and non-members. Most strikingly, Climate Change was the third most prioritised cause in the sample as a whole, but was significantly more popular among respondents who were not Forum members, while being among the least often selected causes among Forum members. Likewise, in line with our other analyses, AI, Cause Prioritisation and Meta Charity were selected as the top priority by a much higher proportion of Forum members, whereas Global Poverty was selected by a much lower proportion.
Interestingly, this pattern was evident even among the broader population of the EA Facebook group, whose members comprised more than 50% of the sample and which is less associated with in-depth, advanced content than the EA Forum. EA Facebook members showed lower support for Poverty and Climate Change and higher support for AI, Cause Prioritisation and Meta Charities than non-members.
We also examined differences between respondents who were and were not LessWrong members. As one would expect, given the historical focus of the site, there was substantially higher support for AI among LessWrong members than non-members.
Looking at mean preferences on the 5-point scale (rather than simply “top priority” selections) across these sub-groups, in the graph below, Global Poverty (together with Climate Change) and AI Risks appear to follow opposite patterns: Global Poverty was most endorsed by EAs who are members of neither the EA Forum nor LessWrong and least endorsed by those who are members of both, with the converse for AI Risks.
We also examined differences in cause prioritisation according to gender. According to the survey data, a plurality of both men and women chose Global Poverty as the top cause. However, men and women run in opposite directions on Climate Change and AI Risks: 15.7% of women chose Climate Change as the top priority (making it the second most preferred cause in this group) and 9.3% chose AI Risks, whereas for men the pattern is reversed (7.9% and 22.6%, respectively). Similarly, looking at the mean score across the scale, the trend is the same, with roughly a 1-point gender gap between women’s preference for Climate Change and men’s preference for AI Risks.
The following graph shows the extent of the gender difference across each cause. The Y-axis shows the difference in mean cause preference score between women and men.
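The quantity plotted on the Y-axis can be sketched as below. The preference scores here are invented purely for illustration, not drawn from the survey:

```python
# Hypothetical sketch: for each cause, the difference between women's and
# men's mean preference score on the 1-5 scale.
from statistics import mean

# Invented 1-5 preference scores, grouped by cause and gender.
scores_by_gender = {
    "Climate Change": {"women": [5, 4, 3], "men": [3, 3, 3]},
    "AI Risks":       {"women": [2, 3, 4], "men": [4, 4, 4]},
}

# Positive values mean women rate the cause higher on average than men.
gender_gap = {
    cause: mean(groups["women"]) - mean(groups["men"])
    for cause, groups in scores_by_gender.items()
}
print(gender_gap)
```

With these invented numbers, the gap is +1 for Climate Change and -1 for AI Risks, mirroring the direction (though not the magnitude) of the pattern described above.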
Of course, to reiterate what we said above, such descriptive differences across groups in our sample should not be taken as suggesting that gender causes differences in cause prioritisation e.g. because of possible confounders.
As one might expect, identifying animal welfare as a top priority was strongly associated with dietary choice. Unlike in our analysis last year, when we discussed vegetarians and vegans only as a single category, this year we can distinguish vegans from others in animal welfare prioritisation. Vegans make up a large plurality of those who rank animal welfare as a top or near-top cause, while vegetarians and those who eat meat make up roughly similar shares. As shown in the first graph below, 76.9% of vegans rank animal welfare as a top or near-top priority, compared to only 43.7% of vegetarians and 19% of those who eat meat. Similarly, as the second graph shows, 47% of those selecting Animal Welfare as a top or near-top priority were vegans, and 27% were vegetarians.
Regressions: Top Cause Selection
To investigate potential predictors of different cause selections, we ran multinomial logit regressions on Top Priority cause choice, in order to see the relative effects of potential predictors of choice while controlling for possible confounders. As Global Poverty is the most popular top priority it was used as the base category in the regression.
Drawing on trends in the descriptive statistics we looked at what EA groups respondents are members of, diet, demographics, and how and when individuals became involved in EA. (link to regression table)
Below we present the average marginal effects from substantive (effect size of ±10 percentage points) and statistically significant predictors for some select causes. While the following regression results are suggestive, the models explained relatively little of the variation in outcomes (Pseudo R^2: 0.17).
LessWrong membership was associated with an increased likelihood of selecting AI Risks as the top cause, as well as a lower likelihood of ranking Climate Change or Animal Welfare as the top priority. Likewise, being a member of the EA Forum was associated with a lower likelihood of selecting Global Poverty as the top cause and a higher likelihood of selecting AI. This effect was even larger for those who were members of both the EA Forum and LessWrong. Those who reported becoming involved in EA via GWWC were more likely to select Global Poverty as the top cause and less likely to select AI. In contrast, those who became involved via personal contact or 80,000 Hours were more likely to select AI as the top cause and less likely to select Global Poverty.
There also appears to be a general trend of a greater preference for AI Risks and a lesser preference for Global Poverty and Climate Change across LessWrong members, EA Forum members and members of both (LessWrong-EA Forum), though other than for the effects mentioned above, the confidence intervals (shown in the figure above) crossed zero. It may be tempting to interpret this as a general tendency for EAs to shift towards AI Risks, and away from Global Poverty and Climate Change, the more involved in EA (online) discussions they become, though this interpretation should, of course, be heavily caveated given its post hoc nature and the weakness of the model overall.
In addition, veganism was strongly associated with a greater likelihood of selecting Animal Welfare as the top cause. Women also appeared more likely to select Global Poverty and Climate Change as the top cause and less likely to select AI Risks.
Regression: Full Scale
Looking only at the Top Priority cause selection misses a large portion of the data and opinion within the EA community. Nevertheless, running ordinal regressions on the five-point scale points to many of the same predictors overall. For reasons of space and simplicity, only the most prominent results are discussed here. [For ordered logit models and more average marginal effects click here]
As suggested by the Top Priority-only model, indicators of being highly engaged in EA discussions, such as membership of the EA Forum, the EA Facebook group or LessWrong, and becoming more involved via personal contacts, are associated with placing AI Risks in a higher priority category and Global Poverty and/or Climate Change in lower categories. In contrast, getting into EA through The Life You Can Save (TLYCS) or Giving What We Can (GWWC) is associated with placing Global Poverty and Climate Change higher on the scale. This seems fairly intuitive, given the historical focus of TLYCS and GWWC on Global Poverty.
Getting into EA via 80,000 Hours also seems to be associated with rating Global Poverty lower on the scale and prioritising AI, Cause Prioritisation and other long-term future causes, along with Mental Health and Improving Rationality, more highly. In addition, receiving 80,000 Hours coaching is associated with a lower rating of Poverty and Climate Change and higher ratings of AI, Cause Prioritisation and Biosecurity. This is important given the growing influence of 80,000 Hours as a source for new EAs. Having shifted one’s career because of EA is also associated with placing AI Risks higher on the scale and with placing Global Poverty and Climate Change lower on the scale.
In most cases, the time at which a person first heard of EA did not prove to be a substantive or significant predictor when controlling for other factors, though there were small associations between hearing about EA more recently and increased support for Climate Change and Mental Health, as well as even smaller ones for Cause Prioritisation, Improving Rationality and Nuclear Security. At first glance, this runs counter to the difference found between “veterans” and “newcomers” in previous analyses, but it is understandable given that newer members are significantly less likely to be members of LessWrong or the EA Forum (-0.2561, p<0.01), which we do find to be predictors.
An EA’s age also appears to play a role, with older EAs more likely to give Climate Change top priority and younger EAs more likely to give top priority to AI Risk.
Multiple Correspondence Analysis
We also used multiple correspondence analysis (MCA) to look for patterns in cause prioritisation in our categorical variables. Variables relating to getting more involved in EA in different ways and membership in EA groups, as well as gender, diet, politics and career coaching, were examined. About 13% of observed variance was explained by the first two axes (link to external document). EA Facebook membership, local group membership, EA Forum membership, and becoming more involved in EA through a personal contact or local group were, in that order, the top five contributing variables to the first axis, where being a member of or involved with any of these groups corresponds to the right side of the axis. The top five contributors to the second axis were LessWrong membership, getting more involved via SSC and TLYCS, left-leaning politics and female gender, where LessWrong membership or SSC involvement correspond to the top of the second axis, and TLYCS involvement, left-leaning politics and female gender correspond to the bottom of this axis.
Generally, less involvement in the EA community corresponds to assigning a high ranking to Climate Change and Global Poverty. In this MCA biplot, the point cloud of individuals is colour-coded by cause prioritisation (ellipses give 95% confidence intervals) and shown against the positioning of the variables on the first two axes (to aid interpretation, only the top 20 contributing variables are shown).
This pattern contrasts with that observed for the rankings of AI and X-risks, where a high ranking corresponds to greater EA community engagement.
EA Survey data suggests that EAs continue to judge that a wide array of causes warrant “significant resources” and most listed causes are judged to be either the top priority or “near top priority” by more than a third of respondents.
We found substantial differences in cause prioritisation across different groups within our sample, with AI and other Long-Term Future causes receiving significantly more support (and Global Poverty less) among members of EA groups like the Forum and EA Facebook. Though neither our regressions nor our Multiple Correspondence Analysis could explain much of the variance in cause prioritisation and so should be interpreted very tentatively, they seem to point in a similar direction, with various forms of group membership, suggestive of more involvement in EA, being associated with greater support for Long-Term Future causes.
This concludes the planned posts for the 2018 EA Survey Series!
We will, however, also be following up with some supplementary posts examining involvement in different EA groups and influences on involvement, the geography of EA and looking in more depth at GWWC pledge retention and EA growth metrics.
This post was written, with accompanying analysis, by David Moss, Neil Dullaghan and Kim Cuddington.
Thanks to Peter Hurford, Tee Barnett, Luisa Rodriguez, Derek Foster, Jason Schukraft and others for review and editing.
The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.
Other Articles in the 2018 EA Survey Series
Future articles we write about the 2018 Survey will be added here.
Prior EA Surveys conducted by Rethink Charity