
Summary

  • EAs rate a wide variety of different causes as requiring “significant resources”.
  • Global Poverty remains the most popular single cause in our sample as a whole.
  • There are substantial differences in cause prioritisation across groups. On the whole, more involved groups appear to prioritise Global Poverty and Climate Change less and AI and Long-Term Future causes more.

Methodology

Respondents were asked to indicate how far they prioritised different causes from the following options:

  • This cause should be the top priority (Please choose one)
  • This cause should be a near-top priority
  • This cause deserves significant resources but less than the top priorities
  • I do not think this is a priority, but should receive some resources
  • I do not think any resources should be devoted to this cause
  • Not considered / Not sure

2455 out of 2601 (94%) self-identified EAs in our sample gave a response regarding at least one cause.

Top Causes

The simplest (though crudest) way to analyse these responses is to look at the number of ‘top priority’ responses for each cause.

As in previous years, Global Poverty was the largest single cause out of the options presented.



See the graph from 2017 (with slightly different categories) below for a rough comparison.



Note: images can be viewed in full size if opened in a new tab.

These figures do not map precisely onto the traditional broader division between EA cause areas (Global Poverty, Existential Risk/Long-Term Future etc.), however, because the inclusion of multiple, more fine-grained categories within the Long-Term Future area in essence ‘splits the vote.’

In principle, one could account for this vote-splitting by combining the responses for all ‘Existential Risk/Catastrophic Risk/Long-Term Future’ causes. Since a significant number of respondents (403, 16%) selected multiple causes as ‘top priority’, it would also be necessary to count each respondent who selected at least one cause in the general category as a single response for that category.
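For illustration, the combining-and-deduplicating step described above can be sketched in a few lines. The cause labels and the sample responses below are invented for the example, not actual survey data.

```python
# Sketch of combining fine-grained "top priority" responses into one broader
# cluster, counting each respondent at most once even if they picked several
# causes within that cluster.

LONG_TERM_CLUSTER = {
    "AI Risks", "Biosecurity", "Nuclear Security", "Existential Risk (other)",
}

def count_top_priority(responses, cluster):
    """responses: list of sets, each the causes one respondent marked 'top priority'."""
    return sum(1 for picked in responses if picked & cluster)

# A respondent who marks both AI and Biosecurity as top priority counts once:
sample = [
    {"AI Risks", "Biosecurity"},
    {"Global Poverty"},
    {"Nuclear Security", "Global Poverty"},
]
print(count_top_priority(sample, LONG_TERM_CLUSTER))  # 2
```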

A further complication, however, is that the boundaries of the relevant broader clusters, and which responses fit within them, are unclear. AI, Biosecurity, Nuclear Security and Existential Risk (other) seem fairly uncontroversial members of a rough, combined “Long-Term Future/Existential (or Catastrophic) Risk” cluster, though plausibly other “top priority” responses (e.g. improving decision-making, cause prioritisation, climate change or animal welfare) might also be motivated by Long-Term Future (if not existential risk) considerations. Conversely, at least some cause prioritisations which are nominally motivated by concern for the long-term future would perhaps not intuitively fit this category as usually understood by EAs (for example, prioritisations of moral circle expansion or wild-animal suffering), suggesting the need for more fine-grained cause options. For future iterations of the EA Survey, to get better insight into these issues, we are considering adding a separate question about Long-Term Future cause prioritisation as a whole, alongside more specific cause options.

In the graph below, we combine AI, Biosecurity, Nuclear Security and Existential Risk (other) into one Long-Term Future category. Doing so substantially reduces the gap between Global Poverty and (combined) Long-Term Future, though it remains significant.



Full Range of Responses

Looking at the full range of responses (rather than just single “top cause” selection) suggests a more even distribution of interest in causes across EAs.



In the likert graph, causes are listed in order of the percentage of respondents rating that cause a near-top or top priority. Most listed causes received substantial support: for every cause, at least 50% of respondents indicated that it should receive at least “significant resources.” Most causes were judged to be the “top priority” or “near the top priority” by 31-48% of respondents, the exceptions being Nuclear Security and Mental Health (each rated that highly by only 22% of respondents) and Global Poverty (rated a top or near-top priority by 65% of respondents).





If we convert each of these options into a numerical point on a five-point scale (ranging from (1) ‘I do not think any resources should be devoted to this cause’ to (5) ‘This cause should be the top priority’), the mean ratings for all but two of the causes (Mental Health and Nuclear Security) fall between 3 and 4. Notably, the median rating for every cause was 3, except for Global Poverty, which was rated 4.
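The recoding described above can be sketched as follows; the mapping is the one stated in the text, while the example responses are invented for illustration.

```python
# Recode the survey's response options onto a 1-5 scale and summarise.
from statistics import mean, median

SCALE = {
    "I do not think any resources should be devoted to this cause": 1,
    "I do not think this is a priority, but should receive some resources": 2,
    "This cause deserves significant resources but less than the top priorities": 3,
    "This cause should be a near-top priority": 4,
    "This cause should be the top priority": 5,
}
# "Not considered / Not sure" has no numeric value and is excluded.

def summarise(responses):
    scores = [SCALE[r] for r in responses if r in SCALE]
    return mean(scores), median(scores)

m, md = summarise([
    "This cause should be the top priority",
    "This cause should be a near-top priority",
    "This cause deserves significant resources but less than the top priorities",
    "Not considered / Not sure",
])
print(m, md)  # 4 4
```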



Predictors of cause preference

Identifying predictors of individual cause prioritisation was substantially more difficult than identifying influences on donation data. In this section we present the results of both multinomial and ordinal regressions as well as Multiple Correspondence Analysis. None of our models manage to account for more than 17% of the variance in cause preference data. They do, however, all appear to point to a similar set of factors being influential. Any interpretation of these findings should be highly tentative.

We also present simple descriptive analyses of the causes prioritised by different groups. Of course, these analyses especially cannot be read as suggesting a causal influence on cause selection, but they do show what causes different groups (such as EA Forum members vs non-members, within our sample) prioritise.

Descriptives

We examined differences in cause prioritisation across various different groups within our sample.

Given that the EA Survey appears to receive responses from a fairly large number of people who are fairly new to the movement and/or not involved in many core EA groups or activities, we thought it would be interesting to examine the responses of a narrower population who were plausibly more actively engaged and informed in core EA discussions. We chose initially to look at members of the EA Forum, who comprised about 20% of the total 2018 sample, at 523 people (we are informed that the number of active members on the EA Forum is in the ballpark of ~500 people, though there are considerably more inactive members).

There were substantial differences in “top priority” cause selections between Forum members and non-members. Most strikingly, Climate Change was the third most prioritised cause in the sample as a whole, but it was significantly more popular among respondents who were not Forum members, while being among the least often selected causes among Forum members. Likewise, in line with our other analyses, AI, Cause Prioritisation and Meta Charity were selected as top priority by a much higher proportion of Forum members, whereas Global Poverty was selected by a much lower proportion.



Interestingly, this pattern was evident even for the broader population of the EA Facebook group, which comprised more than 50% of the sample and is less associated with in-depth, advanced content than the EA Forum. EA Facebook members showed lower support for Poverty and Climate Change and higher support for AI, Cause Prioritisation and Meta Charities than non-members.



We also examined differences between respondents who were LessWrong members or not. As one would expect, given the historical focus of the site, there was substantially higher support for AI among LessWrong members than non-members.



Looking at mean preferences on the 5-point scale (rather than simply “top priority” selections) across these sub-groups, in the graph below, Global Poverty (together with Climate Change) and AI Risks appear to follow opposite patterns: Global Poverty was most endorsed by EAs who are members of neither the EA Forum nor LessWrong and least endorsed by members of both, with the converse holding for AI Risks.



We also examined differences in cause prioritisation by gender. According to the survey data, a plurality of both men and women chose Global Poverty as the top cause. However, men and women appear to run in opposite directions on Climate Change and AI Risks: 15.7% of women chose Climate Change as top priority (making it the second most preferred cause in this group) and 9.3% chose AI Risks, whereas for men the pattern was reversed (7.9% and 22.6%, respectively). The mean scores across the scale show the same trend, with roughly a 1-point gender gap between women’s preference for Climate Change and men’s preference for AI Risks.



The following graph shows the extent of the gender difference across each cause. The Y-axis shows the difference in mean cause preference score between women and men.



Of course, to reiterate what we said above, such descriptive differences across groups in our sample should not be taken as showing that gender causes differences in cause prioritisation, e.g. because of possible confounders.

As one might expect, identifying animal welfare as a top priority was strongly associated with dietary choice. Unlike last year, when we discussed vegetarians and vegans only as a single category, this year we can reveal a divide between vegans and others on animal welfare prioritisation. Vegans make up a large plurality of those who rank Animal Welfare as a top or near-top cause, while vegetarians and meat-eaters make up similar shares of that group. As shown in the first graph below, 76.9% of vegans rank Animal Welfare as a top or near-top priority, compared to only 43.7% of vegetarians and 19% of those who eat meat. Similarly, as the second graph shows, 47% of those selecting Animal Welfare as a top or near-top priority were vegans, and 27% were vegetarians.





Regressions: Top Cause Selection

To investigate potential predictors of different cause selections, we ran multinomial logit regressions on Top Priority cause choice, in order to see the relative effects of potential predictors of choice while controlling for possible confounders. As Global Poverty is the most popular top priority it was used as the base category in the regression.

Drawing on trends in the descriptive statistics, we looked at which EA groups respondents are members of, diet, demographics, and how and when individuals became involved in EA. (link to regression table)

Below we present the average marginal effects of substantive (effect size of at least ±10 percentage points) and statistically significant predictors for some select causes. While the following regression results are suggestive, the models explained only a modest amount of the variation in outcomes (Pseudo R^2: 0.17).
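As a rough illustration of the model form, the sketch below shows a multinomial logit with Global Poverty as the base category (its coefficients fixed at zero), and how a marginal effect of a predictor is a difference in predicted probabilities. All coefficient values are invented purely for the example; they are not estimates from the survey.

```python
import math

# Hypothetical coefficients: (intercept, effect of an 'EA Forum member' dummy).
BETA = {
    "Global Poverty": (0.0, 0.0),    # base category: fixed at zero
    "AI Risks": (-0.5, 1.2),         # made-up values for illustration
    "Climate Change": (-0.3, -0.8),  # made-up values for illustration
}

def predicted_probs(forum_member):
    """P(top cause = k | x) = exp(x . beta_k) / sum_j exp(x . beta_j)."""
    x = 1.0 if forum_member else 0.0
    utilities = {c: b0 + b1 * x for c, (b0, b1) in BETA.items()}
    denom = sum(math.exp(u) for u in utilities.values())
    return {c: math.exp(u) / denom for c, u in utilities.items()}

# The marginal effect of Forum membership on choosing AI Risks is the change
# in its predicted probability when the dummy flips from 0 to 1:
me_ai = predicted_probs(True)["AI Risks"] - predicted_probs(False)["AI Risks"]
assert abs(sum(predicted_probs(True).values()) - 1.0) < 1e-9
assert me_ai > 0  # with these made-up coefficients, membership raises P(AI Risks)
```

In the analysis itself, average marginal effects are obtained by averaging such differences over all respondents' covariate values; this sketch only shows the mechanics for a single binary predictor.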



LessWrong membership was associated with a higher likelihood of selecting AI Risks as the top cause, as well as a lower likelihood of ranking Climate Change and Animal Welfare as a top priority. Likewise, EA Forum membership was associated with a lower likelihood of selecting Global Poverty as the top cause and a higher likelihood of selecting AI. This effect was even larger for those who were members of both the EA Forum and LessWrong. Those who reported becoming involved in EA via GWWC were more likely to select Global Poverty as the top cause and less likely to select AI. In contrast, those who became involved via personal contact or 80,000 Hours were more likely to select AI as the top cause and less likely to select Global Poverty.

There also appears to be a general trend of greater preference for AI Risks and lesser preference for Global Poverty and Climate Change across LessWrong members, EA Forum members and members of both (LessWrong-EA Forum), though other than for the effects mentioned in the previous paragraph, the confidence intervals (shown in the figure above) crossed zero. It may be tempting to interpret this as a general trend for EAs to shift towards AI Risks, and away from Global Poverty and Climate Change, the more involved in EA (online) discussions they become, though this interpretation should, of course, be heavily caveated given its post hoc nature and the weakness of the model overall.

In addition, veganism was strongly associated with a greater likelihood of selecting Animal Welfare as the top cause. Women also appeared more likely to select Global Poverty and Climate Change as the top cause and less likely to select AI Risks.

Regression: Full Scale

Looking only at Top Priority cause selections misses a large chunk of the data and opinion within the EA community. Running ordinal regressions on the five-point scale, however, points towards many of the same predictors overall. For reasons of space and simplicity, only the most prominent results are discussed. [For ordered logit models and more average marginal effects click here]
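For readers unfamiliar with the ordered-logit form underlying these regressions, the sketch below shows its mechanics: ordered cutpoints divide a latent score into the five response categories, and a positive coefficient shifts probability mass towards higher categories. The cutpoints and the coefficient are invented for illustration, not fitted to the survey.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

CUTPOINTS = [-2.0, -0.5, 1.0, 2.5]  # hypothetical theta_1 < ... < theta_4
BETA_FORUM = 0.9                    # hypothetical effect of Forum membership

def category_probs(forum_member):
    """Return P(Y = 1) .. P(Y = 5) under an ordered logit model."""
    eta = BETA_FORUM * (1.0 if forum_member else 0.0)
    # Cumulative probabilities P(Y <= j) = logistic(theta_j - eta), plus 1.0 for the top.
    cum = [logistic(t - eta) for t in CUTPOINTS] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)
        prev = c
    return probs

p = category_probs(True)
assert abs(sum(p) - 1.0) < 1e-9  # a proper probability distribution
```

With these made-up numbers, a Forum member has more mass in the top categories than a non-member, which is the shape of association the text describes for AI Risks.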



As suggested by the Top Priority model, indicators of being highly engaged in EA discussions, such as membership of the EA Forum, the EA Facebook group and LessWrong, and becoming more involved via personal contacts, are associated with placing AI Risks in a higher priority category and Global Poverty and/or Climate Change in lower ones. In contrast, getting into EA through The Life You Can Save (TLYCS) and Giving What We Can (GWWC) is associated with placing Global Poverty and Climate Change higher on the scale. This seems fairly intuitive, given the historical focus of TLYCS and GWWC on Global Poverty.

Getting into EA via 80,000 Hours also seems to be associated with rating Global Poverty lower on the scale and prioritising AI, Cause Prioritisation and other long-term future causes, along with Mental Health and Improving Rationality, more highly. In addition, receiving 80,000 Hours coaching is associated with a lower rating of Poverty and Climate Change and higher ratings of AI, Cause Prioritisation and Biosecurity. This is important given the growing influence of 80,000 Hours as a source for new EAs. Having shifted one’s career because of EA is also associated with placing AI Risks higher on the scale and with placing Global Poverty and Climate Change lower on the scale.

In most cases, when a person first heard of EA proved neither substantive nor significant when controlling for other factors, though there were small associations between hearing about EA more recently and increased support for Climate Change and Mental Health, as well as even smaller ones for Cause Prioritisation, Improving Rationality and Nuclear Security. At first glance, this runs counter to the difference found between “veterans” and “newcomers” in previous analyses, but it is understandable given that newer members are significantly less likely to be members of LessWrong or the EA Forum (-0.2561, p<0.01), which we do find to be predictors.

An EA’s age also appears to play a role, with older EAs more likely to give Climate Change top priority and younger EAs more likely to give top priority to AI Risk.




Multiple Correspondence Analysis

We also used multiple correspondence analysis (MCA) to look for patterns in cause prioritisation across our categorical variables. We examined variables relating to becoming more involved in EA in different ways and membership of EA groups, as well as gender, diet, politics and career coaching. About 13% of the observed variance was explained by the first two axes (link to external document). EA Facebook membership, local group membership, EA Forum membership, and becoming more involved in EA through a personal contact or a local group were, in that order, the top five contributing variables to the first axis, where being a member of or involved with any of these groups corresponds to the right side of the axis. The top five contributors to the second axis were LessWrong membership, getting more involved via SSC and TLYCS, left-leaning politics and female gender, where LessWrong membership or SSC involvement corresponds to the top of the second axis, and TLYCS involvement, left-leaning politics and female gender correspond to the bottom.

Generally, less involvement in the EA community corresponds to assigning a higher ranking to Climate Change and Global Poverty. In this MCA biplot, the point cloud of individuals is colour-coded by cause prioritisation (ellipses give 95% confidence intervals) and shown against the positioning of the variables on the first two axes (to aid interpretation, only the top 20 contributing variables are shown).



This pattern contrasts with that observed for the ranking of AI and X-risks, where higher rankings correspond to greater EA community engagement.



Conclusions

EA Survey data suggests that EAs continue to judge that a wide array of causes warrant “significant resources” and most listed causes are judged to be either the top priority or “near top priority” by more than a third of respondents.

We found substantial differences in cause prioritisation across different groups within our sample, with AI and other Long-Term Future causes receiving significantly more support (and Global Poverty less) among members of EA groups like the Forum and EA Facebook. Though neither our regressions nor our Multiple Correspondence Analysis could explain much of the variance in cause prioritisation and so should be interpreted very tentatively, they seem to point in a similar direction, with various forms of group membership, suggestive of more involvement in EA, being associated with greater support for Long-Term Future causes.

Coda

This concludes the planned posts for the 2018 EA Survey Series!

We will, however, also be following up with some supplementary posts examining involvement in different EA groups and influences on involvement, the geography of EA and looking in more depth at GWWC pledge retention and EA growth metrics.

Credits

This post was written, with analysis, by David Moss, Neil Dullaghan and Kim Cuddington.

Thanks to Peter Hurford, Tee Barnett, Luisa Rodriguez, Derek Foster, Jason Schukraft and others for review and editing.

The annual EA Survey is a project of Rethink Charity with analysis and commentary from researchers at Rethink Priorities.

Supporting Documents

Other articles in the 2018 EA Survey Series:

I - Community Demographics & Characteristics

II - Distribution & Analysis Methodology

III - How do people get involved in EA?

IV - Subscribers and Identifiers

V - Donation Data

VII- EA Group Membership

VIII- Where People First Hear About EA and Higher Levels of Involvement

IX- Geographic Differences in EA

X- Welcomingness- How Welcoming is EA?

XI- How Long Do EAs Stay in EA?

XII- Do EA Survey Takers Keep Their GWWC Pledge?

Prior EA Surveys conducted by Rethink Charity:

The 2017 Survey of Effective Altruists

The 2015 Survey of Effective Altruists: Results and Analysis

The 2014 Survey of Effective Altruists: Results and Analysis

Comments

Another thing I'd be interested in seeing would be the percentage changes in support for causes year-on-year as that would indicate what the internal dynamics of the movement are. I'm (at least) partly motivated to see this because mental health, which I've written quite a lot on, may be the smallest top priority cause, but this is also the first time it's snuck into the list.

I think you can get a very rough sense of possible changes by comparing the results from different years (as in the first two graphs in the post), but given the difficulties in interpreting these differences I would be wary of presenting these as % changes. Aside from possible differences in the sample across different years, changing categories for causes would also obviously distort things (we start with a fairly strong presumption against changing categories for this reason, but in some cases, the development of Mental Health as a field being one, it's unavoidable).

Roger. Points taken.

You might also like seeing this report from last year on how cause preferences have changed.

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

“EA Survey 2018 Series: Cause Selections”, like the other posts in that series, makes important data from the EA Survey much easier to find. The summary and use of descriptive headings both increase readability, and the methodological details help to put the post’s numbers in context.

As a movement, we collect a lot of information about ourselves, and it’s really helpful when authors report that information in a way that makes it easier to understand. All the posts in this series are worth reading if you want to learn about the EA community.

Thanks for this. Were there any causes you considered adding beyond those stated? Those seems like the main causes EAs support, but it would be nice to include 'minor' ones to see what the community feeling is about those, e.g. wild animal suffering, education, social justice, immigration reform, etc.

Yeh, I certainly think this would be valuable, although it would need to be weighed against the fact that we already have more than 10 causes listed, which may be pushing it. We may be able to accommodate this by splitting out the questions into questions about broader cause areas and then about more specific causes.

If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I'm sure people have been asking as to whether EA engagement causes this, or if people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?

Either way: one _might_ conclude that "climate change" and "global poverty" are more "mainstream" priorities, where "mainstream" is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?

Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more "mainstream" or more "hegemonic" in some fashion? Bearing in mind that "mainstream" / distance from EA is a continuum, and it would be useful to sample multiple points on that continuum. (For example, "College Professors" might be representative of opinions that are both more mainstream and more hegemonic within a certain group)

(I'll try to come back to this comment and link any relevant data myself if I come across it later)

https://forum.effectivealtruism.org/posts/MDxaD688pATMnjwmB/to-grow-a-healthy-movement-pick-the-low-hanging-fruit

https://www.dropbox.com/s/9ywputy5v0qzu3t/Understanding Effective Givers.pdf?dl=0

This isn't really what I was looking for, but it's an "online national sample of Americans" polled on giving to deworming vs Make-A-Wish and the local choir. I'm hoping to find something more focused on the diversity of causes within EA, and on more well-defined and more adjacent populations.

I mentioned college professors above, but I can think of lots of different populations e.g ."students from specific colleges" or "members of adjacent online forums", or "startup founders" or 'doctors without borders people" or "teach for america people" or even "Non-EA friends and relatives of EAs" which might be illustrative as points of comparison - some easier to poll than others. Generally I think the most useful data comes from those who are representative of people who are already sort of adjacent to EA, represent key institutions, and whose buy-in would be most practically useful for movement building over decades, which is why I went for "college professors" first.

Thanks for your stimulating questions and comments Ishaan.

one might conclude that "climate change" and "global poverty" are more "mainstream" priorities, where "mainstream" is defined as the popular opinion of the populations from which EA is drawing. Would this be a valid conclusion?

This seems a pretty uncontroversial conclusion relative to many of the cause areas we asked about (e.g. AI, Cause Prioritization, Biosecurity, Meta, Other Existential Risk, Rationality, Nuclear Security and Mental Health)

Do people know of data on what cause prioritization looks like among various non-EA populations who might be defined as more "mainstream"

We don’t have data on how the general population would prioritize these causes; indeed, it would be difficult to gather such data, since most non-EAs would not be familiar with what many of these categories refer to.

We can examine donation data from the general population however. In the UK we see the following breakdown:

Charities Aid Foundation Giving Report 2018

As you can see, the vast majority of donor money is going to causes which don’t even feature in the EA causes list.

(For example, "College Professors" might be representative of opinions that are both more mainstream and more hegemonic within a certain group)

I imagine that college professors might be quite unrepresentative and counter-mainstream in their own ways. Examining (elite?) university students or recent graduates might be interesting (though unrepresentative) as a comparison, as a group that a large number of EAs are drawn from.

Elite opinion and what representatives from key institutions think seems like a further interesting question, though would likely require different research methods.

If EA engagement predicts relatively more support for AI relative to climate change and global poverty, I'm sure people have been asking as to whether EA engagement causes this, or if people from some cause areas just engage more for some other reason. Has anyone drawn any conclusions?

I think there are plausibly multiple different mechanisms operating at once, some of which may be mutually reinforcing.

  • People shifting in certain directions as they spend more time in EA/become more involved in EA/become involved in certain parts of EA: there certainly seem to be cases where this happens (people change their views upon exposure to certain EA arguments), and it seems to mostly be in the direction we found (people updating in the direction of Long-Term Future causes), so this seems quite plausible.
  • Differential dropout/retention across different causes: it seems fairly plausible that people who support causes which receive more official and unofficial sanction and status (LTF) would be more likely to remain in the movement than supporters of causes which receive less (and support for which is often implicitly or explicitly presented as indicating that you don’t really get EA). So it’s possible that people who support these other causes drop out in higher numbers than those who support LTF causes. (I know many people who talk about leaving the movement due to this, although none to my knowledge have.)
  • There could also be third factors which drive both higher EA involvement and higher interest in certain causes (perhaps in interaction with time in the movement or some such). Unfortunately, without individual-level longitudinal data we don’t know how far people are changing their views versus the composition of different groups changing.

In my area the main issues are economic inequality, social intolerance, immigration, incarceration, drug wars, and huge gaps in power between the 'meritocracy' (usually in universities) and those not in that class. (Social intolerance works many ways -- you can have homophobes worried about islamophobia, poor white nationalists worried about 'people of color' but not economic inequality, etc.) Then there are issues like trade and tariffs (e.g. US vs China and Mexico), and the environment.

The EA people I have met tend to be either grad students in applied sciences or philosophers, and I guess many in IT -- they don't speak the language of theoretical sciences.

One could probably do a behavioral-genetics-type analysis of 'heritability' or a cultural transmission model of who identifies with EA. (Most progressive types around this area identify more with what are called 'socialists', e.g. Bernie Sanders, E. Warren, and AOC -- all US politicians. They also tend to think that to deal with global poverty and other issues one has to deal with local issues as well.)