David_Moss

I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies in moral psychology, focusing on animals, population ethics, and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.

Sequences

EA Survey 2020

Comments

How many people have heard of effective altruism?

Thanks!

I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.

Just so I can better understand where, and to what extent, we might disagree: what kind of numbers do you think are more realistic? We make the case ourselves in the write-up that, due to over-claiming, we would generally expect these estimates to err on the side of over-estimating those who have heard of and have a rough familiarity with EA; that one might put more weight on the more 'stringent' coding; and that one might want to revise even these numbers down, given the evidence we mention that even that category of responses seems to be associated with over-claiming, which could take the numbers down to around 2%. I think there are definitely reasonable grounds to believe the true number is lower (or higher) than 2% (and note the initial estimate itself ranged from around 2-3% if we look at the 95% HDI), but around 2% doesn't strike me as "extremely high."
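
To make the '95% HDI' concrete, here is a minimal sketch of how such an interval can be computed for a proportion from a beta posterior. The counts below are purely illustrative (not our actual data), and this is a simplification rather than our actual estimation code:

```python
import numpy as np

def hdi(samples, prob=0.95):
    """Narrowest interval containing `prob` of the posterior samples."""
    s = np.sort(samples)
    n_kept = int(np.floor(prob * len(s)))
    widths = s[n_kept:] - s[: len(s) - n_kept]
    i = np.argmin(widths)
    return s[i], s[i + n_kept]

rng = np.random.default_rng(0)

# Illustrative counts only: suppose 120 of 6,000 respondents pass the
# stringent screen. A Beta(1, 1) prior updated with these counts gives a
# Beta(1 + 120, 1 + 5880) posterior over the true proportion.
passed, n = 120, 6_000
posterior = rng.beta(1 + passed, 1 + n - passed, size=100_000)

low, high = hdi(posterior, prob=0.95)
print(f"Posterior mean: {posterior.mean():.2%}, 95% HDI: [{low:.2%}, {high:.2%}]")
```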

For context, I think it's worth noting, as we discuss in the conclusion, that these numbers are lower than any of the previous estimates, and I think our methods of classifying who had heard of EA were generally more conservative. So I think some EAs have been operating with more optimistic numbers and would endorse a more permissive classification of whether people seem likely to have heard of EA (these numbers might suggest a downward update in that context).

Given that I expect EA knowledge to be extremely low in the general population, I’m not sure what the point of doing these surveys is. It seems to me you’re always fighting against various forms of survey bias that are going to dwarf any real data. Doing surveys of specific populations seems a more productive way of measuring knowledge.

I think there are a variety of different reasons, some of which we discuss in the post.

  • Firstly, these surveys can confirm whether awareness of EA is generally low, which, as I note above, isn't completely uncontroversial, and which, as we discuss in the post, seems to be generally suggested by these numbers whatever the picture in terms of over-claiming (i.e. the estimates at least suggest that the proportion who have heard of EA according to the stringent standard is <3%).
  • I think just doing surveys on "specific populations" (I assume you implicitly have in mind populations where we expect the percentage to be higher) has some limitations, although it's certainly still valuable. Notably, our data, drawn from the general population (but providing estimates for specific populations), seem broadly in accord with the data from specific populations (notwithstanding that our estimates seem somewhat lower and more conservative). So we should do both, with each source of data providing a check on the other.
    • I think this is particularly valuable given that it is very difficult to get representative samples from "specific populations." It's sometimes possible to approximate this, and one could apply weighting for basic demographics in a college setting, for example, but this is generally more difficult and what you can do is more limited. And for other "specific populations" (where we don't have population data) this would be impossible.
    • I also think this applies in cases like estimating how many US students have heard of EA, where taking a large representatively weighted sample, as we did here, and getting estimates for different kinds of students seems likely to give better estimates than just specifically sampling US students without representative weighting, as in the earlier CEA-RP brand survey we linked. (A sketch of the kind of weighting I have in mind follows this list.)
  • I think that getting estimates in the general population (where our priors might be that the percentages are very low) also provides valuable calibration for populations where our priors may allow that the percentages are much higher (but are likely much more uncertain). If we just look at estimates in these specific populations, where we think percentages could be much higher, it is very hard to calibrate those estimates against anything to see if they are realistic. If we think the true percentage in some specific population could be as high as 30% or could be much lower, it is hard to test whether measures suggesting the true figure is ~20% are well-calibrated. However, if we've employed these or similar measures in the general population, then we can get a better sense of how the measures are performing and whether they are under-estimating or over-estimating (i.e. whether our classification is too stringent or too permissive).
    • I think we get this kind of calibration/confirmation when we compare our estimates to those in Caviola et al.'s recent survey, as we discuss in the conclusion. Since we employed quite similar measures, and found broadly similar estimates for that specific population, if you have strong views that the measures are generally over-estimating in one case, then you could update your views about the results using similar measures accordingly (and, likewise, you can generally get independent confirmation by comparing the two results and seeing they are broadly similar). Of course, that would just be informal calibration/confirmation; more work could be done to assess measurement invariance and the like.
    • I would also add that even if you are very sceptical about the absolute level of awareness of terms directly implied by the estimates, due to general over-claiming, you may still be able to draw inferences about the awareness of effective altruism relative to other terms (and if you have a sense of the absolute prevalence of those terms, this may also inform you about the overall level of awareness of EA). For example, comparing the numbers (unscreened) claiming to have heard of different terms, we can see that effective altruism is substantially less commonly cited than 'evidence-based medicine', 'cell-based meat', and 'molecular gastronomy', but more commonly cited than various other terms, which may give a sense of upper and lower bounds on the level of awareness, relative to these other terms. One could also compare estimates for some of these terms to their prevalence as estimated in other studies (though these tend not to be representative), e.g. Brysbaert et al. (2019), to get another reference point.
  • Likewise, data from the broader population seem necessary to assess many differences across groups (and so, more generally, what influences exposure to and interest in EA). As noted, previous surveys of specific populations found interesting, suggestive associations between various variables and whether people had heard of or were interested in EA. But since these were focused on specific populations, we would expect these associations to be attenuated (or otherwise influenced) by range restriction or other limitations. If you only look at specific populations that are highly educated, high SAT, high SES, low age etc., then it's going to be very difficult to assess the influence of any of these variables (see the simulation sketch below). So, insofar as we are interested in these results, it seems necessary to conduct studies on broader populations; otherwise we can't get informative estimates of the influence of these different factors (which are probably, implicitly, driving choices about which specific populations we would otherwise choose to focus on).
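
To make the weighting point concrete, here is a minimal post-stratification sketch. The demographic cells and shares are hypothetical, and a real survey would weight on several crossed demographics (or use raking):

```python
import pandas as pd

# Hypothetical sample: respondents tagged with a single demographic cell.
sample = pd.DataFrame({
    "respondent_id": range(8),
    "age_group": ["18-24", "18-24", "18-24", "25-34",
                  "25-34", "35-44", "35-44", "45+"],
})

# Hypothetical population shares (e.g. from census data).
population_share = pd.Series(
    {"18-24": 0.15, "25-34": 0.20, "35-44": 0.20, "45+": 0.45}
)

sample_share = sample["age_group"].value_counts(normalize=True)

# Post-stratification weight for each cell: population share / sample share.
weights = (population_share / sample_share).rename("weight")

sample = sample.merge(weights.to_frame(), left_on="age_group", right_index=True)
print(sample)  # over-sampled cells get weights < 1, under-sampled cells > 1
```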
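
And for the range-restriction point, a small simulation (all numbers made up) showing how truncating on a trait attenuates the correlation you can observe with an outcome:

```python
import numpy as np

rng = np.random.default_rng(0)

# Trait (e.g. educational attainment) and outcome (e.g. interest in EA)
# correlated at r = 0.5 in the full population.
n = 100_000
trait = rng.standard_normal(n)
outcome = 0.5 * trait + np.sqrt(1 - 0.5**2) * rng.standard_normal(n)

full_r = np.corrcoef(trait, outcome)[0, 1]

# Restrict to a "specific population": the top 10% on the trait,
# mimicking sampling only from highly selective settings.
top = trait > np.quantile(trait, 0.90)
restricted_r = np.corrcoef(trait[top], outcome[top])[0, 1]

print(f"Full population r:   {full_r:.2f}")        # ~0.50
print(f"Restricted sample r: {restricted_r:.2f}")  # attenuated, roughly 0.2
```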
How many people have heard of effective altruism?

Peter Singer seems to be higher profile than the other EAs on your list. How much of this do you think is from popular media, like The Good Place, versus from just being around for longer? 

Interesting question. It does seem clear that Peter Singer is known more broadly (including among those who haven’t heard of EA, and for some reasons unrelated to EA). It also seems clear that he was a widely known public figure well before ‘The Good Place’ (it looks like he was described as “almost certainly the best-known and most widely read of all contemporary philosophers” back in 2002, as one example). 

So, if the question is whether he’s more well known due to popular media (narrowly construed) like The Good Place, it seems likely the answer is ‘no.’ If the question is whether he’s more well known due to his broader, public intellectual work, in contrast to his narrowly academic work, then that seems harder to assign credit for, since much of his academic work has been extremely popular and arguably a prerequisite of the broader public intellectual work. 

If the question is more whether he’s more well known than some of the other figures listed primarily because of being around longer, that seems tough to answer, since it implies speculation about how prominent some of those other figures might become with time. 

I wonder if people who indicated they only heard about Peter Singer (as opposed to only hearing about MacAskill, Ord, Alexander, etc.) scored lower on ratings of understanding EA?

This is another interesting question. However, a complication here is that I think we’d generally expect people who have heard of more niche figures associated with X to be more informed about X than people who have only heard of a very popular figure associated with X for indirect reasons (unrelated to the quality of information transmitted from those figures).

Also kinda sad EA is being absolutely crushed by taffeta. 

Agreed. I had many similar experiences while designing this survey, where I conducted various searches to try to identify terms that were less well known than ‘effective altruism’ and kept finding that they were much more well known. (I remember one dispiriting example was finding that the 'Cutty Sark' seemed to be much more widely known than effective altruism).

EA is more than longtermism

For considering "recruitment, retention, and diversity goals", I think it may also be of interest to look at cause preferences across length of time in EA, across years. Unlike in the case of engagement, we have data on length of time in EA for every year of the EA Survey, rather than for just two years.

Although EAS 2017 messes up what is otherwise a beautifully clear pattern*, we can still quite clearly see that:

  • On average, people start out (0 years) in EA favouring neartermist causes, and cohorts gradually become more longtermist. (Note that this is entirely compatible with non-longtermists dropping out, rather than describing individual change: though we know many individuals do change cause prioritization, predominantly in a longtermist direction.)
  • Each year (going up the graph vertically) has gradually become more longtermist, even among people who have only been in EA 0 years. Of course, this could partly be explained by non-longtermists dropping out within their first year of hearing about EA, but it could also reflect EA recruiting progressively more longtermist people.
  • We can also descriptively see that the jump between 2015 and 2018-2020 is quite dramatic. In 2015 all cohorts of EAs (however long they'd been in EA) were strongly neartermist-leaning. By 2018-2020, even people who'd just joined EA were dramatically more favourable to longtermism. And by 2020, even people who had been in EA a couple of years were on average roughly equally longtermist/neartermist-leaning and beginning to be, on average, longtermist-leaning.

* EAS 2017 still broadly shows the same pattern until the oldest cohorts (those who have been in EA the longest, which have a very low sample size). In addition, as Appendix 1 shows, while EAS 2018-2020 have very similar questions, EAS 2015-2017 included quite different options in their questions.

I've included a plot excluding EAS 2017 below, just so people can get a clearer look at the most recent years, which are more comparable to each other. (A sketch of how such a cohort plot can be produced follows the figure.)

[Plot: cause preferences by length of time in EA, across survey years, EAS 2017 excluded]

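For anyone who wants to reproduce this kind of cohort plot from the EA Survey data, a minimal sketch. The file and column names are hypothetical, and the actual variables differ across survey years:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical respondent-level data with columns: survey_year,
# years_in_ea, and a longtermist-vs-neartermist leaning score.
df = pd.read_csv("eas_all_years.csv")
df = df[df["survey_year"] != 2017]  # EAS 2017 used quite different options

# Mean leaning by cohort (years in EA) within each survey year.
cohort_means = (
    df.groupby(["survey_year", "years_in_ea"])["lt_leaning"]
      .mean()
      .unstack("survey_year")
)

cohort_means.plot(marker="o")
plt.xlabel("Years in EA")
plt.ylabel("Mean longtermist leaning")
plt.title("Cause preference by length of time in EA (EAS 2017 excluded)")
plt.show()
```
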
Bad Omens in Current Community Building

Fwiw, my intuition is that EA hasn't been selecting against e.g. good epistemic traits historically, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on).


I think it could be the case that EA itself selects strongly for good epistemics (people who are going to be interested in effective altruism have much higher epistemic standards than the world at large, even matched for demographics), and that this explains most of the gap you observe, but also that some actions/policies by EAs still select against good epistemic traits (albeit in a smaller way).

I think these latter selection effects, to the extent they occur at all, may happen despite (or, in some cases, because of) EA's strong interest in good epistemics. E.g. EAs care about good epistemics, but the criteria they use to select for good epistemics are, in practice, whether the person expresses positions/arguments they believe are good ones; this functionally selects more for deference than for good epistemics.

EA is more than longtermism

Thanks for the nice comment!

Do you have data on the trends over time? I’m interested to know if the three attributes are getting closer together or further apart at both ends of the engagement spectrum.

We only have a little data on the interaction between engagement and cause preference over time, because we only had those engagement measures in the EA Survey in 2019 and 2020. We were also asked to change some of the cause categories in 2020 (see Appendix 1), so comparisons across the years are not exact.

Still, just looking at differences between those two years, we see the patterns are broadly similar. True to your prediction, longtermism is slightly higher among the less engaged in 2020 than in 2019, although the overall interaction between engagement, cause prioritisation and year is not significant (p=0.098). (Of course, differences between 2019 and 2020 need not be explained by particular EAs changing their views: they could be explained by non-longtermists dropping out, or by EAs with different cause preferences being more/less likely to become more engaged between 2019 and 2020.)
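
For readers curious what that test looks like in practice, here is a minimal sketch of the kind of three-way interaction test involved. This is not our actual analysis code; the file and column names are hypothetical, and the real analysis may differ (e.g. in using ordinal models):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per respondent x cause rating,
# with columns for engagement level, cause area, and survey year.
df = pd.read_csv("eas_cause_ratings.csv")  # hypothetical file

# Model ratings with all two- and three-way interactions between
# engagement, cause area (neartermist/longtermist/meta), and year.
model = smf.ols("rating ~ C(engagement) * C(cause_area) * C(year)", data=df).fit()

# The F-test on the three-way term asks whether the engagement x cause
# pattern itself shifted between 2019 and 2020.
print(anova_lm(model, typ=2))
```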

EA is more than longtermism

EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys

This is somewhat less true when one looks at the results across engagement levels. Among the less engaged ~50% of EAs (levels 1-3), neartermist causes are much more popular than longtermism. For level 4/5 engagement EAs, the average ratings of neartermist, longtermist and meta causes are roughly similar, though with neartermism a bit lower. And among the most highly engaged EAs, longtermist and meta causes are dramatically more popular than neartermist causes.

Descriptively, this adds something to the picture described here (based on analyses we provided), which is that although the most engaged level 5 EAs are strongly longtermist on average, the still highly engaged level 4s are more mixed. (Level 5 corresponds roughly to EA org staff and group leaders, while level 4 is people who've "engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup)".)

One thing that does bear emphasising is that even among the most highly engaged EAs, neartermist causes do not become outright unpopular in absolute terms. On average they are rated around the midpoint, as "deserv[ing] significant resources." I agree this may not be (or seem to be) reflected in elite recommendations about what people should support on the margin, though.

FTX/CEA - show us your numbers!

Back when LEAN was a thing, we had a model of the value of local groups based on the estimated number of counterfactual actively engaged EAs, GWWC pledges and career changes, taking their value from 80,000 Hours' $ valuations of career changes of different levels.

The numbers would all be very out of date now though, and the EA Groups Surveys post-2017 didn't gather the data that would allow this to be estimated.

FTX/CEA - show us your numbers!

I also agree this would be extremely valuable. 

I think we would have had the capacity to do difference-in-differences analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-time organisers etc.) if the outcome measures tracked in the EA Groups Survey had not been changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017, and was not collected before we ran the first such survey in that year).
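
For anyone unfamiliar, here's a minimal sketch of the kind of difference-in-differences analysis I mean. All file and column names are hypothetical, and this assumes group-level panel data of the sort the Groups Survey could in principle have provided:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per group x survey wave, with columns
# group_id, active_members (outcome), granted (1 if the group ever
# received a community building grant), and post (1 for waves after
# grants were introduced). All names illustrative.
df = pd.read_csv("ea_groups_panel.csv")

# Classic two-period difference-in-differences: the 'granted:post'
# coefficient estimates the grant's effect on active membership,
# with standard errors clustered by group.
model = smf.ols("active_members ~ granted * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["group_id"]}
)
print(model.summary())
```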

“Should you use EA in your group name?” An update on PISE’s naming experiment

Makes sense! 

One other thing I'd flag is that, although I think it's very plausible that there is a cross-over interaction effect (such that people who are predisposed to be positively inclined towards EA prefer the "Effective Altruism" name and people who are not so predisposed prefer the "Positive Impact" name), the data you mention don't necessarily suggest that.

I.e. (although I may be mistaken) it broadly sounds like you asked people beforehand (many of whom liked PISE) and later asked a different set of people who already had at least some exposure to effective altruism (who preferred EAE). But I would expect people who've been exposed to effective altruism (even a bit) to become more inclined to prefer the name with "effective altruism" in it. So what we'd want to do is expose a set of people (with no exposure to EA) to the names and observe differences between those who are more or less positively predisposed to EA (or even track them to see whether they, in fact, go on to engage with EA long term). A sketch of how that interaction could be tested follows.
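
A minimal sketch, assuming a between-subjects experiment with hypothetical column names (the name shown, a pre-measured predisposition score, and a rating of the name):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical experiment: each respondent (no prior EA exposure) rates
# one randomly assigned group name; 'predisposition' is a pre-measured
# score for inclination towards EA ideas. Column names are illustrative.
df = pd.read_csv("name_experiment.csv")

# A cross-over interaction would show up as a significant name x
# predisposition term, with opposite simple effects of the name at
# low vs high predisposition scores.
model = smf.ols("name_rating ~ C(name) * predisposition", data=df).fit()
print(model.summary())
```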
