I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, Local Groups Survey, and a number of studies on moral psychology, focusing on animal welfare, population ethics, and moral weights.
In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.
I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.
Thanks!
I find the proportion of people who have heard of EA even after adjusting for controls to be extremely high. I imagine some combination of response bias and just looking up the term is causing overestimation of EA knowledge.
Just so I can better understand where and the extent to which we might disagree, what kind of numbers do you think are more realistic? We make the case ourselves in the write-up that, due to over-claiming, we would generally expect these estimates to err on the side of over-estimating those who have heard of and have a rough familiarity with EA, that one might put more weight on the more 'stringent' coding, and that one might want to revise even these numbers down, given the evidence we mention that even that category of responses seems to be associated with over-claiming, which could take the numbers down to around 2%. I think there are definitely reasonable grounds to believe the true number is lower (or higher) than 2% (and note the initial estimate itself ranged from around 2-3% if we look at the 95% HDI), but around 2% doesn't strike me as "extremely high."
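(For anyone curious how an interval like that 95% HDI can be derived, here's a minimal sketch using a simple Beta-Binomial model. The counts below are made-up placeholders, not our survey data, and this isn't our actual analysis code.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents = 6000   # hypothetical sample size
n_heard = 150          # hypothetical count coded as having heard of EA

# Uniform Beta(1, 1) prior gives a Beta posterior over the true proportion.
samples = rng.beta(1 + n_heard, 1 + n_respondents - n_heard, size=100_000)

# 95% HDI: the narrowest interval containing 95% of the posterior samples.
sorted_s = np.sort(samples)
n_in = int(0.95 * len(sorted_s))
widths = sorted_s[n_in:] - sorted_s[:-n_in]
i = int(np.argmin(widths))
print(f"mean {samples.mean():.2%}, 95% HDI [{sorted_s[i]:.2%}, {sorted_s[i + n_in]:.2%}]")
```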
For context, I think it's worth noting, as we discuss in the conclusion, that these numbers are lower than any of the previous estimates, and I think our methods of classifying people as having heard of EA were generally more conservative. So I think some EAs have been operating with more optimistic numbers and would endorse a more permissive classification of whether people seem likely to have heard of EA (these numbers might suggest a downward update in that context).
Given that I expect EA knowledge to be extremely low in the general population, I’m not sure what the point of doing these surveys is. It seems to me you’re always fighting against various forms of survey bias that are going to dwarf any real data. Doing surveys of specific populations seems a more productive way of measuring knowledge.
I think there are a variety of different reasons, some of which we discuss in the post.
Peter Singer seems to be higher profile than the other EAs on your list. How much of this do you think is from popular media, like The Good Place, versus from just being around for longer?
Interesting question. It does seem clear that Peter Singer is known more broadly (including among those who haven’t heard of EA, and for some reasons unrelated to EA). It also seems clear that he was a widely known public figure well before ‘The Good Place’ (it looks like he was described as “almost certainly the best-known and most widely read of all contemporary philosophers” back in 2002, as one example).
So, if the question is whether he’s more well known due to popular media (narrowly construed) like The Good Place, it seems likely the answer is ‘no.’ If the question is whether he’s more well known due to his broader, public intellectual work, in contrast to his narrowly academic work, then that seems harder to assign credit for, since much of his academic work has been extremely popular and arguably a prerequisite of the broader public intellectual work.
If the question is more whether he’s more well known than some of the other figures listed primarily because of being around longer, that seems tough to answer, since it implies speculation about how prominent some of those other figures might become with time.
I wonder if people who indicated they only heard about Peter Singer (as opposed to only hearing about MacAskill, Ord, Alexander, etc.) scored lower on ratings of understanding EA?
This is another interesting question. However, a complication here is that I think we’d generally expect people who have heard of more niche figures associated with X to be more informed about X than people who have only heard of a very popular figure associated with X for indirect reasons (unrelated to the quality of information transmitted from those figures).
Also kinda sad EA is being absolutely crushed by taffeta.
Agreed. I had many similar experiences while designing this survey, where I conducted various searches to try to identify terms that were less well known than ‘effective altruism’ and kept finding that they were much more well known. (I remember one dispiriting example was finding that the 'Cutty Sark' seemed to be much more widely known than effective altruism).
For considering "recruitment, retention, and diversity goals" I think it may also be of interest to look at cause preferences across length of time in EA, across years. Unlike in the case of engagement, we have data on length of time in EA for every year of the EA Survey, rather than just two years.
Although EAS 2017 messes up what is otherwise a beautifully clear pattern*, we can still quite clearly see that:
* EAS 2017 still broadly shows the same pattern until the oldest cohorts (those who have been in EA the longest, which have a very low sample size). In addition, as Appendix 1 shows, while EAS 2018-2020 have very similar questions, EAS 2015-2017 included quite different options in their questions.
I've included a plot excluding EAS 2017 below, just so people can get a clearer look at the most recent years, which are more comparable to each other.
Fwiw, my intuition is that EA hasn't historically been selecting against, e.g., good epistemic traits, since I think that the current community has quite good epistemics by the standards of the world at large (including the demographics EA draws on).
I think it could be the case that EA itself selects strongly for good epistemics (people who are going to be interested in effective altruism have much higher epistemic standards than the world at large, even matched for demographics), and that this explains most of the gap you observe, but also that some actions/policies by EAs still select against good epistemic traits (albeit in a smaller way).
I think these latter selection effects, to the extent they occur at all, may happen despite (or, in some cases, because of) EA's strong interest in good epistemics. e.g. EAs care about good epistemics, but the criteria they use to select for them are, in practice, whether the person expresses positions/arguments they believe are good ones, and this functionally selects more for deference than for good epistemics.
Thanks for the nice comment!
Do you have data on the trends over time? I’m interested to know if the three attributes are getting closer together or further apart at both ends of the engagement spectrum.
We only have a little data on the interaction between engagement and cause preference over time, because we only had those engagement measures in the EA Survey in 2019 and 2020. We were also asked to change some of the cause categories in 2020 (see Appendix 1), so comparisons across the years are not exact.
Still, just looking at differences between those two years, we see the patterns are broadly similar. True to your prediction, longtermism is slightly higher among the less engaged in 2020 than in 2019, although the overall interaction between engagement, cause prioritisation and year is not significant (p=0.098). (Of course, differences between 2019 and 2020 need not be explained by particular EAs changing their views: they could be explained by non-longtermists dropping out, or by EAs with different cause preferences being more/less likely to become more engaged between 2019 and 2020.)
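(If anyone wants to probe this kind of three-way interaction themselves, here's an illustrative sketch of one way to test it. The file and column names are hypothetical, and this isn't our actual analysis code.)

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per respondent-cause rating, with
# columns `rating` (numeric), `engagement`, `cause_group`, and `year`.
df = pd.read_csv("eas_cause_ratings.csv")  # hypothetical file name

model = smf.ols("rating ~ C(engagement) * C(cause_group) * C(year)", data=df).fit()

# The p-value on the C(engagement):C(cause_group):C(year) row tests the
# overall three-way interaction.
print(anova_lm(model, typ=2))
```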
EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys
This is somewhat less true when one looks at the results across engagement levels. Among the less engaged ~50% of EAs (levels 1-3), neartermist causes are much more popular than longtermism. For level 4 EAs, the average ratings of neartermist, longtermist and meta causes are roughly similar, though with neartermism a bit lower. And among the most highly engaged (level 5) EAs, longtermist and meta causes are dramatically more popular than neartermist causes.
Descriptively, this adds something to the picture described here (based on analyses we provided), which is that although the most engaged level 5 EAs are strongly longtermist on average, the still highly engaged level 4s are more mixed. (Level 5 corresponds roughly to EA org staff and group leaders, while level 4 is people who've "engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup)".)
One thing that does bear emphasising is that even among the most highly engaged EAs, neartermist causes do not become outright unpopular in absolute terms. On average they are rated around the midpoint as "deserv[ing] significant resources." I agree this may not be (or may not seem to be) reflected in elite recommendations about what people should support on the margins, though.
Back when LEAN was a thing, we had a model of the value of local groups based on the estimated # of counterfactual actively engaged EAs, GWWC pledges, and career changes, taking their value from 80,000 Hours' $ valuations of career changes of different levels.
The numbers would all be very out of date now though, and the EA Groups Surveys post 2017 didn't gather the data that would allow this to be estimated.
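(Roughly, the structure was a sum of estimated counterfactual outcomes weighted by per-outcome valuations. A minimal sketch follows; the counts and dollar figures are made-up placeholders, not LEAN's or 80,000 Hours' actual numbers.)

```python
# Illustrative only: all counts and valuations are placeholders.
estimated_outcomes = {          # counterfactual outcomes per group
    "actively_engaged_ea": 10,
    "gwwc_pledge": 5,
    "significant_career_change": 2,
}
valuation_usd = {               # hypothetical $ value per outcome
    "actively_engaged_ea": 20_000,
    "gwwc_pledge": 30_000,
    "significant_career_change": 150_000,
}

group_value = sum(n * valuation_usd[k] for k, n in estimated_outcomes.items())
print(f"Estimated value of the group: ${group_value:,}")
```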
I also agree this would be extremely valuable.
I think we would have had the capacity to do difference-in-differences analyses (or even simpler analyses of pre-post differences in groups with or without community building grants, full-time organisers etc.) if the outcome measures tracked in the EA Groups Survey had not been changed across iterations and, especially, if we had run the EA Groups Survey more frequently (data has only been collected 3 times since 2017 and was not collected before we ran the first such survey that year).
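(To make that concrete, here's a hedged sketch of the kind of difference-in-differences estimate this data would have enabled, assuming a hypothetical panel with one row per group per survey wave. The file, columns, and variables are illustrative, not actual EA Groups Survey fields.)

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: `outcome` (e.g. active members), `treated` (1 if the
# group received a community building grant / full-time organiser),
# `post` (1 for waves after the intervention), `group_id` for clustering.
df = pd.read_csv("ea_groups_panel.csv")  # hypothetical file name

# The coefficient on treated:post is the difference-in-differences estimate.
did = smf.ols("outcome ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["group_id"]}
)
print(did.summary())
```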
Makes sense!
One other thing I'd flag is that, although I think it's very plausible that there is a cross-over interaction effect (such that people who are predisposed to be positively inclined to EA prefer the "Effective Altruism" name and people who are not so predisposed prefer the "Positive Impact" name), the data you mention doesn't necessarily suggest that.
i.e. (although I may be mistaken) it broadly sounds like you asked people beforehand (many of whom liked PISE) and you later asked a different set of people who already had at least some exposure to effective altruism (who preferred EAE). But I would expect people who've been exposed to effective altruism (even a bit) to become more inclined to prefer the name with "effective altruism" in it. So what we'd want to do is expose a set of people (with no exposure to EA) to the names and observe differences between those who are more or less positively predisposed to EA (or even track them to see whether they, in fact, go on to engage with EA long term).
Thanks for spotting! Edited.