David_Moss

Principal Research Director @ Rethink Priorities
8553 karma · Joined · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, and I've previously worked for Charity Science and served as a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences
3

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments
577

Thanks for asking, ezrah. We currently plan to leave the survey open until December 31st, though it's possible we might extend the window, as we did last time.

I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.

The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides) are:

  • It seems quite plausible that people are less apt to adopt EA at younger ages because their thinking is 'less developed' in some relevant way that seems associated with interest in EA.
    • I think something related to but distinct from your factor (2) could also be an influence here: reaching out to people close to the time when they are making relevant decisions might be more effective at engaging them.
  • It also seems possible (though far from certain) that the counterfactual for many people engaged by outreach to younger age groups is that they could have been reached by outreach targeted at a later date, i.e. many people we reach as high schoolers could simply have been reached once they were at university.

These questions seem very uncertain, but also empirically tractable, so it's a shame that more hasn't been done to try to address them. For example, it seems relatively straightforward to compare the success rates of outreach targeting different ages. 

We previously did a little work to look at the relationship between the age when people first got involved in EA and their level of engagement. Prima facie, younger age of involvement seemed associated with higher engagement, though there's a relative dearth of people who joined EA at younger ages, making the estimates uncertain (when comparing <20s to early 20s, for example), and we'd need to spend more time on it to disentangle other possible confounds.
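
As a very rough illustration of the kind of analysis this would involve, here's a minimal sketch: a logistic regression of engagement on age of first involvement, controlling for join-year cohort as one possible confound. All data, variable names, and effect sizes below are invented for illustration; this is not our actual dataset or analysis.

```python
# Minimal sketch of the kind of analysis described above: does age of first
# involvement predict later engagement, after adjusting for a confound?
# All data, variable names, and effect sizes are INVENTED for illustration;
# this is not Rethink Priorities' actual dataset or analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    # Age at which the hypothetical respondent first got involved in EA
    "age_first_involved": rng.integers(16, 40, size=n),
    # Join-year cohort: a plausible confound, since earlier cohorts have
    # had more time to become highly engaged
    "join_year": rng.integers(2012, 2023, size=n),
})
# Simulate a binary "highly engaged" outcome depending on both variables
logit_p = 1.5 - 0.05 * df["age_first_involved"] - 0.1 * (df["join_year"] - 2012)
df["high_engagement"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression of engagement on age of first involvement,
# with join-year cohort entered as a categorical control
model = smf.logit("high_engagement ~ age_first_involved + C(join_year)", data=df).fit()
print(model.params["age_first_involved"], model.bse["age_first_involved"])
```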

  1. ^

    Or it might be that 'life stages' are the relevant factor rather than age per se, i.e. a younger person who's already an undergrad might have similar outcomes when exposed to EA as a typical-age undergrad, whereas reaching out to people while in high school (regardless of age) might be associated with negative outcomes.

I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. 

This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices, i.e.,

  • When someone says "AI will kill us all", do people understand us as expressing 100% confidence in extinction, or do they interpret it as mere hyperbole and rhetoric, and infer that what we actually mean is that AI will potentially kill us all or have other drastic effects?
  • When someone says "There's a high risk AI kills us all or disempowers us", do people understand this as us expressing very high confidence that it will kill us all, or as saying it almost certainly won't?

I think these questions are relevant in a variety of ways:

  • Whether overall public awareness is high or low seems relevant to outreach in various ways, in different scenarios.
    • For example, this came up just a few days ago here in a discussion of outreach. In addition to knowing overall sentiment, knowing the overall level of awareness of EA is important, since it informs us about the importance of, and potential for, change in sentiment (e.g., in this case, it seems very few people are even aware of EA at all, so even if negative sentiment had increased, its scope would be limited).
    • In general, after major public events pertaining to EA (like FTX), we might want to know whether these have affected awareness of EA (for good or ill), so we can respond accordingly.
    • Knowing the overall level of awareness of EA in the population (the 'top of the funnel') also informs us about the shape of the funnel, and how many people drop out after the first exposure stage, which is relevant to assessing how many people are interested in EA (as it is currently presented).
    • Still more generally, if we have any sense of what the ideal growth rate or size of EA should be (decision-makers' views on this are explored in the forthcoming results from the Meta Coordination Forum survey), then we presumably want to know where the actual growth rate or size falls relative to that.
  • Knowing about how awareness of EA varies across different groups is also relevant to our outreach.
    • For example, it could inform us about which groups we should be targeting more heavily to ensure we reach those groups.
    • It could also help identify which groups we are trying to reach but failing to make aware of EA (for whatever reason).
    • Moreover, if we know that some groups are more heavily represented in the EA community, then knowing how many people from those groups have heard of EA in the first place informs us about where in the funnel the problem lies (not hearing about EA; hearing about it but not liking it; or hearing about it, joining the community, and then dropping out, etc.). Our data does suggest some such disparities at the level of first awareness for both race and gender.
  • Knowing about public sentiment towards EA seems directly relevant for outreach.
    • For example, post-FTX there was much discussion about whether the EA brand had become so toxic that we should simply abandon it (which would have entailed huge costs, even if it had been the right thing to do on balance). I won't elaborate too much on this since it seems relatively straightforward.
  • Knowing about difference in sentiment across groups is also relevant.
    • For example, if sentiment dramatically differed between men and women, or other demographics, this would potentially suggest the need for change (whether in terms of our messaging or features of the community, etc.).

One move which is sometimes made to suggest that these things aren't relevant is to say that we only need to be concerned about awareness and attitudes among certain specific groups (e.g. policymakers or elite students). But even if we think that knowing about awareness of and attitudes towards EA among certain groups is highly important, it doesn't follow that broader public attitudes are not important.

  • For example, even in cases where EA is supported by elites (of whatever kind), action may be difficult in the face of broad public opposition.
  • The attitudes of elites (or whatever other specific, narrow group we think is of interest) and broader public opinion are not completely independent, so broader awareness and attitudes are likely to filter into whatever other group we're interested in.
  • I think we actually are interested in the awareness, attitudes and involvement of a broader public, not just specific narrow groups, particularly in the long-term. At the least, some subsets of EA are interested in this, even if other subsets of EA actors might be focused more narrowly on particular groups.[1]
  1. ^

    As a practical matter, it's also worth bearing in mind that large representative surveys like this can generate estimates for some niche subgroups, just not really niche ones like elite policymakers, particularly with larger sample sizes.

We didn't directly examine why worry is increasing across these surveys. I agree that would be an interesting thing to examine in additional work.

That said, when we asked people why they agreed or disagreed with the CAIS statement, people who agreed mentioned a variety of factors, including "tech experts" expressing concerns, the fact that they had seen Terminator etc., and directly observing characteristics of AI (e.g. that it seemed to be learning faster than we would be able to handle). In the CAIS statement writeup, we only examined the reasons why people disagreed (the responses of those who agreed tended to be more homogeneous, because many people were just saying, roughly, that it's a serious threat), but we could potentially do further analysis of why they agreed. We'd also be interested to explore this in future work.

It's also perhaps worth noting that we originally wanted to run Pulse monthly, which would have allowed us to track changes in response to specific events (e.g. the releases of new LLM versions). Now that we're running it quarterly (due to changes in the funding situation), that will be less feasible.

Addressing only the results reported in this post, rather than the survey as a whole:

  • How many people in the US public are aware of effective altruism, other key EA-related orgs, public figures, etc.
  • What people's attitudes towards effective altruism are, among those who have encountered it
  • What people's attitudes are towards effective altruism (when described) among those who have not encountered it
  • How these differ across different subgroups
  • And, in the future, we will also be assessing whether these are changing across time (we have reported the results of some surveys on these questions previously, but this is the first formal wave of the Pulse iteration)

I kind of feel like the most important version of a survey like this would be certain subsets of people (eg, tech, policy, animal welfare).

We agree these would be valuable surveys to conduct (and we'd be happy to conduct them if someone wants to fund us to do so). But they'd be very different kinds of surveys. Large representative surveys like this do allow us to generate estimates for relatively niche subsets of the population, but if you are interested in a very small subset of people (e.g. those working in animal welfare), it would be better to run a separate targeted survey.
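
To make this concrete, here's a rough back-of-the-envelope sketch (with made-up population shares and sample size, not figures from this survey) of why a representative sample can cover moderately niche subgroups but not very small ones:

```python
# Back-of-the-envelope sketch (illustrative numbers only, not figures from
# this survey): how many respondents a subgroup yields in a representative
# sample, and the rough 95% margin of error for a proportion estimated
# within it.
import math

def subgroup_moe(total_n: int, subgroup_share: float, p: float = 0.5):
    """Expected subgroup n and approximate 95% margin of error."""
    n_sub = int(total_n * subgroup_share)
    moe = 1.96 * math.sqrt(p * (1 - p) / n_sub)
    return n_sub, moe

# Hypothetical subgroup shares of the adult population
for label, share in [("moderately niche", 0.10), ("niche", 0.02), ("very niche", 0.001)]:
    n_sub, moe = subgroup_moe(total_n=5000, subgroup_share=share)
    print(f"{label} ({share:.1%} of population): n ≈ {n_sub}, MoE ≈ ±{moe:.0%}")
# A 10% subgroup gives ~500 respondents (about ±4 points); a 0.1% subgroup
# gives ~5, which is why very small populations need a separate targeted survey.
```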

Also why didn't you call out that the more people know what EA is, the less they seem to like it? Or was that difference not statistically significant?

("Sentiment towards EA among those who had heard of it was positive (51% positive vs. 38% negative among those stringently aware, and 70% positive, vs. 22% negative among those permissively aware)."

This comparison wouldn't strictly make sense for a few reasons:

  • The permissive vs. stringent classifications are not about whether people know more about EA, but about our confidence, based on their response, that the person has encountered EA. So a very specific response which reveals clear awareness of EA, but which is overtly factually mistaken, could count as stringent, whereas a less specific response which leaves it less clear that the person has encountered EA might only reach the bar for permissive.
  • The two categories are not independent. Every stringent response also passes the bar for the permissive categorisation.
  • A response which referred to a connection between FTX/SBF and EA would be sufficient to meet our stringent classification, because if the person knows about such a (putative) connection, then they have clearly encountered EA (even if their overall conception might be very limited or mistaken). This means that the stringent category is particularly likely to contain people aware of FTX; indeed, more than half of the stringently classified respondents who expressed a negative sentiment about EA mentioned FTX.
  • Treating the two groups as mutually exclusive, there are only 34 exclusively permissive and 39 exclusively stringent respondents, meaning the sample sizes are too small for a reliable comparison of the two groups (the sketch below illustrates how wide the uncertainty is at these sizes).
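
On the last point, here's a quick sketch of the uncertainty involved. Only the group sizes (34 and 39) come from the point above; the positive-sentiment counts are invented for illustration.

```python
# Quick illustration of why n = 34 and n = 39 are too small for a reliable
# comparison. Only the group sizes come from the point above; the
# positive-sentiment counts are INVENTED for illustration.
from statsmodels.stats.proportion import proportion_confint

groups = {
    "exclusively permissive (n=34)": (24, 34),  # hypothetical: 24/34 positive
    "exclusively stringent (n=39)": (20, 39),   # hypothetical: 20/39 positive
}
for label, (count, nobs) in groups.items():
    lo, hi = proportion_confint(count, nobs, method="wilson")
    print(f"{label}: {count / nobs:.0%} positive, 95% CI [{lo:.0%}, {hi:.0%}]")
```

The resulting intervals are tens of percentage points wide and overlap substantially, so apparent differences between groups this small could easily be noise.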

I do think it is notable that sentiment is more positive among those who did not report awareness of EA and responded to a particular presentation of EA, compared to sentiment among those who were classified as having encountered EA. However, this is also not a straightforward comparison: the composition of these groups is different, and the people who did not claim awareness were responding only to one particular presentation of EA. More research would be required to assess whether learning more about EA leads people to have more negative opinions of it.

I believe all of that is true, but at the same time, I’m almost certain we’ve lost significant credibility with key stakeholders... Friendly organisations have explicitly stated they do not want to publicly associate with us due to our EA branding, as the EA brand has become a major drawback among their key stakeholders

I definitely agree this is true; it's just not sufficient in itself to show that movement building for EA is impossible, or less viable than promoting other ideas (for that, we'd need to assess alternative brands/framings).

Agreed that this is likely explained by people thinking they recognise the familiar terms and conflating The Humane League with the Humane Society or other local humane societies. We didn't include specific checks of real awareness for The Humane League or the other orgs and figures on our list, because they weren't key outcomes we were interested in verifying awareness of per se, and survey length is limited. They were included primarily to provide a point of comparison (alongside a mixture of fake items, real but very low-incidence items, and real and very common items), and to allow us another check: assessing whether responses were associated with each other in ways that made sense (i.e. we would expect EA-related terms to show sensible associations with each other, charities in general to be associated with each other, and tech-related items to be associated with each other).
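
As a minimal sketch of that association check (with a made-up item battery and simulated data, not our actual items or results): with binary awareness responses, one can inspect whether items cluster in sensible ways, with EA-related terms correlating with each other, general charities with each other, and fake items correlating with nothing.

```python
# Hypothetical sketch of the association check described above: with binary
# awareness responses, we can inspect whether items cluster sensibly.
# The item battery and data are SIMULATED; these are not our actual results.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
# Latent familiarity with EA vs. charities in general drives the responses
ea_familiar = rng.random(n) < 0.05
charity_familiar = rng.random(n) < 0.40
df = pd.DataFrame({
    "GiveWell": ea_familiar | (rng.random(n) < 0.02),
    "Effective Altruism": ea_familiar | (rng.random(n) < 0.02),
    "Humane Society": charity_familiar | (rng.random(n) < 0.05),
    "The Humane League": charity_familiar | (rng.random(n) < 0.05),
    "Fake Org": rng.random(n) < 0.03,  # fake item: should correlate with nothing
}).astype(int)

# Phi coefficients (Pearson correlations of binary items): EA items should
# cluster together, charity items together, and the fake item with neither
print(df.corr().round(2))
```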

Based on Google Trends, I'd expect The Humane League to be somewhat less well known than GiveWell, and the Humane Society to be much better known.

Great talk, thanks!

The thing is, broad awareness of EA is still really low—around 2%. This is from research that was done last summer by Rethink Priorities, CEA, and Breakwater. They found that even though awareness might be higher in specific groups that we care about, like some elite circles, on the whole awareness of EA is just still very low.

Agreed with this. 

That said, I'd also add that sentiment is still positive even among those who have heard of EA.

Our research on elite university students (unpublished, but referenced by CEA here) also found that among those who were familiar with EA, only a small number mentioned FTX.
