
David_Moss

Principal Research Director @ Rethink Priorities
6085 karma · Joined Aug 2014 · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities and currently lead our Surveys and Data Analysis department. Most of our projects involve private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, previously worked for Charity Science as both an employee and a trustee, and was formerly a trustee of EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences
3

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments
452

But there are also polls showing that almost half of U.S. adults "support a ban on factory farming." I think the correct takeaway from those polls is that there's a gap between vaguely agreeing with an idea when asked vs. actually supporting specific, meaningful policies in a proactive way.

 

I broadly agree with the conclusion as stated. But I think there are at least a couple of important asymmetries between the factory farming question and the AI question, which mean that we shouldn't expect there to be a gap of a similar magnitude between stated public support and actual public support regarding AI. 

  • A ban on factory farming is in direct conflict with most respondents' (perceived) self-interest in a way that a pause on AI is not (since those respondents willingly continue to consume animal products).
  • Questions about support for factory farming are more likely to elicit socially desirable responding than questions about an AI pause: most respondents believe factory farming is bad and widely viewed as such, so openly supporting it seems socially undesirable. I would expect this to be much less the case regarding AI (we looked into this briefly here and found no evidence of socially desirable responding in either direction).

I think both of these factors conduce to a larger gap between stated attitudes and actual support in the animal farming case. That said, I think this is an ameliorable problem: in our replications of the SI animal farming results, we found substantially lower support (close to 15%). 

So, I think the conclusion to draw is that polling certain questions can find misleadingly high support for different issues (even if you ask a well-known survey panel to run the questions), but not that very high support found in surveys simply doesn't mean anything. [Not that you said this, but I wanted to explain why I don't think it is the case anyway]

Investigating the effects of talking (or not talking) about climate change in different EA/longtermist contexts seems to be neglected (e.g. through surveys/experiments and/or focus groups), despite being tractable with few resources.

It seems like we don't actually know either the direction or magnitude of the effect, and are mostly guessing, despite the stakes potentially being quite high (considered across EA/longtermist outreach).

We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.

I worry this heuristic works if and only if people have reasonable substantive views about what kind of thing they want to see more/less of on the Forum.

For example, if people vote in accordance with the view 'I want to see more/less [things I like/dislike or agree/disagree with]', then this heuristic functions just the same as a like/dislike or agree/disagree vote (which I think would be bad). If people vote in accordance with the view 'I want to see more/less [posts which make substantive contributions, which others may benefit from, even if I strongly disagree with them/don't think they are well made]', then the heuristic functions much more like Matt's.

I think Monmouth's question is not exactly about whether the public believe AI to be an existential threat. They asked:
"How worried are you that machines with artificial intelligence could eventually pose a
threat to the existence of the human race – very, somewhat, not too, or not at all worried?" The 55% you cite is those who said they were "Very worried" or "somewhat worried."

Like the earlier YouGov poll, this conflates an affective question (how worried are you) with a cognitive question (what do you believe will happen). That's why we deliberately split these in our own polling, which cited Monmouth's results, and also asked about explicit probability estimates in our later polling, which we cited above.

Still, it’s hard to see how tweaking EA can lead to a product that we and others would be excited about growing.

 

It's not clear to me how far this is the case. 

  • Re. the EA community: evidence from our community survey, run with CEA, suggests a relatively limited reduction in morale post-FTX. 
  • Re. non-EA audiences: our work reported here and here (though still unpublished due to lack of capacity) suggests relatively low negative effects in the broader population (including among elite US students specifically).

I agree that:

  • Selection bias (from EAs with more negative reactions dropping out) could mean that the true effects are more negative. 
    • I agree that if we knew large numbers of people were leaving EA, this would be another useful datapoint, though I've not seen much evidence of this myself. Formally surveying the community to see how many people know of others leaving could be useful to adjudicate this.
    • We could also conduct a 'non-EA Survey' which tries to reach people who have dropped out of EA, or who would be in EA's target audience but declined to join (most likely via referrals), which would be more systematic than anecdotal evidence. RP discussed doing this with researchers/community builders at another org, but we haven't run it due to lack of capacity/funding.
  • If many engaged EAs are dropping out but growth is continuing only because "new recruits are young and naive about EA’s failings," this is bad. 
    • That said, I see little reason to think this is the case.
    • In addition, EA's recent growth rates seem higher than I would expect if we were seeing considerable dropout. 

Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas - AI safety, Global Priorities Research, and so on. This would be a relatively clean slate, allowing us to do more (as outlined in 11), to discourage RB, and stop bad actors.

It's pretty unclear to me that we would expect these alternatives to do better. 

One major factor is that it's not clear that these particular ideas/fields are in a reputationally better position than EA. Longtermist work may have been burned by FTX as much as or more than EA. AI safety and other existential risk work have their own reputational vulnerabilities. And newer ideas/fields like 'Global Priorities Research' could suffer from being seen as essentially a rebrand of EA, especially if they share many of the same key figures/funding sources/topics of concern, which (per your 11a) risks being seen as deceptive. Empirical work to assess these questions seems quite tractable and neglected.

Re. your 10f-g, I'm less sanguine that the effects of a 'reset' of our culture/practices would be net positive. It seems like it may be harder to maintain a good culture across multiple fragmented fields in general. Moreover, as suggested by Arden's point number 1 here, there are some reasons to think that basing work solely around a specific cause may engender a worse culture than EA's, given EA's overt focus on promoting certain virtues.

There are people who I would consider "EA" who I wouldn't consider a "community member" (e.g. if they were not engaging much with other people in the community professionally or socially), but I'd be surprised if they label themselves "EA" (maybe they want to keep their identity small, or don't like being associated with the EA community). 

 

Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don't interact with EAs outside professional contexts. So I would say "I am an EA", but would be less inclined to say "I am a member of the EA community", except insofar as that just means believing in EA/doing EA work.

Thanks!

For instance, I personally found it surprising how few people disbelieve AI being a major risk (only 23% disbelieve it being an extinction level risk)

Just to clarify, we don't find in this study that only 23% of people disbelieve AI is an extinction risk. This study shows that, of those who disagreed with the CAIS statement, 23% explained this in terms of AI not causing extinction.

So, on the one hand, this is a percentage of a smaller group (only 26% of people disagreed with the CAIS statement in our previous survey), not of everyone. On the other hand, it could be that more people also disbelieve AI is an extinction risk, but that this wasn't their cited reason for disagreeing with the statement, or that they agree with the statement but don't believe AI is an extinction risk.
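
As a rough back-of-the-envelope illustration (assuming the disagreement rate in this study is similar to the ~26% we found in our previous survey): 0.23 × 0.26 ≈ 0.06, i.e. only around 6% of all respondents would be giving "AI won't cause extinction" as their reason.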

Fortunately, our previous survey looked at this more directly: we found 13% expressed that there was literally a 0% probability of extinction from AI, though around 30% indicated 0-4% (the median was 15%, which is not far off some EA estimates). We can provide more specific figures on request.

Many people I would consider "EA", in the sense that they work on high-impact causes, socially engage with other community members, etc., don't consider themselves EA, but I think would likely consider themselves community members.

 

This is reasonable, but I think the opposite applies as well, i.e. people can be EA (committed to the philosophy, taking EA actions) but not a member of the community. Personally, I find this a little more natural than the reverse, but YMMV (I have never really felt the intuitive appeal of believing in EA and engaging in EA activities but not describing oneself as "an EA").

Thanks!
 

it might be easiest if you share a draft of the planned questions so that people can see what is already in there and what seems in scope to include.

Makes sense. We're trying to elicit another round of suggestions here first (since people may well have new requests since the original announcement).
