David_Moss

Principal Research Director @ Rethink Priorities
6852 karma · Joined Aug 2014 · Working (6–15 years)

Bio

I am the Principal Research Director at Rethink Priorities and currently lead our Surveys and Data Analysis department. Most of our projects involve private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

Formerly, I also managed our Wild Animal Welfare department and I've previously worked for Charity Science, and been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (3)

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (486)

This is a neat idea, but I think that's probably putting more weight on the (absence) of small differences at particular levels of the response scale than the smaller sample size of the Extra EA Survey will support. If we look at the CIs for any individual response level, they are relatively wide for the EEAS, and the numbers selecting the lowest response levels were very low anyway. 
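As a rough illustration of why individual response levels carry so little weight at small sample sizes, here is a minimal sketch (with invented numbers — a hypothetical 2% of respondents picking the lowest scale point) comparing Wilson score intervals at two sample sizes:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Invented numbers: a 2% rate at the lowest response level,
# in a small supplementary survey vs a larger full survey.
lo_small, hi_small = wilson_ci(4, 200)    # small survey: 4 of 200
lo_large, hi_large = wilson_ci(40, 2000)  # same rate, 40 of 2000
print(f"n=200:  {lo_small:.3f}-{hi_small:.3f}")
print(f"n=2000: {lo_large:.3f}-{hi_large:.3f}")
```

With the same underlying 2% rate, the smaller survey's interval is several times wider, which is why small absolute differences at particular response levels shouldn't be read as meaningful.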

Many thanks!

I think it would be very valuable to closely examine the cohort of people who report having changed their behavior...

All behaviour changes were correlated with each other (except for stopping referring to EA, while still promoting it, which was associated with temporarily stopping promoting EA, but somewhat negatively associated with other changes).

All behaviour changes were associated with lower satisfaction, with most behavioural changes common only among people with satisfaction below the midpoint, and quite rare with satisfaction above the midpoint (again, with the exception of stopping referring to EA, while still promoting it, which was more common across levels).

People who reported a behavioural change were more likely, on the whole, to mention factors as reasons for dissatisfaction. (When interpreting these it's important to account for the fact that people being more/less likely to mention a factor at a particular importance level might be explained by them being less/more likely to mention it at a different importance level, with less difference in terms of their overall propensity to mention it).

Similarly, there was no obvious pattern of particular factors being especially associated with lower satisfaction: in general, people who mentioned any given factor were less satisfied.

In principle, we could do more to assess whether any particular factors predict particular behavioural changes, controlling for relevant factors, but it might make more sense to wait for the next full iteration of the EA Survey, when we'll have a larger sample size and can ask people explicitly whether each of these things is a factor (rather than relying on people spontaneously mentioning them).

For the other measures, differences are largely as expected: people who made a behaviour change were more likely to desire more community change, more likely to strongly agree there's a leadership vacuum,[1] and less trusting (trust was higher among people who had not made a behaviour change).

Updating analyses of community growth seems like it should be a high priority... I've been a longstanding proponent of conducting regular analyses of community growth...

I still agree with this. Unfortunately, we've been unsuccessful in securing any funding for further analysis of community growth metrics.

  1. ^

    I personally don't put too much weight on this question. I worry that it's somewhat leading and that people who are generally more dissatisfied are more likely to agree with it; it's unclear whether a leadership vacuum is really an active concern for people or whether it's what's driving people's dissatisfaction.

Thanks!

For satisfaction, we see the following patterns.

  • Looking at actual satisfaction scores post-FTX, we see more engaged people were more highly satisfied than less engaged people. In comparison, for current satisfaction, this is no longer the case or is only minimally so (setting aside the least engaged, who remain less satisfied than the moderately to highly engaged). Every group's satisfaction has decreased, with moderate to highly engaged EAs' satisfaction declining to similar levels (implying a larger decrease among the more highly engaged).
    • The pattern is roughly similar, but less clear, looking only at changes within matched subjects (smaller sample size).
  • Looking at people's recalled post-FTX satisfaction, there is no significant difference between the moderately to highly engaged (though they weakly lean in the opposite direction). So the recalled vs current comparison implies a slightly bigger positive gap for more highly engaged EAs (though we did not formally test this comparison).

For reasons for dissatisfaction, there are a few systematic differences across engagement levels:

  • More highly engaged respondents are more likely to mention Leadership
  • More highly engaged respondents were more likely to mention scandals
  • More highly engaged respondents were more likely to mention JEID at the lower importance levels (Important or Slightly important vs Very important), but it's a less clear pattern at the Very important level
  • More highly engaged respondents were more likely to mention Epistemics as Very important
  • The most highly engaged respondents were much more likely to mention Funding (though still less than the top factors)
  • Looking within the most highly engaged only, we see that Leadership and Scandals are at the top, followed by Cause prioritization and JEID, which received similar levels of mentions.

Does "mainly a subset" mean that a significant majority of responses coded this way were also coded as cause prio?

That's right, as we note here:

The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.

Specifically, of those who mention Cause Prioritization, around 68% were also coded as part of the AI/x-risk/longtermism category. That said, a large portion of the remainder mentioned "insufficient attention being paid to other causes, primarily animals and GHD" (which one may or may not think is just another side of the same coin). Conversely, around 8% of comments in the AI/x-risk/longtermism category were not also classified as Cause Prioritization (for example, just expressing annoyance about supporters of certain causes wouldn't count as about Cause Prioritization per se). 

So over 2/3rds of Cause Prioritization was explicitly about too much AI/x-risk/longtermism. A large part of the remainder is probably connected, as part of a 'too much x-risk/too little not x-risk' category. The overlap between categories is probably larger than implied by the raw numbers, but we had to rely on what people actually wrote in their comments, without making too many suppositions.

We did note this explicitly:

As we noted in our earlier report, individuals who are particularly dissatisfied with EA may be less likely to complete the survey (whether they have completely dropped out of the community or not), although the opposite effect (more dissatisfied respondents are more motivated to complete the survey to express their dissatisfaction) is also plausible.

I don't think there's any feasible way to address this within this smaller, supplementary survey. Within the main EA Survey we do look for signs of differential attrition.

Thanks Ulrik!

We can provide the percentages broken down by different groups. I would advise against thinking about this in terms of 'what the results would be if weighted to match hypothetical equal demographics', though: (i) if the demographics were different (equal), then presumably concern about demographics would be different [fewer people would be worried about demographic diversity if we had perfect demographic diversity], and (ii) if the demographics were different (equal), then the composition of the different demographic groups within the community would likely be different [if we had a large increase in the proportion of women / decrease in the proportion of men, the people making up those groups would plausibly differ from the current groups].

That said, people who identified as a woman or as anything other than a man were more likely to mention JEID as at least of somewhat importance, and they were also more likely to mention cause prioritization and excessive focus on AI/x-risk/longtermism as a concern. Conversely, men were more likely to refer to scandals, leadership and epistemics.

I would be even more cautious about interpreting the differences based on race due to the low sample size (the total number would be much larger in the full EA Survey), and the fact that the composition of non-white respondents as a group differs from what you would see in a 'perfectly equal demographics' scenario (i.e. more Asian, unequal representation across countries).
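For concreteness, the kind of reweighting being cautioned against above is mechanically simple — the objections are about interpretation, not computation. A minimal sketch of post-stratification-style reweighting, with all numbers invented for illustration:

```python
# Hypothetical data: each group's share of the sample and the rate at
# which that group mentions some concern. All numbers are invented.
observed = {
    "men":   {"sample_share": 0.70, "mention_rate": 0.10},
    "women": {"sample_share": 0.30, "mention_rate": 0.25},
}

def overall_rate(groups, weights):
    """Weighted average mention rate under the given group weights."""
    return sum(weights[g] * groups[g]["mention_rate"] for g in groups)

# Raw rate, weighting each group by its actual share of the sample
raw = overall_rate(observed, {g: observed[g]["sample_share"] for g in observed})
# Reweighted to hypothetical equal demographics (50/50)
equal = overall_rate(observed, {g: 0.5 for g in observed})
print(f"raw: {raw:.3f}, equal-weighted: {equal:.3f}")
```

The arithmetic always goes through; the problem, per (i) and (ii) above, is that under genuinely different demographics both the mention rates and the composition of the groups would likely differ too, so the reweighted figure doesn't describe any real counterfactual community.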

Hopefully in the next couple/few weeks, though we're prioritising the broader community health related questions from that followup survey linked above.

I can confirm that there's not been so dramatic a shift since the 2020 cause results (below for reference), i.e. global poverty and AI are still very similarly ranked. The new allocation-of-resources data should hopefully give an even clearer sense of 'how much of EA' people want to be this or that.

We did gather cause prioritization data in the most recent EA Survey, we just delayed publishing that report because we gathered additional cause prioritization data in this followup survey, which we ran in December. This was looking at what share of resources EAs would allocate to different causes, rather than just their rating of different causes, which I think adds an important new angle.

We stopped gathering information about donations to individual charities in 2020 as part of a drive to make the EA Survey shorter to increase participation. That does mean, however, that the most recent donation data you can access is from 2019, in our 2020 report (and the accompanying bookdown), which reports a breakdown by charity cause area.

Thanks! I agree that allocating a percentage of "resources", where this covers very different kinds of resources (money and labour), can be difficult. Still, we wanted this to largely match the question asked at the Meta Coordination Forum, which also combined them, so we matched their wording.

Thanks!

We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post on changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.

Feel free to reach out privately if there are specific things it would be helpful to know for EA Netherlands.
