David_Moss

Principal Research Director @ Rethink Priorities
9059 karma · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, and I've previously worked for Charity Science and served as a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (4)

EA Survey 2024
RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (603)

I'm imagining someone googling "ethical career" 2 years from now and finding 80k, noticing that almost every recent article, podcast, and promoted job is based around AI, and concluding that EA is just an AI thing now.

I definitely agree that would eventually become the case (eventually all the older non-AI articles will become out of date). I'm less sure it will be a big factor 2 years from now (though it depends on exactly how articles are arranged on the website and so how salient it is that the non-AI articles are old).

It could also be bad even for AI safety: There are plenty of people here who were initially skeptical of AI x-risk, but joined the movement because they liked the malaria nets stuff. Then over time and exposure they decided that the AI risk arguments made more sense than they initially thought, and started switching over.

I also think this is true in general (I don't have a strong view about the net balance in the case of 80K's outreach specifically). 

Previous analyses we conducted suggested that over half of Longtermists (~60%) previously prioritised a different cause and that this is consistent across time.

You can see the overall self-reported flows (in 2019) here.

Thanks Arden!

I also agree that prima facie this strategic shift might seem worrying given that 80K has been the powerhouse of EA movement growth for many years.

That said, I share your view that growth via 80K might decline less than one would naively expect. In addition to the reasons you give above, another consideration is our finding that a large percentage of people get into EA via 'passive' outreach (e.g. someone googles "ethical career" and finds the 80K website; for 80K specifically, about 50% of recruitment was 'passive'), rather than active outreach, and it seems plausible that much of that could continue even after 80K's strategic shift.

Our framings will probably change. It's possible that the framings we use more going forward will emphasise EA style thinking a little less than our current ones, though this is something we're actively unsure of.

As noted elsewhere, we plan to research this empirically. Fwiw, my guess is that broader EA messaging would be better (on average and when comparing the best messaging from each) at recruiting people to high levels of engagement in EA (this might differ when looking to recruit people directly into AI related roles), though with a lot of variance within both classes of message.

Thanks for posting!

We did ask about this in the 2015 and 2017 EA Surveys. We've not repeated it since then due to limited space.

In both cases, the largest share of respondents were realist (50.9% in 2015, 42.5% in 2017).


A broader coalition of actors will be motivated to pursue extinction prevention than longtermist trajectory changes... For instance, see Scott Alexander on the benefits of extinction risk as a popular meme compared to longtermism.

This might vary between: 

  • The level of the abstract memes:
    • I agree "reducing risk of extinction (potentially in the near term)" may be more appealing than "longtermist trajectory change"
  • The level of concrete interventions:
    • "Promoting democracy" (or whatever one decides promotes long term value) might be more appealing than "reducing risk from AI"[1] (though there is likely significant variation within concrete interventions).
  1. ^

    Though our initial work does not suggest this.

I agree this is a potential concern.

As it happens, the community has continued to age since 2020: as of the end of last year, the median age was 31 (mean 22.4), and we can see that it has steadily aged across years.

It's clear that a key contributor to our age distribution is the age at which people first get involved with EA (median 24, mean 26.7), though the age at which people first get involved has also increased over time.

I think people sometimes point to our outreach focusing on things like university groups to explain this pattern. But I think this is likely over-stated, as direct outreach accounts for only a small minority of our recruiting, and most of the ways people first hear about EA seem to be more passive mechanisms, not tied to direct outreach, which would be accessible to people at older ages (we'll discuss this in more detail in the 2024 iteration of this post).

That said, different age ranges do appear to have different levels of awareness of EA, with the highest awareness appearing in the 25-34 and 35-44 age ranges. (Though our sample size is large, the number of people we count as aware of EA is very low, so these estimates are quite uncertain. Our confidence in them will increase as we run more surveys.) This suggests that awareness of EA may be reaching different groups unevenly, which could partly contribute to lower engagement from older age groups. But this need not be the result of differences in our outreach; it could result from different levels of interest across the groups.
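For intuition about why so few "aware" respondents makes these estimates uncertain, here is a minimal sketch of a Wilson score interval for a proportion. The counts below are invented for illustration; they are not the survey's actual figures.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical: 12 respondents classed as "aware" out of 400 in one age bracket.
low, high = wilson_interval(12, 400)
print(f"point estimate 3.0%, interval {low:.1%} to {high:.1%}")
```

Even with 400 respondents in a bracket, a 3% awareness estimate comes with an interval spanning roughly 1.7% to 5.2%, which is why differences between adjacent age brackets should be read cautiously.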

Matthew Yglesias has written more critically about this tendency (which he thinks is widely followed in activist circles, but is often detrimental). For example, here he describes what he refers to as "activist chum", which is good for motivation and fundraising (very important for the self-interest of (those leading) the movement), but can lead to focusing on "wins" that aren't meaningful and may be unhelpful.

The chum comes from the following political organizing playbook that is widely followed in progressive circles:

  1. Always be asking for something.
  2. Ask for something of someone empowered to give it to you.
  3. Ask for something from someone who cares what you think.

That's interesting, but it seems to be addressing a somewhat separate claim from mine.

My claim was that broad heuristics are more often necessary and appropriate when engaged in abstract evaluation of broad cause areas, where you can't directly assess how promising concrete opportunities/interventions are, and less so when you can directly assess concrete interventions.

If I understand your claims correctly they are that:

  • Neglectedness is more likely to be misleading when applied to broad cause areas
  • When considering individual solutions, it's useful to consider whether the intervention has already been tried.

I generally agree that applying broad heuristics to broad cause areas is more likely to be misleading than assessing specific opportunities directly. Implicit in my claim is that where you don't have to rely on broad heuristics, but can assess specific opportunities directly, this is preferable. I agree that considering whether a specific intervention has been tried before is useful and relevant information, but I don't consider that an application of the Neglectedness/Crowdedness heuristic.

I think this depends crucially on how, and to what object, you are applying the ITN framework:

  • Applying ITN to broad areas in the abstract, treating what one would do in them as something of a black box (a common approach in earlier cause prioritisation IMO), one might reason:
    • Malaria is a big problem (Importance)
    • Progress is easily made against malaria (Tractability)
    • ...  It seems clear that Neglectedness should be added to these considerations to avoid moving resources into an area where all the resources needed to solve X are already in place
  • Applying ITN to a specific intervention or action, it's more common to be able to reason like so:
    • Malaria is a big problem (Importance)
    • Me providing more malaria nets [does / does not] easily increase progress against malaria, given that others [are / are not] already providing them (Tractability)
    • ... In this case it seems that all you need from Neglectedness is already accounted for in Tractability, because you were able to account for whether the actions you could take were counterfactually going to be covered.

On the whole, it seems to me that the further you move away from abstract evaluations of broad cause areas, and the more towards concrete interventions, the less it's necessary or appropriate to depend on broad heuristics and the more you can simply attempt to estimate expected impact directly.
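To make the contrast concrete, here is a toy numeric sketch. Every number below is invented purely for illustration, and the formulas are a deliberate simplification: in the abstract, cause-level version Neglectedness enters as a separate factor, whereas in the concrete, intervention-level version counterfactual coverage is folded directly into the marginal tractability estimate.

```python
# Abstract, cause-level reasoning: score the black-box cause as I * T * N.
importance = 100_000        # hypothetical: scale of the problem (e.g. DALYs at stake)
tractability = 0.5          # hypothetical: fraction solvable with doubled resources
neglectedness = 1 / 50      # hypothetical: inverse of resources already committed
abstract_score = importance * tractability * neglectedness

# Concrete, intervention-level reasoning: estimate marginal impact directly,
# with "would others provide these nets anyway?" inside the tractability term.
marginal_impact_if_uncovered = 40.0   # hypothetical: DALYs averted per $1k of nets
prob_counterfactually_covered = 0.8   # hypothetical: chance others fund them anyway
direct_estimate = marginal_impact_if_uncovered * (1 - prob_counterfactually_covered)

print(abstract_score, direct_estimate)
```

The point is not the numbers but the structure: once crowdedness is captured in the marginal estimate, adding a separate Neglectedness discount would double-count it.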

Thanks for the comment!

I agree these would be interesting things to include.

  • We used to ask about income as part of our donation data section. But we've not included the donation questions since EAS 2020. If you like, I can provide further analyses of the income data from prior surveys. On the face of it, EA income and donations have been surprisingly low historically. But this is likely explained by the high percentage of student/very early career people in the community.
  • Also in 2020, we were asked to include a question about experiences of financial or employment instability as a child, which found relatively low levels. As noted in that post, we would generally recommend using a different measure if you want a proxy for SES, but this is the one we were asked to include. I do think that SES is a relatively neglected demographic (EAs seem to come from strikingly high-SES backgrounds).

We can assess this empirically, to some extent, by looking at changes in respondents who we can track over time (i.e. those who logged in to their EA account to pull their previous responses or provided their email). This allows us to compare both how many individuals with different political views in 2022 changed to different views in 2024, and how many people with different views in 2022 may have dropped out (because we can't track them in 2024[1]).

TLDR: 

  • Left respondents switched in quite large numbers to the Center left, Center-left respondents showed a smaller switch to the Center, and Centrists showed no major shift.
  • We see no signs of Left or Center-left respondents being more likely to drop out, but some weak evidence of Right and Center-right respondents being more likely to drop out.

You can see the total flows between categories on this Sankey diagram.[2]

Due to the low numbers, it can be hard to compare the results for different categories though. 

First, we can compare how many people switched to NA (i.e. were tracked in 2022 but not in 2024).

We can see that there's very little difference between the Left (77%), Center Left (73%) and Center (74%). We do, however, see that Center right (85%) and Right (83%) respondents seem more likely to not appear in the 2024 dataset, though these are small groups due to the very low number of right-of-center respondents.[3]  

Switch to NA (2022 view → not tracked in 2024)

  2022 view              n (NA)   % NA
  Left                   785      77.49%
  Center left            788      72.76%
  Center                 155      73.81%
  Center right           52       85.25%
  Right                  15       83.33%
  Libertarian            109      71.71%
  Other                  144      75.39%
  Prefer not to answer   165      83.76%
  Did not answer         74       85.06%
  Did not view           182      97.33%
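For illustration, attrition percentages of this kind can be computed from paired responses along the following lines. This is a minimal sketch with invented counts, not our actual pipeline, and the pair format and "NA" convention are assumptions made for the example.

```python
from collections import Counter

# Hypothetical paired responses: (view_2022, view_2024).
# "NA" marks a respondent who could not be tracked in 2024.
pairs = [
    ("Left", "NA"), ("Left", "Left"), ("Left", "Center left"),
    ("Center", "NA"), ("Center", "Center"),
    ("Right", "NA"), ("Right", "NA"),
]

totals = Counter(v22 for v22, _ in pairs)                    # group sizes in 2022
na_counts = Counter(v22 for v22, v24 in pairs if v24 == "NA")  # untracked in 2024

for view, total in totals.items():
    rate = na_counts[view] / total
    print(f"{view}: {na_counts[view]}/{total} not tracked ({rate:.0%})")
```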

Then, we can compare changes in political views, setting aside the large number of NA-in-2024 responses, to see the changes more clearly. We can see quite a strong shift (27%) of Left (in 2022) respondents to the Center left (for comparison, the total decline in Left respondents was 28.8%). Among the Center left, we see a smaller but still noticeable switch to the Center, and among the Center, most respondents stayed the same.[4]

Non-NA switches (2022 view → 2024 view)

  Left → Left                  154   67.54%
  Left → Center left           61    26.75%
  Left → Center                4     1.75%
  Left → Center right          1     0.44%
  Left → Right                 0     0.00%
  Left → Libertarian           1     0.44%
  Left → Other                 7     3.07%
  (Left total: 228)

  Center left → Left           7     2.40%
  Center left → Center left    246   84.25%
  Center left → Center         32    10.96%
  Center left → Center right   2     0.68%
  Center left → Right          0     0.00%
  Center left → Libertarian    3     1.03%
  Center left → Other          2     0.68%
  (Center left total: 292)

  Center → Left                1     1.85%
  Center → Center left         1     1.85%
  Center → Center              46    85.19%
  Center → Center right        3     5.56%
  Center → Right               0     0.00%
  Center → Libertarian         3     5.56%
  Center → Other               0     0.00%
  (Center total: 54)
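Row-normalised switch percentages like these can be sketched in a similar way: count each (2022, 2024) flow and divide by the 2022 row total. Again, the counts below are invented for illustration and the data layout is an assumption, not the survey's actual format.

```python
from collections import Counter, defaultdict

# Hypothetical tracked responses (view_2022, view_2024), NA rows already dropped.
pairs = [
    ("Left", "Left"), ("Left", "Left"), ("Left", "Center left"),
    ("Center left", "Center left"), ("Center left", "Center"),
]

flows = Counter(pairs)                        # counts per (2022, 2024) cell
row_totals = Counter(v22 for v22, _ in pairs)  # 2022 group sizes

# Proportion of each 2022 group ending up in each 2024 category.
transitions: dict[str, dict[str, float]] = defaultdict(dict)
for (v22, v24), n in flows.items():
    transitions[v22][v24] = n / row_totals[v22]

for v22, row in transitions.items():
    print(v22, {v24: f"{p:.0%}" for v24, p in row.items()})
```

The same counts could also feed a Sankey diagram, since each (source, target, count) triple is one link.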

These results aren't suggestive of leftists being particularly likely to drop out. But there is some evidence of Left respondents in 2022 switching in quite large numbers (similar to the total change in % Left) to the Center left in 2024.

  1. ^

    Caveat: a respondent might not appear in 2024 because they dropped out of EA, because they didn't take the survey, or because they took the survey but didn't log in or provide the same email address as last time. Differential disappearance across groups might still be suggestive, however, since there seem to be few innocent explanations for why leftists/rightists would systematically become less likely to take the survey or provide their email address over time.

    It's also important to bear in mind that the sub-sample of respondents who logged in or provided their email address might differ from the total sample (e.g. they might be more engaged or more satisfied). But just over 90% of respondents in both 2022 and 2024 fit into this category, so it does not seem a very selective sub-sample.

  2. ^

    NAs in 2022 include only people who did not reach the politics question of the survey, because this sub-sample includes only people who were tracked in the 2022 survey.

  3. ^

    It makes sense that the various other categories for not answering the question would also show high rates of not answering the email question or not logging in.

  4. ^

    Center right and Right not shown because, excluding NAs in 2024, these were only 9 and 3 respondents respectively (though most stayed in their prior category).
