
David_Moss

Principal Research Director @ Rethink Priorities
6530 karma · Joined Aug 2014 · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities and currently lead our Surveys and Data Analysis department. Most of our projects involve private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, and I've previously worked for Charity Science and served as a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (3)

RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (468)

The most common places that people first or primarily heard about EA seem to be Leaf itself, Non-Trivial, and school — none of these categories show up on the EA survey.

 

"School" does appear in the EA Survey, it's just under the superordinate "Educational course" category (3%) of respondents. 

We have 3 surveys of our own assessing where non-EAs more broadly heard of EA, which also find that education is among the most important sources of hearing about EA:

  • In a survey of US students more broadly (not just EAs) that we ran for CEA (referenced here), education was the most common source of hearing about EA.
  • In a separate survey of students at elite universities only (also unpublished, referenced here), we likewise found that hearing about EA on campus or through a class was among the most common sources.
  • In additional unpublished results from our survey on how many people have heard of effective altruism, we found >20% of people first heard of EA from an educational source.

Peter Singer, and YouTube / Ted talks all seem to have been more important than I would have expected.

Peter Singer is actually very frequently mentioned in the EA Survey, as I have noted before. Individuals just don't appear in the top-line listed categories, which focus on orgs or media. As we highlighted here, Peter Singer was mentioned in 17.6% of people's qualitative comments about where they heard of EA. At a glance, the results for TED Talk and YouTube don't seem too different.

Of course, people who get involved with EA during their teens are a very small minority of total EAs, so I would not be surprised if that particular very small sub-population differs from the broader population in some ways, especially since some major sources like university groups and careers advice (from 80K) are most relevant to slightly older people.

It's also worth bearing in mind that any differences found in a sample of 63 people could easily be noise. For example, purely by way of illustration, if we randomly sampled 63 people from a larger population and found 7 people were such-and-such, the estimated proportion of 11% would have a 95% confidence interval of roughly 5-21%.
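To illustrate how wide that interval is, here is a minimal sketch of the calculation. The comment doesn't specify which interval method was used; the Wilson score interval below is an assumption, but it reproduces roughly the same 5-21% range for 7 out of 63.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

# 7 "hits" out of 63 respondents: point estimate ~11%, 95% CI roughly 5-21%
low, high = wilson_ci(7, 63)
print(f"estimate = {7/63:.1%}, 95% CI = ({low:.1%}, {high:.1%})")
```

Running this gives an estimate of about 11% with an interval of roughly 5.5% to 21%, matching the figures above.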

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly... My guess is these changes are (almost entirely) driven by PR concerns about longtermism.

 

It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this, even though it seems a clearly empirically tractable question. Rethink Priorities has conducted some work in this vein (referenced by Will MacAskill here), but this work, and other private work we've completed, wasn't designed to address this question directly. I don't think the answer is very clear a priori. There are lots of competing considerations, and anecdotally, when we have tested things for different orgs, the results are often surprising. Things are even more complicated when you consider how different approaches might land with different groups, as you mention.

We are seeking funding to conduct work which would actually investigate this question (here), as well as broader work on EA/longtermist message testing and broader work assessing public attitudes towards EA/longtermism (for which I don't have linkable applications).

I think this kind of research is also valuable even if one is very sceptical of optimising PR. Even if you don't want to maximise persuasiveness, it's still important to understand how different groups are understanding (or misunderstanding) your message.  

I'd be excited about a world where CEA posts generic costs+benefits of all of the big programs. I wouldn't fault CEA, as no one else does this yet, but I think some of this would be very useful, though perhaps too confrontational. 

 

Agreed that this could be very useful. It could also (as I argued here) be useful to have more such models produced by independent evaluators.[1]

 

  1. ^

    Although I also think there is value in seeing models from the orgs themselves, for ~reasoning transparency purposes.

Points in favour of cortical neuron counts as a proxy for moral weight:

  1. Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.

 

That neuron counts seem to correlate with intuitions of moral weight is true, but potentially misleading. We discuss these results, drawing on our own data, here.

I would quite strongly recommend that more survey research be done (including more analysis, as well as additional surveys: we have some more unpublished data of our own on this) before taking the correlations as a reason to prefer using neuron count as a proxy (in contrast to a holistic assessment of different capacities).

I think that revealed preference can be misleading in this context, for reasons I outline here.

It's not clear that people's revealed preferences are what we should be concerned about compared to, for example, what value people would reflectively endorse assigning to animals in the abstract. People's revealed preference for continuing to eat meat may be influenced by akrasia or other cognitive distortions which aren't relevant to assessing how much they actually endorse animals being valued.[1] We may care about the latter, not the former, when assessing how much we should value animals (i.e. by taking into account folk moral weights) or how much the public are likely to support/oppose us allocating more aid to animals.

But on the specific question of how the public would react to us allocating more resources to animals: this seems like a directly tractable empirical question. That is, it would be relatively straightforward, through surveys/experiments (sketched below), to assess whether people would be more/less hostile towards us if we spent a greater share on animals, if we spent much more on the long-run future vs supporting a more diverse portfolio, or more/less on climate change etc.

 

  1. ^

    Though of course we also need to account for potential biases in the opposite direction as well.
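As a rough, purely illustrative sketch of how directly measurable this is (the design, numbers, and function below are hypothetical, not an RP study), one could randomly assign respondents to read different described funding allocations and compare the share reacting negatively:

```python
import math

def two_proportion_z(neg_a, n_a, neg_b, n_b):
    """Difference in the proportion of negative reactions, plus a z statistic."""
    p_a, p_b = neg_a / n_a, neg_b / n_b
    p_pool = (neg_a + neg_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical data: 90/500 react negatively to a "more on animals" allocation,
# vs 70/500 reacting negatively to a status-quo allocation.
diff, z = two_proportion_z(90, 500, 70, 500)
print(f"difference = {diff:+.1%}, z = {z:.2f}")
```

In practice one would want pre-registered conditions and probably ordinal hostility measures rather than a binary, but the point is just that the question can be answered empirically rather than a priori.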

"We want to publish but can't because the time isn't paid for" seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it's a crisply defined chunk of work with clear outcomes.


Thanks! I'm planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:

  • I think funding projects from multiple smaller donors is just generally more difficult to coordinate than funding from a single source
  • A lot of people seem to assume that our projects already are fully funded or that they should be centrally funded because they seem very much like core community infrastructure, which reduces inclination to donate

they seem centered on social reality not objective reality. But I value a lot of RP's other work, think social reality investigations can be helpful in moderation, and my qualms about these questions aren't enough to override the general principle. 

I'd be curious to understand this line of thinking better if you have time to elaborate. "Social" vs "objective" doesn't seem like a natural and action-guiding distinction to me. For example:

  • Does everyone we want to influence hate EA post-FTX?
  • Is outreach based on "longtermism" or "existential risk" or principles-based effective altruism or specific concrete causes more effective?
  • Do people who first engage with EA when they are younger end up less engaged with EA than those who first engage when they are older?
  • How fast is EA growing?

all strike me as objective social questions of clear importance. Also, it seems like the key questions around movement building will often be (characterisable as) "social" questions. I could understand concerns about too much meta but too much "social" seems harder to understand.[1]

  1. ^

    A possible interpretation I would have some sympathy for is distinguishing between concern with what is persuasive vs what is correct. But I don't think this raises concerns about these kinds of projects, because:


    - A number of these projects are not about increasing persuasiveness at all (e.g. how fast is EA growing? Where are people encountering EA ideas?). Even findings like "does everyone on elite campuses hate EA?" are relevant for reasons other than simply increasing persuasiveness, e.g. decisions about whether we should increase or decrease spending on outreach at the top of the funnel.

    - Even if you have a strong aversion to optimising for persuasiveness (you want to just present the facts and let people respond how they will), you may well still want to know if people are totally misunderstanding your arguments as you present them (which seems exceptionally common in cases like AI risk).

    - And, of course, I think many people reasonably think that if you care about impact, you should care about whether your arguments are persuasive (while still limiting yourself to arguments which are accurate, sincerely held etc.).

    - The overall EA portfolio seems to assign a very small portion of its resources to this sort of research as it stands (despite dedicating a reasonably large amount of time to a priori speculation about these questions (1)(2)(3)(4)(5)(6)(7)(8)), so some more empirical investigation of them seems warranted.

     

I definitely agree that funding is a significant factor for some institutional actors.

For example, RP's Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had capacity / could afford to do so: our capacity is entirely bottlenecked on funding, and as we are ~entirely reliant on paid commissions (we don't receive any grants for general support), time spent publishing reports is basically pro bono, adding to our funding deficit.

Examples of this sort of unpublished research include:

  • The two reports mentioned by CEA here about attitudes towards EA post-FTX among the general public, elites, and students on elite university campuses.
  • Follow-up posts to the survey reported here on how many people have heard of EA, further discussing people's attitudes towards EA and where members of the general public hear about EA (this differs systematically)
  • Updated numbers on the growth of the EA community (2020-2022), extending this method and also looking at numbers of highly engaged longtermists specifically
  • Several studies we ran to develop reliable measures of how positively inclined towards longtermism people are, looking at different predictors of support for longtermism and how these vary in the population
  • Reports on differences between neartermists and longtermists within the EA community and on how neartermist / longtermist efforts influence each other (e.g. to what extent neartermist outreach, like GiveWell or Peter Singer's articles about poverty, leads to increased numbers of longtermists)
  • Whether the age at which one first engaged with EA predicts lower / higher future engagement with EA

A significant dynamic here is that even where we are paid to complete research for particular orgs, we are not funded for the extra time it would take to write up and publish the results for the community. So doing so is usually unaffordable, even where we have staff capacity.

Of course, much of our privately commissioned research is private, such that we couldn't post it. But there are also significant amounts of research that we would want to conduct independently, so that we could publish it, which we can't do purely due to lack of funding. This includes:

  • More message testing research related to EA /longtermism (for an example see Will MacAskill's comment referencing our work here), including but not limited to:
    • Testing the effectiveness of specific arguments for these causes
    • Testing how "longtermist" or "existential risk" or "effective altruist" or "global priorities" framings/brandings compare in terms of how people respond to them (including comparing this to just advocating for specific concrete x-risks without any such overarching framing)
    • Testing effectiveness of different approaches to outreach in different populations for AI safety / particular policies

Agreed. As we note in footnote 2:

There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts to claim awareness (even when explicitly instructed not to approach the task this way!). 

For reference, in another check we included, over 12% of people claim to have heard of the specific term “Globally neutral advocacy”: A term that our research team invented, which returns no google results as a quote, and which is not recognised as a term by GPT—a large-language model trained on a massive corpus of public and private data. “Globally neutral advocacy” represents something of a canary for illegitimate claims of having heard of EA, in that it is composed of terms people are likely to know, and the combination of which they might reasonably think they can infer the meaning or even simply mistakenly believe they have encountered. 

I think this is one reason why "effective altruism" gets higher levels of claimed awareness than other fake or low incidence terms (which people would be very unlikely to have encountered).

The full process is described in our earlier post, and included a variety of other checks as well. 

But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to mean: whether they displayed clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with it (e.g. by referring to using evidence and reason to maximise the amount of good done with your donations or career, or by referring to specific EA figures, books, orgs, events etc.), or whether it was merely probable based on their comment that they had heard of effective altruism (e.g. because the responses were more vague or less specific).

Thanks for the reply!

I think there are also important advantages to internal impact evaluation: the results are more likely to be bought into internally, and important context or nuance is less likely to be missed.

I agree there are some advantages to internal evaluation. But I think "the results are more likely to be bought into internally" is, in many cases, only an advantage insofar as orgs are erroneously more likely to trust their own work than external independent work.

That said, I agree that the importance of orgs bringing context and nuance (and just basic information) to the evaluation can hardly be overstated. My general take here is that the ideal arrangement is for the org and external evaluators to work very closely on an evaluation, so they can combine the benefits of insider knowledge and external expertise.

I would even say that in those kinds of cases, it's not extremely important whether the evaluation is primarily led by the org or primarily led by the external evaluator (so long as there's still scope for the external evaluator to offer an independent, and maybe even dissenting, take on the org's conclusions). I think people can reasonably disagree about how important it is that, in addition, the external evaluator is truly independent (i.e. ideally funded by an external funder, not selected and contracted by the org in question, which obviously potentially risks biasing the evaluator).
