David_Moss

I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, Local Groups Survey, and a number of studies on moral psychology, focusing on animal ethics, population ethics and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.

Comments

EA Survey 2019 Series: How many people are there in the EA community?

I think this post mostly stands up and seems to have been used a fair amount. 

Understanding roughly how large the EA community is seems moderately important, so I think this analysis falls into the category of 'relatively simple things that are useful to the EA community but which were nevertheless neglected for a long while'.

One thing that I would do differently if I were writing this post again: I think I was under-confident about the plausible sampling rates, based on the benchmarks that we took from the community. I think I was understandably uneasy, the first time we did this, basing estimates of sampling rates on the handful of points of comparison (EA Forum members, the EA Groups Survey total membership, specific local groups, and an informal survey in the CEA offices), so I set pretty wide confidence intervals in my Guesstimate model. But, with hindsight, I think this assigns too much weight to the possibility that the broader population of highly engaged EAs were taking the EA Survey at a higher rate than members of all of these specific highly engaged groups. As a result, the overall estimates are probably a bit too uncertain and, in particular, the smaller estimates of the size of the community are probably less likely than the model suggests.

One of the more exciting developments following this post is that, now that we have more than one year of data, we can use this method to estimate growth in the EA community (as discussed here and in the thread below). This method has since been used, for example, here and here. Estimating the growth of the EA community may be more important than estimating its size, so this is a neat development. I put a Guesstimate model for estimating growth here, which suggests around 14% growth in the number of highly engaged EAs (the number of less engaged EAs is much less certain). For simplicity of comparison, I left the confidence intervals as wide as they were in 2019, even though, as discussed, I think this implies implausible levels of uncertainty about the estimates.
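To make the logic of these size and growth estimates concrete, here is a minimal Monte Carlo sketch in Python. It is not the actual Guesstimate model: the respondent counts and sampling-rate range below are placeholders, and the only point is to show how uncertainty in the sampling rate propagates into the size and growth estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Placeholder inputs (NOT the real figures): highly engaged respondents in each
# survey year, and an assumed plausible range for their sampling rate.
respondents_2019 = 1_500
respondents_2020 = 1_700
rate_low, rate_high = 0.4, 0.7

# Draw an uncertain sampling rate for each year (independently here, which is
# conservative: it widens the interval on the growth estimate).
rate_2019 = rng.uniform(rate_low, rate_high, n_draws)
rate_2020 = rng.uniform(rate_low, rate_high, n_draws)

# Implied population size = respondents / sampling rate.
pop_2019 = respondents_2019 / rate_2019
pop_2020 = respondents_2020 / rate_2020
growth = pop_2020 / pop_2019 - 1

print("2019 size (90% interval):", np.percentile(pop_2019, [5, 95]).round())
print("2020 size (90% interval):", np.percentile(pop_2020, [5, 95]).round())
print("Growth (90% interval, %):", (100 * np.percentile(growth, [5, 95])).round(1))
```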

What questions relevant to EA could be answered by surveying the public?

It seems plausible that we should assign weight to what past generations valued (though one would likely not use survey methodology to do this), as well as what future generations will value, insofar as that is knowable.

EA Survey 2019 Series: How EAs Get Involved in EA

Summary: I think the post mostly holds up. The post provided a number of significant, actionable findings, which have since been replicated in the most recent EA Survey and in OpenPhil’s report. We’ve also been able to extend the findings in a variety of ways since then. There was also one part of the post that I don’t think holds up, which I’ll discuss in more detail.

The post highlighted (among other things):

  • People first hear about EA from a fairly wide variety of sources, rather than a small number dominating. Even the largest source, personal contacts, only accounted for about 14% of EAs.
  • In the most recent years, a large number of sources seem to be recruiting fairly similar numbers of EAs (LessWrong, SlateStarCodex, books, podcasts and local EA groups all at 5-8%).
  • Nevertheless, there were some large differences between sources (e.g. personal contacts, 80,000 Hours and LessWrong have historically recruited about 10x more than some other outreach sources).
  • Routes into EA have changed across time with 80,000 Hours recruiting many more EAs in recent years. Local EA Groups also increased as a recruitment source in recent years.
  • We also examined qualitative comments offering more detail about how people got into EA. This provided more detail about some of the categories. For example, within the books category, influence seemed to be spread fairly evenly across EA books. In contrast, podcasts were quite heavily dominated by Sam Harris’s podcasts, and the TED Talk category almost exclusively contained references to Peter Singer’s TED talk.
  • There are differences in the pattern of results for what factors were important for getting involved in EA, in contrast to where individuals first hear of EA.
  • There are significant (and sometimes large) differences in routes into EA and influences on involvement based on gender and race. In particular, personal contacts and local groups seemed particularly important for non-male respondents, perhaps suggesting that providing opportunities for personal connection is important in this regard.
  • There are only slight differences in where high vs low engagement EAs first heard about EA (for example, a larger number of highly engaged EAs first heard from a personal contact or local group). There are more and larger differences in what factors were important for high/low engagement EAs getting involved in EA (e.g. personal contacts and local groups were more commonly selected by the highly engaged).

Implications

I think these findings have a lot of implications for the EA community. Many of them seem fairly obvious/straightforward implications of the data (i.e. which factors have or have not been important historically). This is not to imply that there aren’t important caveats and complications to the interpretation of this information, just that the direct implications of some of the results (e.g. a lot of people being recruited by 80,000 Hours compared to some other sources) are fairly straightforward. Important factors influencing the interpretation of this information would include methodological ones (e.g. whether the survey recruits more people from 80,000 Hours and this influences the result) and substantive ones about the community (e.g. how many resources are spent on or by 80,000 Hours, and what the average impact is of people recruited from different sources, which is indirectly addressed by the data itself).

In the main EA Survey posts we consciously err on the side of not stating some of these implications, aiming instead to present the data neutrally and let it speak for itself. I think something important would be lost if the EA Survey series lost this neutrality (i.e. if it came to be associated with advocating for a particular policy, then people who disagreed with this policy might be less likely to take/trust the EA Survey), but this is definitely not without its costs (and it also relies on an assumption that other movement builders are working to draw out these implications).

One area where we discussed the possible implications of the results more than usual, while still holding back from making any specific policy proposals, is the finding that there seemed to be relatively little difference in the average engagement of EAs who first heard of EA from different sources. One might expect that certain sources would recruit a much larger proportion of highly engaged EAs than others, such that, even though some sources recruit a larger total number of EAs, others recruit a larger number of highly engaged EAs. One might even speculate that the number of EAs recruited and the proportion of highly engaged EAs recruited would be negatively correlated, if one supposes either that broader outreach leads to less highly engaged recruits (on average) or that diminishing returns mean that later recruits from a source tend to be less engaged than the ‘low-hanging fruit’.

I think the 2019 EA Survey results (and likewise the 2018 and 2020 results) present evidence suggesting that there are not particularly large differences between recruitment routes in this regard. This is suggested by the analysis showing that there were only a small number of significant, but slight, differences in the proportion of low/high engagement EAs recruited from different sources. To give a sense of the magnitude of the differences: 164 highly engaged EAs first heard about EA from a personal contact, whereas we would expect only 151 if there were no difference in engagement across different routes into EA (a difference of 13 people).
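For readers unfamiliar with where an 'expected' figure like the 151 comes from, here is the standard expected-count calculation under independence. The cross-tab below uses placeholder counts, not the actual 2019 data; it just illustrates the formula (row total × column total ÷ grand total).

```python
# Expected-count calculation under independence (chi-square style).
# All counts below are hypothetical placeholders, not the real EA Survey data.
observed = {
    "personal contact": {"high": 170, "low": 230},
    "other sources":    {"high": 580, "low": 1020},
}

high_total = sum(row["high"] for row in observed.values())
grand_total = sum(row["high"] + row["low"] for row in observed.values())

for source, row in observed.items():
    row_total = row["high"] + row["low"]
    # E = (row total * column total) / grand total
    expected_high = row_total * high_total / grand_total
    print(f"{source}: observed high = {row['high']}, "
          f"expected high = {expected_high:.0f}, "
          f"difference = {row['high'] - expected_high:+.0f}")
```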

Although I think the substantive conclusion stands up, one aspect of the presentation which I would change is the graph below:

This was an indirect replication of a graph we included in 2018. It offers a neat, simple visual representation of the relationship between the total number of EAs and the number of highly engaged EAs recruited from each source, but it risks being misleading because the two variables (total EAs recruited and highly engaged EAs recruited) are not independent: the total includes the highly engaged EAs. As such, we’d expect there to be some correlation between the two simply in virtue of this fact (how much depends, inter alia, on how many highly engaged EAs there are in the total population).
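To illustrate why this non-independence matters, here is a small simulation with purely synthetic data: even when a source's number of highly engaged recruits is unrelated to its number of low-engagement recruits, the highly engaged count still correlates with the total, simply because the total contains it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources = 10_000  # many synthetic 'recruitment sources'

# Draw low- and high-engagement recruit counts independently of each other.
low = rng.poisson(lam=100, size=n_sources)
high = rng.poisson(lam=20, size=n_sources)
total = low + high  # the total recruited includes the highly engaged

print("corr(high, low):  ", round(np.corrcoef(high, low)[0, 1], 3))    # ~0 by construction
print("corr(high, total):", round(np.corrcoef(high, total)[0, 1], 3))  # clearly positive
```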

As it happens, when we repeated the analysis looking at the relationship between the number of low engagement EAs and the number of highly engaged EAs independently (which I think is a less intuitive thing to consider), we found a very similar pattern of results. Nevertheless, although the simplicity of this presentation is nice, I think it’s generally better not to include any variants of this graph (we dropped it from EAS 2020) and instead to include analyses of whether the proportion or average level of engagement varies across different recruitment sources. Unfortunately, I think these are less intuitive and less striking (see e.g. the models included here), so I do worry that they are less likely to inform decisions.

New findings since 2019

I think most of these findings have only gained further support through being replicated in our 2020 post.

Since 2019 these results have also been supported by OpenPhil’s 2020 survey of “a subset of people doing (or interested in) longtermist priority work”. OpenPhil found very similar patterns to the EA Survey 2019 results despite using different categories and a different analysis (for one thing, OP’s allowed percentages to sum to more than 100%, whereas ours did not).

In addition, part of the remaining difference is likely explained by the fact that OP’s post compared their selected highly engaged sample to the full EA Survey sample. If we limit our analysis to only self-reported very highly engaged EAs, the gaps shrink further (i.e. highly engaged EAs were more likely to select personal contact and less likely to select SlateStarCodex).

Overall, I have been surprised by the extent to which OP’s data on highly impactful longtermists aligns with the EA Survey data, once you adjust for relevant factors like engagement.


What questions relevant to EA could be answered by surveying the public?

Fwiw, I think that both moral uncertainty and non-moral epistemic uncertainty (if you'll allow the distinction) suggest we should assign some weight to what people say is valuable.

  • Moral uncertainty may suggest we should assign some weight to views other than hedonistic utilitarianism. This includes other moral views, not just people's preferences, and we can discern what moral views people endorse through surveys (as I mention here). So we should ask about what people value and/or think is morally right and good, not merely what they prefer.
  • In addition, some moral views assign value to things which can be determined through surveys, including preferences (which you mention), but potentially including things like respecting people's values, autonomy/self-determination, democratic will, and not traducing people's wishes or coercing them.
  • But, separately, even if we only value maximizing wellbeing, given uncertainty about what promotes this / measurement error in our measures of it (and perhaps also conceptual uncertainty about what it consists in, though this may collapse into moral uncertainty), I think it's plausible we should assign some weight to what people say they prefer in judging what is likely to promote their wellbeing. For example, if we observe that having children seems to lead to lower wellbeing, but that people report that they value and prefer having children, that seems like it should be assigned some weight.
Stress - effective ways to reduce it

Thanks for collating these different ideas!

Fwiw, I think that it might be better if you were to simply drop the "Strength of effect: %" column from your sheet and not rank interventions according to this. 

As an earlier commenter pointed out, this is comparing "% reductions" in very different things (e.g. percentage reduction in cortisol vs percentage change in stress scores). But it also seems like this is going to be misleading in a variety of other ways. As far as I can tell, it's not only comparing different metrics for different interventions, it's sometimes combining percentage changes in different metrics to generate a single percentage change score for the same intervention. This approach also means that some of the most relevant pieces of evidence get left out (e.g. the effect sizes for the meta-analyses for CBT), because they're reported as standardised effect sizes rather than percentage change. I would probably also drop 'percentage of studies [we reviewed] where results were statistically significant' as a metric, since this seems likely to be misleading (I also wasn't sure how this was working in some cases, e.g. CBT has 86 studies reviewed, but it seems to have 2/2 studies with significant results. Is this counting the two meta-analyses as single studies?).
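To illustrate why mixing percentage changes across metrics is problematic, here is a toy example with made-up numbers: two interventions measured on different scales can rank one way on "% reduction" and the opposite way on a standardised effect size (Cohen's d), which is why meta-analyses usually report the latter.

```python
# Toy illustration (made-up numbers): "% change" and a standardised effect size
# can rank two interventions in opposite orders.
def percent_change(before, after):
    return 100 * (after - before) / before

def cohens_d(before, after, sd):
    # Standardised mean difference: change in units of the outcome's spread.
    return (before - after) / sd

# Intervention A: cortisol (nmol/L) drops a lot in % terms, but cortisol is noisy.
a = {"before": 400.0, "after": 320.0, "sd": 200.0}
# Intervention B: a stress-scale score drops less in % terms, but the scale is tight.
b = {"before": 30.0, "after": 27.0, "sd": 4.0}

for name, x in [("A (cortisol)", a), ("B (stress scale)", b)]:
    print(name,
          f"% change = {percent_change(x['before'], x['after']):.0f}%,",
          f"Cohen's d = {cohens_d(x['before'], x['after'], x['sd']):.2f}")
```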

I think it might be better to do this and instead present the post as a collection of possible interventions with some evidence, which people could evaluate and potentially try (collating the evidence so people can evaluate it themselves), rather than trying to rank the interventions according to simple but likely misleading metrics. If you were to lean into this approach you could also include a lot more potential interventions which have at least as much of an evidence base as the things listed here. For example, Examine.com reviews over 20 interventions for stress (and more for anxiety, mood and depression), and there are plenty of other things which could be included (for example, here's a meta-analysis and systematic review of B-vitamins for stress).

I think there can be reasonable disagreement about whether it's good to include more things, with less review and/or a lower bar for evidence, or fewer things with more review and/or a stricter bar for evidence. But in this case it seems like it may not be feasible within your project to provide meaningful review of the evidence for the different interventions and it seems like a lot of things are included which aren't better evidenced than things which are excluded. 

Is it no longer hard to get a direct work job?

We have a sense of this from questions we asked before (though only as recently as 2019, so they don't tell us whether there's been a change since then).

At that point 36.6% of respondents included EA non-profit work (i.e. working for an EA org) in their career plans. It was multiple select, so their plans could include multiple things, but it seems plausible that often EA org work is people's most preferred career and other things are backups. 

At that time 32% of respondents cited too few job opportunities as a barrier to their involvement in EA. This was the most commonly cited barrier (and the third most cited was it being too hard to get an EA job!).

These numbers were higher among more engaged respondents.

I think these numbers speak to EA jobs being very hard to get (at least in 2019).

The number of applications people are writing could be interesting to some degree, though I think there are a couple of limitations. Firstly, if people find that it is too hard to get a job and drop out of applying, this may make the numbers look better without the number of people who want a job and can't get one decreasing, and even without it becoming appreciably easier for those still applying for jobs. Secondly, if there are fewer (more) jobs for people to apply to, this may reduce (increase) the number of applications, but this would actually be making it harder (easier) for people to get jobs.

To assess the main thing that I think these numbers would be useful for (how competitive jobs actually are), I think hiring data from orgs would be most useful (i.e. how many applicants to how many roles). The data could also be useful to assess how much time EAs are spending applying (since this is presumably at some counterfactual cost to the community), but for that we might simply ask about time spent on applications directly.

Is it no longer hard to get a direct work job?

Not that many people respond to surveys, so the total EA population is probably higher than 2k, but it's difficult to say how much higher.

We give an estimate of the total population engaged at levels 3-5/5 here, which suggests ~2700 (2300 to 3100) at the highest levels of engagement (5000-10000 overall).

We then estimate that the numbers of the most engaged have increased by ~15% between 2019 and 2020 (see the thread with Ben Todd here and the discussion in his EA Global talk).

This suggests to me there are likely 3000 or more highly engaged EAs at present (there has likely been further growth since December 2020).
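As a rough back-of-the-envelope check (assuming the ~2700 figure above is the pre-growth 2019 estimate), applying the ~15% growth to that interval:

```python
# Back-of-the-envelope: apply the ~15% 2019->2020 growth estimate to the
# interval for the most engaged EAs (2300 to 3100, point estimate ~2700).
low_2019, mid_2019, high_2019 = 2300, 2700, 3100
growth = 0.15

for label, n in [("low", low_2019), ("central", mid_2019), ("high", high_2019)]:
    print(f"{label}: {n} * {1 + growth:.2f} = {round(n * (1 + growth))}")
```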

It's also important to note that (in my experience in different hiring rounds) a significant number of people who are successfully hired would not have been at levels 4-5 at the time of their application, which increases the numbers quite substantially.

We’re Rethink Priorities. Ask us anything!

can I just ask what WAW stands for? Google is only showing me writing about writing, which doesn't seem likely to be it...

"WAW" = Wild Animal Welfare (previously often referred to as "WAS" for Wild Animal Suffering).

And how often does RP decide to go ahead with publishing in academia?

I'd say a small minority of our projects (<10%).

We’re Rethink Priorities. Ask us anything!

Thanks for asking. We've run around 30 survey projects since we were founded. When I calculated this in June, we'd run a distinct survey project (each containing between 1 and 7 surveys), on average, every 6 weeks.

Most of the projects aren't exactly top secret, but I err on the side of not mentioning the details or who we've worked with unless I'm certain the orgs in question are OK with it. Some of the projects, though, have been mentioned publicly but not published: for example, CEA mentioned in their Q1 update that we ran some surveys for them to estimate how many US college students have heard of EA.

An illustrative example of the kind of project a lot of these are would be an org approaching us saying they are considering doing some outreach (this could be for any cause area) and wanting us to run a study (or studies) to assess what kind of message would be most appropriate. Another common type of project is polling support for different policies of interest and testing the robustness of these results with different approaches. These two kinds of project are the most common, but they generally take up proportionately less time.

There are definitely a lot of other things that we can do and have done. For example, the 'survey' team has also used focus groups before and would be interested in doing so again (which we think would be useful for a lot of EA purposes), and much of David Reinstein's work is better described as behavioural experiments (usually field experiments) rather than surveys.

Another aspect of our work that has increased a lot recently, to a degree that was slightly surprising, is what Peter refers to here as "ad hoc analysis requests" and consulting (e.g. on analysis and survey design), without us actually running a full project ourselves. I'd say we've provided services like this to 8-9 different orgs/researchers (sometimes taking no more than a couple of hours, sometimes taking multiple days) in the last few weeks alone. As Peter mentions in that post, these can be challenging from a fundraising perspective, although I strongly encourage people not to let that stop them from reaching out to us.

Our projects used to be more FAW (farmed animal welfare) leaning, but over time the composition has changed a bit and, perhaps unsurprisingly, now contains more longtermist projects. Because the things we work on are pretty responsive to requests coming from other orgs, the cause composition can change unexpectedly in a short space of time. Right now the projects we're working on are roughly evenly split between animals, movement building and meta, but it wouldn't be that surprising if it became majority longtermist over the next 6 months.
