I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies on moral psychology, focusing on animal welfare, population ethics, and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.


What myths or misconceptions prevent people from supporting EA organizations that work on animal welfare or long-termist causes?

Another one that plausibly applies to aid/charity within the global poverty field is that many donors underestimate the differences in effectiveness between interventions, relative to experts' estimates (Caviola et al., 2020).

My mistakes on the path to impact

Yeah, I think it's very difficult to tell whether the trend which people take themselves to be perceiving is explained by there having been more low-hanging fruit in EA's earlier years (which meant people encountered a larger number of radical new ideas early on), or whether there has actually been a slowdown in EA's intellectual productivity. (Similarly, because people tend to encounter a lot of new ideas when they first get involved in EA, they may perceive the insights being generated by EA as slowing down.) So it's hard to tell whether EA is stagnating in a worrying sense, since it's not clear how much intellectual progress we should expect to see now that some of the low-hanging fruit has already been picked.

That said, I actually think that the positive aspects of EA's professionalisation (which you point to in your other comment) may explain some of the perceptions described here, which I think are on the whole mistaken. I think in earlier years, there was a lot of amateur, broad speculation for and against various big questions in EA (e.g. big a priori arguments about AI versus animals, much of which was pretty wild and ill-informed). I think, conversely, we now have a much healthier ecosystem, with people making progress on the myriad narrower, technical problems that need to be addressed in order to address those broader questions.

My mistakes on the path to impact

"Stagnation" was also the 5th most often mentioned reason for declining interest in EA over the last 12 months when we asked about this in the 2019 EA Survey, accounting for about 7.4% of responses.

Personality traits associated with being involved in EA?

Unfortunately, none of them are online at the moment, but we'll re-upload previous years somewhere once last year's data has been processed for public release.

Personality traits associated with being involved in EA?

Roughly speaking, I would predict that a bunch of traits related to cognition (largely related to being more deliberative) and to moral motivation (e.g. empathy) would likely be correlated. Another way to think about this would be as tracking the "effectiveness" and the "altruism" parts respectively.

On the cognition side: Need for Cognition (which we already tested in the 2018 EA Survey, finding that EAs scored extremely highly), the Cognitive Reflection Test, reflection-impulsivity, and the Actively Open-minded Thinking scale, plus possibly other components of the Rationality Quotient. I would also expect higher Maximising and Alternative Search tendencies.

On the moral motivation side: potentially higher Empathic Concern from the IRI (we tested this in the 2018 survey and nothing jumped out). I think it's possible that the Empathic Concern measures track too much of the purely intuitive or emotional side of empathy (see Bloom), rather than the pure construct of compassion, or being motivated to help people. It also seems possible that EAs (on average) place higher importance on morality in their self-identity. I also expect there to be some things which crosscut the cognitive and moral-motivational groups here, for example, systematising versus empathy and people versus things.

My sense is that these two sets of things, roughly speaking, each contribute to making people more inclined to be more utilitarian. So I would expect measures of utilitarian thinking, like the Oxford Utilitarianism Scale, to somewhat pick up on these. I don't think this implies anything particularly strong about whether people who explicitly adopt a non-utilitarian philosophy can be EAs, or whether there is any logical conflict, since I think we should distinguish between the psychological tendency to think in a utilitarian (or, more strictly speaking, consequentialist) way and explicit endorsement of the philosophy of utilitarianism or anything else (since most people don't explicitly endorse any moral philosophy).

Also, although people talk a lot about the Big Five and we have used that before, I think if we used the closely related HEXACO six-factor model, then Honesty-Humility would also likely be correlated.

Please Take the 2020 EA Survey

Hi Dale. Thanks for your comment.

The gender question and many of the other demographic questions were selected largely to ensure comparability with other surveys run by CEA.

That aside, I think your claim that open-comment gender questions are "considered poor survey technique" is overstated. The literature discusses pros and cons of both formats. From this recent article in the International Journal of Social Research Methodology:

One of the simplest ways to collect data on gender identity is to use an open text box (see Figure 2) which allows participants the freedom to describe their gender in whatever way they see fit while accommodating changing norms around acceptable terminology. Terms commonly used around gender evolve over time... It would therefore be misguided of researchers to attempt to find the most contemporary terminology and use it to the exclusion of all other terms. Research teams are also likely to find such a process difficult and frustrating (Herman et al., 2012). Thus, an open text box is certainly the most accommodating approach to a range of evolving terms to describe gender identity.

If open text boxes are used for research that intends to analyze by category, however, researchers will still ultimately be categorizing the gender identities in order to define groups for statistical analysis and groups to which the findings might be generalized... These decisions will also need to be made if researchers using a multiple-choice approach choose to provide a long list of as many gender identity terms as possible. This approach is a fine option, but researchers need to be cognisant that terminology that was in common use when a tool was published may no longer be current when research is conducted using that tool... Good arguments can be made for the value of participants being able to see the specific term for their gender identity among a list of possibilities, but even Herman’s and Kuper’s lists, published within the past decade, contain terms that are increasingly considered problematic and do not contain some terms that are more common today.

An approach which provides a smaller number of options for gender identity has benefits and drawbacks. Providing fewer categories inevitably forces gender minority participants to place themselves into categories that the researcher provides, but gives the advantage that the participant, not researcher, chooses the categories in which they will be included.
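The categorization step described in the quoted passage (collapsing open-text responses into groups for statistical analysis) can be sketched in code. This is purely an illustrative example, not the survey's actual coding scheme: the category names, mapping, and responses below are all hypothetical.

```python
from collections import Counter

# Hypothetical mapping from common free-text responses to analysis categories.
# A real coding scheme would need to be documented and kept up to date as
# terminology evolves, as the quoted article notes.
CATEGORY_MAP = {
    "man": "male",
    "male": "male",
    "woman": "female",
    "female": "female",
    "non-binary": "non-binary",
    "nonbinary": "non-binary",
    "genderqueer": "non-binary",
}


def categorize(responses):
    """Collapse free-text gender responses into analysis groups.

    Responses not covered by the mapping are retained under a residual
    'other/self-described' category rather than discarded, so no
    participant is silently dropped from the analysis.
    """
    counts = Counter()
    for response in responses:
        key = response.strip().lower()
        counts[CATEGORY_MAP.get(key, "other/self-described")] += 1
    return counts


print(categorize(["Male", "woman", "Non-binary", "agender", "FEMALE"]))
```

The trade-off the article describes shows up directly here: the researcher, not the participant, decides what goes into `CATEGORY_MAP`, which is exactly the categorization burden that a closed-choice question shifts onto the respondent instead.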

Please Take the 2020 EA Survey

Thanks for the suggestion. We have considered it and might implement it in future years for some questions. For a lot of variables, I think we'd rather have data from almost all respondents every other year than data from half of respondents every year. This is particularly so for those variables which we want to use in analyses combined with other variables, but it applies less in the case of variables like politics where we can't really do that.

Please Take the 2020 EA Survey

Thanks for your feedback! It's very useful for us to receive public feedback about what questions are most valued by the community.

Your concerns seem entirely reasonable. Unfortunately, we face a lot of tough choices where not dropping any particular question means having to drop others instead. (And many people think that the survey is too long anyway implying that perhaps we should cut more questions as well.)

I think running these particular questions every other year (rather than cutting them outright) may have the potential to provide much of the value of including them every year, given that historically the numbers have not changed significantly across years. I would be less inclined to think this if we could perform additional analyses with these variables (e.g. to see whether people with different politics have lower NPS scores), but unfortunately with only ~3% of respondents being right-of-centre, there's a limit to how much we can do with the variable. (This doesn't apply to the diet measure which actually was informative in some of our models.)

Please Take the 2020 EA Survey


If you are referring to the question I think you're referring to, then we really do mean that people should select up to one option in each column: one column for whichever option (if any) was the source of the most important thing you learned, and one column for whichever option (if any) was the source of the most important new connection you made.

How to best address Repetitive Strain Injury (RSI)?

Oh dear, I guess I'm too used to always following any capital E with an A automatically.
