David_Moss

I am the Principal Research Manager at Rethink Priorities, working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies in moral psychology, focusing on animal welfare, population ethics, and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.

David_Moss's Comments

How much will local/university groups benefit from targeted EA content creation?

There was no way to ask whether people knew about all the resources that currently exist (although in the next survey we could ask whether they know about the EA Hub's resources specifically). We do know from other questions in this survey and in the 2017 survey, though, that many group leaders are not aware of existing services in general.

How much will local/university groups benefit from targeted EA content creation?

The 2019 Local Group Organizers Survey found large percentages of organizers reporting that more "written resources on how to run a group" and "written resources on EA thinking and concepts" would be highly useful.

Thoughts on electoral reform

It's great to see more reflection about approval voting and possible alternatives. I think the EA community should probably favour a lot more research into these alternatives before it invests resources in promoting any of these options.

Excessive political polarisation, especially party polarisation in the US, makes it harder to reach consensus or a fair compromise, and undermines trust in public institutions. Efforts to avoid harmful long-term dynamics, and to strengthen democratic governance, are therefore of interest to effective altruists.

I will note that many political theorists (e.g. agonistic theorists) think that reducing polarisation and increasing consensus should not be our goals in a democracy and need not be positive things. This is especially so when increasing consensus and compromise solutions are identified with "moderate" or centrist positions (which, as you note, could be construed as a bias).

How do you feel about the main EA facebook group?

I agree that the main EA Facebook group has many low quality comments which "do not meet the bar for intellectual quality or epistemic standards that we should have EA associated with." That said, it seems that one of the main reasons for this is that the Facebook group contains many more people with very low or tangential involvement with EA. I think we should be pretty cautious about more heavily moderating, or trying to exclude, the contributions of newer or less involved members.

As an illustration: the 2018 EA Survey found >50% of respondents were members of the Facebook group, but only 20% (i.e. 1 in 5) were members of the Forum. Clearly the Facebook group also has many more users who are even less engaged with EA and who don't take the EA Survey. The forthcoming 2019 results were fairly similar.

At the moment I think the EA Facebook group plays a fairly important role alongside the EA Forum (which only a small minority of EAs are involved with) in giving people newer to the community somewhere they can express their views. Heavier moderation of comments would probably add to the pervasive sense (which we will discuss in a future EA Survey post) that EA is exclusive and elitist.

I do think it's worth considering whether low quality discussion in the EA Facebook group will cause promising prospective EAs to 'bounce', i.e. see the low quality discussion, infer that EA is low quality, and leave. The extent to which this happens is a tricky broader question, but I'm inclined to hope that it wouldn't be too frequent, since readers can easily see the higher quality articles and numerous Forum posts linked on Facebook. I would also hope that most readers know that online discussion on Facebook is often low quality and won't update too heavily against EA on the basis of it.

It also seems worth bearing in mind that, since most members of the Facebook group clearly don't make the decision to move over to participating in the EA Forum, efforts to make the EA Facebook discussion more like the Forum may just put off a large number of users.

Is vegetarianism/veganism growing more partisan over time?

I think this is a good explanation of at least part of the phenomenon. As you note, when we sample the general population and only 5% of people report being vegetarian or vegan, even a small number of lizardpersons answering randomly, oddly, or deliberately trolling could make up a large part of that 5%.

That said, I note that even in surveys which deliberately target only identified vegetarians or vegans (so 100% of the sample identified as vegetarian or vegan), large percentages then say that they eat some meat. Rethink Priorities has an unpublished survey (report forthcoming soon) which sampled exclusively people who had previously identified as vegetarian or vegan (and then asked them again in the survey whether they identified as vegetarian or vegan), and we found that just over 25% of those who answered affirmatively to the latter question still seemed to indicate in a food frequency questionnaire that they consumed some meat product. That suggests to me that there's likely something more systematic going on, where some reasonably large percentage of people identify as vegetarian or vegan despite eating meat (e.g. because they eat meat very infrequently and think that's close enough). Of course, it's also possible that the first sampling to find self-identified vegetarians or vegans picked up a lot of lizardpersons, so that a disproportionate number of lizardpersons made it into the second sampling and then identified as vegetarian or vegan in our survey. And perhaps lizardpersons don't just answer randomly but are disproportionately likely to identify as vegetarian or vegan when asked, which might also contribute.
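As a purely illustrative sketch of this base-rate point (the rates below are hypothetical assumptions, not figures from any of the surveys discussed), a small share of careless responders can account for a large fraction of a low-prevalence answer, and some will survive even a second round of screening:

```python
# Hypothetical illustration of how a small share of careless ("lizardperson")
# responders can account for much of a low-prevalence answer such as
# "I am vegetarian/vegan". All rates are assumptions for illustration only.

def careless_share_of_yes(true_rate, careless_rate, p_careless_yes=0.5):
    """Fraction of 'yes' responses that come from careless responders, assuming
    genuine respondents answer accurately and careless respondents say 'yes'
    with probability p_careless_yes."""
    genuine_yes = (1 - careless_rate) * true_rate
    careless_yes = careless_rate * p_careless_yes
    return careless_yes / (genuine_yes + careless_yes)

# Stage 1: general population. If 4% are genuinely veg*n and 3% of respondents
# answer carelessly, careless responders make up roughly a quarter of everyone
# reporting veg*nism, even though total reported prevalence is still only ~5%.
stage1 = careless_share_of_yes(true_rate=0.04, careless_rate=0.03)
print(round(stage1, 2))  # ~0.28

# Stage 2: re-survey only the stage-1 'yes' group and ask again. Genuine
# vegetarians all re-confirm; careless responders again say 'yes' about half
# the time, so they still form a noticeable share of the "confirmed" group.
stage2 = careless_share_of_yes(true_rate=1.0, careless_rate=stage1)
print(round(stage2, 2))  # ~0.16
```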

EA Survey 2019 Series: Geographic Distribution of EAs

I don't think that really explains the observed pattern that well.

I agree that, in general, people not appearing in the EA Survey could be explained either by them dropping out of EA or by them just not taking the EA Survey. But in this case, what we want to explain is why, among the most recent cohorts of EAs who took the EA Survey in 2018 (those who joined in 2015-2017), a disproportionate number did not take the EA Survey in 2019, compared to earlier cohorts (who have been in EA longer).

The explanation that this is due to EAs disproportionately dropping out during their first 3 years seems to make straightforward intuitive sense.

The explanation that people who took the EA Survey in 2018 and joined in 2015-2017 specifically were disproportionately less likely to take the EA Survey in 2019 seems less straightforward. Presumably the thought is that these people might have taken the EA Survey once, realised it was too long or something, and decided not to take it in 2019, whereas people who joined in earlier years have already taken the EA Survey and so are less likely to drop out of taking it if they haven't already done so? I don't think that fits the data particularly well. Respondents from the 2015 cohort would have had opportunities to take the survey at least 3 times, including 2018, before stopping in 2019, so it's hard to see why they specifically would be more likely to stop taking the EA Survey in 2019 compared to earlier EAs. Conversely, EAs from before 2015 all the way back to 2009 or earlier had at most 1 extra opportunity to be exposed to the EA Survey (we started in 2014), so it's hard to see why these EAs would be less likely to stop taking the EA Survey in 2019, having taken it in 2018.
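To make the counting argument concrete, here is a minimal sketch; the survey years used are an assumption inferred from the comment (the EA Survey started in 2014 and, before 2019, ran in 2014, 2015, 2017 and 2018), not something checked against the actual survey history:

```python
# Sketch of the counting argument above. Survey years are an assumption
# inferred from the comment, not verified against the actual survey history.
survey_years = [2014, 2015, 2017, 2018]

def exposures_before_2019(cohort_year):
    """How many EA Surveys someone who joined EA in cohort_year could have
    taken before the 2019 survey."""
    return sum(1 for year in survey_years if year >= cohort_year)

for cohort in [2009, 2012, 2014, 2015, 2016, 2017]:
    print(cohort, exposures_before_2019(cohort))

# Pre-2015 cohorts had 4 opportunities and the 2015 cohort had 3, i.e. the
# earlier cohorts had at most one extra exposure, which makes it hard for a
# survey-fatigue story to explain why the 2015-2017 cohorts specifically would
# stop taking the survey in 2019 at higher rates.
```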


In general, I expect the observation may have more than one explanation, including just random noise, but I think higher rates of dropout among particular more recent cohorts make sense as an explanation, whereas these people specifically being more likely to take the EA Survey in 2018 but not in 2019 doesn't really make sense as one.

Growth and the case against randomista development

That's certainly true. I don't know exactly what they had in mind when they claimed that "most seem to be long-termists in some broad sense," but the 2019 survey at least has data directly on that question, whereas for 2018 we just have the best approximation we could give, by combining respondents who selected any of the specific causes that seemed broadly long-termist; Long Term Future lost out to Global Poverty using that method in both 2018 and 2019.*

*As noted in the posts, that method depends on the controversial question of which fine-grained causes should be counted as part of the 'Long Term Future' group. If Climate Change (the 2nd most popular cause in 2019, 3rd in 2018) were counted as part of LTF, then LTF would win by a mile. However, I am sceptical that most Climate Change respondents in our samples count as LTF in the relevant (EA) sense: ordinary (non-EA) climate change supporters who have no familiarity with LTF reasoning, but think we need to be sustainable and think about the world 100 years or more in advance, seem quite different from long-termist EAs (they generally do not, and would not, endorse LTF reasoning about other areas). An argument against this is that we see from the 2019 analysis that people who selected Climate Change as a specific cause predominantly broke in favour of LTF when asked to select a broader cause area. I'm not sure how dispositive that is though. It seems likely to me that people who most support a specific cause other than Global Poverty (or Animals or Meta) would probably be more likely to select a broader, vaguer cause category which their preferred cause could plausibly fit into (as Climate Change does into 'long term future/existential risk') than one of the other specific causes, and, as noted above, people might like the vague category of concern for the 'long term future' without actually supporting LTF the EA cause area. Some evidence for this comes from the other analyses in 2018 and 2019, which found that respondents who supported Climate Change were quite dissimilar from those who supported LTF causes in almost all respects (e.g. they tended to be newer to EA, very heavily skewed towards the most recent years, and less engaged with EA, generally following the same trends as Global Poverty and the opposite to AI; see here).

Growth and the case against randomista development

"Which cause is most popular depends on cause categorisation and most surveyed EAs seem to be long-termists in some broad sense." (EA Survey 2018 Series: Cause Selection)

This is clearly fairly tangential to the main point of your post, but since you mention it, the more recent EA Survey 2019: Cause Prioritization post offers clearer evidence for your claim that most surveyed EAs seem to be long-termists, as 40.08% selected 'Long Term Future / Catastrophic and Existential Risk Reduction' (versus 32.3% selecting Global Poverty) when presented with just 4 broad EA cause areas. That said, the claim in the main body of your text that "Global poverty remains a popular cause area among people interested in EA" is also clearly true, since Global Poverty was the highest rated and most often selected 'top cause' among the more fine-grained cause areas (22%).

The Labour leadership election: a high leverage, time-limited opportunity for impact (*1 week left to register for a vote*)

I have to wonder whether EAs voting on the Labour leadership is positive in expectation. A priori, I would have expected it would be, but to my surprise, the EAs I know personally whose views on Labour politics I also know have not (in my view) generally had better views, or been more thoughtful or more informed, than the average Labour party member (I have been a Labour party member for some years). Nor have their substantive views seemed better to me, though of course this is more controversial (and this fact leads me to reduce my confidence in my own views considerably). Notably, the above draws from a reference class of people who were already quite engaged with Labour politics; things may be different (and perhaps worse) for the class of EAs who were not Labour party members, but who were persuaded their vote would be valuable by a forum post.

It also seems possible that EA votes being positive in expectation holds true for general elections, where choices are starker, there is generally more consensus among EAs, and their votes are compared against a wider reference class, but does not hold for more select votes on more nuanced issues, where the comparison is against relatively engaged and informed voters.

[Link] Aiming for Moral Mediocrity | Eric Schwitzgebel

The first two sentences of his article "Aiming For Moral Mediocrity" are:

"I have an empirical thesis and a normative thesis. The empirical thesis is: Most people aim to be morally mediocre." [I'm including this as a general reference for other readers, since you seem to have read the article yourself.]

I take the fact that people systematically evaluate themselves as being significantly (morally) better than average, as strong evidence against the claim that people are aiming to be morally mediocre. If people systematically believed themselves to be better than average and were aiming for mediocrity, then they could (and would) save themselves effort and reduce their moral behaviour until they no longer thought themselves to be above average.

Note that the evidence Schwitzgebel cites for his empirical thesis doesn't show that "People behave morally mediocre" any more than it shows that people aim to be morally mediocre: it shows that people's behaviour goes up or down when you tell them that a reference class is behaving much better or worse, but not that most people's behaviour is anywhere near the mediocre reference point. For example, in Cialdini et al. (2006), 5% of people took wood from a forest when told that "the vast majority of people did not" and 7.92% did when told that "many past visitors" had (which was not a significant difference, as it happened). Unfortunately, the reference points "vast majority" and "many" are vague, but this doesn't suggest that most people are behaving anywhere near the mediocre reference point.

I recognise that Schwitzgebel acknowledges this "gap" between his evidence and his thesis in section 4, but I think he fails to appreciate the extent of the gap (which is near total), or that the evidence he cites can actually be seen as evidence against his thesis, if we infer on the basis of these results that most people don't seem to be acting in line with the mediocre reference point.


In the "aiming for a B+" section you cite he actually seems to shift quite a bit to be more in line with my claim.

Here he suggests that "B+ probably isn’t low enough to be mediocre, exactly. B+ is good. It’s just not excellent. Maybe, really, instead of aiming for mediocrity, most people aim for something like B+ – a bit above mediocre, but shy of excellent." This is in line with my claim, that people take themselves to be above average morally and aim to keep sailing along at that level, but quite different from his claim previously that people "calibrate toward approximately the moral middle" and aim to be "so-so."

He reconciles this with the claim that people think of themselves as, and aim to be, above average (and "good") by suggesting that "most people who think they are aiming for B+ are in fact aiming lower." His passage doesn't make it entirely clear what he means by that.

In the first instance he seems to suggest that people's beliefs are just mistaken about where they are really aiming (he gives the example of a student who professes to aim for a B+, but won't work harder if they get a C). But I don't see any reason to think that people are systematically mistaken about what moral standard they are really aiming at.

However, in a later passage he says "when I say that people aim for mediocrity, I mean not that they aim for mediocrity-by-their-own rationalized-self-flattering-standards. I mean that they are calibrating toward what is actually mediocre." Elsewhere he also says "It is also important here to use objective moral standards rather than people’s own moral standards." It's slightly unclear to me whether he means to refer to what is mediocre according to objective descriptive standards of how people actually behave, or according to objective normative standards i.e. what (Schwitzgebel thinks) is actually morally mediocre. If it's the former, we are back to the claim that although people think they are morally good and think they are aiming for morally good behaviour (according to their standards), they actually aim their behaviour towards median behaviour in their reference class (which I don't think we have any evidence for). If it's the latter then it's just the claim that the level of behaviour that most people actually end up approximating is mediocre (according to Schwitzgebel), which isn't a very interesting thesis to me.
