David_Moss

I am the Principal Research Manager at Rethink Priorities working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies in moral psychology, focusing on animal ethics, population ethics, and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (combining philosophy, empirical study, and policy work) and on moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.

Comments

Correlations Between Cause Prioritization and the Big Five Personality Traits

Thanks again for writing it! It's nudged me to go back and look at our data again when I have some time. I expect that we'll probably replicate at least some of your broad findings.

Evidence on correlation between making less than parents and welfare/happiness?

There is research on the links between downward social mobility and happiness, however:


These empirical studies show little consensus when it comes to the consequences of intergenerational social mobility for SWB: while some authors suggest that upward mobility is beneficial for SWB (e.g. Nikolaev and Burns, 2014), others find no such relationship (e.g. Zang and de Graaf, 2016; Zhao et al., 2017). In a similar vein, some researchers suggest that downward mobility is negatively associated with SWB (e.g. Nikolaev and Burns, 2014), while others do not (e.g. Zang and de Graaf, 2016; Zhao et al., 2017)

This paper suggests that differences in culture may influence the connection between downward social mobility and happiness:

the United States is an archetypical example of a success-oriented society in which great emphasis is placed on individual accomplishments and achievement (Spence, 1985). The Scandinavian countries are characterized by more egalitarian values (Schwartz, 2006; Triandis, 1996, Triandis and Gelfand, 1998; see also Nelson and Shavitt, 1992)...A great cultural salience of success and achievement may make occupational success or failure more important markers for people’s SWB. 

And they claim to find this:

In line with a previous study from Nikolaev and Burns (2014) we found that downward social mobility is indeed associated with lower SWB in the United States. This finding provides evidence for the “falling from grace hypothesis” which predicts that downward social mobility is harmful for people’s well-being. However, in Scandinavian Europe, no association between downward social mobility and SWB was found. This confirms our macro-level contextual hypothesis for downward social mobility: downward social mobility has greater consequences in the United States than in the Scandinavian countries.

This is, of course, just one study so not very conclusive.

Correlations Between Cause Prioritization and the Big Five Personality Traits

Thus, intellectually curious people—those who are motivated to explore and reflect upon abstract ideas—are more inclined to judge the morality of behaviors according to the consequences they produce.

This is probably mentioned in the paper, but the Cognitive Reflection Test is also associated with utilitarianism, as are Need for Cognition, Actively Open-minded Thinking, and numeracy. Note that I don't endorse all of these papers' conclusions (for one thing, some use the simple 'trolley paradigm', which I think likely doesn't capture utilitarianism very well).

Notably, when we measured Need for Cognition in the EA Survey, respondents scored ludicrously highly, with the maximum response for each item being the modal response.

What actually is the argument for effective altruism?

I agree that if the first two premises were true, but the third were false, then EA would still be important in a sense, it's just that everyone would already be doing EA

Just to be clear, this is only a small part of my concern, which is that it makes EA sound as if it relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.

One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are actually the highest-impact ways of contributing to the common good; i.e. we investigate, as effective altruists, and it turns out that the kinds of things people typically do to contribute to the common good are (the) high(est) impact. [^1]

To the non-EA reader, it likely wouldn't seem too unlikely that the kinds of things they typically do are actually high impact. So it may seem peculiar and unappealing for EAs to simply assume [^2] that the kinds of things people typically do are not high impact.

[^1]: A priori, one might think there are some reasons to presume in favour of this (and so against the EA premise), i.e. James Scott type reasons, deference to common opinion etc.

[^2]: As noted, I don't think you actually do think that EAs should assume this, but labelling it as a "premise" in the "rigorous argument for EA" certainly risks giving that impression.

Nathan Young's Shortform

This is true, although for whatever reason the responses to the podcast question seemed very heavily dominated by references to MacAskill. 

This is the graph from our original post, showing every commonly mentioned category, not just the host (categories are not mutually exclusive). I'm not sure what explains why MacAskill so heavily dominated the Podcast category, while Singer heavily dominated the TED Talk category.

What actually is the argument for effective altruism?

Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.


It's not entirely clear to me what this means (specifically what work the "can" is doing). 

If you mean that it could be the case that we find high-impact actions which are not the same as what people who want to contribute to the common good would typically do, then I agree this seems plausible as a premise for engaging in the project of effective altruism.

If you mean that the premise is that we actually can find high impact actions which are not the same as what people who want to contribute to the common good typically do, then it's not so clear to me that this should be a premise in the argument for effective altruism. This sounds like we are assuming what the results of our effective altruist efforts to search for the actions that do the most to contribute to the common good (relative to their cost) will be: that the things we discover are high impact will be different from what people typically do. But, of course, it could turn out to be the case that actually the highest impact actions are those which people typically do (our investigations could turn out to vindicate common sense, after all), so it doesn't seem like this is something we should take as a premise for effective altruism. It also seems in tension with the idea (which I think is worth preserving) that effective altruism is a question (i.e. effective altruism itself doesn't assume that particular kinds of things are or are not high impact).

I assume, however, that you don't actually mean to state that effective altruists should assume this latter thing to be true or that one needs to assume this in order to support effective altruism. I'm presuming that you instead mean something like: this needs to be true for engaging in effective altruism to be successful/interesting/worthwhile. In line with this interpretation, you note in the interview something that I was going to raise as another objection: that if everyone were already acting in an effective altruist way, then it would be likely false that the high impact things we discover are different from those that people typically do.

If so, then it may not be false to say that "The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do", but it seems bound to lead to confusion, with people misreading this as EAs assuming that the highest-impact things are not what people typically do. It's also not clear that this premise needs to be true for the project of effective altruism to be worthwhile and, indeed, a thing people should do: it seems like it could be the case that people who want to contribute to the common good should engage in the project of effective altruism simply because it could be the case that the highest-impact actions are not those which people would typically do.

Nathan Young's Shortform

This seems quite likely given EA Survey data where, amongst people who indicated they first heard of EA from a podcast and indicated which podcast, Sam Harris's strongly dominated all other podcasts.

More speculatively, we might try to compare these numbers to people hearing about EA from other categories. For example, by any measure, the number of people in the EA Survey who first heard about EA from Sam Harris' podcast specifically is several times the number who heard about EA from Vox's Future Perfect. As a lower bound, 4x more people specifically mentioned Sam Harris in their comment than selected Future Perfect, but this is probably dramatically undercounting Harris, since not everyone who selected Podcast wrote a comment that could be identified with a specific podcast. Unfortunately, I don't know the relative audience size of Future Perfect posts vs Sam Harris' EA podcasts specifically, but that could be used to give a rough sense of how well the different audiences respond.

Thomas Kwa's Shortform

Thanks for writing this.

I also agree that research into how laypeople actually think about morality is probably a very important input into our moral thinking. I mentioned some reasons for this in this post, for example. This project on descriptive population ethics also outlines the case for this kind of descriptive research. If we take moral uncertainty and epistemic modesty/outside-view thinking seriously, and if on the normative level we think respecting people's moral beliefs is valuable either intrinsically or instrumentally, then this sort of research seems entirely vital.

I also agree that incorporating this data into our considered moral judgements requires a stage of theoretical normative reflection, rather than merely "naively deferring" to whatever people in aggregate actually believe, and that we should probably go back and forth between these stages to bring our judgements into reflective equilibrium (or some such).

That said, it seems like what you are proposing is less a project and more an enormous research agenda spanning several fields, much of it ongoing across multiple disciplines, though much of it in its early stages. For example, there is much work in moral psychology which tries to understand what people believe, and why, at different levels (influential paradigms include Haidt's Moral Foundations Theory and Oliver Scott Curry's Morality as Cooperation / Moral Molecules theory); there is a whole new field of sociology of morality (see also here); anthropology of morality is a long-standing field; and experimental philosophy has just started to empirically examine how people think about morality too.

Unfortunately, I think our understanding of folk morality remains exceptionally unclear and in its very early stages. For example, despite a much-touted "new synthesis" between different disciplines and approaches, there remains much distance between them, to the extent that people in psychology, sociology and anthropology are barely investigating the same questions >90% of the time. Similarly, experimental philosophy of morality seems utterly crippled by validity issues (see my recent paper with Lance Bush here). There is also, I have argued, a need to gather qualitative data, in part due to the limitations of survey methodology for understanding people's moral views, which experimental philosophy and most psychology have essentially not started to do at all.

I would also note that there is already cross-cultural moral research on various questions, but this is usually limited to fairly narrow paradigms: for example, aside from those I mentioned above, the World Values Survey's focus on Traditional/Secular-Rational and Survival/Self-expressive values, research on the trolley problem (which also dominates the rest of moral psychology), or the Schwartz Values Survey. So these lines of research don't really give us insight into people's moral thinking in different cultures as a whole.

I think the complexity and ambition involved in measuring folk morality becomes even clearer when we consider what is involved in studying specific moral issues. For example, see Jason Schukraft's discussion of how we might investigate how much moral weight the folk ascribe to the experiences of animals of different species.

There are lots of other possible complications with cross-cultural moral research. For example, there is some anthropological evidence that the western concept of morality is idiosyncratic and does not overlap particularly neatly with other cultures, see here.

So I think, given this, the problem is not simply that it's "too expensive", as we might say of a really large survey, but that it would be a huge endeavour where we're not even really clear about much of the relevant theory and categories. Training a significant number of EA anthropologists who are competent in ethnography and the relevant moral philosophy would also be quite a logistical challenge.

---

That said, I think there are plenty of more tractable research projects that one could do roughly within this area. For example, more large-scale representative surveys examining people's views and their predictors across a wider variety of issues relevant to effective altruism/prioritisation would be relatively easy to do with a budget of <$10,000, by existing EA researchers. This would also potentially contribute to understanding influences on the prioritisation of EAs, rather than just that of non-EAs, which would also plausibly be valuable.

Yale EA Virtual Fellowship Retrospective - Summer 2020

Thanks for the post! This definitely isn't addressed at you specifically (I think this applies to all EA groups and orgs), so I hope this doesn't seem like unfairly singling you out over a very small part of your post, but I think EAs should stop calculating and reporting the 'NPS score' when they ask NPS or NPS-style questions. 

I assume you calculated the NPS score in the 'standard' way, i.e. asking people "Would you recommend the Fellowship to a friend?" on a 0-10 or 1-10 scale, and subtracting the percentage of people who answered with a 6 or lower ("Detractors") from the percentage of people who answered with a 9 or 10 ("Promoters"). The claim behind the NPS system is that people who give responses within these ranges are qualitatively distinct 'clusters' (and that the people responding with a 7-8 are also a distinct cluster, "Passives", who basically don't matter and so don't figure in the NPS score at all), and that simply subtracting the percentage of one cluster from another is the "easiest-to-understand, most effective summary of how a company [is] performing in this context."

Unfortunately, it does not seem to me that there's a sound empirical basis for analysing an NPS-style scale in this way (and the company behind it is quite untransparent about this basis; see discussion here). This way of analysing responses to a scale is pretty unusual and obscures most of the information about the distribution of responses, which it seems would be pretty easy for an EA audience to understand. For example, it would be pretty easy to depict the distribution of responses, as we did in the EA Survey Community information post.

And calculating the mean and median response would give a more informative, but equally easy-to-understand, summary of performance on this measure (more so than the NPS score, which, for example, completely ignores whether people respond with a 0 or a 6). This would also allow easy significance testing of the differences between events/groups.
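To illustrate the point, here's a minimal sketch (with made-up responses, not anyone's actual data) of how the standard NPS calculation discards information that the mean and median retain:

```python
from statistics import mean, median

# Hypothetical 0-10 responses to "Would you recommend the Fellowship to a friend?"
responses = [10, 9, 9, 8, 8, 7, 7, 6, 10, 9, 5, 8]

# Standard NPS: % Promoters (9-10) minus % Detractors (0-6).
# Passives (7-8) are excluded from the score entirely.
promoters = sum(r >= 9 for r in responses) / len(responses)
detractors = sum(r <= 6 for r in responses) / len(responses)
nps = (promoters - detractors) * 100

# Equally easy, more informative summaries of the same data:
print(f"NPS: {nps:.0f}")                      # NPS: 25
print(f"Mean: {mean(responses):.2f}")          # Mean: 8.00
print(f"Median: {median(responses)}")          # Median: 8.0
```

Note that changing the 5 to a 0, or the 6 to a 0, leaves the NPS score completely unchanged, while the mean (and the distribution) would register the difference.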
