David_Moss

I am the Principal Research Manager at Rethink Priorities, working on, among other things, the EA Survey, the Local Groups Survey, and a number of studies in moral psychology, focusing on animal ethics, population ethics and moral weights.

In my academic work, I'm a Research Fellow working on a project on 'epistemic insight' (mixing philosophy, empirical study and policy work) and moral psychology studies, mostly concerned either with effective altruism or metaethics.

I've previously worked for Charity Science in a number of roles and was formerly a trustee of EA London.

Comments

My Meta-Ethics and Possible Implications for EA

The learned meaning of moral language refers to our recollection/reaction to experiences. These reactions include approval, preferences and beliefs... Preferences enter the picture when we try to extend our use of moral language beyond the simple cases learned as a child. When we try to compare two things that are apparently both bad we might arrive at a preference for one over the other, and in that case the preference precedes the statement of approval/disapproval.

Thanks for the reply. I guess I'm still confused about what specific attitudes you see as involved in moral judgements, whether approval, preferences, beliefs, or some more complex combination of these. It sounds like you see the genealogy of moral terms as involving a melange of all of these, which seems to leave the door quite open as to what moral terms actually mean.

It does sound from your reply, though, like you think that moral language exclusively concerns experiences (and our evaluations of experiences). If so, that doesn't seem right to me. For one, the vast majority of people (outside of welfarist EA circles) don't exclusively or even primarily make moral judgements or utterances which are about the goodness or badness of experiences (even indirectly). It also doesn't seem to me that the kind of simple moral utterances which ex hypothesi train people in the use of moral language at an early age primarily concern experiences and their badness (or preferences, for that matter). It seems equally if not more plausible to speculate that such utterances typically involve injunctions (with the threat of punishment and so on).

Thanks for bringing up the X,Y,Z point; I initially had some discussion of this point, but I wasn't happy with my exposition, so I removed it. Let me try again: In cases when there are multiple moral actors and patients there are two sets of considerations. First, the inside view, how would you react as X and Y. Second, the outside view, how would you react as person W who observes X and Y. It seems to me that we learn moral language as a fuzzy mixture of these two with the first usually being primary.

Thanks for addressing this. This still isn't quite clear to me: what exactly is meant by 'how would you react as person W who observes X and Y'? What conditions on W's observing X and Y are required? For example, does it refer only to how I would react if I were directly observing an act of torture in the room, or does it permit broader 'observations', e.g. observing that there is such-and-such level of inequality in the distribution of income in a society? The more restrictive definitions don't seem adequate to capture how we actually use moral language, but the more permissive ones, which are more adequate, don't seem to suffice to rule out my making judgements about the repugnant conclusion and so on.

Much as with population ethics, I suspect this endeavor should be seen as... beyond the boundary of where our use of language remains well-defined.

I agree that answers to population ethics aren't directly entailed by the definition of moral terms. But I'm not sure why we should expect any substantive normative answers to be implied by the meaning of moral language. Moral terms might mean "I endorse x", but any number of different considerations (including population ethics, facts about neurobiology) might be relevant to whether I endorse x (especially so if you allow that I might have all kinds of meta-reactions about whether my reactions are based on appropriate considerations etc.).

Where the QALY's at in political science?

Effective Thesis has some suggested topics within political science.

Replaceability Concerns and Possible Responses

It is somewhat surprising the EA job market is so competitive. The community is not terribly large. Here is an estimate... This suggests to me a very large fraction of highly engaged EAs are interested in direct work.

We have data from our careers post which addresses this. 688 respondents (36.6% of those answering that question) indicated that they wanted to pursue a career in an EA non-profit. That said, this was a multi-select question, so people could select this alongside other options. Also, 353 people reported having applied to an EA org for a job. There were 207 people who indicated they currently work at an EA org, which, if we speculatively take that as a rough proxy for the total number of positions, suggests a large mismatch between people seeking positions and total positions.

Of those who included EA org work among their career paths and were not already employed in an EA org, 29% identified as "highly engaged" (defined with examples such as having worked in an EA org or leading a local group). A further 32% identified with the next highest level of engagement, which includes things like "attending an EA Global conference, applying for career coaching, or organizing an EA meetup." Those who reported applying for an EA org job were yet more highly engaged: 37.5% "highly engaged" and 36.4% at the next highest level of engagement.
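As a rough illustration of the scale of that mismatch, here is a minimal back-of-the-envelope sketch (in Python) using only the figures above; taking current employees as a proxy for total positions is the same speculative assumption flagged above:

```python
# Back-of-the-envelope from the survey figures quoted above.
interested = 688   # selected "career in an EA non-profit" (36.6% of respondents)
applied = 353      # reported having applied to an EA org for a job
employed = 207     # reported currently working at an EA org

# Implied number of respondents to the careers question.
respondents = interested / 0.366

# Speculative proxy: current employees ~ total positions available.
applicants_per_position = applied / employed

print(f"Implied respondents to the question: ~{respondents:,.0f}")
print(f"Applicants per (proxied) position: ~{applicants_per_position:.1f}")
# ~1,880 respondents; ~1.7 applicants per position, before counting the
# much larger pool who expressed interest but had not yet applied.
```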

My Meta-Ethics and Possible Implications for EA

Thanks for the post.  

I found myself having some difficulty understanding the core of your position. Specifically, I'm not sure whether you're claiming that the meaning of moral language is to do with how we would react (what we would approve/disapprove of) in certain scenarios, or more narrowly that moral language is about experiences and our reactions if we were to undergo certain experiences, or, more narrowly still, about what we would prefer, or what we would believe, if we were to undergo those experiences.

Note that there are lots of variations within the above categories, of course. For example, if morality is about what we would believe if we lived the relevant experiences, it's not clear to me whether this means what I would believe about whether X should torture Y, if I were Y being tortured, if I were X torturing Y, or if I were Z who had experienced both and then combined that with my own moral dispositions etc.

Either way, I'm not sure that the inclusion of meta-reactions and the call to universality (which I agree are necessary to make this form of expressivism plausible) permit the conclusions you draw.

For example, you write: "it seems that personal experience with animals (and their suffering) becomes paramount, overriding evidence from neuron counts, self-awareness experiments and the like." But if you allow that I can be concerned with whether my own reactions are consistent, impartial and proportionate to others' bad experiences, then it seems like I can be concerned with whether helping chickens or helping salmon causes there to be fewer bad experiences, or with whether specific animals are having negative experiences at all. And if so, it seems like I should be concerned about what the evidence from neuron counts, self-awareness experiments etc. would tell us about the extent to which these creatures are suffering. Moral claims being about what my reactions would be in such-and-such circumstance doesn't give me reason to privilege my actual reactions upon personal experience (in current circumstances). Doing so seems to imply that when I'm thinking about whether, say, swatting a fly is wrong, I should simply ask myself what my reactions would be if I swatted a fly; but that doesn't seem plausible as an account of how we actually think morally, where what I'm actually concerned about (inter alia) is whether the fly would be harmed if I swatted it.

3 suggestions about jargon in EA

Academia, especially in the social sciences and humanities, also strikes me as being extremely pro-concealment (whether actively or, more commonly, passively, by believing we should not gather the information in the first place) on topics which it actually views as objectionable, for explicitly altruistic reasons.

Resources to learn how to do research

If you are interested in EA research/an EA research job, I would recommend just reading EA research on this forum and on the websites of EA research organisations. Much of this research doesn't involve any research method beyond general desk/secondary research, i.e. reading relevant literature and synthesising it.

In cases where you see that EA research relies on some specific technical methodology, such as stats, cost-effectiveness modelling, or surveys, I would just recommend googling the specific method and finding resources that way. In general, I think there are too many different methods and approaches, even within these categories, for it to be very helpful to link to a general introduction to stats (although here's one, for example), since, depending on what you want to do, a lot of it won't be relevant.

EA Survey 2019 Series: How many people are there in the EA community?

I think "been influenced by EA to do EA-like things" covers a very wide array of people.

In the most expansive sense, this seems like it would include people who read a website associated with EA (this could be Giving What We Can, GiveWell, The Life You Can Save, ACE or others...), decide "These sound like good charities", and donate to them. I think people in this category may or may not have heard of EA (all of these mention effective altruism somewhere on the website), and they may even have read some specific formulation that expresses EA ideas (e.g. "We should donate to the most effective charity") and decided to donate to these specific charities as a result. But they may not really know or understand what EA means (lots of people would platitudinously endorse 'donating to the best charities') or endorse it, let alone identify with or be involved with EA in any other way.

I agree that there are many, many more people who are in this category. As we note in footnote 7, there are literally millions of people who've read the GiveWell website alone, many of whom (at least 24,000) will have been moved to donate. Donating to a charity influenced by EA principles was the most commonly reported activity in the EA survey by a long way, with >80% of respondents reporting having done so, and >60% even among the second lowest level of engagement.
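For a sense of scale, here is a small illustrative calculation (again in Python). The reader count is a placeholder assumption standing in for "literally millions"; only the 24,000 figure comes from footnote 7:

```python
# Illustrative scale of the outermost "influenced by EA" category.
readers = 2_000_000    # assumption: placeholder for "millions" of GiveWell readers
known_donors = 24_000  # footnote 7: at least this many readers were moved to donate

donor_fraction = known_donors / readers
print(f"Demonstrable donor fraction of readers: >= {donor_fraction:.1%}")
# >= 1.2% of readers demonstrably donated; the true number moved to donate
# (and the wider "EA-like things" category) is plausibly far larger still.
```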

I think we agree that while getting people to donate to effective charities is important (perhaps, in a lot of cases, even more impactful than getting people to 'engage with the effective altruism community'), these people don't count as part of the EA community in the sense discussed here. But I think they also wouldn't count as part of the "wider network of people interested in effective altruism" that David Nash refers to (i.e. because many of them aren't interested in effective altruism).

I think a good practical test would be: if you went to some of these people who were moved to donate to a GiveWell/ACE etc. charity and said "Have you heard that many adherents of effective altruism believe that we should x?", and their response was some variation on "What's that?" or "Why should I care?", then they're not part of the community or network of people interested in EA. I think this is a practically relevant grouping because it tells you who could 'be influenced by EA to do EA things', where we understand "influenced by EA" to refer to EA reasoning and arguments and "EA things" to refer to EA things in general, as opposed to people who might be persuaded by an EA website to do some specific thing which EAs currently endorse, but who would not consider anything else or consider maximising effectiveness more generally.

EA Survey 2019 Series: How many people are there in the EA community?

Thanks for the reply!

So then it is a question of whether action or identification is more important; I would favor action.

This is the kind of question I had in mind when I said: "Of course, being part of the “EA community” in this sense is not a criterion for being effective or acting in an EA manner; for example, one could donate to effective charity without being involved in the EA community at all..."

It seems fairly uncontroversial to me that someone who does a highly impactful, morally motivated thing, but hasn't even heard of the EA community, doesn't count as part of the EA community (in the sense discussed here).

I think this holds true even if an activity represents the highest standard that all EAs should aspire to. The fact that something is the highest standard EAs should aspire to doesn't mean that people can't undertake the activity for reasons unrelated to EA, and I think those people would fall outside the "EA community" in the relevant sense, even if they are doing more than many EAs.

EA Survey 2019 Series: How many people are there in the EA community?

I agree this would both not be very inspiring and risk sounding elitist. I don't have any novel ideas; I would probably just say something vague about wanting to spread the ideas carefully and ensure they aren't lost or distorted in the mass media, and try to redirect the topic.

EA Survey 2019 Series: How many people are there in the EA community?

We'll be addressing this indirectly in the next couple of posts as it happens.
