We just released a podcast episode with me about what the core arguments for effective altruism actually are, and about potential objections to them.
I wanted to talk about this topic because I think many people – even many supporters – haven’t absorbed the core claims we’re making.
As a first step in tackling this, I think we could better clarify what the key claim of effective altruism actually is, and what the arguments for that claim are. Doing so would also help us improve our own understanding of effective altruism.
The most relevant existing work is Will MacAskill's introduction to effective altruism in the Norton Introduction to Ethics. However, that piece argues for the claim that we have a moral obligation to pursue effective altruism, and I wanted to formulate the argument without appealing to a moral obligation. What I say is also in line with MacAskill's definition of effective altruism.
I think a lot more work is needed in this area, and don’t have any settled answers, but I hoped this episode would get discussion going. There are also many other questions about how best to message effective altruism after it's been clarified, which I mostly don't get into.
In brief, here’s where I’m at. Please see the episode to get more detail.
The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.
The project of effective altruism: The search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into (i) an intellectual project – a research field aimed at identifying these actions – and (ii) a practical project to put these findings into practice and have an impact.
I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.
The three main premises supporting the claim of EA are:
- Spread: There are big differences in how much different actions (with similar costs) contribute to the common good.
- Identifiability: We can find some of these high-impact actions with reasonable effort.
- Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.
The idea is that if some actions contribute far more than others, if we can find those actions, and if they’re not the same as what we’re already doing, then – if you want to contribute to the common good – it’s worth searching for these actions. Otherwise, you’re failing to achieve as much for the common good as you could, and could better achieve your own stated goal by searching.
Moreover, we can say that the greater the degree to which each of the premises holds, the bigger the mistake it is not to pursue the project of effective altruism. For instance, the greater the degree of spread, the more you’re giving up by not searching (and likewise for the other two premises).
We can think of the importance of effective altruism quantitatively as how much your contribution is increased by applying effective altruism compared to what you would have done otherwise.
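As a rough formalisation (my own sketch, not something spelled out in the episode), if $V(a)$ denotes how much an action $a$ contributes to the common good, this gain could be expressed as a ratio:

$$\text{Gain from EA} = \frac{V(a_{\text{found}})}{V(a_{\text{default}})}$$

where $a_{\text{found}}$ is the best action identified by searching and $a_{\text{default}}$ is what you would have done otherwise. The greater the spread and the better our identifiability, the larger this ratio can be.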
Unfortunately, there’s not much rigorously written up about how much actions differ in effectiveness ex ante and all things considered, and I’m keen to see more research in this area.
In the episode, I also discuss:
- Some broad arguments for why the premises seem plausible.
- Some potential avenues to object to these premises – I don’t think these objections work as stated, but I’d like to see more work on making them better. (I think most of the best objections to EA are about EA in practice rather than the underlying ideas.)
- Common misconceptions about what EA actually is, and some speculation on how these arose.
- A couple of rough thoughts on how, given these issues, we might improve the way we message effective altruism.
I’m keen to see people run with developing these arguments, strengthening the objections, and thinking about how to improve messaging. There’s a lot of work to be done.
If you're interested in working on this, I may be able to share some draft documents with you that have a little more detail.
It's not entirely clear to me what the Novelty premise means (specifically, what work the "can" is doing).
If you mean that it could be the case that we find high-impact actions which are not the same as what people who want to contribute to the common good would typically do, then I agree this seems plausible as a premise for engaging in the project of effective altruism.
If you mean that the premise is that we actually can find high-impact actions which are not the same as what people who want to contribute to the common good typically do, then it's not so clear to me that this should be a premise in the argument for effective altruism. This sounds like we are assuming what the results of our effective altruist efforts to search for the actions that do the most to contribute to the common good (relative to their cost) will be: that the things we discover to be high impact will be different from what people typically do. But, of course, it could turn out that the highest-impact actions are actually those which people typically do (our investigations could turn out to vindicate common sense, after all), so it doesn't seem like this is something we should take as a premise for effective altruism. It also seems in tension with the idea (which I think is worth preserving) that effective altruism is a question (i.e. effective altruism itself doesn't assume that particular kinds of things are or are not high impact).
I assume, however, that you don't actually mean to state that effective altruists should assume this latter thing to be true, or that one needs to assume it in order to support effective altruism. I'm presuming that you instead mean something like: this needs to be true for engaging in effective altruism to be successful/interesting/worthwhile. In line with this interpretation, you note in the interview something that I was going to raise as another objection: that if everyone were already acting in an effective altruist way, then it would likely be false that the high-impact things we discover are different from those that people typically do.
If so, then it may not be false to say that "The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do", but it seems bound to lead to confusion, with people misreading this as EAs assuming that the highest-impact things are not what people typically do. It's also not clear that this premise needs to be true for the project of effective altruism to be worthwhile and, indeed, a thing people should do: it seems like people who want to contribute to the common good should engage in the project of effective altruism simply because it could be the case that the highest-impact actions are not those which people would typically do.
Just to be clear, this is only a small part of my concern about it sounding like EA relies on assuming (and/or that EAs actually do assume) that the things which are high impact are not the things people typically already do.
One way this premise could be false, other than everyone being an EA already, is if it turns out that the kinds of things people who want to contribute to the common good typically do are in fact the highest-impact actions.