We just released a podcast episode with me about what the core arguments for effective altruism actually are, and potential objections to them.
I wanted to talk about this topic because I think many people – even many supporters – haven’t absorbed the core claims we’re making.
As a first step in tackling this, I think we could better clarify what the key claim of effective altruism actually is, and what the arguments for that claim are. Doing this would also help us improve our own understanding of effective altruism.
The most relevant existing work is Will MacAskill's introduction to effective altruism in the Norton Introduction to Ethics, though it argues that we have a moral obligation to pursue effective altruism, whereas I wanted to formulate the argument without appealing to moral obligation. What I say is also in line with MacAskill's definition of effective altruism.
I think a lot more work is needed in this area, and I don't have any settled answers, but I hoped this episode would get discussion going. There are also many other questions about how best to message effective altruism once it's been clarified, which I mostly don't get into.
In brief, here’s where I’m at. Please see the episode to get more detail.
The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.
The project of effective altruism is defined as the search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into (i) an intellectual project – a research field aimed at identifying these actions – and (ii) a practical project of putting these findings into practice and having an impact.
I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.
The three main premises supporting the claim of EA are:
- Spread: There are big differences in how much different actions (with similar costs) contribute to the common good.
- Identifiability: We can find some of these high-impact actions with reasonable effort.
- Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.
The idea is that if some actions contribute far more to the common good than others, if we can find those actions, and if they're not the same as what we're already doing, then – if you want to contribute to the common good – it's worth searching for them. Otherwise, you're achieving less for the common good than you could, and falling short of your own stated goal.
Moreover, the more strongly each premise holds, the bigger the mistake it is not to pursue the project of effective altruism. For instance, the greater the degree of spread, the more you're giving up by not searching (and likewise for the other two premises).
We can think of the importance of effective altruism quantitatively as how much applying it increases your contribution compared to what you would have done otherwise.
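As a rough way to make this concrete (my own sketch, not a formula from the episode), you could write this gain as a ratio, where $V(a)$ stands for how much an action $a$ contributes to the common good per unit cost:

$$
\text{Gain from pursuing EA} \;=\; \frac{V(a_{\text{EA}})}{V(a_{\text{default}})}
$$

Here $a_{\text{EA}}$ is the best action the search turns up, and $a_{\text{default}}$ is what you would have done otherwise. On this framing, spread says the ratio can be large, identifiability says you can actually find $a_{\text{EA}}$ with reasonable effort, and novelty says it differs from $a_{\text{default}}$, so the ratio is greater than one in practice.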
Unfortunately, there’s not much rigorously written up about how much actions differ in effectiveness ex ante and all things considered, and I’m keen to see more research in this area.
In the episode, I also discuss:
- Some broad arguments for why the premises seem plausible.
- Some potential ways of objecting to these premises – I don’t think these objections work as stated, but I’d like to see more work on strengthening them. (I think most of the best objections to EA are about EA in practice rather than the underlying ideas.)
- Common misconceptions about what EA actually is, and some speculation on why these got going.
- A couple of rough thoughts on how, given these issues, we might improve the messaging of effective altruism.
I’m keen to see people run with this: developing the arguments, strengthening the objections, and thinking about how to improve the messaging. There’s a lot of work to be done.
If you're interested in working on this, I may be able to share some draft documents with you that go into a little more detail.
On "large returns to reason": My favorite general-purpose example of this is to talk about looking for a good charity, and then realizing how much better the really good charities were than others I had supported. I bring up real examples of where I donated before and after discovering EA, with a few rough numbers to show how much better I think I'm now doing on the metric I care about ("amount that people are helped").
I like this approach because it frames EA as something that can help a person make a common decision ("which charity should I support?" or "should I support charity X?") without painting them as ignorant or as preferring to do less good. In these conversations, I acknowledge that most people don't think much about decisions like this, and that not thinking much is reasonable given that they don't know how huge the differences in effectiveness can be.