"It was never contended by a sound utilitarian that the lover should kiss his mistress with an eye to the common weal"
– John Austin
- There are at least three important but distinct things which are commonly referred to as ‘consequentialism’.
- One use refers to the ethical theory, another to the decision theory, and the third to the general mindset of Explicit Consequence Analysis.
- Mixing them up is probably causing unnecessary confusion and grief.
Three Types of Consequentialism
When the likes of Toby Ord and Will MacAskill talk about Consequentialism, they’re talking about a type of ethical theory under which, roughly speaking, you determine how good something is based on its consequences. (Let’s call this Ethical Consequentialism.) Utilitarianism is an example of this kind of consequentialism. Consequentialist theories are also often agent-neutral: you’re trying to get the ‘best’ consequences not for you, but for everyone.
Notably, this is a property of what is actually good, not of how we decide what is good. It might well be that the way of making choices which leads to the best consequences does not itself involve evaluating consequences. Something like 'take the most virtuous action' might be the most robust way to get good outcomes in day-to-day life, where actually explicitly calculating the consequences would take too long. This is actually the topic of Toby Ord’s Thesis, which I highly recommend reading if you consider yourself to be an Ethical Consequentialist.
Meanwhile, when Eliezer Yudkowsky talks about Consequentialism, he means something totally different. He is explicitly talking about the decision theory bit, where an agent weighs up a set of possible actions and picks the one with the best expected utility according to some utility function. (Let’s call this Decision Consequentialism.) But there are no claims about what the underlying utility function is: it might well be ‘number of paperclips in the world’. ‘Consequentialism’ in this sense is a very specific technical term used in alignment theory: it’s a feature of how we’d expect sufficiently advanced artificial intelligences to behave, and it underpins the concept of convergent instrumental goals and some kinds of coherence and selection theorems.
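To make the decision-theoretic notion concrete, here is a minimal sketch (the names and numbers are my own illustration, not standard alignment-theory terminology): a Decision Consequentialist agent scores each available action by applying some utility function to its modelled outcome and takes the argmax. Nothing constrains the utility function to be ethically sensible.

```python
# A toy Decision Consequentialist: score each action by the utility of its
# modelled outcome, then take the argmax. Illustrative only.
def choose(actions, utility, outcome_model):
    """Pick the action whose (modelled) outcome has the highest utility."""
    return max(actions, key=lambda a: utility(outcome_model(a)))

# The utility function can be anything -- here, literally counting paperclips.
outcome_model = {"run_factory": {"paperclips": 1000},
                 "do_nothing":  {"paperclips": 0}}.get
utility = lambda outcome: outcome["paperclips"]

best = choose(["run_factory", "do_nothing"], utility, outcome_model)
# best == "run_factory"
```

The point of the sketch is that the machinery is entirely agnostic about values: swapping the utility function swaps what the agent pursues, with no change to the decision procedure.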
Explicit Consequence Analysis
To make matters worse, there’s a third kind of thing which EAs often seem to mean by ‘consequentialism’, which is the general practice of making choices informed by explicit utility and probability calculations. I’ve never seen it formally defined in this way but it’s the concept being pointed at by posts like “On Caring” and “Scope Insensitivity”. To quote the latter:
“The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets worked up about that poor struggling oil-soaked bird.”
This is the lens which underpins impact analysis, and it’s a fairly distinctive feature of EA thinking. It seems to intuitively follow from Ethical Consequentialism that this would be a good idea, especially for things like career decisions and charitable donations, and it bears some resemblance to Decision Consequentialism, although it is a much less technically precise notion. Crucially, though, it is not the same as either of those things: it is its own thing, which I will tentatively call Explicit Consequence Analysis. (Please, please let me know if you can think of a better name.)
Because it is rarely explicitly flagged as its own thing, I think attempts to talk about it often accidentally invoke properties from the other two kinds of Consequentialism. People get the sense that you have to do this all the time, that it's strictly optimal, that not doing this is a moral failure. This is incorrect, and I suspect that these beliefs are harmful.
So there we have it: Ethical Consequentialism, Decision Consequentialism, and Explicit Consequence Analysis.
The basic issue, I believe, is that people end up thinking they need to do things that they don’t actually need to do. I hope to expand on all of these at length in future, but for now I’ll try to keep it brief.
You don’t need to optimise everything all the time
No serious consequentialist philosopher has ever recommended that you try to optimise everything all the time. Many have recommended against it. Indeed, there are warnings on this very forum. But I think they might not be loud enough, because sometimes people are so eager to do good that they don’t actually take the time to read through all the ambient philosophy. And to be clear, I think this is reasonable: I think the onus is on people introducing EA to be clear about this.
I’ve now seen a handful of cases where people get into EA and initially think they need to optimise everything, only to discover that this is a bad idea. I believe this is downstream of the confusion between Ethical Consequentialism and Explicit Consequence Analysis. Choosing not to always optimise doesn’t mean you’re an imperfect consequentialist; it is in fact almost certainly the right way to do consequentialism. I’m fairly sure no utilitarian philosopher has ever advocated constant optimisation, and many have advocated against it.
Being unproductive sometimes, taking time for yourself, caring about things that are not high impact, these are all ok. They’re not you failing to be a good utilitarian. Becoming someone who always calculates utility perfectly and acts on it with no sentiment is a fabricated option. Given that we are in fact squishy humans, the path to maximising impact involves recognising that fact and making room for it. 
Relatedly: it seems like EA has a burnout problem. It also happens to be, as far as I can tell, the first large-scale movement with such a high concentration of utilitarians and people explicitly trying to optimise things. I do not think this is a coincidence, although I’m not sure what the causal chain is. I hope to write on this more in future.
Utility calculations shouldn’t totally override commonsense morality
If you do your utility calculations and impact analyses and they suggest that the correct path to highest impact involves deceiving people, manipulating them, or coercing them, you should seriously entertain the possibility that you messed up the calculations. I see a lot of problems in community building as being part of an attitude that treats people as means to an end and not ends in themselves.
Likewise if the calculations suggest that you should do something that feels really bad, and also, if the calculations suggest that you should do exactly the thing you already wanted to do. I think it’s really valuable to have Explicit Consequence Analysis as another lens for evaluating decisions, but it really shouldn’t be the only one.
I feel like people have suitably internalised that it’s very easy to mislead with statistics, but not that it is even easier to do so with speculative calculations about hypothetical futures.
Consequentialism is not obviously best
There is a sense in which consequentialism is in fact obviously best. You can prove that agents need to have transitive preference orders to avoid being Dutch-booked. So it’s true: if you’re not a Decision Consequentialist, then there’s some sequence of trades you’d be willing to accept which would leave you strictly worse off. But firstly, humans are in practice often unable to live up to this standard, and secondly, it in no way implies that you should be an Ethical Consequentialist. If the distinction has never been made clear to you, though, you might quite reasonably get confused.
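The Dutch-book point can be illustrated with a toy money pump (entirely my own illustrative sketch, not drawn from any particular coherence theorem): an agent with cyclic preferences A ≻ B ≻ C ≻ A will pay a small fee for every trade it strictly prefers, cycle back to exactly what it started with, and end up strictly poorer.

```python
def money_pump(prefers, start, offers, fee=1.0):
    """Offer items in sequence; the agent pays `fee` for any trade it strictly prefers."""
    holdings, paid = start, 0.0
    for item in offers:
        if prefers(item, holdings):   # the agent happily pays to 'trade up'
            holdings, paid = item, paid + fee
    return holdings, paid

# Cyclic (intransitive) preferences: A > B, B > C, C > A
cycle = {("A", "B"), ("B", "C"), ("C", "A")}
prefers = lambda x, y: (x, y) in cycle

# Offer the cycle twice: the agent accepts every trade, ends up holding
# exactly what it started with, and is strictly poorer for it.
holdings, paid = money_pump(prefers, start="A", offers=["C", "B", "A"] * 2)
# holdings == "A", paid == 6.0
```

With a transitive preference order this exploit is impossible: there is no cycle of strictly preferred trades to pump.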
In the spirit of brevity I will end here and save the elaborations. Concrete recommendations:
For the Ethical Consequentialists
- If you feel like you’re not doing enough and this makes you a bad person, cut yourself some slack.
- Beating yourself up about it is unlikely to help.
- This isn’t abandoning philosophy and truth, it is the sound and reasonable thing to do, well-supported in the literature.
- Read Everyday Longtermism
- When you’re thinking of optimising for something (especially in community building and social interactions), really ask yourself ‘might this actually make things worse?’, or better yet, 'suppose this ends up making this worse, what happened?'
For the Community Builders
- Make sure that when people are introduced to Consequentialism and Utilitarianism, they are also introduced to the decision theory / ethical theory distinction, and be clear about the fact that Explicit Consequence Analysis is a tool rather than a moral obligation
- When talking about Consequentialism, try to be clear about what kind you mean.
- I think by default Consequentialism should refer to Ethical Consequentialism, and the thing to be careful of is when you’re talking about Decision Consequentialism or Explicit Consequence Analysis but it might be misread.
- Always remember that people are people, and the fact that you can use them as terms in impact analyses does not change this
For the philosophers
- Consider reading Toby Ord’s Thesis
- Consider summarising it for the forum?
- If you’re feeling plucky, I think ‘longtermist decision theory’ is an underexplored and valuable avenue of research.
For the naming committee
- Please, I beg of you, a better name than Explicit Consequence Analysis
Part of what prompted me to write this was a friend saying “everybody is ultimately consequentialist whether they realize it or not”. He has offered the following elaboration:
When you dig into the justifications for following a moral rule or virtue even if it leads to what looks like a bad outcome, people will often say it’s still worth doing because following the rule or virtue is expected to lead to overall better outcomes in the long run… which is just another way of saying they care about the expected consequences, just on a longer timeline. But real consequentialists use the same argument against doing things that appear to have a good immediate outcome if they expect it to have a bad one in the long run or if universalized. So error correction seems like the real thing that distinguishes whether someone is a consequentialist or not, and it is a rare deontologist or virtue ethicist who won’t take any expectation or models of outcomes into consideration at all when deciding what moral rules or virtues to follow...
I will freely admit that even if I thought running on self-hate and shame was the path to highest impact I would generally discourage it, but as it happens I am also very confident that in 95% of cases it is not the path to highest impact.
To flesh this out a little: one of the things I find most concerning in current community building discourse is recommendations for how to imitate and simulate relationships. Leaving aside my personal distaste at this, I think people are generally very bad at doing it, and that the more robust approach is to actually be friends with people.