[ Question ]

How do most utilitarians feel about "replacement" thought experiments?

by richard_ngo · 1 min read · 6th Sep 2019 · 39 comments


In this paper, Simon Knutsson discusses an objection to standard utilitarianism: that it endorses killing many (or all) existing people if that leads to their replacement by beings with higher utility. For example, he lays out the following thought experiment:

Suboptimal Earth: Someone can kill all humans or all sentient beings on Earth and replace us with new sentient beings such as genetically modified biological beings, brains in vats, or sentient machines. The new beings could come into existence on Earth or elsewhere. The future sum of well-being would thereby become (possibly only slightly) greater. Traditional utilitarianism implies that it would be right to kill and replace everyone.

People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?


7 Answers

I don't identify as a utilitarian, but I am more sympathetic to consequentialism than the vast majority of people, and I reject such thought experiments, even in the unlikely event they weren't practically self-defeating: utilitarians should want to modify their ideology and self-bind so that they won't do things that screw over the rest of society and other moral views, and can thereby reap the larger rewards of positive-sum trades rather than negative-sum conflict. The objection to such things, which I would frame in contractarian terms (though commonsense morality and pluralism object too), greatly outweighs the utilitarian case.

For the tiny population of Earth today (which is astronomically small compared to potential future populations) the idea becomes even more absurd. I would agree with Bostrom in Superintelligence (page 219) that failing to leave even one galaxy, let alone one solar system, out of billions of galaxies for existing beings would be ludicrously monomaniacal and overconfident (and ex ante something that 100% convinced consequentialists would have very much wanted to commit to abstain from).

Inevitably, utilitarians would bite the bullet here, since ex hypothesi, there is more utility in the world in which all beings are replaced with beings with higher utility.

I think the question is whether this implication renders utilitarianism implausible. I have several observations.

(1) The assumption behind the thought experiment is that the correct way to assess moral theories is to test them against intuitions about lots of particular cases. And utilitarianism has plenty of counterintuitive implications about particular cases: eg the one in the main post, the repugnant conclusion, counting sadistic pleasure, and so on ad infinitum. The problem is that I don't think this is the correct way to assess moral theories.

Many of the moral intuitions people have are best explained by the fact that those intuitions would be useful to have in the ancestral environment, rather than that they apprehend moral reality. eg incest taboos are strong across all cultures, as are beliefs that wrongdoers simply deserve punishment regardless of the benefits of punishment. These would be evolutionarily useful, which makes it hard for us to shake these beliefs. I don't think the belief that subjective wellbeing is intrinsically good is debunkable in the same way, though discussing that is beyond the scope of this post.

Analogy: the current state of moral philosophy is similar to how maths would be if mathematicians judged proofs and theories on the basis of how intuitive they are. In such a world, people's intuition against the standard Monty Hall solution would be taken as a good reason to build an alternative theory of probability. This form of maths wouldn't get very far. By the same token, moral philosophy doesn't get far in producing agreement because it uses a predictably bad moral epistemology that overwhelmingly focuses on intuitions about particular cases.
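As an aside, the Monty Hall case is a nice illustration of why intuitions are a weak arbiter: the counterintuitive answer can simply be checked. A quick simulation (my own illustrative sketch, not part of the original comment) confirms that switching doors wins about 2/3 of the time:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall: return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(stay_wins, switch_wins)  # roughly 0.33 vs 0.67
```

However strong the intuition that the two remaining doors are 50/50, the simulation settles it; there is no analogous check for moral intuitions, which is the point of the analogy.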

(2) Rough outline argument:

a. Subjective experience is all that matters about the world. (Imagine a world without subjective experience - why would it matter? Imagine a world in which people complete their plans but feel nothing - why would it be good?)

b. Personal identity doesn't matter. (See Parfit, or imagine you were vaporised in your sleep and a perfect clone appeared a millisecond afterwards. Why would this be bad?)

From a and b, with some plausible additional premises, you eventually end up with utilitarianism. This means you have to bite the bullet mentioned in the text, and you also find the bullet plausible because you accept a and b and the other premises.

Related to (1), I think a response to utilitarianism that started in the right way would attack these basic premises a and b, along with the other premises. eg It would try and show that something aside from subjective experience matters fundamentally.

It looks like a strawman to me. It conflates (A) a question about evaluation (is Suboptimal Earth axiologically better than current Earth?) with (B) a question about decision/action (would it be right to kill everyone for the sake of Suboptimal Earth?), and it omits:

(A) a utilitarian doesn't classify scenarios categorically ("this is good, that is bad"), but through an ordering over possible worlds, such as: (1) the current population plus everyone alive in Suboptimal Earth, which is better than (2) the Suboptimal Earth scenario minus the current population, which is better than (3) the current Earth...

(B) a utilitarian decides according to ex ante expected utility, so they'd have to ask "what are the odds that Suboptimal Earth will actually occur, given my decision?"

Of course, there are huge problems with such reasoning - a more realistic Suboptimal Earth would come close to a Pascal's Mugging: imagine a super-AGI asking you to press a red button that frees it to turn the whole galaxy into an eternal utopian hedonist simulation, for example.
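The ex ante calculation in (B), and why it degenerates into a mugging, can be made concrete. In this sketch all probabilities and utilities are invented numbers purely for illustration:

```python
# Hypothetical sketch of ex ante expected-utility reasoning; the
# probabilities and utilities are made up for illustration only.

def expected_utility(outcomes):
    """Probability-weighted sum of utilities over possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Refusing the button: current Earth carries on for sure.
eu_refuse = expected_utility([(1.0, 100.0)])

# Pressing the button: a tiny chance of the promised galactic utopia,
# otherwise everyone is killed for nothing.
eu_press = expected_utility([(1 / 1024, 1_000_000.0),   # utopia materialises
                             (1023 / 1024, 0.0)])       # replacement fails

print(eu_refuse, eu_press)  # 100.0 976.5625
```

A tiny probability of an astronomical payoff swamps the sure thing, which is exactly the Pascal's-Mugging structure: the verdict hinges entirely on numbers the agent has no reliable way to estimate.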

As someone who has been "fighting" utilitarianism for a long time, I can say that the best objections against it have been produced by utilitarians themselves.

Most utilitarian gotchas are either circular or talking about leaky abstractions. 'Assume higher utility from taking option X, but OH NO, you forgot about consideration Y! Science has gone too far!'

See also aether variables.

I don't know how many academic philosophers are utilitarians specifically, but 23.6% of respondents to this survey accept or lean towards consequentialism, and I think forms of utilitarianism are the most commonly accepted theories of consequentialism. It would be a guess for me to say how many of those 23.6% or the utilitarians among them accept replacement.

I would say that I'm most sympathetic to consequentialism and utilitarianism (if understood to allow aggregation in other ways besides summation). I don't think it's entirely implausible that the order in which harms or benefits occur can matter, and I think this could have consequences for replacement, but I haven't thought much about this, and I'm not sure such an intuition would be permitted in what's normally understood by "utilitarianism".

Maybe it would be helpful to look at intuitions that would justify replacement, rather than a specific theory. If you're a value-monistic consequentialist, treat the order of harms and benefits as irrelevant (the case for most utilitarians, I think), and you

1. accept that separate personal identities don't persist over time and accept empty individualism or open individualism (reject closed individualism),

2. take an experiential account of goods and bads (something can only be good or bad if there's a difference in subjective experiences), and

3. accept either 3.a. or 3.b., according to whether you accept empty or open individualism:

3. a. (under empty individualism) accept that it's better to bring a better off individual into existence than a worse off one (the nonidentity problem), or

3. b. (under open individualism) accept that it's better to have better experiences,

then it's better to replace a worse off being A with a better off one B than to leave A, because A, if left, wouldn't be the same A anyway. In the terms of empty individualism, there's A1, who will soon cease to exist regardless of our choice, and we're deciding between A2 and B.

A1 need not experience the harm of death (e.g. if they're killed in their sleep), and the fact that they might have wanted A2 to exist wouldn't matter, since A1 never experiences the satisfaction or frustration of that preference, so it could never have been counted anyway.

For open individualism, rather than A and B, or A1, A2 and B, there's only one individual and we're just considering different experiences for that individual.

I don't think there's a very good basis for closed individualism (the persistence of separate identities over time), and it seems difficult to defend a nonexperiential account of wellbeing, especially if closed individualism is false, since I think we would then also have to apply such an account to individuals who have long been dead, and their interests could, in principle, outweigh the interests of the living. I don't have a general proof for this last claim, and I haven't spent a great deal of time thinking about it, so it could be wrong.

Also, this is "all else equal", of course, which is not the case in practice; you can't expect attempting to replace people to go well.

Ways out for utilitarians

Even if you're a utilitarian but reject 1 above, i.e. you believe that separate personal identities do persist over time, and you take a timeless view of individual existence (an individual still counts toward the aggregate even after they're dead), then you can avoid replacement by aggregating wellbeing over each individual's lifetime before aggregating across individuals in certain ways, e.g. average utilitarianism or critical-level utilitarianism (which of course have other problems). See "Normative population theory: A comment" by Charles Blackorby and David Donaldson.
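A minimal sketch (with made-up lifetime utilities) of how the choice of aggregation rule can flip the verdict on replacement: the simple sum endorses it, while average and critical-level aggregation over whole lifetimes resist it in this particular case:

```python
# Hypothetical sketch of the aggregation rules mentioned above.
# All lifetime utilities are invented; the point is only that the
# ranking of "replace" vs "don't replace" depends on the rule.

def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

def critical_level(utils, c=5.0):
    # Each life counts only for its lifetime utility above the critical level c.
    return sum(u - c for u in utils)

# Status quo: three people each live out a full lifetime utility of 10.
status_quo = [10.0, 10.0, 10.0]

# Replacement: the three are killed early (lifetime utility truncated to 4)
# and replaced by five new beings at 7 each, so the simple sum is higher.
replacement = [4.0, 4.0, 4.0, 7.0, 7.0, 7.0, 7.0, 7.0]

print(total(status_quo), total(replacement))                    # 30.0 47.0
print(average(status_quo), average(replacement))                # 10.0 5.875
print(critical_level(status_quo), critical_level(replacement))  # 15.0 7.0
```

Note this only shows that such rules *can* block replacement for some numbers; with different lifetime utilities (e.g. far better-off replacements) average and critical-level views can still endorse it.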

Under closed individualism, you can also believe that killing is bad if it prevents individual lifetime utility from increasing, but also believe there's no good in adding people with good lives (or that this good is always dominated by increasing an individual's lifetime utility, all else equal), so that the killing which prevents individuals from increasing their lifetime utilities would not be compensated for by adding new people, since they add no value. However, if you accept the independence of irrelevant alternatives and that adding bad lives is bad (with the claim that adding good lives isn't good, this is the procreation asymmetry), then I think you're basically committed to the principle of antinatalism (but not necessarily the practice). Negative preference utilitarianism is an example of such a theory. "Person-affecting views and saturating counterpart relations" by Christopher Meacham describes a utilitarian theory which avoids antinatalism by rejecting the independence of irrelevant alternatives.

Hi Richard. You ask, “People who identify as utilitarians, do you bite the bullet on such cases? And what is the distribution of opinions amongst academic philosophers who subscribe to utilitarianism?”

Those are good questions, and I hope utilitarians or similar consequentialists reply.

It may be difficult to find out what utilitarians and consequentialists really think of such cases. Such theories could be understood as sometimes prescribing ‘say whatever is optimal to say; that is, say whatever will bring about the best results.’ It might be optimal to pretend to not bite the bullet even though the person actually does.

Regarding the opinions among academic philosophers who subscribe to traditional utilitarianism: I don’t know of many such people who are alive, but a few are Torbjörn Tännsjö, Peter Singer, Yew-Kwang Ng (is my impression), and Katarzyna de Lazari-Radek (is also my impression). And Toby Ord has written, “I am very sympathetic towards Utilitarianism, carefully construed.” Tännsjö (2000) says, “Few people today seem to believe that utilitarianism is a plausible doctrine at all.” Perhaps others could list additional currently living academic philosophers who are traditional utilitarians, but otherwise it’s a very small population when talking about a distribution. Here is a list https://en.wikipedia.org/wiki/List_of_utilitarians#Living, but it includes people who are not academic philosophers, like Krauss, Layard, Lindström, Matheny and Reese, it lists the negative utilitarian David Pearce, and I doubt it is correct regarding the academic philosophers it includes.

I can’t think of any traditional utilitarian who has discussed the replacement argument (i.e., the one that involves killing and replacing everyone). Tännsjö has bitten a bullet on another issue that involves killing. As I write here https://www.simonknutsson.com/the-world-destruction-argument/#Appendix_Reliability_of_intuitions, Tännsjö thinks that a doctor ought to kill one healthy patient to give her organs to five other patients who need them to survive (if there are no bad side effects). He argues that if this is counterintuitive, that intuition is unreliable, partly because it is triggered by something that is not directly morally relevant: the intuition stems from an emotional reluctance to kill in an immediate manner using physical force, which is a heuristic device selected for us by evolution, and we should realize that it is morally irrelevant whether the killing is done using physical force (Tännsjö 2015b, 67–68, 205–6, 278–79). And as I also write in my paper, he has written, among other things, “Let us rejoice with all those who one day hopefully … will take our place in the universe.” I like his way of writing. It is illuminating, he comes across as straightforward, and he often writes as if he teaches (in a good way). But I could only speculate about what he thinks about the replacement argument against his form of utilitarianism.