Crosspost

JL Mackie famously wrote The Miracle of Theism. The miracle wasn’t really about theism itself, but instead about how anyone was so silly as to believe it in light of the convincing arguments against it. There is a similar mystery with utilitarians: why do people adopt such a transparently silly view?

Why believe in morality at all? The answer, of course, is that we have moral intuitions. The whole reason to think that the moral edifice isn’t fake is that certain moral claims simply seem obviously right. It seems wrong to torture babies for fun—and it seems that this fact doesn’t depend on what we think about it. Even if the whole world began to favor the senseless slaughter of the innocent, it would still be wrong.

So if the whole reason to believe in morality rests on intuitions, then it should be seen as quite a serious defect of a view that it violates just about every single widespread intuition that anyone has ever had. Yet that is the defect of utilitarianism. It’s about as counterintuitive as views get.

For example, here is something that seems ridiculously obvious to virtually everyone: you shouldn’t shoot your grandmother to make 10,000 dollars, even if you could be assured you’d get away with it. Yet utilitarians must deny this obvious item of moral data! For 10,000 dollars, you can donate to the Against Malaria Foundation and save two lives. Thus, if one is solely concerned with maximizing well-being, one must think this is a noble thing to do.

Problem: it’s obviously not! Similarly, it is wrong to kill one person and harvest their organs to save five, or to frame an innocent man to prevent a mob from killing multiple, or to torture a baby to bring sadistic joy to a jeering crowd. Even if all these actions generated the most aggregate utility, they are obviously wicked things to do. That a view denies their wickedness is not a minor defect; it’s utterly disqualifying.

(Sidenote: I know AI likes the sentence structure “it’s not A, it’s B,” but I refuse to abandon some perfectly respectable sentence structure just because AI uses it).

The utilitarian cannot say that any acts are wrong in principle. They can only say that certain acts tend to make the world worse due to highly unpredictable and contingent effects that they generally have. For any wicked institution you can dream up, the utilitarian must think that if it brought about enough joy to its beneficiaries, instituting it would be highly desirable.

What does the utilitarian say to the error theorist—the person who denies that any moral claims are true? Normally the thing people say is that error theory is wildly counterintuitive and you should reject it for being as crazy as any view could be. But the utilitarian’s view is about as counterintuitive. The utilitarian can’t even pound the table about the obvious wrongness of torturing babies for fun—the utilitarian thinks you should torture babies for fun provided it’s enough fun! Imagine the utilitarian: “oh error theorist, you so crazily believe that it’s alright to torture babies for just a bit of fun, unlike my intuitive view which is that to justify torturing babies, you must have loads of fun!”

I should say that not every utilitarian must think that. Many utilitarians deny that sadistic pleasure is good for a person. That’s one objection dodged. But any kind of utilitarianism inevitably has the problem of sanctioning framing the innocent, murderous organ harvesting, selling out your friends for slightly greater gains in utility, killing your grandmother for cash (assuming you’d get away with it, and the like), and so on and so on.

This is where the utilitarians start to squirm (rather like the small soil worms they care so much about). This is where the outrageous cope sets in. What they will do in response to these objections is point to totally random features of the case whereby the action in question is supposed to have bad consequences. They’ll claim that killing people and harvesting their organs would destabilize broader trust in society, that framing the innocent would undermine the legal system, etc.

Now, this isn’t obvious. As it happens, doctors kill patients all the time. Despite this, people continue using the medical system. Merely pointing to a random downside of an act does nothing to establish the empirical claim that it really turns out for the worse. I’ve never heard a utilitarian present any evidence that their empirical claims about rights-violating acts are correct.

But fine, put aside that obvious colossal problem. The deeper one is that none of these things are at all relevant to the core of the objection. Perhaps in the real world, a doctor shouldn’t murder a patient to save five others. But if we imagine a situation where the doctor will get away with it, the intuition doesn’t change at all. It still seems obviously wicked. The intuition people have is not “you shouldn’t murder your grandmother to provide 10,000 dollars to effective charities because it might have bad effects on society broadly,” it’s “you shouldn’t kill your grandmother to give 10,000 dollars to effective charities because it is very wrong to kill people, and you shouldn’t kill one to save two—especially not a family member.”

And then what are the amazingly compelling arguments for utilitarianism? It’s a somewhat remarkable fact that despite there being a great many utilitarians, there’s been no convergence on what arguments for the view are compelling. This is often a feature of very implausible views (for instance, there is, as best as I can tell, no agreement on the best arguments for the truth of Islam—for none of the arguments are very good).

Well, here is one common claim made by utilitarians: it is alleged that there is some paradox of deontology. What’s the paradox? Well, it is claimed that if you’re really so opposed to murder, then you should support carrying out one murder to prevent multiple others. Thus, deontology is supposed to be paradoxical—how can it be that some act is so wrong that you’re not supposed to do it to prevent multiple others?

But this paradox, it seems to me, comes solely from conceiving of the problem in utilitarian terms. The deontologist’s view is not that murders are so bad that when one happens to prevent two others, the world has gotten worse. Instead, their view is that wrongness and badness come apart. Sometimes the right action won’t produce the best state of affairs.

I know that utilitarians reject such an account. But as far as I can tell, the “paradox” of deontology gives us no argument against an account like this. It merely assumes at the outset that one shouldn’t think of problems in deontological terms and ends up with the conclusion that one shouldn’t think of problems in deontological terms. But this tells us nothing about whether the deontologist’s approach to the problem is actually correct. I fully grant that one who is a consequentialist should not also be a deontologist—they will have to choose—but that tells us little about which of the two views is right.

Another common complaint against deontology is that deontology is either arbitrary or much too extreme. Ask: should you kill one person to save everyone on Earth? If the answer is no, that is claimed to be much too extreme—and also runs into issues with risk. If the answer is yes, then any cutoff point will be arbitrary. Why that amount rather than some other?

I happily grant that we shouldn’t be absolutist deontologists. Of course we should kill one person to save the world (though I do not see how the utilitarian can be so confident about this if they reject as untrustworthy so many intuitions that basically everyone in the world has). But what’s supposed to be arbitrary about non-absolutist deontology? Why is it any more arbitrary than the threshold at which collections of sand form a heap? Perhaps it’s vague, perhaps it’s indeterminate, but what’s the unique problem supposed to be for the deontologist?

The deontologist doesn’t have to say anything arbitrary. They simply think that multiple things provide moral reasons (not violating rights and promoting utility). Then, given the strength of reasons these each give, there is simply a fact of the matter about how much these should be traded off against one another. This is no more arbitrary than there being a fact about how many times larger a mountain is than a grain of sand, or the number 7 than the number 1. The utilitarian must also think there are similar facts if they think that there’s some tradeoff between different kinds of pleasure. And the pluralist utilitarian who believes in multiple goods has exactly the same problem.

It’s just never been remotely clear what the problem is supposed to be for the deontologist on this front.

What about the more sophisticated defenses of utilitarianism (rather than those from supercilious philosophy bloggers who were undergrads until five minutes ago and now think we need to seriously investigate whether men occasionally regrow limbs and whether you should torture everyone in the world to prevent a sufficiently large population from getting specks of dust in their eyes)?

 

[Image caption: Seems like this guy could afford to eat a few shrimp.]

Consider Richard Y Chappell’s arguments, for example. Utilitarianism is supposed to track what fundamentally matters better than other theories do. But this, it seems to me, smuggles in consequentialism. If morality tracking “what fundamentally matters” means simply that it tracks fortunate and unfortunate states of affairs in the world, then that is just to beg the question. It is to assume that morality is only about consequences.

Perhaps the utilitarian thinks that. But the task is to give a reason for others to think that. The non-utilitarian simply thinks that there’s some disharmony between the goodness of states of affairs and your reasons for action. Even if the world would be made better by you killing someone, you’re not supposed to do it. What’s so surprising about thinking that the things that matter in the world—the things that are fortunate or unfortunate—come apart from those things that give us reason to perform or refrain from actions?

Richard also has a very ingenious paradox wherein he shows that deontologists must think that you ought to sometimes prefer that people do the wrong things (the details are a bit technical, so feel free to read the paper for more). Now, it seems everyone must think that with respect to subjective wrongness. If a killer killing someone happens to save the world, they behaved subjectively wrongly—wrongly in light of their evidence. Nonetheless, we should be glad that they did it!

But Richard’s paradox is supposed to show that there’s some disharmony between all-things-considered preferability and what you’re supposed to do. The deontologist should sometimes think that a perfect person would hope one does the objectively wrong thing. This is supposed to be counterintuitive.

But is it so counterintuitive? I’ll confess that the notion of “preferability” is a bit fuzzy. I do not have strong intuitions about it—certainly not at the level of my intuitions about wrongness. I have an intuition that certain states of affairs are wrong to bring about. I also have an intuition that certain states of affairs are bad. But is there some notion of what you should prefer distinct from these? Doesn’t seem obvious (certainly not as obvious as, say, that you shouldn’t kill one person to harvest their organs and save five).

It’s also not so clear to me that preferability really would line up with wrongness. This seems like a very natural thing to deny. If the deontologist thinks that wrongness and bestness come apart, then it seems natural for third parties to prefer the best. Consequently, they’ll prefer that people do the wrong thing. They’d rather have a state of affairs where one is killed to save five, but they simply think it’s wrong to bring about.

Or consider the person who saves his own mother over two strangers. Should a third party hope for that? Doesn’t seem so. The strangers have mothers also. There just seems to be no in-principle reason why what one should want and what one should do can never come apart—if one’s view is that the goodness or badness of states of affairs diverges from what actions one has reason to perform.

A moral theory should be judged on its ability to get the central and obvious things right—not to track correspondences between clear normative concepts and somewhat nebulous ones. The central intuition, the one everyone has, is that you shouldn’t kill one person to give 10,000 dollars to an effective charity. In cases of disagreement, that’s what one should hold onto!

So while these arguments from Richard are clearly among the most forceful in defense of utilitarianism, I confess that I just can’t see why anyone would find the view attractive. What’s going on? What am I missing? Why are so many clever people drawn to such an obviously defective view—with almost nothing to be said in its favor? And I hear they even start thinking five shrimp matter as much as one person! What in the world?

 

 


 
