
[This post is my attempt to explain why EAs who value the practical protections offered by deontic constraints needn't take that to undermine their belief in consequentialism as a moral theory.  People saying "we need more deontology" could be clearer about whether they're just talking about endorsing certain commonsense practical norms, or about deontological justifications of those norms.  I think EA has always acknowledged the importance of good practical norms, and those who point to widespread utilitarian beliefs as somehow in tension with this are (IMO) making the mistake I diagnose below.]

Distinguish practical norms from the theoretical question of what justifies them.

Hypothesis: many are drawn to deontology as a result of conflating these two. People sensibly want to endorse good practical norms like Rɪɢʜᴛs (Don’t violate rights, even if you think it’s for the best). And they assume that this commits them to a deontological theory of why that’s a good norm. But that assumption is mistaken. No such theoretical commitment is required.

After explaining why this is so, I’ll introduce a conceptually simple alternative—deontic fictionalism—for those who find two-level consequentialism hard to fathom.

[Image caption: Why not have your cake and eat it too?]

Background: Norm content vs justification

The distinction between a theory’s criterion (or moral goals) and its recommended decision procedure is central to consequentialism. But others don’t always realize this.[1] Much confusion in moral theory stems from people conflating the practical question of whether to endorse a norm against X with the theoretical question of whether agents have non-instrumental reason to avoid doing X. These are different questions!

As previously explained:

Utilitarians and moderate deontologists alike agree that (i) you shouldn’t go around carving people up for their organs, and (ii) there are conceivable exceptions to this rule. There’s no surface-level practical difference in this respect.[2] The difference is not in whether it’s wrong to kill, but why.

Consider again the norm Rɪɢʜᴛs (Don’t violate rights, even if you think it’s for the best).

Rɪɢʜᴛs is an excellent practical norm! I endorse it wholeheartedly. (One can imagine exceptions to it, of course, as any moderate deontologist will agree; but that doesn’t undermine its status as a good norm, well worth inculcating in ourselves and others.)

Now, I think the justification for Rɪɢʜᴛs is ultimately instrumental: that respecting rights seems likely to result in better actions, yielding better outcomes, than would disregarding them. I think that’s a better justification than what deontologists offer, which is why I reject deontology. The dispute between the theories is not so much about what norms to embrace, but why.

People sometimes get confused at this point, since Rɪɢʜᴛs doesn’t look, on its face, like a “utilitarian” norm or decision procedure. The content of the norm makes no approving references to promoting value. But that’s fine, because moral theories aren’t accounts of what norms to embrace. They’re accounts of fundamental (non-instrumental) reasons (including reasons to embrace some norms over others). Utilitarianism, in particular, fundamentally tells us to promote value. So if embracing Rɪɢʜᴛs promotes value, then utilitarianism straightforwardly implies that we should embrace Rɪɢʜᴛs.

Moreover, this isn’t even “self-effacing” (which is something else that people often seem to get confused about).[3] Acting well is compatible with accurately appreciating that the reasons to embrace Rɪɢʜᴛs are instrumental rather than non-instrumental.[4] So we can perfectly well maintain a utilitarian perspective on the world, and deliberately follow utilitarian reasons—aiming to maximize expected value—while embracing Rɪɢʜᴛs. This is all perfectly coherent, so long as we appreciate that following Rɪɢʜᴛs has higher expected value than blindly following naïve calculations. Utilitarian reasons then direct us to let Rɪɢʜᴛs constrain our actions. (And no, this still isn’t rule utilitarianism.)

Deontic Fictionalism

Some Christian philosophers are religious fictionalists: granting that their religion isn’t literally true, but embracing its rituals and practices nonetheless. When they affirm their church’s dogmas, there’s an implicit “according to the fiction” qualifier attached. They don’t mean this in a dismissive way, though. They think it’s a good and worthwhile pretense to engage in, perhaps for social or emotional reasons.

It’s interesting to consider whether some who are initially drawn to “commonsense” deontology might be satisfied with deontic fictionalism: granting that the theoretical claims of deontology are misguided, but endorsing the practical norms. If it makes it easier for them to maintain motivation, then engaging in deontological pretense—behaving as if the theory were true—might turn out to be good and worthwhile. That’s something you can do without getting stuck with deontology’s theoretical baggage.

On this picture, one can even use moral language like “right” and “wrong” in a way that tracks deontological verdicts: “It’s wrong to push the guy in front of the trolley, even if it would save more lives.” But there’s an implicit “according to the fiction of deontology” qualifier attached. You’re well aware that, in principle, there’s always most reason to do what’s best, and to hope for the best outcome. But you’re now using moral language to do something other than relate the reasons-facts. Maybe you’re instead using it to express support for practical norms like Rɪɢʜᴛs. Indeed, given how poorly others mark the distinctions explained in this post, it may even be that this non-literal mode of moral communication is less misleading for many audiences than the alternative of affirming your literal theoretical beliefs (which they might misinterpret as support for naïve utilitarian practical norms).

Three Options

Compare three different ways one might respond to the instrumental reasons to embrace Rɪɢʜᴛs and related practical norms:

(1) Prudent (two-level) consequentialism, where one accepts Rɪɢʜᴛs and related norms as instrumentally good heuristics, while denying both (i) that these norms specify non-instrumental reasons, and (ii) that such non-instrumental reasons are necessary to justify following the norms.

(2) Deontic fictionalism, where one accepts Rɪɢʜᴛs and related norms due to endorsing (on instrumental grounds) behaving as if deontology[5] were true, without any commitment to the literal truth of deontological theory.

(3) Deontology (via self-effacing consequentialism), where instrumental reasons motivate one to (somehow) believe that deontology is true—or to convince others to believe it.

I personally think #1 is the ideal way to go. But if some find it difficult to grasp, option #2 may prove a conceptually simpler alternative that still maintains epistemic integrity (for those who agree that the theoretical case for consequentialism is strong).

Given these alternatives, it doesn’t seem plausible to me that there’s any practical reason to prefer #3. Whenever people suggest practical reasons to embrace deontological moral theories, one may counter with deontic fictionalism instead. (And when they’re ready to take off the training wheels, they can shift to prudent consequentialism: dropping the pretense entirely while keeping on following good norms like Rɪɢʜᴛs, just for the actually-right reasons.)

  1. ^

    See, e.g., post-FTX anti-utilitarian takes on philosophy twitter, as represented in meme form here.

  2. ^

    One can carefully engineer hypothetical cases to pry the two views apart. But even then, I argued, utilitarians will typically see grounds to criticize the agent (e.g. for recklessness) even if they approve of the act in retrospect, given that it turned out for the best. And, seriously, what kind of person doesn’t approve of things turning out for the best? Not a good one, I'd suggest.

  3. ^

    It’s always possible for a decent moral view to be self-effacing, because having true beliefs isn’t the most important thing in the world. If an evil demon said “Agree to moral brainwashing or I’ll torture everyone for eternity,” then you’d obviously better agree to the brainwashing. But that’s not what’s going on here. Absent evil demons, we don’t need false moral beliefs.

  4. ^

    Sometimes people say things which seem to imply that instrumental reasons don’t count. (“Utilitarians have no in principle objection to slavery!” What, you mean the suffering of the slaves is not enough? “But it could in principle be outweighed by other considerations!” That’s true of moderate deontologists, too. “Well, okay, but it just makes all the difference if some of the outweighed reasons were non-instrumental…” Why? I’m starting to worry that you’re really not giving enough weight to the suffering of the slaves…) So it’s maybe worth being explicit at this point that all this talk of “instrumental reasons” is shorthand for the most obviously important reasons that there are, namely those to save and improve lives.

  5. ^

    Ideally, in a more beneficentric form than one usually finds in the wild.





Comments

I love this piece - super well argued. Your argument applies to virtue ethics too if you replace “RIGHTS” with any virtue claimed to be intrinsically valuable by the virtue ethicist.

Good point, and a nice addition to the fictionalist reasoning. I would love to see a 'fictionalist virtue ethics' in addition to a fictionalist deontology.

Richard -- excellent post -- it's clear, compelling, reasonable, and actionable. 

A key question concerning your three options is more psychological than philosophical: which kinds of people, with which cognitive, personality, and moral traits, should adopt options 1, 2, or 3, in terms of keeping them from using bad utilitarian reasoning (e.g. self-serving, biased, unempirical, convenient moral reasoning) that violates other people's 'rights' or deters them from pursuing 'virtues'? (Just to be clear, I endorse utilitarianism as a normative ethical theory; the question here is just how to weave some good norms and rules into our prescriptive morality that we use ourselves in day-to-day life, and promote to others.)

I suspect that many deontologists assume that most people can't handle options 1 or 2, in the sense that those options wouldn't reliably protect us against rights-violating faulty-utilitarian reasoning of the sort that humans evolved to be very good at (according to the 'argumentative theory of reasoning' from Hugo Mercier). So these deontologists see it as their job to promote option 3 as if it's true -- even though they might, in their heart of hearts, know that they're really promoting option 2. However, I suspect that, following science, Nietzsche, secularism, and the collapse of traditional theological and metaphysical bases for deontology, lots of intelligent people simply can't buy into option 3 any more. So, option-3-style arguments just can't carry as much weight as they used to, and can't motivate rule-like constraints on faulty-utilitarian reasoning.

Conversely, if most people adopt option 1 (prudent two-level consequentialism), I think they might be too tempted to engage in self-serving faulty-utilitarian reasoning. (Arguably this is what we saw with the FTX debacle -- 'it's OK to steal clients' crypto deposits if it's for the greater good'.) However, that's an empirical question, and I'm open to updating.

My hunch is that for most people most of the time, option 2 (deontic fictionalism) strikes the best balance between evidence-based consequentialism and fairly strong guide-rails against self-serving faulty-utilitarian reasoning. So, I think it's worth developing further as a sort of psychologically pragmatic meta-ethics that could work pretty well for our species, given human nature.

Thanks!  Yes, agreed it's an open empirical question how well people (in general, or particular individuals) can pull off the specified options.

I wouldn't be terribly surprised if something like (2) turned out to be best for most people most of the time. But I guess I'm sufficiently Aristotelian to think that if we're raised since childhood to abide by good norms, later learning that they're instrumentally justified shouldn't really undermine them that much. (They certainly haven't for me--my wife finds it funny how strongly averse I am to any kind of dishonesty, "despite" my utilitarian beliefs!)

Simple and useful, thanks.

It’s always possible for a decent moral view to be self-effacing, because having true beliefs isn’t the most important thing in the world. If an evil demon said “Agree to moral brainwashing or I’ll torture everyone for eternity,” then you’d obviously better agree to the brainwashing.

What about the deontologist who says "I can't agree to moral brainwashing because that would involve being complicit in an objective wrong"? I don't see how this position reduces to or implies the belief that "having true beliefs [is] the most important thing in the world".

Or by "decent moral view" did you mean "decent consequentialist moral view"?

Avoiding complicity (whatever that amounts to) also isn't literally the most important thing in the world. Note that even most deontologists reject "though the heavens fall" absolutism.
