I've been thinking about the categorical imperative lately and how it obscures more than it illuminates. I'll make the case that the categorical imperative tries to pull a fast one by taking something for granted, and the thing it takes for granted is the thing we care about.
Disclaimer: I'm sure someone has already made this argument elsewhere in more detailed, elegant, and formal terms. Since I'm not a working academic philosopher, I don't really mind that I might be rediscovering something people already know. What I find valuable is thinking about these things for myself. If you think it might be interesting to read my thinking out loud, continue on.
One common English translation of Kant's summary of the categorical imperative is
Act as if the maxims of your action were to become through your will a universal law of nature.
My own translation into more LessWrong-friendly terms:
Act as if you were following norms that should apply universally.
I think an argument could be made—and probably already has—that this is pointing in the same direction as timeless decision theory.
The categorical imperative aims to solve the problem of which norms to pick, but then goes further and claims universality for them. I think that's the tricky bit: it attempts an end-run around the hard part, which is picking good norms in the first place.
My (admittedly loose) translation tries to make this attempted end-run explicit by including the word "should"; the standard translation's "law of nature" serves the same purpose, just framed to fit a moral realist worldview. The trick is to assume the very thing we want to prove, namely universality, by assuming that satisfying our own judgment of what's best will lead to universality.
Let me give an example to demonstrate the problem.
Suppose a Babyeater tries to apply the categorical imperative. Since they think eating babies is good, they will act in accordance with the norm of baby-eating and be happy to see others adopt their baby-eating ways.
You, a human, might object that you don't like this, so it can't be universally true; a Babyeater would counter that your refusal to eat babies is an outrageous norm violation that will lead to terrible outcomes.
The trouble is that the categorical imperative tries to smuggle in every moral agent's values without actually doing the hard work of aggregating and deciding between them. It works so long as everyone has the same underlying values, but introduce one iota of difference in what people (or animals, or AIs, or thermostats, or electrons) care about, and any attempt to follow the categorical imperative is not substantially different from simply following moral intuition.
The categorical imperative does force one to refine one's intuitions and better optimize norms for working with others who share similar values. That's not nothing, but it only goes so far as coordinating around a metaethics grounded in "my favorite theory".
Recent events have some folks rethinking their commitments to consequentialism, and some of those folks are looking more closely at deontological ethics. I think that's valuable, but it's also worth keeping in mind that deontology has its own blind spots. A deontologist might not draw any repugnant conclusions, but they are more likely to ignore what people unlike themselves care about.
Thanks for the post! I agree that identifying those universal maxims or norms seems impossibly difficult given the breadth of humanity's views on morality. In fact, much of post-Kantian deontological thinking can be described as an attempt to answer the very question you ask in this post. I'm also not a trained philosopher (and I lean more towards consequentialism myself), but I'll share a few notes that might help:
TLDR: I agree that deontology has serious epistemic problems, and in practice, deontologists might be more prone to ignoring people unlike themselves (because they are far away or because they have different views). However, much work has been done to demystify non-consequentialist theories and make them actionable; it's just highly complex. In general, I tend to agree with Derek Parfit when he argues that all moral theorists are "climbing the same mountain on different sides" in their search for moral truth.
Thanks for the post! You wrote,
But in that case, the babyeater and I would argue over the specifics of the causality of violating the babyeater's maxim.
In deciding between conflicting maxims, you can:
In the end, I have found that either:
By comparing maxims on the basis of context, consequences, and values, you can trace a disagreement over maxims back to whatever underlying difference in values exists. The point is that values are not necessarily where the disagreement occurs.