Crosspost of this from my blog.
The humanist project has been a force for enormous amounts of good—progress in domains both scientific and ethical. Yet at its very heart sits the word that names its ethical error: human. Humanism is about promoting flourishing for humans, a project that, unfortunately, ignores the overwhelming majority of sentient beings on Earth.
Of course, some call themselves humanists and care about animals. With these people, my dispute is just terminological. It would be as if one wanted to promote the welfare of all people, but described their view as whitism—a view originally defined as promoting the interests of white people. While I’d be broadly on board with their aims—if they truly cared about all people equally—I’d think they should probably change their slogan.
In place of humanism, many have proposed sentientism. Sentientism claims—like humanism—that we should use logic, reason, and evidence to advance welfare. However, it claims, unlike humanism, that we should advance the welfare of all sentient beings, not merely humans.
Sentientists often claim that humanism is objectionably arbitrary. If two beings have similar mental capacities, why in the world should it matter whether one is a member of Homo sapiens? In reply, people sometimes claim that sentientism is arbitrary—why should we care only about sentient beings? Here, I shall provide several arguments for sentientism, and against the charge of arbitrariness.
Here’s one argument for sentientism—it is intuitive that only sentient beings matter. Most of us think that people matter but philosophical zombies—beings that are physically identical to humans but lacking in consciousness—do not. If they really have no conscious experience—no hopes, no dreams, no aspirations, no pain, no thoughts, no love—then it seems that nothing that happens to them really matters. If all is really dark inside the zombie, then nothing that happens to it matters.
Of course, one might reply that it is intuitive that only humans matter. But this is clearly wrong—if a being is in unimaginable agony, its pain seems bad regardless of whether it is a member of Homo sapiens. If it turned out that, despite being consciously identical to the rest of you, I was secretly a bulldog (though I suppose I haven’t done a great job covering it up), my suffering would not seem any less bad. If we found out that people from Italy, for example, were as smart and conscious as the rest of us, but were not Homo sapiens, we would not declare their suffering irrelevant.
Of course, when confronted with this, speciesists either bite the bullet or adopt crazy criteria. For example, they will say that what matters is being the same species as beings that are mostly smart. None of these criteria are immune from counterexample—the most recent one implies that if we found out that some terminally ill babies were not technically Homo sapiens, their pain wouldn’t be bad, and that if we found out that mentally disabled people weren’t Homo sapiens, their suffering wouldn’t be bad either. But even if speciesists can gerrymander some contorted criterion to avoid counterexample—which, just to be clear, none of them have ever successfully done—it will not be intuitive. It does not intuitively seem that the necessary and sufficient conditions for mattering involve 31 steps and the intelligence of the other beings one can interbreed with.
So sentientism just seems right. Whenever one claims that some criterion justifies some property, it is always possible to declare it arbitrary. But if it seems like the kind of criterion that can, in fact, justify that property, then it is not arbitrary. Just as it’s not arbitrary to claim that one should have to be good at boxing to be a professional boxer, if the criterion that justifies some way of being treated is actually able to justify it, then it is far from arbitrary.
Theories of welfare
Suppose one says that any way of determining which entities to care about is arbitrary. Presumably, then, they will think that the only thing that matters is whether an entity can be harmed. If a plant can’t be harmed or benefited by any action, then even if it is within our moral circle, we need not take into account the impact of our actions on plants—just as, if our moral circle includes all humans, we’ll still take into account only the effects of our actions on those humans our actions actually affect.
Thus, sentientism is consistent with a maximally inclusive moral circle. As long as we think that only sentient beings can be harmed or benefited, we can care about all possible entities—sentient and nonsentient—and nonetheless conclude that the only ones that matter practically—the only ones we should take into account in our decisions—are the sentient ones.
Philosophers have explored the ways in which one can be harmed and benefited. There are a few primary theories of welfare, all of which imply sentientism.
Hedonism: happiness is the only thing that is good for anyone, and pain is the only thing that is bad for anyone. This implies sentientism because only sentient beings can be happy or suffer.
Desire theory: getting what one wants is good for one; having one’s desires frustrated is bad for one. This implies sentientism because only sentient beings have desires.
Objective list theory: there is a list of things that are good for people, including knowledge, friendship, happiness, desire fulfillment, achievement, and more. On the most plausible accounts, these all require sentience.
So in order to avoid sentientism, one has to have an utterly bizarre view of welfare—according to which various mysterious things are good or bad for people.
The transformation test
One way of testing whether an entity matters is to imagine being turned into that entity and asking whether you would have reason to care about what happened to you afterward. For example, if we want to know whether pigs matter, we imagine being slowly turned into a pig over the course of two years. The question is whether, once one was fully a pig, one would rationally care about being tortured.
Of course one would! If I knew that by 2025 I’d be a pig, I’d be very against the pig version of me being tortured. But if I knew I’d turn into a plant, I wouldn’t care—once I was a plant, nothing would happen to me; I’d have no experience. If you think you wouldn’t care about the pig, remember that there are debilitating mental conditions that leave people with cognitive capacities below those of a pig. If one would care about what happened to oneself after that point, so too should one care about what would happen to oneself as a pig.
Thus, it is only conscious beings that matter. Claiming that conscious beings matter is not arbitrary and is quite intuitive. It seems obvious that plants only matter if they’re conscious—likewise for animals. This follows from every remotely plausible view of welfare.
But of course, this leaves aside the hard (and perhaps intractable) question of which beings are sentient, and to what degree.
Without moral weights, perhaps we would end up trying to maximize the number of the cheapest sentient beings to sustain (rats? flies?) at the expense of everybody else.
Humans are quite similar to each other, and we can talk and compare experiences.
It is easy to assign similar moral weights to all people and use the diminishing marginal utility of income to develop a tidy theory for a social-democratic utopia.
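The diminishing-marginal-utility point can be made precise with a standard worked example. Assume, purely for illustration, a logarithmic utility function (one common choice, not the only one):

```latex
% Suppose each person's utility from income y is u(y) = \ln y,
% so marginal utility u'(y) = 1/y declines as income rises.
% Transferring a small amount \varepsilon from a rich person
% (income y_r) to a poor person (income y_p < y_r) changes
% total utility by approximately
\Delta U \approx \varepsilon \left( \frac{1}{y_p} - \frac{1}{y_r} \right) > 0,
% which is positive whenever y_p < y_r. Holding total income
% fixed and weighting everyone equally, total utility is
% therefore maximized when incomes are equalized.
```

Nothing in this calculation, however, fixes the moral weights across species—the equal weighting of each person is exactly the assumption that self-reports let us justify.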
But nothing so convenient as credible self-reported experience is available once we extend the moral circle beyond humans.