
I'm curious to hear arguments from people who believe that the wellbeing of animals has moral value, but value the wellbeing of humans many times more. My thought is that if one has a basically utilitarian, nondiscriminatory perspective, the only thing that would cause an individual's suffering to matter more would be if that individual were experiencing a greater degree of suffering. While greater complexity of mind causes an individual to be capable of a greater range of emotions, I don't see a reason to think that it would cause a much greater intensity of simple emotions like physical pain. Physical pain in humans can be just as bad as, or worse than, more complex emotions. Why wouldn't we think that it would be the same in animals?


4 Answers

If you start decomposing minds into their computational components, you find many orders of magnitude differences in the numbers of similar components. E.g. both a honeybee and a human may have visual experience, but the latter will have on the order of 10,000 times as many photoreceptors, with even larger disparities in the number of neurons and computations for subsequent processing. If each edge detection or color discrimination (or higher level processing) additively contributes some visual experience, then you have immense differences in the total contributions.

Likewise, for reinforcement learning consequences of pain or pleasure rewards: larger brains will have orders of magnitude more neurons, synapses, associations, and dispositions to be updated in response to reward. Many thousands of subnetworks could be carved out with complexity or particular capabilities greater than those of the honeybee.

On the other side, trivially tiny computer programs we can make today could make for minimal instantiations of available theories of consciousness, with quantitative differences between the minimal examples and typical examples. See also this discussion. A global workspace may broadcast to thousands of processes or billions.

We can also consider minds much larger than humans, e.g. imagine a network of humans linked by neural interfaces, exchanging memories, sensory input, and directions to action. As the bandwidth of these connections and the degree of behavioral integration increased, you might eventually have a system that one could consider a single organism, but with vastly greater numbers of perceptions, actions, and cognitive processes than a single human. If we started with 1 billion humans who gradually joined their minds together in such a network, should we say that near the end of the process their total amount of experience or moral weight is reduced to that of 1-10 humans? I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.

The usual discussions on this topic seem to assume that connecting and integrating many mental processes almost certainly destroys almost all of their consciousness and value, which seems questionable both for the view itself and for the extreme weight put on it. With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.
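A minimal sketch of that expected-value point, with purely illustrative numbers (the credence, mind size, and residual fraction below are assumptions for illustration, not figures from the answer):

```python
# Illustrative expected-value sketch; all numbers are assumed placeholders.

def expected_moral_weight(n_human_equivalents, credence_value_preserved,
                          residual_fraction_if_destroyed):
    """Expected moral weight of an integrated mind built from
    n_human_equivalents component minds, relative to one human = 1.

    With probability `credence_value_preserved`, integration preserves the
    summed value; otherwise only `residual_fraction_if_destroyed` of it survives.
    """
    preserved = credence_value_preserved * n_human_equivalents
    destroyed = ((1 - credence_value_preserved)
                 * residual_fraction_if_destroyed
                 * n_human_equivalents)
    return preserved + destroyed

# Even with only 10% credence that integration preserves value, a mind built
# from 1 billion humans has an expected weight of ~10^8 human-equivalents.
print(expected_moral_weight(1_000_000_000, 0.10, 1e-9))
```

On these assumed numbers, modest credence in value being preserved is enough for the big mind to dominate in expectation.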

Some more related articles:

Is Brain Size Morally Relevant? by Brian Tomasik

Quantity of experience: brain-duplication and degrees of consciousness by Nick Bostrom

I also wrote an article about minimal instantiations of theories of consciousness: Physical theories of consciousness reduce to panpsychism.

I'd guess the collective mind would be at least on the same order of consciousness and impartial moral weight as the separated minds, and so there could be giant minds with vastly greater than human quantities of experience.

My initial intuition is the same here; it seems like nothing is lost, only things are added. I suppose one could object that the independence of the parts is lost, and that this could make the parts less conscious than they would have been if separate (although there is now also a greater whole that's more conscious), but I'm not sure why this should be the case; why shouldn't it make the parts more conscious? One reason to believe that reoptimizing the connections would reduce the consciousness in each part (whether it increases or decreases the amount of consciousness in the system as a whole) is that there's new redundancy; the parts could be readjusted to accomp... (read more)

With a fair amount of credence on the view that the value is not almost all destroyed, the expected value of big minds is enormously greater than that of small minds.

I think you may need to have pretty overwhelming credence in such views, though. EDIT: Or give enough weight to sufficiently superlinear views.

From this Vox article, we have about 1.5 to 2.5 million mites on our bodies. If we straightforwardly consider the oversimplified view that moral weight scales proportionally with the square root of total neuron count in an animal, then these mit... (read more)
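As a rough back-of-the-envelope sketch of how that square-root view plays out, assuming the standard ~86 billion neuron estimate for a human and a purely placeholder neuron count for a mite (actual mite neuron counts are not well established):

```python
# Toy comparison under the oversimplified "moral weight ~ sqrt(neuron count)" view.
# The mite neuron count is an assumed placeholder for illustration only.
import math

HUMAN_NEURONS = 86_000_000_000   # standard estimate for an adult human
MITE_NEURONS = 10_000            # assumed placeholder value
MITES_PER_BODY = 2_000_000       # midpoint of the 1.5-2.5 million figure above

human_weight = math.sqrt(HUMAN_NEURONS)
total_mite_weight = MITES_PER_BODY * math.sqrt(MITE_NEURONS)

print(f"one human:           {human_weight:,.0f}")
print(f"mites on one body:   {total_mite_weight:,.0f}")
print(f"ratio (mites/human): {total_mite_weight / human_weight:.1f}")
```

With these placeholder numbers, the mites on one body collectively outweigh a single human by a few hundred times under the square-root rule, and the direction of the conclusion is not very sensitive to the placeholder, which is why one would need either very strong credence against such views or sufficiently superlinear weightings for the comparison to come out the other way.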

You might be interested in Rethink Priorities’ recent reports about comparing capacity for welfare and moral status across species (part 1 here, part 2 here). Some people (myself included) think capacity for welfare, which roughly is how good or bad an animal’s life can go, differs significantly across species. The extent and degree of this sort of difference depends on the correct theory of welfare. Even if a purely hedonic theory is correct, it’s plausible that differences in affective complexity and cognitive sophistication affect the phenomenal intensity of experience and that some neurological differences affect the subjective experience of time (i.e., the phenomenal duration of experience).

However, it’s unclear which way these differences cut. Advanced social, emotional, and intellectual complexity may open up new dimensions of pleasure and suffering that widen the intensity range of experience (e.g., combining physical with emotional intimacy plausibly opens up the possibility of greater overall pleasure than mere physical intimacy). On the other hand, these same faculties may actually suppress the intensity range of experience (e.g., without the ability to conceptualize, rationalize, or time the experience, even modest pain may induce rather extreme suffering).

Comparing the intrinsic moral worth of different animals (including humans) is extraordinarily difficult, and there is tremendous uncertainty, both normative and empirical. Given this large uncertainty, it seems that, all other things equal, it would be better if near-termist EA funding didn’t skew quite so heavily towards humans, and for the funding that is directed at nonhuman animals, it would be better if it didn’t skew quite so heavily towards terrestrial vertebrates.

1) Different options or uncertainty about the moral relevance of different qualia.

It's unclear that physical pain is the same experience for humans, cats, fish, and worms.

Even if it is the same mental experience, the moral value may differ due to the lack of memory or higher brain function. For example, I think there's a good argument that pain that isn't remembered, for instance because of the use of scopolamine, is (still morally relevant but) less bad than pain that is remembered. Beings incapable of remembering or anticipating pain would have intrinsically less morally relevant experiences - perhaps far less.

2) Higher function as a relevant factor in assessing moral badness of negative experiences

I think that physical pain is bad, but when considered in isolation, it's not the worst thing that can happen. Suffering includes the experience of anticipation of bad, the memory of it occurring, the appreciation of time and lack of hope, etc. People would far prefer to have 1 hour of pain and the knowledge that it would be over at that point than have 1 hour of pain but not be sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These seem to significantly change the moral importance of pain, even by orders of magnitude.

3) Different value due to potential for positive emotion.

If joy and elation are only possible for humans, it may be that humans have a higher potential for moral value than animals. This would be true even if their negative potential were the same. In such a case, we might think that the loss of potential was morally important, and say that the death of a human, with the potential for far more positive experience, is more morally important than the death of an animal.

I think that physical pain is bad, but when considered in isolation, it's not the worst thing that can happen. Suffering includes the experience of anticipation of bad, the memory of it occurring, the appreciation of time and lack of hope, etc.
People would far prefer to have 1 hour of pain and the knowledge that it would be over at that point than have 1 hour of pain but not be sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These seem to significantly change the moral importance of pain, even by orders of magnitude.

It seems this consideration would provide a (pro tanto) reason for valuing nonhumans more than humans. If pain metacognition can reduce the disvalue of suffering, nonhuman animals, who lack such capacities, should be expected to have worse experiences, other things equal.

Davidmanheim
It's a bit more complex than that. If you think animals can't anticipate pain, or can anticipate it but cannot understand the passage of time or that the pain might continue, you could see an argument for animal suffering being less important than human suffering. So yes, this could go either way - but it's still a reason one might value animals less.

(1) and (2) would be roughly my answers as well. There's also an instrumental factor (which I'm not sure is in the scope of the original question, but seems important): human suffering and death have far larger knock-on effects on the future than those of non-human animals.

Regarding (3), is there reason to think joy and elation are only possible for humans? It seems likely to me that food, sex, caring for young, pair bonding etc. feel good for nonhuman animals, dogs seem like they're pretty happy a lot of the time, et cetera. Of course (1) and (2) apply... (read more)
Davidmanheim
Regarding 3, no, it's unclear and depends on the specific animal, what we think their qualia are like, and the specific classes of experience you think are valuable.

People would far prefer to have 1 hour of pain and the knowledge that it would be over at that point than have 1 hour of pain but not be sure when it would end. They'd also prefer to know when the pain would occur, rather than have it be unexpected. These seem to significantly change the moral importance of pain, even by orders of magnitude.

This seems like an argument for animal pain mattering more than human pain when the human expects the pain and/or expects it to end.

EDIT: Mentioned by Pablo already.

Thank you, this is helpful.

Insofar as I value conscious experiences purely by virtue of their valence (i.e. positivity or negativity), I value animals not too much less than humans (discounted to the extent I suspect that they're "less conscious" or "less capable of feeling highly positive states", which I'm still quite uncertain about).

Insofar as I value preference fulfilment in general, I value humans significantly more than animals (because human preferences are stronger and more complex than animals') but not overwhelmingly so, because animals have strong and reasonably consistent preferences too.

Insofar as I value specific types of conscious experiences and preference fulfilment, such as "reciprocated romantic love" or "achieving one's overarching life goals", then I value humans far more than animals (and would probably value posthumans significantly more than humans).

I don't think there are knock-down arguments in favour of any of these approaches, and so I usually try to balance all of these considerations. Broadly speaking, I do this by prioritising hedonic components when I think about preventing disvalue, and by prioritising the other components when I think about creating value.

Would you say the discrepancy between preferences and hedonism is because humans can (and do) achieve much greater highs than nonhuman animals under preferences, but human and nonhuman lows aren't so different?

Also, it seems that for an antifrustrationist with respect to preferences, a human might on average be worse off than a nonhuman animal at a similar positive average hedonistic welfare level, precisely because humans have more unsatisfied preferences.

richard_ngo
Something like that. Maybe the key idea here is my ranking of possible lives:

* Amazing hedonic state + all personal preferences satisfied >> amazing hedonic state.
* Terrible hedonic state ≈ terrible hedonic state + all personal preferences violated.

In other words, if I imagine myself suffering enough hedonically I don't really care about any other preferences I have about my life any more by comparison. Whereas that isn't true for feelings of bliss. I imagine things being more symmetrical for animals, I guess because I don't consider their preferences to be as complex or core to their identities.