
Summary

  1. I outline an argument for nonhuman animals mattering a lot in expectation on non-hedonic views, illustrating with desire and preference-based views (more).
  2. I argue against normalizing each individual's welfare by their desires/preferences about their own suffering (or about anything else) (more).

In another post here, I also argue for the importance of animals on other views, including rights-based theories, contractualism, virtue ethics and special obligations.

This post combines two comments I made for Animal Welfare vs Global Health Debate Week, with some additional content and editing.

 

The case for animals on desire and preference views

I think there's a decent case for nonhuman animals mattering substantially in expectation on non-hedonic views, including desire and preference views:

  1. I think it's not too unlikely that nonhuman animals have access to whatever general non-hedonic values you care about (more here), e.g. chickens probably have (conscious) desires and preferences, and there's a decent chance shrimp and insects do, too,[1] and
  2. if they do have access to them, then it's not too unlikely that
    1. these values reach levels of importance in nonhumans that are at least a modest fraction of what they reach in humans, e.g. measuring their strength using measures of attention or effects on attention[2] or human-based units, or
    2. interpersonal comparisons aren't possible for those non-hedonic values, between species and maybe even just between humans, anyway (more here and here), so
      1. we can't justify favouring humans over nonhumans or vice versa, and so we just aim for something like Pareto efficiency, across species or even across all individuals, or
      2. we normalize welfare ranges or capacities for welfare based on their statistical properties, e.g. variance or range (see the sketch after this list), which I'd guess favours animal welfare, because
        1. it will treat all individuals — humans and other animals — as if they have similar welfare ranges or capacities for welfare or individual value at stake, and
        2. far greater numbers of life-years and individuals are helped per $ with animal welfare interventions.
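
As a minimal sketch of the normalization idea above (using range normalization, a standard construction in social choice theory; the setup here is my illustration): suppose each individual $i$, human or otherwise, has a welfare function $u_i$ over outcomes. Range normalization rescales each to

$$\hat{u}_i(x) = \frac{u_i(x) - \min_y u_i(y)}{\max_y u_i(y) - \min_y u_i(y)},$$

so every individual's normalized welfare runs from 0 to 1, i.e. everyone is treated as having the same welfare range. Variance normalization instead divides each $u_i$ by its standard deviation under some common distribution over outcomes. Under either scheme, the per-individual stakes are roughly equalized, so interventions that help far more individuals or life-years per dollar tend to come out ahead.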

 

Against normalizing by suffering

Now, there's an intuitive case for humans mattering much more that I expect people to often have in mind: I'd guess that many humans are much more willing to endure suffering or endorse undergoing suffering, including fairly intense suffering, for their children and other goals than other animals are for anything.[3] So human desires/preferences might often be much stronger than other animals', if we normalize each individual's desires/preferences by their own desires/preferences about their own suffering, perhaps adjusting for differences in some measure of suffering intensity.
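
To make this concrete, here is one hypothetical way the normalization could be formalized (the functional form and the reference experience are my illustration, not something defended in this post): fix a reference suffering experience $s$, say an hour of intense pain, and let $u_i$ measure the strength of individual $i$'s desires/preferences, with $u_i = 0$ at a neutral outcome. Then rescale each individual's preferences by how strongly they disprefer the reference suffering:

$$\hat{u}_i(x) = \frac{u_i(x)}{|u_i(s)|}.$$

A human willing to endure $s$ for some goal has $\hat{u}_i \geq 1$ for that goal; if no other animal would endure $s$ for anything, all of their normalized preferences would fall below 1, after adjusting for differences in the intensity of $s$ across individuals.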

This has some directly intuitive appeal, but my best guess is that this involves some wrong or unjustifiable assumptions, and may even have morally repugnant or at least counterintuitive implications between humans.

  1. It's not clear why we should normalize by desires/preferences about suffering in particular. And there are many different experiences of suffering, and different desires/preferences about them to choose from. Any choice seems pretty arbitrary. Other choices also typically face problems similar to those below, so just picking something else probably wouldn't help.
    1. However, we could be morally uncertain about what to normalize by, and so just entertain multiple options.
  2. People probably just have different beliefs/preferences about how much their own suffering matters, and those preferences are plausibly not interpersonally comparable at all. An illustrative case where such comparisons don't seem to hold is between the preferences of people with very different moral views, e.g. utilitarians and deontologists. Their preferences may not be interpersonally comparable because their moral views aren't intertheoretically comparable. And the same could apply to other goals people hold. I discuss interpersonal comparisons more here.
  3. Some people may find it easier to reflectively dismiss or discount their own suffering than others for various reasons, like particular beliefs or greater self-control. If interpersonal comparisons are warranted, it could just mean these people care less about their own suffering in absolute terms on average, not that they care more about other things than average. Other animals probably can't easily dismiss or discount their own suffering much, and their actions follow pretty directly from their suffering and other felt desires[4], so they might even care more about their own suffering in absolute terms on average.
  4. We can also imagine moral patients with conscious desires/preferences who can't suffer at all, so it probably wouldn't make sense to normalize their desires/preferences by their desires/preferences about their own suffering. No solution I could think of for this seemed good to me and would ground the interpersonal comparisons.
  5. This reasoning could lead to large discrepancies between humans, and to individual human utility monsters, because some humans are much more willing to suffer for things than others. The most fanatical humans, who recognize the most at stake relative to their own suffering, might dominate. Their preferences could be religious (although we might discount these as empirically misinformed), absolutist deontological, utilitarian, or otherwise idiosyncratic, even selfish.[5] This could be morally repugnant.

 

To be clear, this also undermines a case one might want to make for animals mattering a lot on desire and preference views, e.g. one appealing to the fact that humans tend to care a lot about their own suffering, too.

 

  1. ^

    I wrote more here on the possibility of more sophisticated versions of desires and preferences in other animals.

  2. ^

    There are some arguments for weighing ~proportionally with neuron counts:

    1. I could imagine the "size" of attention, e.g. the number of distinguishable items in it, scaling with neuron counts, maybe even proportionally, which could favour global health on the margin.
      1. But probably with decreasing marginal returns to additional neurons, and I give substantial weight to the number of neurons not really mattering at all, once you have the right kind of attention.
    2. Some very weird and speculative possibilities involving large numbers of conscious or value-generating subsystems in each brain could support weighing ~proportionally with neuron counts in expectation, even if you assign these possibilities fairly low but non-negligible probabilities (Fischer, Shriver & St. Jules, 2022); see the sketch after this list.
      1. Maybe even faster-than-proportional scaling in expectation, but I think that double counts too much overlap between subsystems, which I'd reject if the scaling is even modestly faster than proportional.
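
    As a rough illustration of the expectational point (the two-hypothesis setup and the numbers are made up): suppose that with probability $p$, moral weight scales proportionally with neuron count $n$, so $w = n/n_H$ (normalizing a human with $n_H$ neurons to 1), and that with probability $1 - p$, weight doesn't depend on neuron count at all once the right kind of attention is present, so $w = 1$. Then

    $$\mathbb{E}[w] = p \cdot \frac{n}{n_H} + (1 - p).$$

    For brains with many more neurons than a human's, the first term dominates, so expected weight scales ~proportionally with neuron count even for fairly low $p$; meanwhile, a small-brained animal retains an expected weight close to $1 - p$, far above its raw neuron-count ratio.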

    Moral weights proportional to neuron counts may still support animal welfare over global health, but animal welfare is probably less cost-effective on the margin now than in the cited research, and we still have to worry about weighing different welfare gains against one another, e.g. how bad is nest deprivation for an egg-laying hen relative to her welfare range? I won't take a position either way here about what's best on the margin using moral weights proportional to neuron counts.

  3. ^

    Maybe with some exceptions for some animals, but I expect this not to apply to factory farmed animals, whose circumstances are not conducive to forming such desires/preferences.

    Also, I'd imagine it's actually acute suffering — e.g. panic in response to danger to their children — that would drive an animal to endure severe but less acute suffering.

  4. ^

    Felt desires: desires we feel. They are often classified as one of two types, either a) appetitive — or incentive and typically conducive to approach or consummatory behaviour and towards things — like in attraction, hunger, cravings and anger, or b) aversive — and typically conducive to avoidance and away from things — like in pain, fear, disgust and again anger (Hayes et al., 2014; Berridge, 2018; and, on anger as aversive and appetitive, Carver & Harmon-Jones, 2009; Watson, 2009; Lee & Lang, 2009). However, the actual approach/consummatory or avoidance behaviour is not necessary to experience a felt desire, and we can overcome our felt desires or be constrained from satisfying them.

    More here.

  5. ^

    Perhaps whoever claims the highest infinities at stake would dominate under some common lexicographic order, as modelled in Russell and Isaacs, 2021, or with surreal numbers as in Chen and Rubio, 2020.
