Tl;dr: You need to make specific moral weight assumptions to decide who to prioritize between animals with different degrees of sentience. And the moral weight of a given being is not its p(sentience). One could very well be certain that shrimp and insects feel pain (and are as badly off as they could possibly be), yet believe that they do so to a degree too small for them to be prioritized. (EDIT: Note that I do not mean to defend this view.) Debates about interspecies tradeoffs should treat welfare ranges, rather than p(sentience), as the relevant crux.

 

When people defend prioritizing cognitively simpler animals over more complex ones, they tend to appeal to i) their numbers, ii) evidence of their sentience, and iii) some form of expected value maximization, even if slightly risk-averse.

While I am pretty sympathetic to prioritizing cognitively simpler beings, I think the above appeal misses the key crux, by mistakenly making (ii) about p(sentience) rather than about degree of sentience, or welfare range. For the conclusion to follow, (ii) needs to be: evidence of a welfare range that is not too insignificant to crucially undermine the importance of their large numbers.

It is quite easy to believe that shrimp and many insects are likely sentient.[1] In fact, let us, from here on, assume that they are, without doubt. What may be a harder bullet to bite is that their degree of sentience is high enough for their numbers to “carry you the rest of the way”. Maybe you think it’s obvious that what a pig experiences when being pushed into a slaughterhouse is nothing compared to the suffering of millions of mealworms (or whatever the fairest to-scale comparison is).[2] But you need an argument.

And finding a decisive one is non-trivial. It is already astonishingly hard to estimate how many pinpricks could be worse than torture at the individual level.[3] Here, figuring out who to prioritize between, say, mealworms and pigs, requires overcoming not only the high uncertainty this sort of question brings, but also the extra uncertainty of how to compare welfare across species (i.e., how many pig pinpricks are equivalent to a malnourished mealworm).[4] So while I largely share Browning and Veit’s (2023) optimism about our scientific ability to determine, with non-trivial confidence, which beings are sentient, I think reliably estimating their welfare ranges may be far harder.

We might still believe that we have already somewhat overcome this uncertainty about welfare ranges (such that we have a decent sense of which beings to prioritize), but we should then demonstrate this—instead of pointing to the unhelpful mere presence or absence of evidence of sentience. EDIT: To be extra clear, this burden of proof goes both ways. I am not saying those who prioritize more complex beings do not face the exact same problem.

  1. ^

     And that they are numerous. And that arguments for risk or ambiguity aversion do not decisively undermine this—for an argument according to which they may in fact decisively undermine this, see Clatterbuck and Fischer (2025).

  2. ^

     To be clear, I do not think that is what Meghan Barrett implied when she said that the scale of insect welfare “carries you the rest of the way”. I assume she meant “far enough for us to pay attention to them”, rather than “far enough for insects to be our greatest priority”.

  3. ^

     I am charitably setting aside the view that these are not even comparable to begin with, for simplicity, and because it is often used as an easy-to-criticize strawman. You do not need to be anti-aggregationist to doubt that many pinpricks equate to torture.

  4. ^

     Arguing for substantial uncertainty about cross-species welfare comparisons is out of scope, but some reasons are given by, e.g., Schukraft et al. (2024), Muehlhauser (2018), and Browning (2022; 2023).


Comments

For the conclusion to follow, (ii) needs to be: evidence of a welfare range that is not too insignificant to crucially undermine the importance of their large numbers.

 

I disagree with this. You don't need evidence of a welfare range that is not too insignificant -- you need a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it. Without a theory of how you get from neurons to welfare, it seems to me to be very unreasonable to be confident that simpler brains most likely have much smaller welfare ranges. And we don't have a good theory of how to get from neurons to welfare.

it seems to me to be very unreasonable to be confident that simpler brains most likely have much smaller welfare ranges

I agree, and I absolutely did not mean to defend this. What I defend is that, in the absence of a good argument based on welfare ranges and not p(sentience), we don't know if the welfare range of simpler animals is below or above the bar above which their welfare would dominate over that of more complex animals (not that it is below!).

But you disagree with my a priori agnosticism because you think we should (roughly) stick to some precise-ish prior welfare ranges in the absence of significant evidence pointing one way or the other, correct? (And this prior would give simpler animals enough weight for them to likely dominate.) This would explain your disagreement with what you quote.[1] I was implicitly assuming that our prior should be an agnostic imprecise one that offers no action-guidance on its own.

  1. ^

    If that's not where the disagreement is, I don't see how "a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it" does not count as "evidence of a welfare range that is not too insignificant." Maybe you're just worried my imprecise phrasing will, while technically correct, lead readers to set the bar too high?

Hi Derek.

You don't need evidence of a welfare range that is not too insignificant -- you need a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it.

I find this wording a bit confusing. However, I think you mean that the expected welfare range will be significant (for example, at least 1 % of that of humans) as long as there is one plausible model (for example, which gets 10 % weight) which predicts a significant welfare range (for example, 10 % of that of humans).

I have significant concerns about this kind of reasoning. I worry the weights of the models are close to arbitrary. In Bob Fischer's book about comparing welfare across species, there seems to be only 1 line about the weights: “We assigned 30 percent credence to the neurophysiological model, 10 percent to the equality model, and 60 percent to the simple additive model”. People usually give weights that are at least 0.1/“number of models”, which is at least 3.33 % (= 0.1/3) for 3 models, even when it is quite hard to estimate the weights. However, giving weights which are not much smaller than the uniform weight of 1/“number of models” could easily lead to huge mistakes.

As a silly example, if I asked random 7-year-olds whether the gravitational force between 2 objects is proportional to “distance”^-2 (the correct answer), “distance”^-20, or “distance”^-200, I imagine a significant fraction would pick the exponents of -20 and -200. Assuming 60 % picked -2, 20 % picked -20, and 20 % picked -200, one may naively conclude the mean exponent of -45.2 (= 0.6*(-2) + 0.2*(-20) + 0.2*(-200)) is reasonable. Yet, there is lots of empirical evidence against this of which the respondents are not aware. The right conclusion would be that the respondents have practically no idea about the right exponent, because they would not be able to adequately justify their picks.
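For concreteness, here is a minimal sketch of that weighted-average calculation, using only the illustrative numbers from the example above:

```python
# Near-uniform weights over models with wildly different predictions pull the
# expected value far away from the best-supported model. Purely illustrative
# numbers from the gravity example above.
weights = [0.6, 0.2, 0.2]      # fraction of respondents picking each exponent
exponents = [-2, -20, -200]    # candidate exponents for the gravitational force law

expected_exponent = sum(w * e for w, e in zip(weights, exponents))
print(expected_exponent)       # -45.2, nowhere near the correct value of -2
```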

And we don't have a good theory of how to get from neurons to welfare.

It would be great to have more research on this. I wonder whether electromagnetic (EM) field theories of consciousness could shed some light on it. I assume the maximum intensity of the EM fields generated by brain activity depends on the number of neurons, at least when assessed across species (there is little variance in the number of neurons of humans, which means the maximum intensity of EM fields may not vary much in humans).

However, I think you mean that the expected welfare range will be significant (for example, at least 1 % of that of humans) as long as there is one plausible model (for example, which gets 10 % weight) which predicts a significant welfare range (for example, 10 % of that of humans).

Yeah, basically.

I wonder whether electromagnetic (EM) field theories of consciousness could shed some light on it. I assume the maximum intensity of the EM fields generated by brain activity depends on the number of neurons, at least when assessed across species (there is little variance in the number of neurons of humans, which means the maximum intensity of EM fields may not vary much in humans).

This makes me think we're inclined towards a different basic perspective on the determinants of valence. This kind of sounds like you're thinking of pain as a sort of physical magnitude, like weight or charge. Then it is reasonable to think it is likely to scale with size, so that much smaller brains are likely to have much smaller magnitudes. I'm more inclined towards functionalist interpretations of welfare, on which something like relative functional significance determines welfare levels. E.g. something's attention-grabbing capacity helps to determine its welfare significance. In that case, you might be deeply skeptical that small animals have the right functional role at all, but once you grant they do, it is much more plausible that welfare ranges are similar to humans. However, for my point to be right, I think you just need to treat these kinds of functionalist views as in the running. You don't have to be confident that they're true.

This kind of sounds like you're thinking of pain as a sort of physical magnitude, like weight or charge.

Yes.

I'm more inclined towards functionalist interpretations of welfare

Which kind of functionalism? I am very sceptical of at least computational functionalism (CF). Any algorithm run by a digital computer can be executed with pen and paper (although it may take a super long time), and I have a hard time imagining how such a process would itself be conscious.

In that case, you might be deeply skeptical that small animals have the right functional role at all, but once you grant they do, it is much more plausible that welfare ranges are similar to humans.

This assumes that the effect on welfare of having the right functional role is not moderated, or is only very weakly moderated, by physical quantities like the number of neurons (as in Bob's book). I find this very counterintuitive. It implies that a human who is the size of a galaxy would have the same welfare as a normal human.

However, for my point to be right, I think you just need to treat these kinds of functionalist views as in the running. You don't have to be confident that they're true.

I agree. However, I think the weight of models which are practically not sensitive to physical quantities could be astronomically low. Mistakes like the one I illustrated above about gravitational force happen when the weights of models are guessed independently of their consequences. I suspect the variance in weights should not be that different from the variance in the consequences. For example, for welfare ranges of a) 10^-100, b) 10^-10, and c) 1, I would guess weights not that different from 1 on a), 10^-10 on b), and 10^-100 on c).
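As a rough illustration of the difference this makes (the welfare ranges and both sets of weights are just the illustrative values above, not estimates of anything real):

```python
# Expected welfare range under two weighting approaches. Illustrative values only.
welfare_ranges = [1e-100, 1e-10, 1.0]

# Weights guessed independently of the consequences (not far from uniform):
near_uniform = [0.4, 0.3, 0.3]
# Weights whose variance roughly tracks the variance in the consequences:
raw = [1.0, 1e-10, 1e-100]
consequence_sensitive = [w / sum(raw) for w in raw]

def expected(weights, ranges):
    return sum(w * r for w, r in zip(weights, ranges))

print(expected(near_uniform, welfare_ranges))           # ~0.3: dominated by the largest model
print(expected(consequence_sensitive, welfare_ranges))  # ~1e-20: the largest model no longer drives the result
```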

Which kind of functionalism? I am very sceptical of at least computational functionalism (CF). Any algorithm run by a digital computer can be executed with pen and paper (although it may take a super long time), and I have a hard time imagining how such process would itself be conscious.

I'm skeptical of functionalism about consciousness (though I don't know any alternative that fares better). But functionalism about valence seems much harder to avoid. Maybe if you have a benevolent God? Or some sort of dualism? Otherwise, it seems to me that you're going to be hard-pressed to explain why it is that the functional role of valence aligns with whatever properties constitute the fact that valence matters. Why is it that pain is bad and we avoid it, or pleasure is good and we seek it, if it is not just the case that the things we're inclined to avoid count as pain, and the things we're inclined to seek count as pleasure (very roughly)?

Hi Jim. Thanks for the relevant post. I very much agree.

Many people prioritise animals with a higher probability of sentience: humans, mammals, birds, finfishes, and then invertebrates. However, I suspect most do it for other reasons. A higher probability of sentience implies a higher chance of increasing (and decreasing) welfare, but people routinely take actions which are super unlikely to actually matter:

  • I calculate driving a car for 10 km in Great Britain without a seatbelt leads to 1 additional death with a probability of 1 in 73.0 M. The probability of sentience of shrimps presented in Bob Fischer's book about comparing welfare across species is 29.2 M (= 0.40*73.0*10^6) times as high (40 %).
  • Andrew Gelman found the probability of a voter in a small United States (US) state polling around 50/50 in a close election nationally changing the outcome of the national election could get as high as 1 in 3 million. The above probability of sentience of shrimps is 1.2 M (= 0.40*3*10^6) times as high. (A minimal sketch of both comparisons follows these bullets.)
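Here is that sketch, using only the figures quoted above (the 40 % probability of sentience, the seatbelt fatality risk, and Gelman's upper-end voter estimate):

```python
# Both ratios use only the figures quoted in the bullets above.
p_sentience_shrimp = 0.40                # from Bob Fischer's book
p_death_no_seatbelt_10km = 1 / 73.0e6    # 1 additional death per ~73.0 M such trips
p_decisive_vote = 1 / 3e6                # Gelman's upper-end estimate for a small-state voter

print(p_sentience_shrimp / p_death_no_seatbelt_10km)  # ~2.92*10^7, i.e. 29.2 M times as high
print(p_sentience_shrimp / p_decisive_vote)           # ~1.2*10^6, i.e. 1.2 M times as high
```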

I also believe there are other factors (besides the probability of sentience) which may be more important for the probability of a small donation increasing animal welfare.

I would take for granted that all animals are sentient (certain to have some kind of valenced experiences), and focus on assessing their welfare range as you suggest. I think there should be way more research on this given the large uncertainty. For example, for a welfare range proportional to "individual number of neurons"^"exponent", and an exponent from 0 to 2, which covers the range that I consider reasonable, I estimate that the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) has increased the welfare of shrimps 1.68*10^-6 to 1.68 M times as cost-effectively as GiveWell's top charities increase the welfare of humans.
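A minimal sketch of how the exponent drives that range. The 1.68 M figure at exponent 0 is the one above; the shrimp-to-human neuron ratio of about 10^-6 is an assumed value, implied by the quoted range spanning a factor of 10^12 as the exponent goes from 0 to 2:

```python
# Welfare range assumed proportional to ("number of neurons")^exponent.
# The neuron ratio of ~10^-6 is an assumption implied by the quoted range
# spanning a factor of 10^12 between exponents 0 and 2.
neuron_ratio = 1e-6        # assumed shrimp neurons / human neurons
ce_at_exponent_0 = 1.68e6  # HSI vs GiveWell top charities if welfare range is independent of neuron count

def relative_cost_effectiveness(exponent):
    return ce_at_exponent_0 * neuron_ratio ** exponent

for exponent in (0, 1, 2):
    print(exponent, relative_cost_effectiveness(exponent))  # 1.68*10^6, 1.68, 1.68*10^-6
```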

This exact point has been on my mind for some time, so thanks for writing this!
