[Draft Amnesty post. I explored this question about two years ago and hoped to find some kind of resolution, but quickly found myself out of my depth and abandoned it. I still think the core question is important and underexplored, so I'm posting this in the hope that someone finds it interesting enough to pick up.]
This post is about the intersection of three important ideas in animal welfare prioritisation. There are a few ways that they can interact, and I'm not sure which is the right interpretation.
The three ideas
- Rethink Priorities' Moral Weights estimates, which suggest that, in a given unit of time, a pig can suffer 52% as much as a human (the figures have large error bars, and rest on assumptions such as hedonistic utilitarianism, among other things).
- The idea that the unpleasantness of pain increases superlinearly with its intensity (i.e. an 8/10 on the pain scale is more than twice as bad as a 4/10). The Qualia Research Institute were the first org I heard talking about this, but the idea predates them.
- Welfare Footprint's Cumulative Pain Framework, where pain is quantified as the cumulative time spent in negative states of different intensities.
I am interested in how these three ideas interact. I will explain the problem in the way I first encountered it, when thinking about insects.
The problem
RP's welfare range project estimates black soldier flies have a "moral weight" of 1%, relative to humans' defined moral weight of 100%. The human-suffering-equivalent of a black soldier fly suffering at full capacity for 100 hours could be interpreted in two ways:
1. A human suffering at full capacity for 1 hour
2. A human suffering at 1% capacity for 100 hours.
Another way of expressing this difference:
1. Use the moral weight to scale the duration: the fly's 100 hours at full capacity becomes 1 hour of human pain at full capacity (intensity held constant)
2. Use the moral weight to scale the intensity: the fly's full capacity becomes pain at 1% of human capacity, for the full 100 hours (duration held constant)
If the unpleasantness of pain scaled linearly with its intensity, these two views would be equivalent. However, if unpleasantness scales superlinearly (the logarithmic pain hypothesis), then option 1 is far, far worse than option 2.
In rough mathematical terms, assuming the scaling is exponential in the intensity (expressed here as a percentage of human capacity), the amount of suffering in these two scenarios is given by:
- human suffering at 100% capacity for 1 hour ∝ 1 · e^100
- human suffering at 1% capacity for 100 hours ∝ 100 · e^1
And option 1 is clearly far, far larger than option 2.
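To make this concrete, here is a toy calculation. The exponential form, and the choice of exponent (intensity as a percentage of human capacity), are illustrative assumptions of mine, not anything RP endorses:

```python
import math

# Toy model: total suffering = duration x exp(intensity),
# with intensity measured as a % of the human's maximal capacity.
# This is an illustrative assumption, not RP's actual methodology.
def suffering(intensity_pct, hours):
    return hours * math.exp(intensity_pct)

option_1 = suffering(100, 1)   # scale duration: 1 hour at full capacity
option_2 = suffering(1, 100)   # scale intensity: 100 hours at 1% capacity

print(option_1 > option_2)     # True: option 1 dwarfs option 2
```

Under this model the ratio between the two options is astronomical (roughly e^99/100), which is why the choice of interpretation matters so much.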
Why this matters
This difference is incredibly important, as it could point us towards focusing far more on smaller, more numerous animals (e.g. insects, shrimp), and less on larger, less numerous animals (humans, pigs) — or vice versa, depending on which interpretation is correct.
I don't have good arguments either way, and so I'm very uncertain which is the right way to think about this. I think it ultimately depends on how the RP numbers were designed. I read through all of RP's writing on the topic when they first published it, and I didn't see anything that would suggest one interpretation over the other, although it's very possible that I missed something.
Why the 10-point scale is probably compressing something huge
Here's a simple thought experiment that made me take superlinearity more seriously. In an incredibly underwhelming skateboarding-related incident, I broke my leg, and at the time I rated my pain as 6/10. If I had been unlucky enough to break both my legs simultaneously, I imagine I would have rated it something like 9/10, despite the physical intensity being roughly twice as large. Obviously I couldn't rate it 12/10. The scale has a ceiling, and that ceiling forces compression. In the mercifully unlikely scenario where I had broken every bone in my body, I presumably would have rated it at 10/10, less than twice as bad as breaking only my leg. Clearly there's something fishy going on when we rate our pain.
https://existentialcomics.com/comic/290
There are some studies that show this effect; e.g. one study reported that 80% of women who had recently given birth preferred 12 hours at 4/10 over 6 hours at 8/10, when linear scaling (12 × 4 = 6 × 8) would predict indifference.
This suggests that if people really did anchor 10/10 to the worst pain imaginable (say, torture), then all normal life pain would barely register above 1/10, making the scale nearly useless for everyday purposes. Instead, people spread their ratings across the range they actually encounter, which means the gaps between adjacent points on the scale must represent vastly different amounts of actual suffering as you move upward. The difference between 9/10 and 10/10 is probably enormous compared to the difference between 1/10 and 2/10.
This becomes especially vivid if you compare 1 minute of excruciating torture (10/10 pain) against 10 minutes of a very mild headache (1/10). It seems obvious that the torture is far, far worse, yet a linear scale would rate the two as exactly equal: 1 minute × 10 = 10 minutes × 1 once you adjust for duration.
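A quick sanity check of the two trade-offs above, using the same illustrative exponential model as before (the exponential form is my assumption, not something either study used):

```python
import math

def linear(intensity, hours):
    # Linear model: suffering = duration x intensity
    return hours * intensity

def exponential(intensity, hours):
    # Superlinear (exponential) model: suffering = duration x exp(intensity)
    return hours * math.exp(intensity)

mild   = (4, 12)   # 12 hours at 4/10 (childbirth study)
severe = (8, 6)    # 6 hours at 8/10

print(linear(*mild), linear(*severe))             # 48 48 -> indifference
print(exponential(*mild) < exponential(*severe))  # True -> prefer the mild pain
```

The linear model predicts indifference; the exponential model predicts the preference the women actually reported, and it rates the 1-minute torture vs 10-minute headache comparison the same way.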
The Princess and the Pea problem
There's a related puzzle about interpersonal comparisons. Imagine someone so sheltered from discomfort that she becomes incredibly sensitive, rating minor issues as high on her pain scale. Is she just using a different scale to everyone else — speaking a different language, so we should normalize her ratings when aggregating? Or has she actually increased her sensitivity in a qualia sense, becoming a kind of utility monster where her stubbing her toe is genuinely worse than a regular person being involved in a car crash?
The strongest counterargument I've encountered
I discussed this question with Bob Fischer, who led the Rethink Priorities Moral Weight Project, and he wasn't convinced. His argument, as I understood it, went roughly like this:
Yes, superlinearity in self-reported pain might be a real effect, but this is an artifact of how humans use pain scales, not a feature of the fundamental experience of pain. And crucially, non-human animals don't self-report, so this non-linearity is irrelevant for animal welfare. Animal welfare pain assessment (such as the Cumulative Pain Framework) relies on behavioural indicators. If a researcher judges an animal to be at 10% of its capacity, they simply mean 1/10 as bad as its worst state — there's no question about whether 100% is "really" 10x worse, because that's just what the numbers mean by construction. The human self-report distortions are irrelevant.
Why I'm not fully convinced by this counterargument
I find this argument powerful but not decisive, for a few reasons:
1. Human self-report is behaviour too; it's just verbal behaviour. The counterargument draws a sharp line between human self-report (nonlinear, distorted) and animal behavioural indicators (clean, cardinal). But self-report is itself a form of behaviour, and we already know that this verbal behaviour relates nonlinearly to the underlying experience. Why should we expect nonverbal behaviour to be any different? This doesn't show that pain itself is superlinear, but I think it means we can't dismiss the possibility just because we're using behavioural indicators instead of self-report.
2. There are evolutionary arguments that superlinearity is species-general. One reason to expect superlinear pain is that organisms need fine-grained discrimination among the mild pains they encounter frequently (is my sore ankle worse than my sunburnt neck?) while also being capable of registering extreme pain. This could lead to a scale where most of the discriminatory resolution is concentrated in the lower intensities, with extreme pain compressed at the top. This evolutionary pressure would apply to any organism that needs to make trade-offs between competing mild threats, not just humans.
Conclusion
- I am still very uncertain whether the superlinearity is physiological, or just a reporting artifact.
- This widens my uncertainty intervals when comparing pain across species.
- Overall, I am surprised by how little attention the idea of logarithmic pain receives in EA circles; if true, it is an incredibly important effect, and would push us towards focusing more on reducing extreme pain (even when its duration is short).
Appendices
- This whole thing is further complicated by the fact that the experience of duration might not scale in a linear way either (it could be superlinear due to sensitisation with exposure to pain, or sublinear due to habituation). The subjective perception of time when in pain is definitely worthy of research, as it could inform questions of prioritisation between pain of varying duration.
- Interestingly, when the intensity is low enough, the hierarchy between Options 1 and 2 switches, and Option 2 actually gives a larger result than Option 1. E.g. if the insect was suffering at only 0.1% of its maximal capability, then:
  - human suffering at 0.1% capacity for 1 hour ∝ 1 · e^0.1
  - human suffering at 0.001% capacity for 100 hours ∝ 100 · e^0.001
And in this case option 2 is greater than option 1. I'm probably taking the exponential model too literally here, but it's a curious artefact.
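Taking the illustrative exponential model at face value, we can find exactly where the switch happens; the 1% moral weight and 100-hour duration are the same assumed figures as in the main example:

```python
import math

w, hours = 0.01, 100   # assumed moral weight and episode duration

def option_1(i):
    # Scale duration: 1 hour at intensity i (% of human capacity)
    return 1 * math.exp(i)

def option_2(i):
    # Scale intensity: 100 hours at w * i
    return hours * math.exp(w * i)

# Equality when exp(i) = 100 * exp(0.01 * i), i.e. i = ln(100) / 0.99
crossover = math.log(hours) / (1 - w)
print(round(crossover, 2))   # 4.65 (% of capacity)
```

So under this toy model, Option 2 dominates for intensities below roughly 4.65% of capacity (e.g. at 0.1%, option 2 is about 100 vs option 1's 1.1), and Option 1 dominates above it.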
