> To explore whether high-intensity Pain can be experienced by primitive sentient organisms, we reframe the question as one of information resolution within a scale of Pain intensities. Briefly, two evolutionary possibilities are considered for how nervous systems evolved to represent varying Pain intensities, enabling appropriate responses to competing behavioural demands: (1) increasing resolution within a fixed range of intensities (i.e., introducing finer gradations between the minimum and maximum perceived intensity, e.g., 0 to 10) or (2) expanding the range itself of perceived intensities (e.g., extending the maximum perceived intensity value beyond 10, to values like 100, or 1000).
Were you aware of the following? (From Shriver 2024, §3.2.6.)
> it’s not immediately obvious that simply having a more discerning ability to evaluate will also entail having the capacity for more intense experiences. Henry Shevlin, in an unpublished draft, proposed two possible accounts of what happens when we add evaluative richness. On the “compression account,” the ends of the spectrum stay as far apart as they were previously, but we add precision in how small the units are that exist between the two poles. However, on the “expansion account,” adding more evaluative states pushes the ends of the spectrum outward. As such, on the expansion account, adding richness would seem to actually increase the range of evaluative states and hence would give us one account of how adding richness could increase a welfare range. To be clear, there was no particular reason to assume that the expansion rather than the compression account is correct, but this does offer one story of how increased neuron counts in affective parts of the brain may increase welfare capacity.
In the above quotes, Shevlin and you are basically saying the same things with different terms, right? Or am I missing something?
(Great post. Thanks for sharing!)
On the cost point — Right, the words I chose made it very unclear whether and when I was talking about only costs, or only benefits, or overall fitness once we combine both, sorry.
On my contradiction — Oops yeah, I meant organisms with lower resolution. My bad.
Thanks for taking the time to reply to all this. Very helpful!
(Sorry for commenting only 3 years after you posted.) ^^
> 1.8kg of chicken would be replaced by 108g of shrimp which is roughly equal to 7 farmed shrimp
So, in your model, the upside of helping chickens dominates the downside of increasing shrimp consumption at least as long as chickens matter ~seven times more than shrimp.
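To make the arithmetic explicit (my reconstruction, assuming 1.8 kg corresponds to roughly one chicken and that harms aggregate linearly across animals, with $w$ denoting per-animal moral weight):

$$w_{\text{chicken}} \times 1 \;>\; w_{\text{shrimp}} \times 7 \;\iff\; \frac{w_{\text{chicken}}}{w_{\text{shrimp}}} > 7.$$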
> I am overall highly certain that the change will have the expected effects of improving animal welfare significantly and that substitution effects will not cause this intervention to cause a net reduction in animal suffering.
So you were highly certain that chickens matter at least seven times more than shrimp? Are you, still? Your argument was:
> The [mean P(Sentience)-adjusted] welfare range of shrimp is 0.031, whereas for chicken it is 0.332 (Fischer, 2023), suggesting that shrimp have 10% the capacity of pain of [chicken].
So then chickens would matter ten times more than shrimp. But how much weight would you put on the above argument? I could now make the opposite argument based on Rethink Priorities' latest ranges (where the mean for shrimp is 0.08, not 0.031), which would make chickens matter only four times more than shrimp (such that the intervention seems net negative according to your model).
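Plugging in the two sets of numbers (my arithmetic):

$$\frac{0.332}{0.031} \approx 10.7 > 7, \qquad \frac{0.332}{0.08} \approx 4.2 < 7,$$

so the break-even condition above holds under the former figures but fails under the latter.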
Oh yes, I agree with all that. Just to make sure I understand what you think of my original point:
> If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.
To be clear, extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right? An unnecessarily louder alarm for some problem hurts only if it impedes your capacity to solve other important problems.
My original suggestion was that, while an unnecessarily loud alarm would generally be maladaptive because of the above (as your post suggests),[1] it might not be in genuinely catastrophic situations specifically, because the importance of solving the problem signaled by the alarm overwhelmingly dominates. It matters little whether the alarm impedes your capacity to solve other problems.
I get the impression that you agree with this as presently stated (at least depending on one's interpretation of "overwhelmingly" and "little") and were simply making sure I wasn't taking away too much from my point. Is that correct? Or do you in fact see reasons to disagree with the above?[2]
> So I see neutral drift as a live alternative, but not the default. The framework is meant to clarify when neutrality is plausible versus when selection should instead cap, reshape, or avoid extreme intensity altogether.
Yes, absolutely. I did not mean to question any of that. I'm just curious about this potential specific deviation from the default in high-stakes situations.[3]
So, yes, I'm not claiming that "extreme felt intensity actually [often] falls into that neutral regime", in that sense.
One counter-argument I see is that, while the unnecessarily loud alarm doesn't hurt in genuinely catastrophic situations, there is always a risk of false alarms. If the loud alarm misfires, there is no overwhelmingly important problem that largely dominates the cost of reducing your capacity to solve others. This is just purely bad for fitness. I don't know how significant this misfiring risk is, though, and hence how much weight to put on this counter-argument.
I'm curious because it seems highly relevant to, e.g., the question of whether organisms with narrower welfare ranges (EDIT: lower resolution) could feel extreme pain, and how reliably we can estimate the probability of this, which in turn matters for how precise our moral weight estimates can be (see #3 in this informal research agenda).
> I agree that in genuinely catastrophic situations, evolution should tolerate very “loud” alarms. The open question, though, is whether those alarms need to be implemented as extreme affective states, rather than through non-affective or lower-intensity control mechanisms.
I was assuming they do not need to be, but might appear and remain anyway if they have no significant downside, like humans' protruding chins. How loud the alarm is beyond the "loud enough" point would then just be a matter of luck.[1] Both just-loud-enough alarms and unnecessarily loud ones would be about equally effective, and so equally likely, all else equal. How plausible do you think this is?
Sorry for not being clear in my first comment, and thanks for helping pin down the crux!
Brilliant work — thanks for sharing.
On the costs of high-intensity affective states, which you suggest are high (such that we'd need a special explanation for why they exist):
> In affective neuroscience, core emotions are understood as whole-organism control states that recruit neuroendocrine, autonomic, and motivational systems, reorganizing behavior and physiology in ways that may be adaptive in the short term but biologically consequential if prolonged (Panksepp, 1998; McEwen, 2007). Prolonged or poorly regulated aversive states can interfere with feeding, reproduction, immune function, and behavioral flexibility, and may increase vulnerability to maladaptive stress responses. Empirical studies across taxa show that sustained aversive states are associated with measurable physiological and fitness trade-offs, including reduced growth and altered reproductive behavior (Sneddon et al., 2014). While such findings do not establish the evolutionary costs of extreme pain directly, they support the broader conclusion that escalating affective intensity is unlikely to be biologically neutral. From this perspective, high-intensity pain is not a cost-free refinement of sentience. Even if the neural machinery required to generate affect is relatively inexpensive, maintaining access to extreme affective intensities possibly represents an additional evolutionary investment whose persistence depends on whether, in a given life history and ecological context, its marginal adaptive benefits plausibly outweigh its marginal biological and modulatory costs.
In the highest-stakes situations from the point of view of evolution (possibly imminent death or failure to procreate), aren't these costs of extreme states too negligible to make any non-trivial difference? The more the replication of the species is on the line, the more it's fine if the alarm is unnecessarily loud.
> Even if we grant that punishment is more effective than positive reward in shaping behavior, what about the consideration that once the animal learns, it'll avoid situations where it gets punished, but it actively seeks out (and gets better at) obtaining positive reward?
Fair point, though then:
> But overall I'd be pretty hesitant to give much weight to theoretical arguments of this sort, especially since you can sometimes think of counterconsiderations like the one above.
In absolute terms, fair. I'm just skeptical that judgment calls on net welfare after empirically studying the lives of wild animals are any better. If there's a logical or evolutionary reason to expect X, this seems like a stronger reason for X than "we've looked at what some wild animals commonly experience and we feel like what we see means X."
Maybe stronger does not mean strong in absolute terms, though. But then, the conclusion would not be that we shouldn't update much based on theoretical arguments of this sort, but that there is no evidence we can find (whether theoretical or empirical) on which we could base significant updates.
And as a possible counterpoint to the premise, I remember this review of a book on parenting and animal training, which says that training animals with a focus on positive reward (while also trying not to reward undesired behavior) works best. That's a different context from evolution's, though.
Interesting, I'll look into this. Thanks!
> I think we’d get the best sense of net wild animal welfare not from abstract arguments but by studying individual animals up close.
Tangential to your main point, but I'm actually not sure about this. For example, O'Brien (2021, §2.4) makes a theoretical argument according to which suffering is indirectly selected for by evolution. In this paper draft (sec. 5), I make a similar argument, defending the view that suffering is directly selected for (although it is partly based on the empirical finding that punishment is often more effective than reward at motivating behaviors). I think this kind of "suffering is a feature, not a bug" argument might in fact be more robust than the impressions we get from looking at empirical evidence,[1] although I'd need to think more about this.
Interesting post, Lukas! Thanks for writing this :)
Related thoughts (on the potential strength of logical and evolutionary arguments, in particular) in the second paragraph of this section.
I'd be curious to have your take (and anyone else's) on the following.
Say you have a friend who is buying and reselling items. She offers you the following deal A:
She also offers you deal B, where she gives you $1, and that's it.
You want to maximize your money in a risk-neutral way, and value money linearly, here. Also, assume we have a theory of bracketing that overcomes these two problems in a way that makes bracketing recommend deal A.
Still, it is not clear whether you should follow bracketing, happily take the $20, and ignore the rest. Maybe you should prefer the robustly good deal B, even though this means you have to accept forgoing transformative changes... I feel conflicted here.
Thoughts? What are your intuitions in this case? And do you think our real-world situation with animals is disanalogous in a crucial way?
(Emphases are mine.)
Is the bandwidth-acuity distinction the same as the range-resolution one in Alonso & Schuck-Paim (2025)?