www.jimbuhler.site
Also on LessWrong and Substack, with different essays.
I assume the most important reason is that it is something that most people close to them do. Likewise, I think most people prioritise animals with a higher probability of sentience like chickens instead of shrimps because it is what most people close to them do.
Interesting. I think there's something to this analogy, though of course the social pressure to put your seatbelt on is far higher than the pressure to prioritize chickens over shrimp.
I guess [their motivation] has little to do with the actual probability of sentience of the animals in question.
Yeah, maybe they just rationalize their motivations with moral weight arguments while their real drive is something else (see Simler & Hanson 2018). And highlighting potential biases we have might be helpful. On the other hand, you may want to mainly stick to red-teaming the importance of p(sentience) as a potential crux (by, e.g., red-teaming Clatterbuck and Fischer) anyway, if that's the reason people give (even if it might not be their real motivation deep down). I generally find this to be the most productive approach. People rarely update just from noticing or being reminded of a bias they may have.
I guess most people see voting as fulfilling their duty to improve society.
That also seems part of the picture, yeah! And notice that this bolsters my broader point: it might not be about EV maximization, and there might be no inconsistency between voting and being difference-making risk-averse.
I wonder to what extent people donate to interventions targeting animals which are more likely to be sentient to boost the probability of increasing welfare. People routinely take actions which are super unlikely to actually matter
This position, which many animal advocates hold (even if only implicitly), was indeed rationalized/explained via difference-making risk aversion by Clatterbuck and Fischer (2025). And in that case, p(sentience), and moral weights more broadly, do seem important.
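To make that concrete, here's a toy sketch of how difference-making risk aversion can rationalize the chickens-over-shrimp choice (one simple formalization, not necessarily Clatterbuck and Fischer's exact one, and every number below is hypothetical): value an act at $r(p) \cdot D$, where $p$ is the probability the act makes a difference, $D$ is the size of that difference, and $r$ is a convex weighting function, e.g. $r(p) = p^2$. Say a chicken intervention makes a difference with $p = 0.8$ and a shrimp one with $p = 0.2$ (driven largely by p(sentience)), but the shrimp difference would be 10 times larger. EV maximization favors shrimp, since $0.2 \times 10D = 2D > 0.8 \times D$. The risk-averse weighting favors chickens, since $0.8^2 \times D = 0.64D > 0.2^2 \times 10D = 0.4D$. Hence p(sentience) doing real decision work under this view.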
I think it's very plausible people are inconsistent in how difference-making risk-averse they are for different things. However, let me play devil's advocate:
(Tangential, but I gather from the above that you think the following is not another example where MNB is sensitive to the individuation of normative views, and I'd like to understand why. No worries at all if you don't have the time to reply, though.)
Antonia found an intervention that reduces overall animal suffering in the near term, but she's not sure which is true between
Brian comes along and says he agrees with the above and subdivides L this way:
Antonia shares Brian's above best guesses and normative uncertainty. They both totally agree. The only difference is that Brian specified normative sub-views.
Now, say Nuutti joins the party, agrees with these two, but recategorizes things this way:
The MNB sceptic would say that Antonia's grouping L1–L3 together to form L is just as arbitrary as Nuutti's grouping L2, L3, and N together to form O.[1]
Is your response: The former seems less arbitrary because
With the consequentist-bracketing version of the individuation problem I present here, the bracketer can appeal to an "only value locations that have been identified can be bracketed in" principle. That saves them, provided the principle is sound. Here, it doesn't: the normative theories have been identified in both cases.
The idea that the unpleasantness of pain increases superlinearly with its intensity (i.e. an 8/10 on the pain scale is more than twice as bad as a 4/10).
Yeah... I wish we would just say that the 4 is actually lower than 4, i.e., have these scores directly track what you mean by "unpleasantness", since that is what we care about. But that's not how people use the /10 scale, unfortunately. And that's understandable: if they were using it that way, they would seldom say that they're suffering above a 1/10.[1]
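For concreteness, a toy rescaling (the exponent $k$ is purely illustrative, not an estimate): if unpleasantness grows superlinearly in reported intensity, say $U(I) = I^k$ with $k > 1$, then $U(8)/U(4) = 2^k > 2$, matching the quoted claim. With $k = 2$, an intensity-4 pain carries only $16/100$ of the maximal unpleasantness, i.e., a 1.6/10 on a scale that tracked unpleasantness directly, and an intensity-3 pain would fall below 1/10.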
And yes. When researchers/people assign welfare ranges, they think they're tracking "unpleasantness", but I also suspect they are actually tracking what you mean by "intensity" to a large extent, which may lead to very misguided cross-species welfare tradeoffs. I am extremely skeptical of the following counter-view you describe:
If a researcher judges an animal to be at 10% of its capacity, they simply mean 1/10 as bad as its worst state — there's no question about whether 100% is "really" 10x worse, because that's just what the numbers mean by construction.
Maybe that's what they mean, but I suspect their estimates are still deeply biased by the "unpleasantness"/"intensity" confusion.
To be clear, though, I don't want people to take away that we should care less about insects and shrimp. There are so many other considerations. If anything, this should make us less confident in precise-ish moral weight estimates (and maybe look for projects robust to this uncertainty).
That's a very important problem you raise! Thank you for this. :)
Great points from you here and from @Mia Fernyhough in another thread! What about in countries where animal advocacy is (almost) nonexistent and where the counterfactual is probably not cage-free, but no change at all? Curious what the two of you (and others) think. I know this does not address all the limitations you raise, but maybe the most crucial ones?
This [post-Keynesian economics] literature has produced not just a diagnosis of the [cluelessness] problem but a set of practical heuristics and institutional responses that could meaningfully supplement EA analysis in situations of deep uncertainty.
Fwiw, DiGiovanni and I argue that following such heuristics is not an appropriate response to the deep-uncertainty situation EAs (at least impartial ones) face. We don't directly respond to the literature you cite, but rather to the arguments found in the following refs, which you might find interesting: Thorstad & Mogensen 2020; Tomasik 2015; The Global Priorities Institute 2024, §§1.2.1 and 4.2.1; Grant & Quiggin 2013.
Thanks for posting this :)
Fwiw, one can very well agree that all pains are comparable in theory, but that the difference between a pinprick and genuine torture is so large that, in practice, the latter will often dominate. I find this harder to "debunk" than antiaggregationism.
Given our deep uncertainty on i) how many pinpricks outweigh torture and ii) moral weights and welfare ranges,[1] I certainly don't find it implausible that nematodes, shrimp, or even chickens have experiences that are too mild, relative to other beings, to dominate EV calculations, despite their high numbers and assuming aggregationism.[2]
So sure, maybe, in principle, there is some number of warmed-up nematodes that outweighs 1 trillion human-years of extreme torture. But this says nothing about the tradeoffs we can(not) make between humans and nematodes in the real world (see the toy numbers below).
Well, (i) matters here only insofar as it is relevant to (ii), but I thought I'd acknowledge it separately anyway.
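As a toy illustration of how huge numbers can still fail to dominate (every figure here is hypothetical, chosen only to make the arithmetic easy): take $N = 10^{21}$ nematodes, a sentience probability $p = 10^{-2}$, and a welfare range $w = 10^{-25}$ that of a human. The expected human-equivalent welfare at stake is then $N \cdot p \cdot w = 10^{21} \times 10^{-2} \times 10^{-25} = 10^{-6}$, i.e., nowhere near dominating. Nudge $w$ up a few orders of magnitude, though, and the conclusion flips, which is exactly the deep uncertainty in (ii).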
And you said things in this recent interview that suggest you agree. You seem to have moved away from your previous "nematodes (almost) surely dominate" view. Or did I miss something?
I recommend decreasing the uncertainty about how the individual (expected hedonistic) welfare per unit time of different organisms and digital systems compares with that of humans. In particular, I recommend supporting Rethink Priorities (RP) via restricted funding. [...] I committed donating 2 k$ to RP for them to scope out whatever projects they believe would decrease the most cost-effectively the uncertainty about how the individual welfare per unit time of different organisms and digital systems compares with that of humans.
Do you think "whatever projects RP believes would decrease the most cost-effectively the uncertainty" would address your reasons for uncertainty? I haven't yet taken the time to comprehend the details of your current views on hedonistic P(sentience)-adjusted welfare ranges, but I get the sense that your sources of uncertainty on this are not the same as RP's.
I'm worried everyone will just agree that this seems unlikely. That's a very high bar.
I think we don't care about whether it "values animal welfare". We care about what happens to animals. There are many very plausible worlds where these two are uncorrelated (just like in ours, where people value AW more highly than ever, yet things have never been worse for farmed animals, especially the smaller ones).
That's my favorite version, but I'm worried it invites everyone to just agree on "we should have some extra animal-focused work, anyway" and not red-team each other deeply enough.
So here's a minimal version I propose: AI safety work that helps humans also helps other animals, to some extent.
(The "to some extent" is optional. I added it to invite people to think about whether AIS helps other animals at all, rather than everyone just agreeing on the uncontroversial and boring claim that "AIS helps humans more than animals".)
I like this minimal formulation because
Thanks for asking us, Toby! Looking forward to this debate week :)