Bio

www.jimbuhler.site

Also on LessWrong and Substack, with different essays.

Sequences

On the sign of X-risk reduction
On risks from malevolence
Cluelessness vs Longtermism
What values will control the Future?

Comments

The empirical evidence that shrimp have small brains detracts from this probability, but not by much.

I very much agree that pretty much whatever our prior should be, the available evidence does not justify substantially updating away from it. I'm just uncertain about what the prior should be (see below).

I think that conditioning on p(sentience) is sufficient to justify a non-negligible probability of similar levels of welfare range in the absence of any further empirical evidence.

Yeah, agreed that's the crux! :) I think you are applying a principle of indifference (POI) across welfare subjects (or have a significant credence in such a move, at least).[1] While I actually also have sympathy for something of the sort,[2] it is widely criticized in the literature on cluelessness and decision-making under uncertainty. Here's a list of challenges and possible responses taken from a rough paper draft of mine on this exact topic:

  • 1. Uncertainty about how to individuate welfare subjects within a "welfare-containing" entity, or within a larger welfare-containing entity that this one is part of → an important instance of the problem of the many. Research could help us individuate non-arbitrarily (see, e.g., Gottlieb 2022, Fischer et al. 2022, McIntyre forthcoming), but this research may face very similar challenges to the research on moral weights (regarding how much we can update away from whatever our prior is) and not bring us far.
    • But maybe biting the bullet and accepting some arbitrariness here is the least bad option we’ve got?
  • 2. Why apply POI at the level of welfare subjects or brains rather than at the level of, e.g., cells?
    • Maybe persons (i.e., welfare subjects) are themselves what is morally relevant rather than their experience moments (see, e.g., Bader 2022), but
      • we’d need a solution to the non-identity problem.
      • and an argument for why following our intuitions is fine here but not when it comes to moral weights.
  • 3. Why endorse any form of POI to start with? In a complex cluelessness context like the one we’re in when estimating moral weights,[3] the plausibility of POI is infamously contested (see, e.g., this, that, and refs therein). Hence, maybe we can’t use POI to justify a precise prior. Maybe we should favor an imprecise one, such as each non-human species = (0, X), where X = 1 or a bit higher (which would lead to agnosticism about whether many of the interspecies tradeoffs we make are justified).
    • However, to the extent that people want to reject such agnosticism (for whatever reason), even as an uninformed prior, they have to pick a precise-ish alternative prior. In this case, "everyone counts for (~)one" may be more advisable than the other options. (Wager on the possibility that we can apply POI).

A tl;dr from Claude that I like: ignorance about X's welfare range doesn't automatically justify treating X's welfare as if it equals human welfare — it might just justify suspending judgment. The move from "we don't know the ratio" to "assume the ratio is 1" needs much more justification.

  1. ^

    See also Dickens and Shepherd et al. (2023), who endorse this move.

  2. ^

    Especially as an alternative to defaulting to our intuitions or "invertebrates don't matter at all until proven otherwise".

  3. ^

    One could nitpick that there's technically no complex cluelessness if we're truly uninformed and ignore the (conflicting) evidence. But in that case, sure, maybe we can start with POI, but then we update towards agnosticism once we consider evidence, so the POI argument for giving everyone the same moral weight wouldn't work.
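The contrast between a precise indifference prior and an imprecise interval prior can be made concrete with a small sketch (hypothetical numbers, chosen only for illustration; none of these figures come from the comment above). Under POI, every welfare subject gets the same welfare range, so an expected-value comparison between species comes out determinate; under an interval prior like (0, X), the verdict can flip depending on which admissible value you evaluate, which is exactly the agnosticism described.

```python
# Sketch (hypothetical numbers): a precise indifference prior vs. an
# imprecise interval prior in an interspecies welfare comparison.

def total_welfare(n_individuals, welfare_range):
    """Welfare at stake = number of individuals x assumed welfare range."""
    return n_individuals * welfare_range

# Scenario: many simple animals vs. few complex ones (made-up counts).
n_shrimp, n_humans = 1_000_000, 1_000
human_range = 1.0

# 1) Principle of indifference: every welfare subject gets the same range.
poi_shrimp_range = 1.0
poi_verdict = (total_welfare(n_shrimp, poi_shrimp_range)
               > total_welfare(n_humans, human_range))
# With POI, the many simple animals dominate the comparison.

# 2) Imprecise prior: the shrimp range is anywhere in (0, X). Evaluate the
#    comparison at several admissible points instead of a single one.
X = 1.0
admissible_ranges = [0.000001, 0.01, 0.5, X]
verdicts = {r: total_welfare(n_shrimp, r) > total_welfare(n_humans, human_range)
            for r in admissible_ranges}
# If the verdict flips across admissible points, the imprecise prior
# delivers no determinate comparison: agnosticism.
determinate = len(set(verdicts.values())) == 1

print(poi_verdict)   # True: shrimp dominate under POI
print(verdicts)      # mixed True/False across the interval
print(determinate)   # False: the interval prior is silent
```

The point of the sketch is only that moving from a precise prior to an interval one changes the *kind* of output you get: a single verdict versus a set of verdicts that may disagree.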

I just had a naive realization. Say that sentience first appeared in two different simple creatures, independently, at the same time:

  • Dolores: She's just like her non-sentient siblings, except that she feels unnecessarily severe pain if she's about to die of starvation, although not severe in a way that would impair her ability to do what is necessary not to starve (otherwise, she would die, and it's her non-sentient siblings who would spread their genes).
  • Mildred: Same, except that her starving pain is milder, and that's enough to motivate her to lexically prioritize solving this problem, just like Dolores.

Judging by what you've written in the post and comments, you could give two different arguments for why Dolores would have lower fitness than Mildred:

  • 1. Dolores's pain would override everything else (e.g., she might be so focused on not starving that she forgets about drinking).
    • But this applies just as much to Mildred, no? No matter how mild her pain is, it will also override everything else if it's the only thing she feels. If she feels some pain when starving and nothing while thirsty, she might forget about drinking just the same. In pure isolation, how bad the pain is changes absolutely nothing in terms of fitness here, no?
  • 2. Dolores needs a more demanding biology than Mildred in order to feel something worse.
    • But how would we know this? Why would subjectively worse mean more demanding energy-wise? Why couldn't it just as well be the subtler, less bad affects that are more demanding?

What am I misunderstanding/missing?

it seems to me to be very unreasonable to be confident that simpler brains most likely have much smaller welfare ranges

I agree, and I absolutely did not mean to defend this. What I defend is that, in the absence of a good argument based on welfare ranges and not p(sentience), we don't know if the welfare range of simpler animals is below or above the bar above which their welfare would dominate over that of more complex animals (not that it is below!).

But you disagree with my a priori agnosticism because you think we should (roughly) stick to some precise-ish prior welfare ranges in the absence of significant evidence pointing one way or the other, correct? (And this prior would give simpler animals enough weight for them to likely dominate.) This would explain your disagreement with what you quote.[1] I was implicitly assuming that our prior should be an agnostic imprecise one that offers no action-guidance on its own.

  1. ^

    If that's not where the disagreement is, I don't see how "a presumption of a reasonable probability of a welfare range that is not too small and no significant evidence against it" does not count as "evidence of a welfare range that is not too insignificant." Maybe you're just worried my imprecise phrasing will, while technically correct, lead readers to set the bar too high?

Curious what motivated you to spend time assessing the impact of bird-safe glass on arthropods, specifically, then. Were you hoping to find that bird effects dominated, but found and shared the opposite, unsatisfying results? Or maybe you think "here's another example showing how indirect effects on tiny animals may dominate" and that this will convince some people to also prioritize (i) and (ii)? (People who were not convinced by your previous, largely overlapping posts but might be by this one?)

Is there any project you think may not impact arthropods and/or soil animals much more than whatever animals are targeted? I feel like exploring this would be far more insightful at this stage.


Most animals are wild animals, so the answer to this question should focus on them.

Even granting that the overwhelming majority are wild animals, this doesn't necessarily imply we should focus on them. We have to factor in the welfare difference between the two (welfare ranges and quality of life in practice).

Oh good, I have no objection then. Well played.

Are you setting aside wild animals?

this seems to me to imply a greater concern for anthropogenic harm than non-anthropogenic harm. Is that what you meant?

Oh no sorry, increased WAW welfare compared to the "natural" situation counts as impact too.

What I'm saying is: say you help 1 million wild animals out of many, or 1 million farmed animals out of fewer. You can't say the former is better because there are more wild animals. It doesn't matter how many there are. What matters is how many you help and how much. And there is an asymmetry here: farmed animals are probably 100% helped if humans are disempowered (the problem is totally fixed), whereas, even in the best-case scenario, empowered humans will be nowhere near totally fixing wild animal suffering. This asymmetry may compensate for the fact that there are many more wild animals to help.

Humans increasing or decreasing the number might be the largest impact

As in (D) is more plausible than (C) (in my typology)? I'd agree. Anyway, my argument holds independently of what people find more likely between (C) and (D).

For example, the regeneration of forest is actively opposed in much of Central Europe, because people have cultural ideas about what the landscape should look like. So there's a tension there between environmentalists and traditionalists, and I wouldn't say that the environmentalists are winning.

Oh, I didn't know that, thanks. There is, of course, still the question of the marginal impact WAW advocates would have in such debates, but helpful example!
