We just published an interview: Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.

- Bob Fischer

In today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.

They cover:

  • The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.
  • Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.
  • The results that most surprised Bob.
  • Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.
  • Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.
  • Confronting our own biases when estimating animal mental capacities and moral worth.
  • The limitations of using neuron counts as a proxy for moral weights.
  • How different types of risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Highlights

Using neuron counts as a proxy for sentience

Luisa Rodriguez: A colleague of yours at Rethink Priorities has written this report on why neuron counts aren’t actually a good proxy for what we care about here. Can you give a quick summary of why they think that?

Bob Fischer: Sure. There are two things to say. One is that it isn’t totally crazy to use neuron counts. And one way of seeing why you might think it’s not totally crazy is to think about the kinds of proxies that economists have used when trying to estimate human welfare. Economists have for a long time used income as a proxy for human welfare. You might say that we know that there are all these ways in which that fails as a proxy — and the right response from the economist is something like, do you have anything better? Where there’s actually data, and where we can answer at least some of these high-level questions that we care about? Or at least make progress on the high-level questions that we care about relative to baseline?

And I think that way of thinking about what neuron-count-based proxies are is the charitable interpretation. It’s just like income in welfare economics: imperfect, but maybe the best we can do in certain circumstances.

That being said, the main problem is that several features of neuron counts make them problematic as a proxy. One is that neuron counts alone are really sensitive to body size, so that’s going to be a confounding factor. It also seems like, insofar as neuron count tracks much of anything, it might be tracking something like intelligence — and it’s not totally obvious why intelligence is morally important. At least in the human case, we often think that it’s not important, and in fact, it’s a really pernicious thing to make intelligence the metric by which we assess moral value.

And then, even if you think that neuron counts are proxies of some quality for something else, like the intensity of pain states or something… It’s not clear that that’s true, but even if that were true, you’d still have to ask, can we do any better? And it’s not obvious that we can’t do better. Not obvious that we can, but we should at least try.

Luisa Rodriguez: Yes. Makes sense. Are there any helpful thought experiments there? It doesn’t seem at all insane to me — though maybe you wouldn’t expect it to happen on its own through evolution — that there would be a being who has many fewer neurons than I do, but that those neurons are primarily directed at going from extreme pain to extreme something like euphoria. It doesn’t seem like there’s a good reason that’s not possible, and that that extreme pain could just be much more than the total amount of pain I could possibly feel. Even though the types of pain might be different for me, because I’ve got different kinds of capacities for sadness and shame and embarrassment, like a wider variety of types of pain, it still seems at least theoretically possible that you could house a bunch of pain in a small brain. And that feels like good reason to me to basically do what you’ve done, which is look for better ways than neurons alone.

Bob Fischer: Sure. And some evolutionary biologists have basically said things along these lines. Richard Dawkins actually has this line at some point, where he says maybe simpler organisms actually need stronger pain signals because they don’t learn as much as we do and they don’t remember all these facts, so they need big alarm bells to keep them away from fitness-reducing threats. So it’s always possible that you have a complete inversion of the relationship that people imagine, and you want to make sure that your model captures that.

How to think about the moral weight of a chicken

Luisa Rodriguez: Cool. Just to make sure I understand: this is saying that the capacity for welfare or suffering of a chicken in a given instant is about a third of the capacity for the kind of pain and pleasure a human could experience in a given instant. Is that it?

Bob Fischer: That’s the way to think about it. And that might sound very counterintuitive, and I understand that. I think there are a couple of things we can say to help get us in the right frame of mind for thinking about these results.

One is to think about it first like a biologist. If you think that humans’ pain is orders of magnitude worse than the pain of a chicken, you’ve got to point to some feature of human brains that’s going to explain why that would be the case. And I think for a lot of folks, they have a kind of simple picture — where they say more neurons equals more compute equals orders of magnitude difference in performance, or something like that.

And biologists are not going to think that way. They’re going to say, look, neurons produce certain functions, and the number of neurons isn’t necessarily that important to the function: you might achieve the exact same function using many more or many fewer neurons. So that’s just not the really interesting, relevant thing. So that’s the first step: just to try to think more like a biologist who’s focused on functional capacities.

The second thing to say is just that you’ve got to remember what hedonism says. What’s going on here is we’re assuming that welfare is about just this one narrow thing: the intensities of pleasures and pains. You might not think that’s true; you might think welfare is about whether I know important facts about the world or whatever else, right? But that’s not what I’m assessing; I’m just looking at this question of how intense is the pain.

And you might also point out, quite rightly, “But look, my cognitive life is richer. I have a more diverse range of negatively valenced states.” And I’m going to say that I don’t care about the range; I care about the intensity, right? That’s what hedonism says: what matters is how intense the pains are. So yeah, “I’m very disappointed because…” — choose unhappy event of your preference — “…my favourite team lost,” whatever the case may be. And from the perspective of hedonism, what matters about that is just how sad did it make me? Not the content of the experience, but the intensity of the negatively valenced state that I’m experiencing. So I think people often implicitly confuse variety in the range of valenced states with intensity.

If these moral weights make you uncomfortable…

Bob Fischer: I think the thing that helps me to some degree is to say, look, we’re doing our best here under moral uncertainty. I think you should update in the direction of animals based on this kind of work if you’ve never taken animals particularly seriously before.

But ethics is hard. There are lots of big questions to ask. I don’t know if hedonism is true. I mean, there are good arguments for it; there are good arguments for all the assumptions that go into the project. But yeah, I’m uncertain at every step, and some kind of higher-level caution about the entire venture is appropriate. And if you look at the way people actually allocate their dollars, they often do spread their bets in precisely this way. Even if they’re really in on animals, they’re still giving some money to AMF. And that makes sense, because we want to make sure that we end up doing some good in the world, and that’s a way of doing that.

Luisa Rodriguez: I guess I’m curious if there’s anything you learned, like a narrative or story that you have that makes this feel more plausible to you? You’ve already said some things, but what story do you have in your head that makes you feel comfortable being like, “Yes, I actually want to use these moral weights when deciding how to allocate resources”?

Bob Fischer: There are two things that I want to say about that. One is I really worry about my own deep biases, and part of the reason that I’m willing to be part of the EA project is because I think that, at its best, it’s an attempt to say, “Yeah, my gut’s wrong. I shouldn’t trust it. I should take the math more seriously. I should try to put numbers on things and calculate. And when I’m uncomfortable with the results, I’m typically the problem, and not the process that I used.” So that’s one thing. It’s a check on my own tendency to discount animals, even as someone who spends most of their life working on animals. So I think that’s one piece.

The other thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics.

Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered.

The moral weight of bees

Luisa Rodriguez: Yeah, and I was going to do something you probably wouldn’t like, which is do the math and say that if I’ve got train tracks with a human on one side, that means putting 14 bees on the other side. And obviously that’s not taking into account the length of their lives, so that actually isn’t the kind of moral outcome you’d endorse. But trading off an hour of suffering for those two groups feels even more uncomfortable to me. And it sounds like the thing you’d actually stand by is not this kind of 1-to-14, 7% figure, but something like 1-to-100, a couple of orders of magnitude. And even that, I’m still like, “A hundred bees?!” I like bees, but wow.
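
For reference, the arithmetic behind those two figures is just the reciprocal of the assumed welfare-range ratio (both ratios here are the conversation’s illustrative numbers, not precise estimates):

```latex
\frac{1}{0.07} \approx 14 \text{ bees per human}
\qquad
\frac{1}{0.01} = 100 \text{ bees per human}
```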

Bob Fischer: Sure, totally. Again, a couple of things to say. One is that I do think size bias is real. Imagine if bees were the size of rhinos, and you never had to worry about getting stung. You’d probably be pretty into bees all of a sudden. I think we are just affected by the fact that they’re little and they feel very replaceable, we can’t really observe their behaviours, et cetera. So that’s one thing to say.

Luisa Rodriguez: Interesting. OK, so just kind of imagine a really big, fluffy bumblebee buzzing around, being adorable, not stinging you. Yeah, fair enough. I feel like I’d be like, “That thing is cute and important and I’ve gotta protect it.”

Bob Fischer: I know it’s an uncomfortable fact about human psychology that we care about all the wrong things. But anyway, that’s one thing to say.

Second thing to say is that, again, the welfare range estimate is a factor here. The background commitment to something like utilitarianism or welfarist consequentialism, that’s doing a lot of the work. We’re just committed to aggregation if we’re doing this kind of thing, and there’s going to be some number of bees where you’re supposed to flip the lever and kill the human — and that, again, might just make you uncomfortable. If it does, that’s not my fault. That’s a function of the moral theory, not a function of the welfare range estimate.

And the third thing to say is: I do think it’s really important just to learn more about these animals. And of course, bees in particular are very charismatic and cute. And you could go and watch Lars Chittka, who’s a bee scientist, and he’s got these lovely little videos of bees playing and rolling balls around, and it’s adorable. And of course, you can feel lots of sympathy for bees if you watch those kinds of things.

But for me, those actually are not the most interesting cases and compelling cases. For me, it’s the fact that when you look at Drosophila — fruit flies, closely related to black soldier flies — they’re used as depression models for studying humans, as pain models. And you read all these papers, there are a million of them, and they will say, “It’s amazing how similar the neurology of these organisms is to humans! They’re such a perfect model, or such a useful model for understanding these aspects of the most excruciating and terrible human experiences. Isn’t it great that what we can now do is starve them and put them through sleeplessness and all kinds of things, and guess what? They get depressed. And that’s such a great way of studying these horrible symptoms, some of the worst symptoms that humans ever experience.”

So when you see that kind of thing in the literature, and you see the excitement of researchers thinking about these organisms as ways of understanding humans, and seeing the behavioural implications for these organisms, you start to think, man, there’s something going on in that creature that should make me really uncomfortable.

Do octopuses have nine minds?

Bob Fischer: There are actually a lot of animals where it isn’t clear that they have the same kind of unified minds that we think of ourselves as having — open question whether humans have the same kind of unified minds that we think of ourselves as having. But you don’t have the same structures in birds, for instance, between the hemispheres. And you have this kind of distributed cognition, apparently, in octopuses. And you might think, does that mean that you don’t just have one welfare subject in a chicken? Maybe you’ve got two, one for each hemisphere. Or maybe you don’t have one welfare subject, one entity of moral concern, in an octopus: maybe you’ve got nine. And of course, that would really change the moral weight that you assigned to that organism. So that’s why we investigated.

Then the upshot is we basically don’t think you should say that. The short reason is that you want to think functionally about the mind: you want to think about what the overall organism is accomplishing with the abilities it has. And we should assume by default that these organisms evolved to be coordinated systems that are trying to accomplish collective ends in the world. And I say “collective” as though we’re thinking about the parts as separate individuals — but of course, that’s exactly what’s being contested. And the idea is, once we think about things that way, it becomes a lot less plausible that there’d be an evolutionary reason for an organism to work in a way that involves multiple minds. And we think that the empirical evidence itself just isn’t good enough to overcome that fundamental default hypothesis.

Luisa Rodriguez: Can you clarify why we think octopuses might have multiple minds?

Bob Fischer: Yeah. So in the case of octopuses, part of it is just the concentration, the sheer number of neurons in the arms. Part of it is behavioural — if you watch videos of octopuses, you can see examples of this — where it’ll look like arms are kind of operating independently, roving and exploring and reaching out on their own, and they don’t seem to be coordinated with what’s happening with the main attention of the organism, and they’re still off doing their own thing. And that gives people pause and they start to think, is it the case that all these neurons are acting in some semi-coordinated way, independently of what’s happening at the main site of cognition?

And people have written some speculative pieces about this. Of course it’s very hard to test. Of course many of the tests would be horrible. Lots of reasons to think this is either a difficult question to answer, or one that, insofar as we could answer it, perhaps we should not try. But it just looks like it would be really hard to show that that was the case, rather than some more modest hypothesis about the ability to sense more thoroughly through the appendage or whatever.

Luisa Rodriguez: Right. And why is the theory that it’s multiple minds rather than something like multitasking? I feel like you could make a similar observation about me when I’m cooking, and also having a conversation, and also… I don’t know, maybe that’s the most I can do at once. But something like my hands are doing something while it seems like I’m thoroughly mentally engaged in something else?

Bob Fischer: Well, maybe it’s easier as a case study to think about birds. Where in humans, when you sever the corpus callosum — the thing that connects the two hemispheres — you do get these gaps where the hemispheres seem to operate in an uncoordinated way. And you have people report that they don’t know what the other hemisphere ostensibly seems to know, based on the behaviour that they’re engaging in. And so if you don’t have that structure in a bird, you might then wonder, is what you have here effectively a split-brain patient all the time? And then there are these interesting cases, like dolphins, where they seem to have one hemisphere able to go into a sleep mode while the other hemisphere is awake.

And seeing those kinds of things in other species can make you wonder whether there’s just a very different organisational principle at work in those minds. So if that’s your background context, then seeing this really distributed set of neurons in an octopus, and then seeing the behaviour that looks not entirely coordinated, et cetera, can motivate the idea of multiple minds. But admittedly, it’s speculative stuff, a really complicated set of questions. The work on that is in early days, so it’s not like I think there’s some super strong case that one might have had for that view.

What might you care about if you’re risk averse?

Bob Fischer: If you’re just comparing humans, chickens, and shrimp, and you’re a straight expected value maximiser, well of course shrimp win because there are trillions of them, so even given very small moral weights for shrimp, they just dominate.

Now suppose that you are worst-case scenario risk averse. Well, now the case for shrimp looks even better than it did before if you were a straight expected value maximiser.

But then suppose you go in for one of those other two forms of risk aversion: you’re worried about difference making, or you don’t really like ambiguity. Well, those penalise shrimp, and maybe quite a lot — to the point where the human causes look a lot better than the shrimp causes.

The really interesting thing is that chickens actually look really good across the various types of risk aversion. So if you’re a straight expected value maximiser, the shrimp beat the chickens. But once you’ve got one of those other kinds of risk aversion in play — you’re worried about difference making or ambiguity — actually chickens look really good, and they even beat the human causes. And the reason for that is really simple: it’s just that there are so many chickens and we think they probably matter a fair amount.

Luisa Rodriguez: Yeah, interesting. So it’s like if you have some moderate amount of risk aversion, you might think intuitively that that’s going to rule out the animal welfare interventions. And in fact, that’s just not what happens.

How sensitive are these results, broadly? Maybe the best way to answer that question is: how close are chicken interventions to being beaten out by one of these other two? Is it possible that someone could have slightly different beliefs to Rethink and all of a sudden humans end up looking much better? Or would you have to have really different beliefs to Rethink?

Bob Fischer: You’d have to have quite different views. So Laura Duffy did this wonderful report. She tries to answer this question by looking at different weights that you could have for chickens, and seeing how robustly good chickens end up being, even if you think that they matter a lot less than the Moral Weight Project suggests. And she found, yeah, you could think that they were an order of magnitude less important — so instead of 30% as important, 3% — and still end up with this result that they look really good. So that’s pretty striking.
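
To make the re-ranking concrete, here is a minimal sketch in Python. Every number in it (probabilities of sentience, moral weights, individuals helped, welfare gains, and the exponent in the toy risk adjustment) is invented for illustration; none are Rethink Priorities’ or Duffy’s actual figures, and the adjustment below only models the ambiguity/difference-making style of aversion, not the worst-case style that favours shrimp:

```python
# Illustrative sketch only: how risk attitudes can re-rank causes.
# Every number below is invented for illustration.

SPECIES = {
    # name: (P(sentient), moral weight if sentient,
    #        individuals helped per unit of money, welfare gain each)
    "humans":   (1.00, 1.00,         3_000, 0.2),
    "chickens": (0.90, 0.30,     1_000_000, 0.4),
    "shrimp":   (0.25, 0.03, 1_000_000_000, 0.1),
}

def expected_value(p, weight, n, gain):
    """Straight expected value in human-equivalent welfare units."""
    return p * weight * n * gain

def risk_adjusted(p, weight, n, gain, penalty=6):
    """Toy ambiguity/difference-making aversion: discount payoffs that
    hinge on an uncertain sentience claim, steeply so when the
    probability of sentience is low."""
    return expected_value(p, weight, n, gain) * p ** penalty

for name, params in SPECIES.items():
    print(f"{name:>8}:  EV = {expected_value(*params):>9,.0f}"
          f"   risk-adjusted = {risk_adjusted(*params):>8,.0f}")
# Under straight EV, shrimp dominate; once the risk adjustment is in
# play, chickens come out on top and shrimp fall below humans.

# A scan in the spirit of Duffy's sensitivity analysis: how far can
# the chicken moral weight fall before human causes win on EV?
p, _, n, gain = SPECIES["chickens"]
human_ev = expected_value(*SPECIES["humans"])
for w in (0.3, 0.03, 0.003, 0.001):
    winner = "chickens" if expected_value(p, w, n, gain) > human_ev else "humans"
    print(f"chicken weight {w:.3f} -> {winner} win")
```

The point is not the specific outputs but the pattern: straight EV favours shrimp, the toy risk adjustment flips the ranking so chickens beat both shrimp and humans, and the final scan shows chickens staying ahead of humans even at weights one or two orders of magnitude below 30%.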

What if you’re not a hedonist?

Luisa Rodriguez: It sounds like we’re both pretty sympathetic to hedonism. But let’s say someone doesn’t buy hedonism. Does that mean that the results of the Moral Weight Project in general aren’t particularly relevant to how they decide how to spend their career and their money?

Bob Fischer: Not at all, because any theory of welfare is going to give some weight to hedonic considerations, so you’re still going to learn something about what matters from this kind of project. The question is just: how much of the welfare range do you think the hedonic portion is? Do you think it’s a lot or a little? If you think it’s a lot, then maybe you’re learning a lot from this project; if you don’t, you’re learning less. But insofar as you’re sympathetic to hedonism at all, learning about the hedonic differences is going to matter for your cause prioritisation.

Luisa Rodriguez: Yeah. You have this thought experiment that you argue shows that non-hedonic goods and bads can’t contribute that much to an individual’s total welfare range, which you call, as a shorthand, Tortured Tim. Can you walk me through it? It’s a bit complicated, but I think it’s pretty interesting and worth doing.

Bob Fischer: Sure. Well, the core idea is not that complicated. The way to think about it is: just imagine someone whose life is going as well as it can be in all the non-hedonic ways. They’ve got tonnes of friends, they’ve had lots of achievements, they know all sorts of important things, et cetera, et cetera. But they’re being tortured, and they’re being tortured to the point where they’re in as much pain as they can be.

So now we can ask this question: is that life, is that individual, are they well off on balance, or is their life net negative? Is it, on the whole, bad, in that moment? And if you say it’s on the whole bad, then you’re saying, you could have all those great non-hedonic goods — all the knowledge and achievements and everything else — and they would not be enough to outweigh the intensity of that pain. So that suggests that having all those non-hedonic goods isn’t actually more important, isn’t a larger portion of the welfare range than the hedonic portion — and that kind of caps how much good you can be, in principle, getting from all the non-hedonic stuff.
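
To make the structure of that argument explicit, here is one compact gloss (the notation is ours, introduced only for illustration):

```latex
% Split momentary welfare into hedonic and non-hedonic components:
W = H + N, \qquad H \in [h_{\min}, h_{\max}], \qquad N \in [0, n_{\max}]

% Tortured Tim: worst possible pain plus every non-hedonic good.
W_{\text{Tim}} = h_{\min} + n_{\max}

% Judging Tim's life net negative in that moment yields a bound:
h_{\min} + n_{\max} < 0 \;\Longrightarrow\; n_{\max} < \lvert h_{\min} \rvert
```

On this reading, the non-hedonic portion of the welfare range is bounded by the hedonic portion, which is what stops you from saying the non-hedonic goods are, say, 100x more important than the hedonic stuff.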

Luisa Rodriguez: Part of me is like, yes, I buy that. I can’t imagine being tortured and still feeling like my love of knowledge and learning, and the fact that my family is out there and doing well, and I’ve got friends who care about me — I can’t really imagine having that outweigh the torture. On the other hand, is it insane to think that there are people being tortured who would prefer not to die, because they value life deeply and inherently, or because the knowledge that their family and friends and the world exist is worth existing for, despite immense pain? Maybe I’m just not thinking about enough pain. Maybe there is just some extreme level of pain that, because I’ve never experienced torture, I won’t be able to fully intuit.

Bob Fischer: Sure. So there are a couple of things to say. One is, as a direct response to your question: no, it’s not crazy. I mean, you can certainly imagine people who do have that view. I go to different universities and give talks about some of these issues, and I gave a talk about the Tortured Tim case at one university, and a guy just said, “This couldn’t be further from my view. It’s just obvious to me that Tim’s life is not just worth living, that it’s one of the better lives.”

And the second thing to say is that maybe there’s a problem in the thought experiment. Maybe it turns out that you can’t really have the objective goods when you’re being tortured. I mean, I don’t really think that’s that plausible, but you could imagine there being various philosophical moves that show that we’re missing something here in the details.

So maybe the takeaway is just: think about how valuable these non-hedonic goods are. Maybe you think they’re much more valuable than I suggest in that thought experiment, but at least maybe it provides a bound; at least maybe it challenges you to think that — at least given your views, the way you want to think about things — you shouldn’t say that the non-hedonic goods are like 100x more important than the hedonic stuff. And as long as you don’t say that, you’re still getting some information from our project about just how important chickens are.

Luisa Rodriguez: Yeah. When I try to make it really concrete, and actually step away from the thought experiment and think about chickens, and I’m like, OK, it does seem like chickens probably have less capacity for the range of experiences that I have. They’re not getting to learn mind-blowing stuff about philosophy the way I am. I am like, OK, but if in fact chickens, while being raised in factory farms, are regularly having their limbs broken, are sometimes starving, as soon as I’m like, if that’s anything like what that would be like for me — where you don’t have to assume anything about whether there’s also stuff about knowledge going on for chickens; it’s just like, if their pain threshold is anything like my pain threshold, that alone is I think basically getting me to the point where I’m like, yes, if I’m living in those conditions, it doesn’t matter that much to me whether I also, in theory, have the capacity for deep philosophical reasoning. And maybe that’s not the whole story here, but that’s the intuition this is trying to push. Does that feel right to you?

Bob Fischer: Yeah, I think something like that is correct. I would just get there via a slightly different route, and would say something like: think about the experience of dealing with children, and what it’s like to watch them be injured or to suffer. It’s intensely gripping and powerful, and they have very few capacities of the kind that we’re describing, and yet that suffering seems extraordinarily morally important. And when I try to look past the species boundary and say, oh look, this is suffering, and it’s intense and it’s acute, it’s powerful. Does it seem like it matters? It just seems that yeah, clearly it does. Clearly it does.


Comments

This was super informative for me, thank you.

I'm confused by this section of the interview:
"Well, the simplest toy example is just going to be, imagine that you have some assessment that says, I think chickens are in a really bad state in factory farms, and I think that if we move layer hens from battery cages into a cage-free environment, we make them 40% better off. And I think that after doing this whole project — whatever the details, we’re just going to make up the toy numbers — I think that chickens have one-tenth of the welfare range of humans.

So now we’ve got 40% change in the welfare for these chickens and we’ve got 10% of the welfare range, so we can multiply these through and say how much welfare you’d be getting in a human equivalent for that benefit to one individual. Then you multiply the number of individuals and you can figure out how much benefit in human units we would be getting."

It doesn't seem to me that this follows. Let's assume the "typical" welfare range for chickens is -10 to 10. Let's also assume that for humans it's -100 to 100.  This is how I interpret "chickens have 10% of the welfare range of the humans". Let's also assume moving from cage to cage-free eliminates 50% of the suffering. We still don't know whether that's a move from -10 to -5 or -6 to -3. We also don't know how to place QALYs within this welfare range. When we save a human, should we assume their welfare to be 100 throughout their life?

This also makes it even more crucial to provide a tight technical definition for welfare range so that scientists can place certain experiences within that range.

I suspect he meant something like an improvement of 40 percentage points along the normalized 0-100% scale, so with a scale of -10 to 10, this would be adding 8 to their welfare: 8=40%*(10-(-10)).

(Or it could just be 40 percentage points along the normalized negative part of the welfare scale, so +4 on a scale of -10 to 10.)
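
A minimal numeric sketch of those two readings (the -10..10 and -100..100 scales are the commenter's illustrative numbers, not estimates from the project):

```python
# Toy scales from the comment above: chickens span -10..10 and humans
# -100..100 on a shared welfare scale, so the chicken range is 10% of
# the human range.
chicken_lo, chicken_hi = -10, 10
human_lo, human_hi = -100, 100

chicken_range = chicken_hi - chicken_lo   # 20
human_range = human_hi - human_lo         # 200

# Reading 1: "40% better off" means 40% of the chicken's full range.
gain_full = 0.40 * chicken_range          # +8 on the shared scale

# Reading 2: 40% of only the negative half of the chicken's range.
gain_negative = 0.40 * (0 - chicken_lo)   # +4 on the shared scale

for label, gain in [("full range", gain_full), ("negative half", gain_negative)]:
    # Express the per-chicken benefit as a fraction of the human range,
    # i.e. the "human-equivalent" units the episode describes; multiply
    # by the number of chickens helped to get the total.
    print(f"{label}: +{gain:.0f} -> {gain / human_range:.0%} of a human welfare range")
```

The first reading reproduces the episode's multiply-through arithmetic (40% of the range times the 10% ratio gives 4% of a human welfare range per chicken), which suggests it is the intended one; but as the comment notes, where the chickens sit within that range before and after the intervention is still unspecified.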

Love this episode! I really appreciate all the dissemination of work on prioritizing animals - it really helps me understand the case for animals. I just wish someone would do something similar for x-risk. I feel like I understand really well why we should work on animal welfare, but I feel like I understand much less about why we should work on x-risk.

I am not saying I think x-risk is unimportant. I am just saying I feel like I defer a lot more on x-risk while for animal welfare I could to some degree explain why it is important to a non-EA friend of mine.
