
I've had conversations with many EAs and EA-adjacent people who believe things about qualia that seem wrong to me. I've met one who assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a gradient of air experienced pain, because it's trying to get away from hotter air towards colder air. Some say that shrimp have pain receptors and clearly react to "pain"[1], just like humans, and try to avoid future "pain", so they must be experiencing pain, and we should care about their welfare. (A commenter says they think any visual information processing is qualia to some extent, even with neural networks[2].)

I think the way they're making these inferences is invalid. In this post, I'll try to explain why. I'll also suggest a direction for experiments that could produce valid evidence one way or the other.

Epistemic status: Having disentangled the models some people had, I'm relatively confident I see where many make invalid inferences as part of their worldviews. But I'm not a biologist, and this is not my area of expertise. A couple of people I talked to agreed that a suggested experiment could potentially resolve the crux.

I'm using the word "qualia" to point at subjective experience. I don't use the word "consciousness" because different people mean completely different things by it.

I tried to keep the post short while communicating the idea. I think this is an important conversation to have. I believe many in the community make flawed arguments and claim that animal features are evidence for consciousness, even though they aren't.

TL;DR: If a being can describe qualia, we know this is caused by qualia existing somewhere. So we can be pretty sure that humans have qualia. But when our brains identify emotions in things, they can think both humans and geometric shapes in cartoons are feeling something. I argue that when we look at humans and feel like they feel something, we know that this feeling is probably correct, because we can make a valid inference that humans have qualia (because they would talk about having conscious experiences). I further argue that when we look at non-human things, our circuits' recognition of feeling in others is no longer linked to a valid way of inferring that these others have qualia, and we need other evidence.

No zombies among humans

We are a collection of atoms interacting in ways that make us feel and make inferences. The level of neurons is likely the relevant level of abstraction: if the structure of neurons is approximately identical, but the atoms are different, we expect that inputs and outputs will probably be similar, which means that whatever determines the outputs runs on the level of neurons.

If you haven't read the Sequences, I highly recommend doing so. Stuff on zombies (example) is relevant here.

In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.

If a monkey you observe types perfect Shakespeare, you should suspect it's not doing that at random and someone who has access to Shakespeare is messing with the process. If every single monkey you observe types Shakespeare, you can be astronomically confident someone got copies of Shakespeare's writings into the system somehow.

Similarly, we can be pretty confident other people have qualia because other people talk about qualia. Hearing a description of having a subjective experience that matches ours is really strong evidence of outputs from qualia-circuits being in the causal tree of this description. If an LLM talks about qualia, either it has qualia or qualia somewhere else caused some texts to exist, and the LLM read those. When we hear someone talk about qualia, we can make a valid inference that this is caused by qualia existing or having existed in the past: it'd be surprising for such a strong match between our internal experience and the description we hear from others to occur at random, without being caused by their own internal experience.

In a world where no other things have qualia in a way that affects their actions, hearing about qualia only happens at random, and rarely. If you see everyone talking about qualia, this is astronomically strong evidence that qualia caused it.
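To make the shape of this inference concrete, here is a minimal sketch in Python. The probabilities are invented purely for illustration (the post gives no numbers), and reports are treated as independent only to keep the arithmetic simple; only the qualitative conclusion matters.

```python
# Toy Bayesian sketch of the argument above. All numbers are made up
# purely for illustration; only the qualitative conclusion matters.

def posterior_qualia(prior, p_report_if_qualia, p_report_if_no_qualia, n_reports):
    """P(qualia exist(ed) somewhere | n independent reports of qualia)."""
    likelihood_q = p_report_if_qualia ** n_reports
    likelihood_no_q = p_report_if_no_qualia ** n_reports
    evidence = prior * likelihood_q + (1 - prior) * likelihood_no_q
    return prior * likelihood_q / evidence

# Even a sceptical prior gets overwhelmed if each report would be very
# unlikely to occur "at random" in a qualia-free world.
print(posterior_qualia(prior=0.01,
                       p_report_if_qualia=0.9,
                       p_report_if_no_qualia=1e-6,
                       n_reports=10))  # ~1.0
```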

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

Furthermore, note that lots of stuff that happens in human brains isn't transparent to us at all. We experience many things after the brain processes them. Experiments demonstrated that our brains can make decisions seconds before we experience making these decisions[3].

When we see humans having reactions that we can interpret as painful, we can be confident that they, indeed, experience that pain: we've had strong reasons to believe they have qualia, so we expect information about pain to be input to their qualia.

Reinforcement learning

We experience pain and pleasure when certain processes happen in our brains. Many of these processes are there for reinforcement learning. Having reactions to positive and negative rewards in ways that make the brain more likely to get positive rewards in the future and less likely to get negative rewards in the future is a really useful mechanism that evolution came up with. These mechanisms of reacting to rewards don't require the qualia circuits. They happen even if you train simple neural networks with reinforcement learning: they learn to pursue what gives positive rewards and avoid what gives negative rewards. They can even learn to react to reward signals in-episode: to avoid what gives negative reward after receiving information about the reward without updating the neural network weights. It is extremely useful, from an evolutionary angle, to react to rewards. Having something that experiences information about these rewards wouldn't help the update procedure. For subjective experience to be helpful, the outputs of circuits that run it must play some beneficial role.
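As a minimal illustration of the claim that reward-driven avoidance needs no qualia circuitry, here is a sketch of a tabular learner on an invented two-action task; the action names and reward values are assumptions made up for the example, and nothing in the code corresponds to experience, only to value updates.

```python
# A minimal sketch (invented two-action task): a reward-driven update rule
# learns to avoid the "painful" option without any component that could
# plausibly be a qualia circuit.
import random

q_values = {"touch_hot_surface": 0.0, "stay_away": 0.0}
rewards = {"touch_hot_surface": -1.0, "stay_away": 0.1}
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(q_values))    # occasionally explore
    else:
        action = max(q_values, key=q_values.get)  # otherwise pick the best-valued action
    # Move the chosen action's value towards the reward it produced.
    q_values[action] += alpha * (rewards[action] - q_values[action])

print(q_values)  # "touch_hot_surface" ends up strongly disfavoured
```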

What if someone doesn't talk about qualia?

Having observed many humans being able to talk about qualia, we can strongly suspect that it is a universal property of humans. We suspect that any human, when asked, would talk about qualia. We expect that even if someone can't (e.g., they can't talk at all) but we ask them in writing or restore their ability to respond, they'd talk about qualia. This is probabilistic but strong evidence and valid inference.

It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.

It is extremely easy for us to anthropomorphize everything. We can see a cartoon about geometric shapes and feel like these shapes must be experiencing something. A significant portion of our brain is devoted to that sort of thing.

When we interpret other humans as feeling something (when we see their reactions or events happening to them, imagine what it must be like to be them, feel something we think they must be feeling, and infer there's something they're feeling in that moment), our neural circuits make an implicit assumption that other people have qualia. This assumption happens to be correct: we can infer in a valid way that the neural circuits of other humans run subjective experiences, because they output words about qualia, and we wouldn't expect the similarity between what we see in ourselves when we reflect and what we hear from other humans to arise by coincidence, in the absence of qualia existing elsewhere.

So, we strongly expect things happening to people to be processed and then experienced by the qualia circuits in the brains of these people. And when we see a person's reaction to something, our brains think this person experiences that reaction and this is a correct thought.

But when we see animals that don't talk about qualia, we can no longer consciously make direct and strong inferences, the way we can with humans. Looking at a human reacting to something and inferring this reaction is to something experienced works because we know they'd talk about having subjective experience if asked; looking at an animal reacting to something and making the same inference they're experiencing what they've reacted to is invalid, as we don't know they're experiencing anything in the first place. Our neural circuits still recognise emotion in animals like they do in humans, but it is no longer tied to a valid way of inferring that there must be an experience of this emotion. In the future (if other problems don't prevent us from solving this one), we could figure out how qualia actually works, and then scan brains and see whether there are circuits implementing it or not. But currently, we have to rely on indirect evidence. We can make theories about the evolutionary reasons for qualia to exist in humans and about how it works and then look for signs that:

  • evolutionary reasons for the appearance of subjective experience existed in some animal species' evolution,
  • something related to the role we think qualia plays is currently demonstrated by that species, or
  • something that we think could be a part of how qualia works exists in that species.

I haven't thought about this long enough, but I'm not sure there's anything outside of these categories that can be valid evidence for qualia existing in animals that can't express having subjective experiences.

To summarise: when we see animals reacting to something, our brains rush to expect there's something experiencing that reaction in these animals, and we feel like these animals are experiencing something. But actually, we don’t know whether there are neural circuits running qualia in these animals at all, and so we don’t know whether whatever reactions we observe are experienced by some circuits. The feeling that animals are experiencing something doesn't point towards evidence that they're actually experiencing something.

So, what do we do?

Conduct experiments that'd provide valid evidence

After a conversation with an EA about this, they asked me to come up with an experiment that would provide valid evidence for whether fish have qualia.

After a couple of minutes of thinking, the first thing I came up with was an experiment that I thought might give evidence for whether fish feel empathy (feel what they model others feeling), something I expect to be correlated with qualia[4]:

Find a fish such that you can scan its brain while showing it stuff. Scan its brain while showing it:

  • Nothing or something random
  • Its own kids
  • A fish of another species with its kids
  • Just the kids of another fish species

See which circuits activate when the fish sees its own kids. If they activate more when it sees a fish of another species with its kids than when it sees just the kids of another fish species, it's evidence that the fish has empathy towards other fish parents: it feels some parental feelings when it sees its own children, and it feels more of them when it sees another parent (who it processes as having these feelings) with children than when it sees just that parent's children.
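A rough sketch of the comparison this experiment is after, under the assumption that some candidate "parental" circuit has already been identified from the own-kids condition; the activation numbers below are placeholders invented for illustration, not data from any real study.

```python
# Hypothetical per-trial activation of a candidate "parental" circuit in the
# four conditions listed above. These numbers are placeholders, not data.
import numpy as np

activation = {
    "baseline":               np.array([0.11, 0.09, 0.10, 0.12]),
    "own_kids":               np.array([0.55, 0.61, 0.58, 0.60]),
    "other_parent_with_kids": np.array([0.30, 0.28, 0.33, 0.31]),
    "other_kids_alone":       np.array([0.14, 0.12, 0.13, 0.15]),
}

means = {condition: a.mean() for condition, a in activation.items()}
print(means)

# The empathy-consistent pattern described in the post: the circuit responds
# more to another parent with its kids than to those kids alone.
print("empathy-consistent:",
      means["other_parent_with_kids"] > means["other_kids_alone"])
```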

A couple of EAs were happy to bet 1:1 that this experiment would show that fish have empathy. I'm more than happy to bet this experiment would show fish don't have empathy (and stop eating fish that this experiment shows to possess empathy). 

I think there are some problems with this experiment, but I think it might be possible to design actually good experiments in this direction and potentially stop wasting resources on improving lives that don't need improving. 

Reflect and update

I hope some people would update and, by default, not consider that things they don't expect to talk about qualia can have qualia. If a dog reacts to something in a really cute way, remember that humans have selected its ancestors for being easy to feel empathy towards. Dogs could be zombies and not feel anything, having only reactions caused by reinforcement learning mechanisms and programmed into them by evolution shaped by humans; you need actual evidence, not just a feeling that they feel something, to think they feel something.

Personally, I certainly wouldn't eat anything that passes the mirror test, as it seems to me to be pointing at something related to why and how I think qualia appears in evolution. I currently don't eat most animals (including all mammals and birds), as I'm uncertain enough about many of them. I eat fish and shrimp (though not octopuses): I think the evolutionary reasons for qualia didn't exist in the evolution of fish, I strongly expect experiments to show fish have no empathy, etc., and so I'm certain there's no actual suffering in shrimp, it's OK to eat them, and the efforts directed at shrimp welfare could be directed elsewhere with greater impact.

  1. ^

    See, e.g., the research conducted by Rethink Priorities.

  2. ^

    `I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"`

    The comment: EA Forum, LW. My reply: EA Forum, LW.

  3. ^

    I think there are better versions of Libet's experiment, e.g., maybe this one (paywalled)

  4. ^

    It's possible to model stuff about others by reusing circuits for modelling stuff about yourself without having experience; and it's also possible to have experience without modelling others similarly to yourself; but I expect the evolutionary role of qualia and things associated with subjective experience to potentially correlate with empathy, so I'd be surprised if an experiment like that showed that fish have empathy, and it'd be enough evidence for me to stop eating it.

Comments

This post seems to take the weakest argument for sentience (or qualia as you put it) as understood by a layperson in casual conversation. I'll use sentience/qualia interchangeably in this response, but please let me know if you understand them differently.

Please let me know if I understand your argument correctly: 

  • You believe the current focus on invertebrate (including shrimp) welfare is based on a flawed inference of sentience, specifically on shallow behavioral observations, presence of pain receptors, and natural human tendencies towards anthropomorphizing everything. 
  • You would like these criteria to be more considered:
    • evolutionary reasons for the appearance of subjective experience existed in some animal species' evolution,
    • something related to the role we think qualia plays is currently demonstrated by that species, or
    • something that we think could be a part of how qualia works exists in that species.
  • You think that being able to communicate details about one's qualia is the ultimate standard for inclusion in the group of qualia possessing species. 
  • You wouldn't eat anything that passes the mirror test
  • Based on your perception that there is a lack of evidence for shrimp possessing qualia, you are recommending to readers that it is "OK to eat shrimp."

 

Supposing this is what you are trying to say, I'd like to bring up some counterpoints:

Direct evidence that individuals of these taxa exhibit features which, according to expert agreement, seem to be necessary –although not sufficient– for consciousness (Bateson, 1991; Broom, 2013; EFSA, 2005; Elwood, 2011; Fiorito, 1986; Sneddon et al., 2014; Sneddon, 2017). These features are:

  • Neuroanatomical structures and physiological functions, such as nociceptors or equivalent structures, centralized information processing, vertebrate midbrain-like function, and physiological responses to nociception or handling. Additionally, it is expected that conscious individuals have opioid-like receptors and analgesics reduce their nociceptive reflexes and avoidant behaviors;
  • Behavioral responses that are potential indicators of pain experience, such as defensive behavior or fighting back, and moving away from noxious stimuli. These reactions seem to take into account a noxious stimulus’ intensity and direction. Other observed behaviors include pain relief learning, and long-term behavior alteration to avoid a noxious stimulus."
  • The mirror test is classically designed for capturing human-like behaviors. In a new format that was designed for natural behaviors of roosters, they actually did pass the mirror test.
  • You said, "we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia."
    Couple points about this:
    • I don't know of any scientific research that states that the presence of pain receptors is sufficient for possession of qualia. Generally, the more sentience indicators found, the higher the assigned probability of sentience. 
    • If we were in the age where we didn't have tools for cross-language comprehension, then this reasoning would support inferring that Japanese-only-speaking people don't understand the subject matter of a test written in English if they are unable to give satisfactory answers in English.
    • Like the example of the rooster experiment above illustrates, people have historically done a poor job trying to understand which communication signals to look for from other species when designing experiments. However, animal communication is a field that is advancing and can be thought of similarly to the development of cross language comprehension across different human groups.
  • There is a precedent set to avoid assuming individuals can't experience pain just because they cannot communicate it to the high standards we set. Well into the 1980s, many surgeons believed babies could not feel pain and so they rarely used anesthetics in surgery. They attributed the babies' screaming and writhing to “just reflexes”. And even though we still can’t definitively prove babies feel pain, most medical professionals will use anesthetics in surgery because there is evidence of other indicators that they do. Unless it is a huge personal sacrifice to quality of life to not eat fish/shrimp, why not just go with the "better safe than sorry" approach of not eating them until you are more certain about their sentience?
  • I see the evidence base for invertebrate sentience growing all the time (see further reading links below). Recently, the evidence was even sufficient for invertebrate sentience to be recognized by law. Based on your post, it does not seem like you have done a thorough literature review on it. It seems like you have judged the entire base of evidence on conversations with EAs that are not formally working on sentience research. Because of this, I think that the title and conclusion of your post (aka "It's OK to eat shrimp") is based mostly on a straw man fallacy because it argues against the weakest arguments for invertebrate sentience. If you make any updates after exploring the evidence base further, please consider changing this wording to prevent potential harms from people looking for moral license to continue eating shrimp.


Further Reading:
How Should We Go About Looking For Invertebrate Consciousness?
Invertebrate sentience: A review of the neuroscientific literature
Pain, Sentience, and Animal Welfare (in fish)
Invertebrate Sentience, Welfare, & Policy

I want to be precise, so I'll point out where what can be parsed from your message differs from what I think.

if I understand your argument correctly

This is not a summary of the argument. My argument is about the specifics of how people make invalid inferences. Most of what you understand was intended to be supplementary and not the core of the argument.

Nonetheless, I will clarify the points (note these are not central, and I thought about them less than about the actual argument):

You believe the current focus on invertebrate (including shrimp) welfare is based on a flawed inference of sentience, specifically on shallow behavioral observations, presence of pain receptors, and natural human tendencies towards anthropomorphizing everything.

The "specifically" part is not precise, as it's not just the presence of pain receptors but also behaviour to seek, avoid, make trade-offs, etc., and many other things. There's a specific way I consider the inference people are making to be invalid.

You would like these criteria to be more considered:

I would like them to be what people consciously understand to be the reason of certain facts being evidence one way or another. Those are not specific factors, it was an attempt to describe possible indirect evidence.

You think that being able to communicate details about one's qualia is the ultimate standard for inclusion in the group of qualia possessing species. 

I think if something talks about qualia without ever hearing about it from humans, you should strongly expect it to have qualia. I wouldn't generalise this to the automatic inclusion of the whole species, as it would be a weaker statement and I can imagine edge cases.

You wouldn't eat anything that passes the mirror test

Yep, as it is strong indirect evidence.

Based on your perception that there is a lack of evidence for shrimp possessing qualia, you are recommending to readers that it is "OK to eat shrimp."

It is not just about a lack of evidence, it is about a fundamentally invalid way of thinking shrimp have subjective experience in the first place, and I don't think there's enough valid evidence for subjective experience in shrimp. The evidence people tend to cite is not valid.

And it was not what I was trying to say, but it might still be valuable to reply to your comment.

There are many other markers of sentience/pain/qualia

The first time I wanted to write this post was a couple of years ago when I saw Rethink Priorities research using many markers that have approximately nothing to do with meaningful evidence for the existence of experience of pain.

features which, according to expert agreement, seem to be necessary –although not sufficient– for consciousness

Remarks: I do mention "something that we think could be a part of how qualia works exists in that species" as a valid way to infer evidence. The absence of certain features might be extremely strong evidence for not having subjective experience, but the presence of many of these features might be only extremely weak evidence. (If you don't have changing parts, like a rock, you don't have qualia; if you're a planet with moving parts, you probably still don't have qualia, and it's ok to eat you even if you have hundreds of features like moving/not moving; also note features are not always independent.) (Consciousness is an awful word because people mean totally different things by it.)

Neuroanatomical structures

It's maybe okay to defer to them and feel free to eat biological organisms from Earth without those, although I'm not a biologist to verify.

Note that the presence of these things doesn't say much unless you have reasons to believe their evolutionary role is tied to the role of qualia. It is Bayesian evidence if you didn't know anything about a thing and now know it has these properties, but most of it is probably due to (8 billion humans + many mammals and maybe birds) : all things with it compared to all things without it, including rocks or something.

Behavioral responses that are potential indicators of pain experience, such as defensive behavior or fighting back, and moving away from noxious stimuli. These reactions seem to take into account a noxious stimulus’ intensity and direction. Other observed behaviors include pain relief learning, and long-term behavior alteration to avoid a noxious stimulus."

Long-term behaviour alterations to avoid what got you an immediate big negative reward is a really helpful adaptation, but how is also having qualia more helpful? Taking the presence of things like that as meaningful evidence for subjective experience is exactly what shows confusion about ways to make valid inferences and surprised me about Rethink's research a couple of years ago. These things are helpful for a reinforcement learning agent to learn; you need to explain how having qualia is additionally helpful/makes it easier to implement those/is a side effect of implementing those adaptations. Until you do, this does not provide additional evidence after you know you're talking about an RL adaptation, if you screen off the increased probability of talking about humans or mammals/birds/things we have other evidence about. (And I think some bacteria might have defensive behaviour and fighting back and moving away from certain things, though I'm not a biologist/might be wrong/didn't google sources for that background sort-of-maybe-knowledge.)
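To illustrate the screening-off point with a toy calculation (all probabilities below are invented for illustration): if the behaviour is equally likely whether or not qualia accompany the RL adaptation, then observing the behaviour, once you already know it is an RL adaptation, does not move the posterior on qualia at all.

```python
# Toy screening-off calculation with invented probabilities: once we know the
# behaviour is an RL adaptation, and the behaviour is equally likely with or
# without qualia, observing it leaves P(qualia) unchanged.
p_qualia = 0.10                                # hypothetical prior
p_behaviour_if_rl_and_qualia = 0.95
p_behaviour_if_rl_and_no_qualia = 0.95         # same: RL alone explains the behaviour

evidence = (p_qualia * p_behaviour_if_rl_and_qualia
            + (1 - p_qualia) * p_behaviour_if_rl_and_no_qualia)
posterior = p_qualia * p_behaviour_if_rl_and_qualia / evidence
print(posterior)  # 0.10: no update
```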

The mirror test is classically designed for capturing human-like behaviors. In a new format that was designed for natural behaviors of roosters, they actually did pass the mirror test.

I don't eat chickens, because I spent maybe an hour on this question and was uncertain enough for it to make sense to be safe.

I don't know of any scientific research that states that the presence of pain receptors is sufficient for possession of qualia. Generally, the more sentience indicators found, the higher the assigned probability of sentience. 

Indicators are correlated, and a lot of them are not valid evidence if you've already conditioned on states of valid evidence.

If we were in the age where we didn't have tools for cross-language comprehension, then this reasoning would support inferring that Japanese-only-speaking people don't understand the subject matter of a test written in English if they are unable to give satisfactory answers in English.

I feel like this is a digression and won't comment.

people have historically done a poor job

Yep. I want people to make valid experiments instead.

There is a precedent set to avoid assuming individuals can't experience pain just because they cannot communicate it to the high standards we set. Well into the 1980s, many surgeons believed babies could not feel pain and so they rarely used anesthetics in surgery

I don't have reasons to believe newborn babies experience pain, but it is probably a good idea to use anaesthesia, as the stress (without any experience of pain) might have a negative impact on the future development of the baby.

animal communication

Wanna bet fish don't talk about having subjective experiences?

Recently, the evidence was even sufficient for invertebrate sentience to be recognized by law

I think for most of UK history, the existence of god has also been recognised by law (at least implicitly? and maybe it still is?). How is that evidence?

Also, I don't eat octopuses.

It seems like you have judged the entire base of evidence on conversations with EAs that are not formally working on sentience research

Nope, I have read a bunch of stuff written by Rethink and I think they should rethink their approach.

I think that the title and conclusion of your post (aka "It's OK to eat shrimp") is based mostly on a straw man fallacy because it argues against the weakest arguments for invertebrate sentience. If you make any updates after exploring the evidence base further, please consider changing this wording to prevent potential harms from people looking for moral license to continue eating shrimp.

I don't feel like you understood or addressed my actual arguments, which are about invalid ways EAs make inferences about qualia in certain things. If you explain my argument to me and then explain how exactly the inferences e.g. Rethink make are actually more valid than what I described, and that these valid ways show there's a meaningful chance shrimp have qualia, I'll be happy to retract all of that and change the post title and add a disclaimer. So far, I think my argument isn't even just strawmanned in your comment: it is not considered at all.

The post is mainly addressed to people who already don't eat shrimp, as I hope they'll reconsider/will make thought-through decisions on their own (I don't think many people are likely to read the conclusion and stop being vegan because a random person on the internet says they can).

I think perhaps the reason you don't think your argument was properly considered in my comment is that I'm not understanding core parts of it? To be honest, I'm still quite confused after reading your response. It's possible I just addressed the parts that I could understand, which happened to be what you considered to be more supplementary information. I'll respond to your points here:

The "specifically" part is not precise, as it's not just the presence of pain receptors but also behaviour to seek, avoid, make trade-offs, etc., and many other things. There's a specific way I consider the inference people are making to be invalid.

I thought I listed all the ways in which you mentioned people infer sentience. The additional examples you give generally seem to fall under the "shallow behavioral observations" that I mentioned, so I don't see how I misconstrued your argument here.
 

I would like them to be what people consciously understand to be the reason of certain facts being evidence one way or another. Those are not specific factors, it was an attempt to describe possible indirect evidence.

I am very unclear on what these sentences are trying to convey.
 

I think if something talks about qualia without ever hearing about it from humans, you should strongly expect it to have qualia. I wouldn't generalise this to the automatic inclusion of the whole species, as it would be a weaker statement and I can imagine edge cases.

I broadly do agree with this being strong support for possessing qualia. Do you agree with my point that talking about qualia is a very human-centric metric that may miss many cases of beings possessing qualia, such as all babies and most animals? If so, then it seems to be a pretty superfluous thing to mention in cases of uncertain sentience.  
 

It is not just about a lack of evidence, it is about a fundamentally invalid way of thinking shrimp have subjective experience in the first place, and I don't think there's enough valid evidence for subjective experience in shrimp. The evidence people tend to cite is not valid.

And it was not what I was trying to say, but it might still be valuable to reply to your comment.

I would really appreciate if you would lay out the evidence that people cite and why you think it is invalid. What I saw in the post were the weakest arguments and not reflective of what the research papers cite, which is a much more nuanced approach. At no point in the post did you bring up the stronger arguments so I figured you were basing your conclusions off of things that EAs have mentioned to you in conversation. 

I'm guessing the thing you are saying was "not what I was trying to say," was referring to "It's OK to eat shrimp." I'm only 80% certain this is what you were trying to say so forgive me if the following is a misrepresentation. For me, it seemed reasonable to infer that was what you are trying to say since it is in the title of your post and at the end you also state, "I hope some people would update and, by default, not consider that things they don't expect to talk about qualia can have qualia." That last statement leads me to believe you are saying, "since you wouldn't expect shrimp to talk about qualia, then just assume they don't and that it is OK to eat them." 
 

The first time I wanted to write this post was a couple of years ago when I saw Rethink Priorities research using many markers that have approximately nothing to do with meaningful evidence for the existence of experience of pain.

I don't understand how the evidence is not meaningful. You didn't explain any of their markers in your post. Presenting the context of the markers seems pretty important too.

I'll skip some parts I don't have responses to for brevity.

It's maybe okay to defer to them and feel free to eat biological organisms from Earth without [neuroanatomical structures], although I'm not a biologist to verify.

Note that the presence of these things doesn't say much unless you have reasons to believe their evolutionary role is tied to the role of qualia. It is Bayesian evidence if you didn't know anything about a thing and now know it has these properties, but most of it is probably due to (8 billion humans + many mammals and maybe birds) : all things with it compared to all things without it, including rocks or something.

I'm not a biologist either, but I do defer to the researchers who study sentience. It seems reasonable to assume that the role of some neuroanatomical structures is evolutionarily tied with the evolutionary role of qualia since the former is necessary for the latter to exist. I'm not clear on the point that the latter half of the second paragraph is making with regards to the Bayesian evidence.

 

Long-term behaviour alterations to avoid what got you an immediate big negative reward is a really helpful adaptation, but how is also having qualia more helpful? Taking the presence of things like that as meaningful evidence for subjective experience is exactly what shows confusion about ways to make valid inferences and surprised me about Rethink's research a couple of years ago. These things are helpful for a reinforcement learning agent to learn; you need to explain how having qualia is additionally helpful/makes it easier to implement those/is a side effect of implementing those adaptations. Until you do, this does not provide additional evidence after you know you're talking about an RL adaptation, if you screen off the increased probability of talking about humans or mammals/birds/things we have other evidence about. (And I think some bacteria might have defensive behaviour and fighting back and moving away from certain things, though I'm not a biologist/might be wrong/didn't google sources for that background sort-of-maybe-knowledge.)

I don't think that Rethink was trying to say that long-term behavioral adaptations were on their own meaningful evidence for subjective experience. It is usually considered in context with other indicators of sentience to tip the scales towards or away from sentience. In one of their reports, they even say, "Whether invertebrates have a capacity for valenced experience is still uncertain."

Starting from the part where you mention reinforcement learning is where I start to lose track of what your argument is. 

 

Indicators are correlated, and a lot of them are not valid evidence if you've already conditioned on states of valid evidence.

I'm not sure what "conditioned on states of valid evidence" means here. 

 

Yep. I want people to make valid experiments instead.

Perhaps it would be more epistemically accurate to say that you want people to make experiments that are up to your standard. Just because some experiments fall short of your bar doesn't mean that they are not "valid".

 

I don't have reasons to believe newborn babies experience pain, but it is probably a good idea to use anaesthesia, as the stress (without any experience of pain) might have a negative impact on the future development of the baby.

Well I commend you on your moral consistency here. 

 

Wanna bet fish don't talk about having subjective experiences?

"Talking" is a pretty anthropocentric means of communication. Animals (including fish) have other modes of communication that we are only starting to understand. Plus, talking is only a small part of overall human communication as we are able to say a lot more through nonverbal signals. 

 

I think for most of the UK history, the existence of god is also recognised by law (at least implicitly? and maybe it is still?). How is that evidence?

Also, I don't eat octopuses.

This seems like a pretty bad faith argument and false analogy. The process of getting legal recognition of invertebrate sentience and the historical legal recognition of God relied on different evidence and methodology.

 

Nope, I have read a bunch of stuff written by Rethink and I think they should rethink their approach.

Why not reference Rethink more in your post then? The very first sentence talks about conversations you've had and some pretty ridiculous things people have mentioned like the possibility of balloons having sentience. Also, the title references "EA's" who make invalid inferences. I think this misleads the reader into thinking that conversations with EA's are what make up the basis of your argument. If you want to make a rebuttal to Rethink, then use their examples and break down their arguments. 


If I were to make my best attempt to understand your core argument, I would start from this:

 

TL;DR: If a being can describe qualia, we know this is caused by qualia existing somewhere. So we can be pretty sure that humans have qualia. But when our brains identify emotions in things, they can think both humans and geometric shapes in cartoons are feeling something. When we look at humans and feel like they feel something, we know that this feeling is probably correct, because we can make a valid inference that humans have qualia (because they would talk about having conscious experiences). When we look at non-human things, this recognition of feeling in others is no longer linked to a valid way of inferring that these others have qualia, and we need other evidence.

To me, this essentially translates into:
 
Valid Method of Inference: subject can describe their qualia, therefore have qualia
Invalid Method of Inference: subject makes humans feel like they have qualia, therefore have qualia

Your argument here is that EAs cannot rely on these invalid methods of inference to determine presence of qualia in subjects, which seems reasonable. However, it seems like a pretty large leap to then go on to say that the current scientific evidence (which is not fully addressed in the post) is not valid and we should believe it is ok to eat shrimp.

Research compiled by Rethink has only been used to update the overall estimated likelihood of sentience, not as a silver bullet for determining the presence of sentience. For example, the thing that has pain receptors is more likely to be able to experience pain than the thing without pain receptors. And if there is reasonable uncertainty regarding sentience, then shouldn't the conclusion be to promote a cautious approach to invertebrate consumption?

Apologies again for not understanding the core of your position here. I tried my best, but I am probably still missing important pieces of it.

Mikhail gave me a chance to read this ahead of time, but I didn’t get it together to give comments before he posted it. He should get credit for that.

On the whole it seems like this is an argument about burden of proof or what we should assume given that we don’t know the real answer. Mikhail seems to say we’re jumping to conclusions when we attribute qualia to others when only talking about qualia is really good evidence. I think most animals between humans and cnidarians on the tree of life should be assumed to have qualia because there isn’t a clear function for qualia in humans that isn’t shared with other animals with brains. (It’s possible qualia are a weird, unnecessary part of the way the mammal brain works, but not, say, the insect brain, but I see no reason to think qualia are only part of the human brain.) I think we should assume other animals phylogenetically close to us also have qualia, and there’s a legitimate question of how far that assumption should go. (“Sentience indicators” are a way of systematizing whether or not other animals are close enough to the only example we know of qualia, humans.) Should it cover shrimp? I think we can’t rule out shrimp qualia, and there are just so many individual shrimp that are harvested because of their small size, so even very diminished experiences seem like they might add up.

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

Yeah we look to criteria like this because we can’t talk to animals. I would be much more skeptical that something without physical receptors for tissue damage feels pain. There are many life forms that do not have pain receptors and they are generally ruled out as having meaningfully negative experience even though we don’t know the relationship of qualia and sentience to sensory perceptions for sure.

Having reactions to positive and negative rewards in ways that make the brain more likely to get positive rewards in the future and less likely to get negative rewards in the future is a really useful mechanism that evolution came up with. These mechanisms of reacting to rewards don't require the qualia circuits.

Yeah, but qualia could just accompany reinforcement learning mechanisms for some reason we don’t yet know (we don’t yet know any reason qualia are necessary or useful over mere unconscious reinforcement), or they could be like a form of common currency for weighing various inputs and coming to a decision. Qualia are not required for anything as far as we know so I don’t think there’s any principled reason to say humans have them but no one else does.

when we see animals reacting to something, our brains rush to expect there's something experiencing that reaction in these animals, and we feel like these animals are experiencing something. But actually, we don’t know whether there are neural circuits running qualia in these animals at all, and so we don’t know whether whatever reactions we observe are experienced by some circuits. The feeling that animals are experiencing something doesn't point towards evidence that they're actually experiencing something.

This is one reason we may be biased to interpret animals as having qualia. But I have a strong presupposition that at least species phylogenetically close to me have qualia as well. Why would qualia only start with humans? Humans do some unusual things but I have no reason to think qualia are particularly involved in them.

might give evidence for whether fish feel empathy (feel what they model others feeling), something I expect to be correlated with qualia[4]

I’m surprised this would change your mind. Why is empathy in the brain any different than pain receptors? We don’t know the relationship of either to qualia/sentience.

There is a considerable academic and scientific literature that engages with many of these points. It would make sense for the post to engage with that literature, as there are numerous papers that have debated and tested many of these points in detail. You mention experiments, but there are many studies that conduct such experiments. Have you reviewed these studies and found them to be missing something, e.g. having a consistent methodological flaw or missing a key indicator of consciousness?

If you think that the authors of those papers have not considered these points (e.g. if you think one indicator of qualia/sentience/moral patienthood is more valid than other indicators; or if you think there are specific methodological flaws in existing studies), then would it not be better to publish a scientific paper on this topic or at least conduct a more thorough literature review? If your argument is robust to criticism, and it withstands scrutiny when you show how your argument addresses shortcomings in the existing academic literature, then you may indeed cause society (and the EA community) to make more informed decisions about which lives to improve. I would be glad to work with you to write and publish this paper.

I think your argument would be more compelling if you listed the specific assumptions in the specific papers on the topic of invertebrate and/or fish sentience (whether academic, such as Birch et al 2021 or similar reviews for fish and insects, or the work by Rethink you are critical of) and then, point-by-point, made the argument that those assumptions are false. That would allow readers to more clearly see whether, and on which specific points, you are diverging from existing thinking. For what it's worth, there are papers that have criticised those reviews I mentioned on various grounds, so this process would also help readers to see whether your criticism is adding anything new to existing debates and whether your points have been made before in the literature (as this has been an area of research and debate for decades). This also applies to the question of "burden of proof", which I think was raised in another comment thread. It is well-accepted that investigations about the subjective experiences of non-human animals necessarily depend on the weight of evidence, rather than one particular smoking gun.

And re: the experiment on empathy - there are a number of studies that have looked at brain activity in various social situations in a few different types of fish. Do none of these studies meet your standards? If so, why not, and would your proposed experiment be an improvement on experiments that have been conducted already?

The title of the post ("It's OK to eat shrimp") doesn't really follow from the text of the article. There are many reasons, whether precautionary or strategic, why we might think it wise to spend resources on improving particular lives even if we do, "by default, not consider that things they don't expect to talk about qualia can have qualia." The article seems more to support a title along the lines of "we should be cautious about attributing qualia to shrimp".

I agree with the point asking for more high-quality studies, which would be a non-controversial view among most academics who research in this area.

(I am a marine biologist and familiar with the literature on fish sentience and shrimp sentience, both of which are pretty complex bodies of literature.)

(edited a bunch for focus and clarity)

P.S. if others are interested, you could read about different types of evidence the Rethink Priorities team looks at when thinking about invertebrate sentience (I found this interesting to skim).

Yep!

I believe that a lot of that is not valid evidence for whether there's the experience of pain etc. or not, and RL+qualia doesn't seem to be in any way a better explanation than just reinforcement learning.

The question is not whether these behaviours could strictly be explainable without qualia. The question is what's the most likely explanation, given that these animals are related to us and we solve a lot of these problems through qualia (while showing similar external signs).

For example, yeah, a dog could just look like she is in pain. But then we have to invent this new concept of ersatz pain that looks and functions a lot like our pain, but is actually unconscious, in order to describe the dog's mental state in this case. To the extent it looks similar to our pain in a given animal, this looks like an ad hoc move.

The general reason I disagree-voted is that this post seems to make a leap from: A) 'We have less solid evidence for non-humans experiencing qualia than we do for humans' to B) 'We can be certain (some) non-humans don't experience qualia, and it's appropriate to behave towards them as if they don't.'

I agree with A), but I don't think your argument can support B).

I certainly wouldn't eat anything that passes the mirror test

Some fish, such as the cleaner wrasse, pass the mirror test.

evidence that the fish has empathy towards other fish parents

Fish have very different approaches to rearing young than mammals. Many fish species do not spend much effort caring for young, and probably don't meaningfully think of themselves as parents. I think this experiment stacks the deck against fish by expecting them to respond as mammals do.

If the linked study gets independently replicated, with good controls, I’ll definitely stop eating cleaner fish and will probably stop eating fish in general.

I really don’t expect it to replicate. If you place a fish in front of a mirror, and it has a mark, its behavior won’t be significantly different from being placed in front of a fish with the same mark, especially if the mark isn’t made to resemble a parasite and it’s the first time the fish sees a mirror. I’d be happy to bet on this.

Fish have very different approaches to rearing young than mammals

That was an experiment some people agreed would prove them wrong if it didn’t show empathy, but if there aren’t really detectable feelings that a fish has towards fish children, the experiment won’t show results one way or the other, so I don’t think it’d be stacking the deck against fish. Are there any situations in which you expect fish to feel empathy, and predict it will show up in an experiment of this sort?

I couldn't find independent replications, but the study was preceded by two similar tests by the same team, which address concerns about weaknesses in the study setup. 

 

I'm a bit confused by your second point. If the fish didn't have detectable feelings towards fish children, wouldn't you think this was evidence that fish don't experience empathy? Or would you think it was no evidence one way or the other? 

If it's no evidence, then it can't 'prove people wrong' who currently think that fish feel empathy. But it seems like you think it could prove people wrong. So I'm confused...!

Are there any situations in which you expect fish to feel empathy?

Maybe; I'm a bit unsure. I don't know much about fish behaviour or evolution. 

Yep, I was able to find studies by the same people.

The experiment I suggested in the post isn’t “do fish have detectable feelings towards fish children”; it’s “do fish feel more of the feelings they have towards their own children when they see other fish parents with their children than when they see just other fish children”. Results one way or another would be evidence about fish experiencing empathy, and it would be strong enough for me to stop eating fish. If a fish doesn’t feel differently in the presence of its children, the experiment wouldn’t provide evidence one way or another.

Ah okay, thanks, that helps me understand.

In that case, I think you probably don't think that people who currently believe fish experience empathy could be proven wrong by this experiment (since you think it wouldn't be providing evidence one way or another)...?

If fish indeed don’t feel anything towards their children (which is not what at least some people who believe fish experience empathy think), then this experiment won’t prove them wrong. But if you know of a situation where fish do experience empathy, a similarly designed experiment can likely be conducted, which, if we make different predictions, would provide evidence one way or another. Are there situations where you think fish feel empathy?

I may be misinterpreting your argument, but it sounds like it boils down to:

  1. Given that we don't know much about qualia, we can't be confident that shrimp have qualia.
  2. [implicit] Therefore, shrimp have an extremely low probability of having qualia.
  3. Therefore, it's ok to eat shrimp.

The jump from step 1 to step 2 looks like a mistake to me.

You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believe this to be true—it's one theory of how qualia might work, but it's not the only theory. And it would imply that, e.g., human stroke victims who are incapable of speech do not have qualia because they cannot, even in principle, talk about their qualia.

(I think there is a reasonable chance that I just don't understand your argument, in which case I'm sorry for misinterpreting you.)

Hi Mikhail,

Thanks for the post. You raise some interesting points about the nature of consciousness that should give us all some humility when talking about the subject.

As a person involved with effective altruism, I am someone who likes to attach probabilities to empirical claims.

Therefore, I am wondering about the specifics of your probabilistic credences regarding a few questions pertinent to the theme of estimating qualia.

 

A couple of easy questions to start:

You wrote that you are “pretty sure” that humans experience qualia. What is your percent credence that other humans experience qualia?

You wrote that you are “uncertain enough” about the likelihood of qualia in mammals and birds for you to refrain from eating them. What is your percent credence that any mammals or birds experience qualia? What percent uncertainty do you have, and why is it better not to apply that uncertainty to fish and shrimp?

Since mammals and birds are (presumably) unable to use meaningful and non-regurgitated language in the sense that you said would be sufficient for proof of qualia, what evidence do we have for their experience of qualia?

What is your percent credence that any species of fish or shrimp experience any form of qualia?

What percent credence in a certain species of fish or shrimp having qualia would you deem sufficient to stop eating them?

(Note: this question is assuming that current typical farming practices remain the same (i.e. lack of normal biological function due to immense overcrowding, the vast majority of fish suffocate to death, many fish are fed other fish that suffocate or are crushed to death))

 

A couple of tougher questions, if you are game:
Do you have any sort of study supporting the notion that higher empathy correlates with higher subjective experience of qualia? I am open to the notion, but I would need more empirical evidence to jump on board.

Assuming some types of fishes and shrimps could experience qualia, how do you think it would compare to that of humans or other mammals qualitatively? How do you think mammalian qualia compares to that of humans?

What is your percent credence that evolution designed pain and pleasure to be even more saliently experienced for less intelligent animals than for intelligent animals like humans, given that they cannot reason as much and presumably must operate more off instinct and intuition?

For example, the behavior of humans is motivated by reasoning to an extent that presumably it is not for a dog (e.g. if a human sees an unfamiliar piece of food on a counter, they might not eat it under the presumption that someone else left it in there whereas dogs will unthinkingly eat food from the counter without considering whether or not it is for them).

Assuming equal accessibility, taste, nutritional value, and social acceptance to eating fish, what percent credence would you need in other humans having qualia to begin eating them?

Do you think it’s possible that some arguments for eating animals tend to be subject to motivated reasoning due to the convenience and deliciousness of eating them? What percent of these arguments would you say are colored by this phenomenon?

 

I know some of the questions I asked might be slightly provocative, but I was very impressed at how deeply you have thought about these questions. Very few other people consider their practices to that degree. I hope you can keep pushing me and the rest of the EA community to think deeply about these questions.

 

Thanks so much,

Will

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple creatures, while "empathy" is a complicated social emotion.  Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc.  Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious -- if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you?  What's special about the social emotion of empathy?

Personally, I am more sympathetic to the David Chalmers "hard problem of consciousness" perspective, so I don't think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience.  I do think that fish / bees / etc probably have some kind of inner subjective experience, but I'm not sure how "strong", or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.

(Personally, I also happily eat fish & shrimp all the time -- this is due to a combination of me wanting to eat a healthy diet without expending too much effort, and me figuring that the negative qualia experienced by creatures like fish is probably very small, so I should spend my efforts trying to improve the lives of current & future humans (or finding more-leveraged interventions to reduce animal farming) instead of on trying to make my diet slightly more morally clean.)

In general, I think this post is talking about consciousness / qualia / etc in a very confused way -- if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.

Both are possible: modeling stuff about others by reusing circuits for modeling stuff about yourself, without having experience; and having experience without modeling others similarly to yourself. The reason I think the suggested experiment would provide indirect evidence is related to the evolutionary role I think qualia might play. It wouldn't be extremely strong evidence, and it certainly wouldn't be proof, but it would be enough evidence for me to stop eating fish that show these things.

The studies about optimistic/pessimistic behaviour tell us nothing about whether these things experience optimism/pessimism: that behaviour is an adaptation an RL algorithm would implement without needing circuits that also experience anything, unless you can provide a story for why circuitry for experience is beneficial or a natural side effect of something beneficial.

One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, simply because it's good to avoid pain; so you need a good explanation for how exactly such behaviour can be evidence for qualia.
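To make that concrete, here is a minimal toy sketch (the gridworld, the reward numbers, and the Q-learning parameters are all my own illustrative assumptions, not anything from the post): a plain tabular Q-learning agent that gets negative reward on "painful" tiles learns to route around them, even though nothing in the code is a candidate for experiencing anything.

```python
# Toy illustration: pain-avoiding behaviour from plain reinforcement learning.
# The environment, rewards, and hyperparameters are arbitrary choices for this sketch.
import random

W, H = 4, 4
START, GOAL = (0, 0), (3, 3)
PAINFUL = {(1, 1), (2, 1), (1, 2)}            # tiles that give a large negative reward
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # moves along x and y
Q = {((x, y), a): 0.0 for x in range(W) for y in range(H) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(s, a):
    """Move (clamped to the grid) and return (next_state, reward, done)."""
    s2 = (min(max(s[0] + a[0], 0), W - 1), min(max(s[1] + a[1], 0), H - 1))
    if s2 == GOAL:
        return s2, 10.0, True
    if s2 in PAINFUL:
        return s2, -10.0, False               # the "pain" signal: just a number
    return s2, -0.1, False                    # small cost per step

for episode in range(5000):
    s, done, steps = START, False, 0
    while not done and steps < 100:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s, steps = s2, steps + 1

# The greedy policy ends up steering around the painful tiles.
policy = {(x, y): max(ACTIONS, key=lambda a: Q[((x, y), a)])
          for x in range(W) for y in range(H)}
print(policy)
```

The learned policy avoids the "painful" tiles only because they lower expected reward; the update rule treats "pain" like any other negative number, which is the sense in which pain-avoiding behaviour on its own doesn't tell you whether anything is being experienced.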

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated social behaviors...)

I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision.  What would such extra circuitry even do, if not the visual information processing itself?  (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?)
Consequently, I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"!  But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision be necessarily tied together into a coherent visual field, etc.

So, I tend to think that fish and other primitive creatures probably have "qualia", including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it's kind of just "suffering happening nowhere" or "an experience of suffering not connected to anything else" -- the fish doesn't know it's a fish, doesn't know that it's suffering, etc, the fish is just generating some simple qualia that don't really refer to anything or tie into a larger system.  Whether you call such a disconnected & shallow experience "real qualia" or "real suffering" is a question of definitions.

I think this personal view of mine is fairly similar to Eliezer's from the Sequences: there are no "zombies" (among humans or animals), there is no "second transduction" from neuron activity into a mythical medium-of-consciousness (no "extra circuitry for experience" needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia.  So, animals and even simpler systems probably have qualia in some sense.  But since animals aren't self-aware (and/or have less self-awareness than humans), their qualia don't matter (and/or matter less than humans' qualia).

...Anyways, I think our core disagreement is that you seem to be equating "has a self-model" with "has qualia", versus I think maybe qualia can and do exist even in very simple systems that lack a self-model.  But I still think that having a self-model is morally important (atomic units of "suffering" that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it's probably fine to eat fish.

I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake.  I agree that I see a lot of people being confused and making mistakes, but I don't think the problems are solved!

I appreciate this comment.

Qualia (IMO) certainly is "information processing": there are inputs and outputs. And it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of the information processing outside of the qualia circuits are inputs to our qualia. 

I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"

Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to the algorithms implemented by your brain, because all of you talk about subjective experience; but how do you, inside your neural circuitry, make the inference that a similar thing happens in neurons that just process visual information?

You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., that there's no way to replace a bunch of your neurons with something that outputs the same signals without first seeing and processing something, such that you'd experience seeing as before) is something you can make theories about, but it is not a valid inference: you don't have a way of matching the computation of qualia to the whole of your brain.

And how can you match it to matrix multiplications that don't talk about qualia, had no evolutionary reason for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large, trained one? Where does that expectation come from?

I'm not saying that qualia is solved. We don't yet know how to build it, and we can't yet scan brains and say which circuits implement it. But some people seem more confused than warranted, and they spend resources less effectively than they could've.

And I'm not equating qualia with a self-model. Qualia is just the experience of information. It doesn't require a self-model, though on Earth, so far, I expect these things to have been correlated.

If there's suffering and experience of extreme pain, in my opinion, it matters even if there isn't reflectivity.

I find this post interesting, because I think it’s important to be conceptually clear about animal minds, but I strongly disagree with its conclusions.

It’s true that animals (and AIs) might be automatons: they might simulate qualia without really experiencing them. And it’s true that humans might anthropomorphise by seeing qualia in animals, or AIs, or arbitrary shapes that don't really have them. (You might enjoy John Bradshaw’s The Animals Among Us, which has a chapter on just this topic.)

But I don’t see why an ability to talk about your qualia would be a suitable test for your qualia's realness. I can imagine talking automatons, and I can imagine non-talking non-automatons. If I prod an LLM with the right prompts, it might describe ‘its’ experiences to me; this is surreal and freaky, but it doesn’t yet persuade me that the LLM has qualia, that there is something which it is to be an LLM. And, likewise, I can imagine a mute person, or a person afflicted with locked-in syndrome, who experiences qualia but can’t talk about it. You write: “We expect that even if someone can't (e.g., they can't talk at all) but we ask them in writing or restore their ability to respond, they'd talk about qualia”. But I don’t see how “restor[ing] their ability to respond” is different to ‘granting animals the ability to respond’; just as you expect humans granted voice to talk about their qualia, I expect many animals granted voice to talk about their qualia. (It seems quixotic, but some researchers are really exploring this right now, using AI to try to translate animal languages). Your test would treat the “very human-like” screaming of pigs at slaughter as no evidence at all for their qualia. The boundary between screams and words is fuzzy, the distinction arbitrary. I think it’s a speciesist way to draw the line: the question is not, Can they talk?

I would be a little out of my depth talking about better tests for animal consciousness, but as far as I know the canonical book on fish consciousness is Do Fish Feel Pain? by Victoria Braithwaite. If you haven’t read it, I think you’d find it interesting. I also second Angelina and Constance's comments, which share valuable information about our evidence base on invertebrate sentience.

Some evidence on animal consciousness is more convincing than other evidence. Braithwaite makes a stronger case than this post. But the questions definitely aren’t answered, and they might be fundamentally unanswerable! So: what do we do? I don’t think we can say, ‘I believe fish and shrimp don’t experience qualia, and therefore there are no ethical issues with eating them.’ We should adopt the Precautionary Principle: ‘I think there’s some chance, even if it’s a low chance, that fish and shrimp experience qualia, so there could be ethical issues with eating them’. In a world with uncertainty about whether fish and shrimp experience qualia, one scenario is the torture and exploitation of trillions, and another scenario is a slightly narrower diet. Why risk an ethically catastrophic mistake?

(writing in a personal capacity)

Thanks for the comment!

As I mentioned in the post,

If an LLM talks about qualia, either it has qualia or qualia somewhere else caused some texts to exist, and the LLM read those.

If the LLM describes "its experience" to you, and the experience matches your own subjective experience, you can be pretty sure there's subjective experience somewhere in the causal structure behind the LLM's outputs. If the LLM doesn't have subjective experience but talks about it, that means someone had subjective experience, which made them write a text about it, which the LLM then read. You shouldn't expect an LLM to talk about subjective experience if it was never trained by anything caused by subjective experience and doesn't have subjective experience itself.

This means that the ability to talk about qualia is extremely strong evidence for having qualia or having learned about qualia as a result of something that has qualia talking.

I don't think fish simulate qualia; I think they're just automatons, with simply nothing like experience and nothing resembling it. They perform adaptations that include efficient reinforcement learning, but those adaptations don't include experience of the processed information.

How do you know whether you scream because of the subjective experience of pain or because of instinctive mechanisms for avoiding death? How do you know that the scream is caused by the outputs of the neural circuits running qualia, and not just by the same stuff that causes the inputs to those circuits, which you experience as extremely unpleasant?

It's not about whether they can talk; parrots and LLMs can be trained to say words in reaction to stuff. If you can talk about having subjective experience, it is valid to assume there's subjective experience somewhere down the line. If you can't talk about subjective experience, other, indirect evidence is needed. Assuming something has subjective experience because it reacts to an external stimulus similarly to those with subjective experience is pattern-matching that works on humans for the above reasons, but it is invalid for everything else without valid evidence for qualia. Neural networks trained with RL would react to pain; and whatever the evolutionary reason for screaming at pain is, if you provide similar incentives, RL agents would scream at pain too. That doesn't provide evidence about whether there's also experience of anything in them.

I'm certain enough fish don't have qualia to be ok with eating fish; if we solve more critical short-term problems, then, in the future, hopefully, we'll figure out how subjective experience actually works and will know for sure.

Oops. I think I forgot to add a couple of lines, which might've made the argument harder to understand. I slightly updated the post; most of the added text is in bold.

We are a collection of atoms interacting in ways that make us feel and make inferences. The level of neurons is likely the relevant level of abstraction: if the structure of neurons is approximately identical, but the atoms are different, we expect that inputs and outputs will probably be similar, which means that whatever determines the outputs runs on the level of neurons.


...

When we interpret other humans as feeling something when we see their reactions or events happening to them, imagine what it must be like to be like them, feel something we think they must be feeling, and infer there's something they're feeling in this moment, our neural circuits make an implicit assumption that other people have qualia. This assumption is, coincidentally, correct: we can infer in a valid way that neural circuits of other humans run subjective experiences because they output words about qualia, which we wouldn't expect to happen randomly, in the absence of qualia.

But when we see animals that don't talk about qualia, ... [our] neural circuits still recognise emotion in animals like they do in humans, but it is no longer tied to a valid way of inferring that there must be an experience of this emotion. 
 

Hot take: it’s not okay to eat (most commercial) shrimp, but the reason is human rights considerations rather than animal welfare considerations.
