I've had conversations with many EAs and EA-adjacent people who believe things about qualia that seem wrong to me. I've met one who assigned double-digit probabilities to bacteria having qualia and said they wouldn't be surprised if a balloon flying through a temperature gradient experiences pain because it's trying to get away from hotter air towards colder air. Some say that shrimp have pain receptors, clearly react to "pain"[1] just like humans do, and try to avoid future "pain", so they must be experiencing pain, and we should care about their welfare. (A commenter says they think any visual information processing is qualia to some extent, even in neural networks[2].)
I think the way they're making these inferences is invalid. In this post, I'll try to explain why. I'll also suggest a direction for experiments that could produce valid evidence one way or the other.
Epistemic status: Having disentangled the models some people hold, I'm relatively confident I see where many make invalid inferences as part of their worldviews. But I'm not a biologist, and this is not my area of expertise. A couple of people I talked to agreed that the experiment I suggest below could potentially resolve the crux.
I'm using the word "qualia" to point at subjective experience. I don't use the word "consciousness" because different people mean completely different things by it.
I tried to keep the post short while communicating the idea. I think this is an important conversation to have. I believe many in the community make flawed arguments and claim that animal features are evidence for consciousness, even though they aren't.
TL;DR: If a being can describe qualia, we know this is caused by qualia existing somewhere. So we can be pretty sure that humans have qualia. But when our brains identify emotions in things, they can think both humans and geometric shapes in cartoons are feeling something. I argue that when we look at humans and feel like they feel something, we know that this feeling is probably correct, because we can make a valid inference that humans have qualia (because they would talk about having conscious experiences). I further argue that when we look at non-human things, our circuits' recognition of feeling in others is no longer linked to a valid way of inferring that these others have qualia, and we need other evidence.
No zombies among humans
We are a collection of atoms interacting in ways that make us feel and make inferences. The level of neurons is likely the relevant level of abstraction: if the structure of neurons is approximately identical, but the atoms are different, we expect that inputs and outputs will probably be similar, which means that whatever determines the outputs runs on the level of neurons.
If you haven't read the Sequences, I highly recommend doing so. The posts on zombies (example) are relevant here.
In short, there are some neural circuits in our brains that run qualia. These circuits have inputs and outputs: signals get into our brains, get processed, and then, in some form, get inputted into these circuits. These circuits also have outputs: we can talk about our experience, and the way we talk about it corresponds to how we actually feel.
If a monkey you observe types perfect Shakespeare, you should suspect it's not doing that at random and someone who has access to Shakespeare is messing with the process. If every single monkey you observe types Shakespeare, you can be astronomically confident someone got copies of Shakespeare's writings into the system somehow.
Similarly, we can be pretty confident other people have qualia because other people talk about qualia. Hearing a description of subjective experience that matches ours is really strong evidence that outputs from qualia-circuits are in the causal tree of that description. If an LLM talks about qualia, either it has qualia or qualia somewhere else caused some texts to exist, and the LLM read those. When we hear someone talk about qualia, we can make a valid inference that this is caused by qualia existing or having existed in the past: it'd be surprising for such a strong match between our internal experience and the description we hear from others to arise at random, without being caused by their own internal experience.
In a world where nothing else has qualia that affect its actions, hearing about qualia would only happen at random, and rarely. If you see everyone talking about qualia, this is astronomically strong evidence that qualia caused it.
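To make the shape of this inference explicit, here's a toy Bayes update. The numbers are placeholders I made up for illustration, not estimates; the point is only how lopsided the likelihoods are.

```python
# Toy Bayes update for the argument above (all numbers are made-up placeholders).
# H = "qualia exist somewhere in the causal history of this being's words"
# E = "the being produces a detailed description of qualia that matches ours"

prior_H = 0.5            # illustrative prior
p_E_given_H = 0.9        # descriptions of qualia are likely if qualia are in the causal tree
p_E_given_not_H = 1e-6   # a matching description arising at random is astronomically unlikely

p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(f"P(qualia in the causal tree | description) = {posterior_H:.6f}")  # ~0.999999
```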
Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.
Furthermore, note that lots of stuff that happens in human brains isn't transparent to us at all. We experience many things after the brain processes them. Experiments demonstrated that our brains can make decisions seconds before we experience making these decisions[3].
When we see humans having reactions that we can interpret as painful, we can be confident that they, indeed, experience that pain: we have strong reasons to believe they have qualia, so we expect the information about pain to be an input to their qualia circuits.
Reinforcement learning
We experience pain and pleasure when certain processes happen in our brains. Many of these processes are there for reinforcement learning. Having reactions to positive and negative rewards in ways that make the brain more likely to get positive rewards in the future and less likely to get negative rewards in the future is a really useful mechanism that evolution came up with. These mechanisms of reacting to rewards don't require the qualia circuits. They happen even if you train simple neural networks with reinforcement learning: they learn to pursue what gives positive rewards and avoid what gives negative rewards. They can even learn to react to reward signals in-episode: to avoid what gives negative reward after receiving information about the reward without updating the neural network weights. It is extremely useful, from an evolutionary angle, to react to rewards. Having something that experiences information about these rewards wouldn't help the update procedure. For subjective experience to be helpful, the outputs of circuits that run it must play some beneficial role.
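To make that point concrete, here's a minimal reinforcement-learning-style sketch: a toy two-action bandit, with everything in it (including the "painful" reward function) invented for illustration. The update rule steers behaviour away from negative reward, and there's nothing in it that could plausibly be said to experience anything.

```python
# Minimal bandit-style reward learning: a reward-avoidance mechanism with no
# qualia anywhere in it. Action 0 yields -1 ("pain"), action 1 yields +1.
import random

q = [0.0, 0.0]            # value estimates for the two actions
alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate

def reward(action: int) -> float:
    return -1.0 if action == 0 else 1.0

for _ in range(1000):
    # epsilon-greedy choice: mostly exploit, occasionally explore
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: q[a])
    # nudge the estimate toward the observed reward
    q[action] += alpha * (reward(action) - q[action])

print(q)  # the agent ends up avoiding the "painful" action; nothing experienced anything
```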
What if someone doesn't talk about qualia?
Having observed many humans being able to talk about qualia, we can strongly suspect that it is a universal property of humans. We suspect that any human, when asked, would talk about qualia. We expect that even if someone can't (e.g., they can't talk at all) but we ask them in writing or restore their ability to respond, they'd talk about qualia. This is probabilistic but strong evidence, and a valid inference.
It is valid to infer that, likely, qualia has been beneficial in human evolution, or it is a side effect of something that has been beneficial in human evolution.
It is extremely easy for us to anthropomorphize everything. We can see a cartoon about geometric shapes and feel like these shapes must be experiencing something. A significant portion of our brain is devoted to that sort of thing.
When we see other humans' reactions, or events happening to them, and interpret them as feeling something, imagine what it must be like to be them, feel something we think they must be feeling, and infer there's something they're feeling in that moment, our neural circuits make an implicit assumption that other people have qualia. This assumption happens to be correct: we can infer in a valid way that the neural circuits of other humans run subjective experiences, because they output words about qualia, and we wouldn't expect the similarity between what we find in ourselves when we reflect and what we hear from other humans to arise by coincidence, in the absence of qualia existing elsewhere.
So, we strongly expect things happening to people to be processed and then experienced by the qualia circuits in their brains. And when we see a person's reaction to something, our brains assume this person experiences that reaction, and this assumption is correct.
But when we see animals that don't talk about qualia, we can no longer make direct and strong inferences the way we can with humans. Looking at a human reacting to something and inferring that the reaction is to something they experience works because we know they'd talk about having subjective experience if asked; looking at an animal reacting to something and making the same inference, that it's experiencing what it reacted to, is invalid, because we don't know it's experiencing anything in the first place. Our neural circuits still recognise emotion in animals like they do in humans, but that recognition is no longer tied to a valid way of inferring that there must be an experience of this emotion. In the future (if other problems don't prevent us from solving this one), we could figure out how qualia actually works, and then scan brains and see whether there are circuits implementing it or not. But currently, we have to rely on indirect evidence. We can make theories about the evolutionary reasons for qualia to exist in humans and about how it works, and then look for signs that:
- evolutionary reasons for the appearance of subjective experience existed in some animal species' evolution,
- something related to the role we think qualia plays is currently demonstrated by that species, or
- something that we think could be a part of how qualia works exists in that species.
I haven't thought about this long enough, but I'm not sure there's anything outside of these categories that can be valid evidence for qualia existing in animals that can't express having subjective experiences.
To summarise: when we see animals reacting to something, our brains rush to expect there's something experiencing that reaction in these animals, and we feel like these animals are experiencing something. But actually, we don’t know whether there are neural circuits running qualia in these animals at all, and so we don’t know whether whatever reactions we observe are experienced by some circuits. The feeling that animals are experiencing something doesn't point towards evidence that they're actually experiencing something.
So, what do we do?
Conduct experiments that'd provide valid evidence
After a conversation with an EA about this, they asked me to come up with an experiment that would provide valid evidence for whether fish have qualia.
After a couple of minutes of thinking, the first thing I came up with was something I thought might give evidence on whether fish feel empathy (feel what they model others feeling), which I expect to be correlated with qualia[4]:
Find a fish such that you can scan its brain while showing it stuff. Scan its brain while showing it:
- Nothing or something random
- Its own kids
- A fish of another species with its kids
- Just the kids of another fish species
See which circuits activate when the fish sees its own kids. If those circuits activate more when it sees another fish with its kids than when it sees just the kids of another fish species, that's evidence the fish has empathy towards other fish parents: it feels some parental feelings when it sees its own children, and it feels more of them when it sees another parent (whom it processes as having these feelings) with children than when it sees just that parent's children.
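To be concrete about what "see which circuits activate" could mean, here's a hypothetical analysis sketch over synthetic data. Every name, number, and data shape here is an assumption made up for illustration, not a real neuroimaging pipeline.

```python
# Hypothetical analysis sketch for the experiment above, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
conditions = ["nothing", "own_kids", "other_parent_with_kids", "other_kids_alone"]
# Pretend scans: one (n_trials, n_voxels) activation array per condition.
activations = {c: rng.normal(size=(20, 500)) for c in conditions}

# Step 1: find the "parental" circuit -- voxels responding more to the fish's
# own kids than to the baseline condition.
parental_mask = activations["own_kids"].mean(axis=0) > activations["nothing"].mean(axis=0)

# Step 2: compare that circuit's response to another parent with its kids
# versus those kids alone; the post's prediction is that a real empathy
# signature would show the former exceeding the latter.
means = {c: activations[c][:, parental_mask].mean() for c in conditions}
empathy_signature = means["other_parent_with_kids"] > means["other_kids_alone"]
print(means, empathy_signature)
```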
A couple of EAs were happy to bet 1:1 that this experiment would show that fish have empathy. I'm more than happy to bet it would show that fish don't have empathy (and to stop eating any fish that the experiment shows to possess empathy).
I think there are some problems with this experiment, but I think it might be possible to design actually good experiments in this direction and potentially stop wasting resources on improving lives that don't need improving.
Reflect and update
I hope some people would update and, by default, not consider that things they don't expect to talk about qualia can have qualia. If a dog reacts to something in a really cute way, remember that humans have selected its ancestors for being easy to feel empathy towards. Dogs could be zombies and not feel anything, having only reactions caused by reinforcement learning mechanisms and programmed into them by evolution shaped by humans; you need actual evidence, not just a feeling that they feel something, to think they feel something.
Personally, I certainly wouldn't eat anything that passes the mirror test, as it seems to point at something related to why and how I think qualia appears in evolution. I currently don't eat most animals (including all mammals and birds), as I'm uncertain enough about many of them. I do eat fish and shrimp (though not octopuses): I think the evolutionary reasons for qualia didn't exist in the evolution of fish, I strongly expect experiments to show fish have no empathy, etc., and so I'm certain there's no actual suffering in shrimp, that it's OK to eat them, and that the efforts directed at shrimp welfare could be directed elsewhere with greater impact.
- ^
See, e.g., the research conducted by Rethink Priorities.
- ^
`I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"`
- ^
I think there are better versions of Libet's experiment, e.g., maybe this one (paywalled)
- ^
It's possible to model stuff about others by reusing circuits for modelling stuff about yourself without having experience; and it's also possible to have experience without modelling others similarly to yourself; but I expect the evolutionary role of qualia and things associated with subjective experience to potentially correlate with empathy, so I'd be surprised if an experiment like that showed that fish have empathy, and it'd be enough evidence for me to stop eating them.
I think perhaps the reason you don't think your argument was properly considered in my comment is that I'm not understanding core parts of it. To be honest, I'm still quite confused after reading your response. It's possible I just addressed the parts that I could understand, which happened to be what you considered more supplementary information. I'll respond to your points here:
I thought I listed all the ways you mentioned in which people infer sentience. The additional examples you give generally seem to fall under the "shallow behavioral observations" that I mentioned, so I don't see how I misconstrued your argument here.
I am very unclear on what these sentences are trying to convey.
I broadly do agree with this being strong support for possessing qualia. Do you agree with my point that talking about qualia is a very human-centric metric that may miss many cases of beings possessing qualia, such as all babies and most animals? If so, then it seems to be a pretty superfluous thing to mention in cases of uncertain sentience.
I would really appreciate it if you would lay out the evidence that people cite and why you think it is invalid. What I saw in the post were the weakest arguments, not reflective of what the research papers cite, which take a much more nuanced approach. At no point in the post did you bring up the stronger arguments, so I figured you were basing your conclusions on things that EAs have mentioned to you in conversation.
I'm guessing the thing you said was "not what I was trying to say" was referring to "It's OK to eat shrimp." I'm only 80% certain this is what you meant, so forgive me if the following is a misrepresentation. For me, it seemed reasonable to infer that this was what you were trying to say, since it is in the title of your post, and at the end you also state, "I hope some people would update and, by default, not consider that things they don't expect to talk about qualia can have qualia." That last statement leads me to believe you are saying, "since you wouldn't expect shrimp to talk about qualia, then just assume they don't and that it is OK to eat them."
I don't understand how the evidence is not meaningful. You didn't explain any of their markers in your post. Presenting the context of the markers seems pretty important too.
I'll skip some parts I don't have responses to for brevity.
I'm not a biologist either, but I do defer to the researchers who study sentience. It seems reasonable to assume that the role of some neuroanatomical structures is evolutionarily tied to the evolutionary role of qualia, since the former is necessary for the latter to exist. I'm not clear on the point that the latter half of the second paragraph is making with regard to the Bayesian evidence.
I don't think that Rethink was trying to say that long-term behavioral adaptations were, on their own, meaningful evidence for subjective experience. They are usually considered in context with other indicators of sentience to tip the scales towards or away from sentience. In one of their reports, they even say, "Whether invertebrates have a capacity for valenced experience is still uncertain."
Starting from the part where you mention reinforcement learning is where I start to lose track of what your argument is.
I'm not sure what "conditioned on states of valid evidence" means here.
Perhaps it would be more epistemically accurate to say that you want people to make experiments that are up to your standard. Just because some experiments fall short of your bar doesn't mean that they are not "valid".
Well, I commend you on your moral consistency here.
"Talking" is a pretty anthropocentric means of communication. Animals (including fish) have other modes of communication that we are only starting to understand. Plus, talking is only a small part of overall human communication as we are able to say a lot more through nonverbal signals.
This seems like a pretty bad faith argument and false analogy. The process of getting legal recognition of invertebrate sentience and the historical legal recognition of God relied on different evidence and methodology.
Why not reference Rethink more in your post, then? The very first sentence talks about conversations you've had and some pretty ridiculous things people have mentioned, like the possibility of balloons having sentience. Also, the title references "EAs" who make invalid inferences. I think this misleads the reader into thinking that conversations with EAs are what make up the basis of your argument. If you want to make a rebuttal to Rethink, then use their examples and break down their arguments.
If I were to make my best attempt to understand your core argument, I would start from this:
To me, this essentially translates into:
Valid Method of Inference: subject can describe their qualia, therefore they have qualia
Invalid Method of Inference: subject makes humans feel like they have qualia, therefore they have qualia
Your argument here is that EAs cannot rely on these invalid methods of inference to determine the presence of qualia in subjects, which seems reasonable. However, it seems like a pretty large leap to then go on to say that the current scientific evidence (which is not fully addressed in the post) is not valid and that we should believe it is OK to eat shrimp.
Research compiled by Rethink has only been used to update the overall estimated likelihood of sentience, not as a silver bullet for determining the presence of sentience. For example, the thing that has pain receptors is more likely to be able to experience pain than the thing without pain receptors. And if there is reasonable uncertainty regarding sentience, then shouldn't the conclusion be to promote a cautious approach to invertebrate consumption?
Apologies again for not understanding the core of your position here. I tried my best, but I am probably still missing important pieces of it.