I want humanity not to lose its long-term potential.
📚 In 2018, I launched a project to print HPMOR in Russian and promote it; it became the most funded crowdfunding campaign in Russian history, and we printed 21,000 copies. A startup I own gives me enough income to focus on EA projects; it has, for example, donated $30k to MIRI. I do outreach projects (e.g., we're organizing a translation of the 80,000 Hours Key Ideas series) but am considering switching to direct alignment research.
(Effectiveness and impact aside, I was the head of the Russian Pastafarian Church for a year, which was fun. And I was a political activist, which was fun except for spending time in jail after the protests; it's less fun now, since the Russian authorities would probably throw me in prison if I returned to Russia.)
I’m 22 now, have Israeli citizenship, and currently live in Tel Aviv, but I don’t want to remain there in the long term and would like to find a country and a city somewhere in the world to settle in. Also, I’m bi 🏳️🌈
The existence of a verbal report of qualia is strong evidence that the reporter has subjective experience (or that someone they learned to report this way from has subjective experience). I'm not talking about specific emotional states being reported.
I want to be precise, so I'll point out the places where your message can be read as saying something different from what I think.
if I understand your argument correctly
This is not a summary of the argument. My argument is about the specifics of how people make invalid inferences. Most of what you listed was intended to be supplementary, not the core of the argument.
Nonetheless, I'll clarify these points (note they are not central, and I've thought about them less than about the actual argument):
You believe the current focus on invertebrate (including shrimp) welfare is based on a flawed inference of sentience, specifically on shallow behavioral observations, presence of pain receptors, and natural human tendencies towards anthropomorphizing everything.
The "specifically" part is not precise, as it's not just the presence of pain receptors but also behaviour to seek, avoid, make trade-offs, etc., and many other things. There's a specific way I consider the inference people are making to be invalid.
You would like these criteria to be more considered:
I would like people to consciously understand why certain facts count as evidence one way or another. Those are not specific factors; it was an attempt to describe possible indirect evidence.
You think that being able to communicate details about one's qualia is the ultimate standard for inclusion in the group of qualia possessing species.
I think if something talks about qualia without ever hearing about it from humans, you should strongly expect it to have qualia. I wouldn't generalise this to the automatic inclusion of the whole species, as it would be a weaker statement and I can imagine edge cases.
You wouldn't eat anything that passes the mirror test
Yep, as it is strong indirect evidence.
Based on your perception that there is a lack of evidence for shrimp possessing qualia, you are recommending to readers that it is "OK to eat shrimp."
It is not just about a lack of evidence; it is about a fundamentally invalid way of inferring that shrimp have subjective experience in the first place, and I don't think there's enough valid evidence for subjective experience in shrimp. The evidence people tend to cite is not valid.
And that was not what I was trying to say, but it might still be valuable to reply to your comment.
There are many other markers of sentience/pain/qualia
The first time I wanted to write this post was a couple of years ago, when I saw Rethink Priorities' research using many markers that have approximately nothing to do with meaningful evidence for the existence of an experience of pain.
features which, according to expert agreement, seem to be necessary –although not sufficient– for consciousness
Remarks: I do mention "something that we think could be a part of how qualia works exists in that species" as a valid way to infer evidence. The absence of certain features might be extremely strong evidence for not having subjective experience, but the presence of many of these features might be only extremely weak evidence. (If you don't have changing parts, like a rock, you don't have qualia; if you're a planet with moving parts, you probably still don't have qualia, and it's OK to eat you even if you have hundreds of features like moving/not moving. Also note that features are not always independent.) (Consciousness is an awful word, because people mean totally different things by it.)
Neuroanatomical structures
It's maybe okay to defer to the experts here and feel free to eat biological organisms from Earth that lack those, although I'm not a biologist and can't verify that.
Note that the presence of these things doesn't say much unless you have reasons to believe their evolutionary role is tied to the role of qualia. It is Bayesian evidence if you previously knew nothing about a thing and now learn it has these properties, but most of that update probably comes from base rates: the set of things with these structures (8 billion humans, plus many mammals and maybe birds) versus the set of things without them, which includes rocks and the like.
Behavioral responses that are potential indicators of pain experience, such as defensive behavior or fighting back, and moving away from noxious stimuli. These reactions seem to take into account a noxious stimulus’ intensity and direction. Other observed behaviors include pain relief learning, and long-term behavior alteration to avoid a noxious stimulus.
Long-term behaviour alteration to avoid what got you an immediate big negative reward is a really helpful adaptation, but how is also having qualia more helpful? Taking the presence of things like that as meaningful evidence for subjective experience is exactly what shows confusion about ways to make valid inferences, and it's what surprised me about Rethink's research a couple of years ago. These things are helpful for a reinforcement learning agent to learn; you need to explain how having qualia is additionally helpful, makes those adaptations easier to implement, or is a side effect of implementing them. Until you have, this does not provide additional evidence once you know you're talking about an RL adaptation, if you screen off the increased probability of talking about humans or mammals/birds/things we have other evidence about. (And I think some bacteria might show defensive behaviour, fighting back, and moving away from certain things, though I'm not a biologist, might be wrong, and didn't google sources for that background sort-of-maybe-knowledge.)
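To make this concrete, here's a minimal sketch (a toy example of my own; the corridor, rewards, and hyperparameters are all made up for illustration): a tabular Q-learning agent that learns lasting avoidance of a "noxious" state purely from scalar rewards. Nothing in the update rule implements, or needs, an experience of pain:

```python
import random

N_STATES = 5                       # corridor cells 0..4; cell 4 is "noxious"
ACTIONS = (-1, 1)                  # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-values: plain numbers, updated by bookkeeping over rewards.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def reward(state):
    # A big negative reward plays the role of the "noxious stimulus".
    return -10.0 if state == 4 else 0.1

for _ in range(500):               # episodes
    s = 2                          # start in the middle of the corridor
    for _ in range(20):            # steps per episode
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Standard temporal-difference update; no model of experience anywhere.
        target = reward(s_next) + GAMMA * max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

# The learned greedy policy shows "long-term behaviour alteration":
# e.g., at cell 3 it moves left, away from the punishing cell.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```

The agent durably avoids the stimulus, takes its "intensity" (reward magnitude) into account, and alters its long-term behaviour, i.e., it checks several boxes on the quoted list while being a few lines of arithmetic.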
The mirror test is classically designed for capturing human-like behaviors. In a new format that was designed for natural behaviors of roosters, they actually did pass the mirror test.
I don't eat chickens, because I spent maybe an hour on this question and was uncertain enough that playing it safe makes sense.
I don't know of any scientific research that states that the presence of pain receptors is sufficient for possession of qualia. Generally, the more sentience indicators found, the higher the assigned probability of sentience.
Indicators are correlated, and a lot of them stop being valid evidence once you've already conditioned on the valid evidence.
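A stylized Bayesian sketch of what I mean (my own toy formalisation, not a model of any particular study): let $Q$ = "has qualia", $A$ = "has an RL-style adaptation for learning from negative reward", and let $B$ be a further indicator (say, wound-tending) that is produced by $A$. Once you condition on $A$, $B$ is screened off:

$$P(Q \mid A, B) = \frac{P(B \mid Q, A)\,P(Q \mid A)}{P(B \mid A)} = P(Q \mid A) \quad \text{whenever } P(B \mid Q, A) = P(B \mid A).$$

So a checklist of ten correlated indicators is not ten independent updates; after the common cause is conditioned on, each extra indicator can add approximately nothing.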
If we were in the age where we didn't have tools for cross-language comprehension, then this reasoning would support inferring that Japanese-only speaking people don't understand the subject matter of a test written in English if they are unable to give satisfactory answers in English.
I feel like this is a digression and won't comment.
people have historically done a poor job
Yep. I want people to run valid experiments instead.
There is a precedent set to avoid assuming individuals can't experience pain just because they cannot communicate it to the high standards we set. Into the 1980s, many surgeons believed babies could not feel pain and so they rarely used anesthetics in surgery.
I don't have reasons to believe newborn babies experience pain, but it is probably a good idea to use anaesthesia, as the stress (without any experience of pain) might have a negative impact on the future development of the baby.
animal communication
Wanna bet fish don't talk about having subjective experiences?
Recently, the evidence was even sufficient for invertebrate sentience to be recognized by law
I think for most of UK history, the existence of God was also recognised by law (at least implicitly? and maybe it still is?). How is that evidence?
Also, I don't eat octopuses.
It seems like you have judged the entire base of evidence on conversations with EAs that are not formally working on sentience research
Nope, I have read a bunch of stuff written by Rethink and I think they should rethink their approach.
I think that the title and conclusion of your post (aka "It's OK to eat shrimp") is based mostly on a straw man fallacy because it argues against the weakest arguments for invertebrate sentience. If you make any updates after exploring the evidence base further, please consider changing this wording to prevent potential harms from people looking for moral license to continue eating shrimp.
I don't feel like you understood or addressed my actual arguments, which are about the invalid ways EAs make inferences about qualia in certain things. If you explain my argument back to me, then explain how exactly the inferences that, e.g., Rethink makes are actually more valid than what I described, and show that these valid ways imply a meaningful chance that shrimp have qualia, I'll be happy to retract all of that, change the post title, and add a disclaimer. So far, I think my argument isn't even strawmanned in your comment: it is not considered at all.
The post is mainly addressed to people who already don't eat shrimp, as I hope they'll reconsider/make thought-through decisions on their own (I don't think many people are likely to read the conclusion and stop being vegan because a random person on the internet says they can).
Thanks for the comment!
As I mentioned in the post,
If an LLM talks about qualia, either it has qualia or qualia somewhere else caused some texts to exist, and the LLM read those.
If the LLM describes "its experience" to you, and the experience matches your own subjective experience, you can be pretty sure there's subjective experience somewhere in the causal structure behind the LLM's outputs. If the LLM doesn't have subjective experience but talks about it, that means someone had subjective experience, which made them write a text about it, which the LLM then read. You shouldn't expect an LLM to talk about subjective experience if it was never trained by anything caused by subjective experience and doesn't have subjective experience itself.
This means that the ability to talk about qualia is extremely strong evidence of either having qualia or having learned about qualia from something that has qualia talking about it.
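In odds form (a stylized restatement of the above; $R$ and $C$ are my own shorthand): let $R$ = "the system produces detailed reports of qualia" and $C$ = "qualia exists somewhere in the causal history of its outputs", whether in the system itself or in whoever produced its training data. Because $P(R \mid \neg C)$ is tiny,

$$\frac{P(C \mid R)}{P(\neg C \mid R)} = \frac{P(R \mid C)}{P(R \mid \neg C)} \cdot \frac{P(C)}{P(\neg C)} \gg \frac{P(C)}{P(\neg C)},$$

i.e., the report is an enormous update towards $C$. For an LLM, though, $C$ is already satisfied by the humans who wrote its training data, so the report doesn't pin the qualia to the LLM itself.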
I don't think fish simulate qualia; I think they're just automata, with nothing like experience, nothing resembling experience. They perform adaptations that include efficient reinforcement learning but don't include experiencing the information they process.
How do you know whether you scream because of the subjective experience of pain or because of instinctive mechanisms for avoiding death? How do you know the scream is caused by the outputs of the neural circuits running qualia, and not just by the same stuff that produces the inputs to those circuits, the inputs you experience as extremely unpleasant?
It's not about whether they can talk; parrots and LLMs can be trained to say words in reaction to stuff. If you can talk about having subjective experience, it is valid to assume there's subjective experience somewhere down the line. If you can't talk about subjective experience, other, indirect evidence is needed. Assuming something has subjective experience because it reacts to external events similarly to those with subjective experience is pattern-matching that works on humans for the above reasons, but is invalid for everything else without valid evidence for qualia. Neural networks trained with RL would react to pain; whatever the evolutionary reason for screaming at pain is, given similar incentives, RL agents would scream at pain too. That doesn't provide evidence about whether there's also an experience of anything in them.
I'm certain enough that fish don't have qualia to be OK with eating fish; if we solve the more critical short-term problems, then hopefully, in the future, we'll figure out how subjective experience actually works and will know for sure.
I appreciate this comment.
Qualia (IMO) certainly is "information processing": there are inputs and outputs. And it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of the information processing outside of the qualia circuits are inputs to our qualia.
I think it's likely that even simple "RL algorithms" might have a very limited, very shallow, non-self-aware kind of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"
Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to the algorithms implemented by your brain, because all of you talk about subjective experience; but how do you, inside your neural circuitry, make the inference that a similar thing happens in neurons that just process visual information?
You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid. Thinking that visual information processing is part of what makes qualia (i.e., that there's no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you'll experience seeing as before) is something you can theorize about, but it is not a valid inference: you don't have a way of matching the computation of qualia to the whole of your brain.
And how can you match it to matrix multiplications that don't talk about qualia, had no evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only a large and trained one? Where does that expectation come from?
I'm not saying that qualia is solved. We don't yet know how to build it, and we can't yet scan brains and say which circuits implement it. But some people seem more confused than warranted, and they spend resources less effectively than they could.
And I'm not equating qualia with a self-model. Qualia is just the experience of information. It doesn't require a self-model, although on Earth, so far, I expect these things to have been correlated.
If there's suffering and experience of extreme pain, in my opinion, it matters even if there isn't reflectivity.
Both (modelling stuff about others by reusing circuits for modelling stuff about yourself, without having experience; and having experience without modelling others similarly to yourself) are possible, and the reason I think the suggested experiment would provide indirect evidence is related to the evolutionary role I consider qualia to possibly play. It wouldn't be extremely strong evidence, and certainly wouldn't be proof, but it'd be enough evidence for me to stop eating fish that have these things.
The studies about optimistic/pessimistic behaviour tell us nothing about whether these things experience optimism/pessimism: such behaviour is an adaptation an RL algorithm would implement without needing circuits that also experience these things, unless you can provide a story for why circuitry for experience is beneficial or a natural side effect of something beneficial.
One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, because it's good to avoid pain; you need a good explanation for how exactly it can be evidence for qualia.
Yep!
I believe that a lot of that is not valid evidence for whether or not there's an experience of pain, etc., and "RL + qualia" doesn't seem to be in any way a better explanation than just reinforcement learning.
Oops. I think I forgot to add a couple of lines, which might've made the argument harder to understand. I've slightly updated the post; most of the added text is in bold.
We are a collection of atoms interacting in ways that make us feel and make inferences. The level of neurons is likely the relevant level of abstraction: if the structure of neurons is approximately identical, but the atoms are different, we expect that inputs and outputs will probably be similar, which means that whatever determines the outputs runs on the level of neurons.
...When we interpret other humans as feeling something, when we see their reactions or events happening to them, imagine what it must be like to be them, feel something we think they must be feeling, and infer there's something they're feeling in that moment, our neural circuits make an implicit assumption that other people have qualia. This assumption is, coincidentally, correct: we can infer in a valid way that the neural circuits of other humans run subjective experiences, because they output words about qualia, which we wouldn't expect to happen randomly, in the absence of qualia.
But when we see animals that don't talk about qualia, ... [our] neural circuits still recognise emotion in animals like they do in humans, but it is no longer tied to a valid way of inferring that there must be an experience of this emotion.