Hi Brian! Thanks for your reply. I think you're quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn't entail much about LLMs. I'm going to update my comment above to make that clearer, and sorry for running together your view with those others.
Ah, thanks! Well, even if it wasn't appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don't entail AI sentience :)
The Brian Tomasik post you link to considers the view that fundamental physical operations may have moral weight (call this view "Physics Sentience").
[Edit: see Tomasik's comment below. What I say below is true of a different sort of Physics Sentience view like constitutive micropsychism, but not necessarily of Brian's own view, which has somewhat different motivations and implications]
But even if true, [many versions of] Physics Sentience [but not necessarily Tomasik's] doesn't have straightforward implications about which high-level systems, like organisms and AI systems, also constitute a sentient subject of experience. Consider: a human being touching a stove is experiencing pain on Physics Sentience; but a pan touching a stove is not experiencing pain. On Physics Sentience, the pan is made up of sentient matter, but this doesn't mean that the pan qua pan is also a moral patient, another subject of experience that will suffer if it touches the stove.
To apply this to the LLMs case:
- (1) Physics Sentience will hold that the hardware on which LLMs run is sentient - after all, it's a bunch of fundamental physical operations.
- (2) But Physics Sentience will also hold that the hardware on which a giant lookup table runs is sentient, to the same extent and for the same reason.
- Physics Sentience is silent on whether there's a difference between (1) and (2), in the way that there's a difference between the human and the pan.
The same thing holds for other panpsychist views of consciousness, fwiw. Panpsychist views that hold that fundamental matter is conscious don't tell us anything, by themselves, about which animals or AI systems are sentient. They just say those systems are made of conscious (or proto-conscious) matter.
I like it! I think one thing the post itself could have been clearer on is that reports could be indirect evidence for sentience, in that they are evidence of certain capabilities that are themselves evidence of sentience. To give an example (though it's still abstract): the ability of LLMs to fluently mimic human speech —> evidence for capability C —> evidence for sentience. You can imagine the same thing for parrots: ability to say "I'm in pain" —> evidence of learning and memory —> evidence of sentience. But what they aren't is reports of sentience.
So maybe at the beginning: reports aren't "strong evidence" or "straightforward evidence".
Thanks for the comment. A couple replies:
> I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans.
Self-report is evidence of consciousness in a Bayesian sense (and in common parlance): in a wide range of scenarios, if a human says they are conscious of something, you should have a higher credence that they are than if they do not say so. And in the scientific sense: it's commonly and appropriately taken as evidence in scientific practice; here is Chalmers's "How Can We Construct a Science of Consciousness?" on the practice of using self-reports to gather data about people's conscious experiences:
> Of course our access to this data depends on our making certain assumptions: in particular, the assumption that other subjects really are having conscious experiences, and that by and large their verbal reports reflect these conscious experiences. We cannot directly test this assumption; instead, it serves as a sort of background assumption for research in the field. But this situation is present throughout other areas of science. When physicists use perception to gather information about the external world, for example, they rely on the assumption that the external world exists, and that perception reflects the state of the external world. They cannot directly test this assumption; instead, it serves as a sort of background assumption for the whole field. Still, it seems a reasonable assumption to make, and it makes the science of physics possible. The same goes for our assumptions about the conscious experiences and verbal reports of others. These seem to be reasonable assumptions to make, and they make the science of consciousness possible.
I suppose it's true that self-reports can't budge someone from the hypothesis that other actual people are p-zombies, but few people (if any) think that. From the SEP:
> Few people, if any, think zombies actually exist. But many hold that they are at least conceivable, and some that they are possible.... The usual assumption is that none of us is actually a zombie, and that zombies cannot exist in our world. The central question, however, is not whether zombies can exist in our world, but whether they, or a whole zombie world (which is sometimes a more appropriate idea to work with), are possible in some broader sense.
So yeah: my take is that no one, including anti-physicalists who discuss p-zombies like Chalmers, really thinks that we can't use self-report as evidence, and correctly so.
Agree, that's a great pointer! For those interested, here is the paper and here is the podcast episode.
[Edited to add a nit-pick: the term 'meta-consciousness' is not used, it's the 'meta-problem of consciousness', which is the problem of explaining why people think and talk the way they do about consciousness]
That may be right - an alternative would be to taboo the word in the post, and just explain that they are going to use people with an independent, objective track record of being good at reasoning under uncertainty.
Of course, some people might be (wrongly, imo) skeptical of even that notion, but I suppose there's only so much one can do to get everyone on board. It's a tricky balance of making it accessible to outsiders while still just saying what you believe about how the contest should work.
Hi Timothy! I agree with your main claim that "assumptions [about sentience] are often dubious as they are based on intuitions that might not necessarily ‘track’ sentience", shaped as they are by potentially unreliable evolutionary and cultural factors. I also think it's a very important point! I commend you for laying it out in a detailed way.
I'd like to offer a piece of constructive criticism, if I may: I'd add more to the piece that answers, for the reader, what the post covers and who it's for.
While getting 'right to the point' is a virtue, I feel like more framing and intro would make this piece more readable, and help prospective readers decide if it's for them.
[meta-note: if other readers disagree, please do of course vote 'disagree' on this comment!]