
rgb

699 karma · Joined Jul 2020

Bio

Research Fellow at the Center for AI Safety

http://robertlong.online/

Comments (40)

You’re correct, Fai - Jeff is not a co-author on the paper. The other participants - Patrick Butlin, Yoshua Bengio, and Grace Lindsay - are.

What's something about you that might surprise people who only know your public, "professional EA" persona?

I suggest that “why I don’t trust pseudonymous forecasters” would be a more appropriate title. When I saw the title, I expected an argument that would apply to all or most forecasting, but this worry only applies to a particular subset.


Unsurprisingly, I agree with a lot of this! It's nice to see these principles laid out clearly and concisely.

You write:

AI welfare is potentially an extremely large-scale issue. In the same way that the invertebrate population is much larger than the vertebrate population at present, the digital population has the potential to be much larger than the biological population in the future.

Do you know of any work that estimates these sizes? There are various places where people have estimated the 'size of the future', including potential digital moral patients in the long run, but do you know of anything that estimates how many AI moral patients there could be by (say) 2030?

Hi Timothy! I agree with your main claim that "assumptions [about sentience] are often dubious as they are based on intuitions that might not necessarily ‘track’ sentience", shaped as they are by potentially unreliable evolutionary and cultural factors. I also think it's a very important point! I commend you for laying it out in a detailed way.

I'd like to offer a piece of constructive criticism if I may. I'd add more to the piece that answers, for the reader:

  1. what kind of piece am I reading? What is going to happen in it?
  2. why should I care about the central points? (as indicated, I think there are many reasons to care, and could name quite a few myself)
  3. how does this piece relate to what other people say about this topic?

While getting 'right to the point' is a virtue, I feel like more framing and intro would make this piece more readable, and help prospective readers decide if it's for them.

[meta-note: if other readers disagree, please do of course vote 'disagree' on this comment!]

Hi Brian! Thanks for your reply. I think you're quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn't entail much about LLMs. I'm going to update my comment above to make that clearer, and sorry for running together your view with those others.

Ah, thanks! Well, even if it wasn't appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don't entail AI sentience :)

The Brian Tomasik post you link to considers the view that fundamental physical operations may have moral weight (call this view "Physics Sentience"). 

[Edit: see Tomasik's comment below. What I say below is true of a different sort of Physics Sentience view like constitutive micropsychism, but not necessarily of Brian's own view, which has somewhat different motivations and implications]

But even if true, [many versions of] Physics Sentience [but not necessarily Tomasik's] doesn't have straightforward implications about which high-level systems, like organisms and AI systems, also constitute a sentient subject of experience. Consider: a human being touching a stove is experiencing pain on Physics Sentience; but a pan touching a stove is not experiencing pain. On Physics Sentience, the pan is made up of sentient matter, but this doesn't mean that the pan qua pan is also a moral patient, another subject of experience that will suffer if it touches the stove.

To apply this to the LLM case:

(1) Physics Sentience will hold that the hardware on which LLMs run is sentient - after all, it's a bunch of fundamental physical operations.

(2) But Physics Sentience will also hold that the hardware on which a giant lookup table is running is sentient, to the same extent and for the same reason.

-Physics Sentience is silent on whether there's a difference between (1) and (2), in the way that there's a difference between the human and the pan.

The same thing holds for other panpsychist views of consciousness, fwiw. Panpsychist views that hold that fundamental matter is conscious don't tell us anything, by themselves, about which animals or AI systems are sentient. They just say that those systems are made of conscious (or proto-conscious) matter.

I like it! I think one thing the post itself could have been clearer on is that reports could be indirect evidence for sentience, in that they are evidence of certain capabilities that are themselves evidence of sentience. To give an example (though it’s still abstract): the ability of LLMs to fluently mimic human speech —> evidence for capability C —> evidence for sentience. You can imagine the same thing for parrots: the ability to say “I’m in pain” —> evidence of learning and memory —> evidence of sentience. But they aren't themselves reports of sentience.

So maybe, at the beginning, say that reports aren’t “strong evidence” or “straightforward evidence”.

Thanks for the comment. A couple replies:

I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans.

Self-report is evidence of consciousness in the Bayesian sense (and in common parlance): in a wide range of scenarios, if a human says they are conscious of something, you should have a higher credence that they are than if they do not say so. And in the scientific sense: it's commonly and appropriately taken as evidence in scientific practice; here is Chalmers's "How Can We Construct a Science of Consciousness?" on the practice of using self-reports to gather data about people's conscious experiences:

Of course our access to this data depends on our making certain assumptions: in particular, the assumption that other subjects really are having conscious experiences, and that by and large their verbal reports reflect these conscious experiences. We cannot directly test this assumption; instead, it serves as a sort of background assumption for research in the field. But this situation is present throughout other areas of science. When physicists use perception to gather information about the external world, for example, they rely on the assumption that the external world exists, and that perception reflects the state of the external world. They cannot directly test this assumption; instead, it serves as a sort of background assumption for the whole field. Still, it seems a reasonable assumption to make, and it makes the science of physics possible. The same goes for our assumptions about the conscious experiences and verbal reports of others. These seem to be reasonable assumptions to make, and they make the science of consciousness possible.
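To spell out the Bayesian sense of "evidence" with a toy calculation (a sketch of my own; the probabilities are purely illustrative assumptions, not figures from the comment or from Chalmers):

```python
# Minimal sketch of the Bayesian point: a self-report raises the posterior
# probability of consciousness whenever such reports are more likely from
# conscious subjects than from non-conscious systems.
# All numbers below are illustrative assumptions.

def posterior_given_report(prior, p_report_if_conscious, p_report_if_not):
    """P(conscious | report) via Bayes' rule."""
    p_report = (p_report_if_conscious * prior
                + p_report_if_not * (1 - prior))
    return p_report_if_conscious * prior / p_report

prior = 0.5                    # credence before hearing any report
p_report_if_conscious = 0.9    # conscious humans usually report their experiences
p_report_if_not = 0.1          # a non-conscious system rarely produces such a report

print(posterior_given_report(prior, p_report_if_conscious, p_report_if_not))  # 0.9 > 0.5
# The posterior exceeds the prior exactly when the likelihood ratio is > 1,
# which is what "self-report is evidence of consciousness" means here.
```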

I suppose it's true that self-reports can't budge someone from the hypothesis that other actual people are p-zombies, but few people (if any) think that. From the SEP:

Few people, if any, think zombies actually exist. But many hold that they are at least conceivable, and some that they are possible. ... The usual assumption is that none of us is actually a zombie, and that zombies cannot exist in our world. The central question, however, is not whether zombies can exist in our world, but whether they, or a whole zombie world (which is sometimes a more appropriate idea to work with), are possible in some broader sense.

So yeah: my take is that no one, including anti-physicalists who discuss p-zombies like Chalmers, really thinks that we can't use self-report as evidence, and correctly so.
