rgb

Bio

Research Fellow at the Center for AI Safety

http://robertlong.online/

Comments

True, I should have been more precise—by consciousness I meant phenomenal consciousness. On your (correct) point about Kammerer being open to consciousness more generally, here's Kammerer (I'm sure he's made this point elsewhere too):

Illusionists are not committed to the view that our introspective states (such as the phenomenal judgment “I am in pain”) do not reliably track any real and important psychological property. They simply deny that such properties are phenomenal, and that there is something it is like to instantiate them. Frankish suggests calling such properties “quasi-phenomenal properties” (Frankish 2016, p. 15)—purely physico-functional and non-phenomenal properties which are reliably tracked (but mischaracterized as phenomenal) by our introspective mechanisms. For the same reason (Frankish 2016, p. 21), illusionists are not committed to the view that a mature psychological science will not mention any form of consciousness beyond, for example, access-consciousness. After all, quasi-phenomenal consciousness may very well happen to have interesting distinctive features from the point of view of a psychologist.

But on your last sentence

He could still believe moral status should be about consciousness, just not phenomenal consciousness.

While that position is possible, Kammerer does make it clear that he does not hold it, and thinks it is untenable for similar reasons that he thinks moral status is not about phenomenal consciousness. (cf. p. 8)

It's an excellent question! There are two ways to go here:

  1. keep the liberal notion of preferences/desires, one that seems like it would apply to plants and bacteria, and conclude that moral patienthood is very widespread indeed. As you note, few people go for this view (I don't either). But you can find people bumping up against this view:

Korsgaard: "The difference between the plant's tropic responses and the animal's action might even, ultimately, be a matter of degree. In that case, plants would be, in a very elementary sense, agents, and so might be said to have a final good." (quoted in this lecture on moral patienthood by Peter Godfrey-Smith).

  2. Think that for patienthood what's required is a more demanding notion of "preference", such that plants don't satisfy it but dogs and people do. And there are ways of making "preference" more demanding besides "conscious preference". You might think that morally-relevant preferences/desires have to have some kind of complexity, or some kind of rational structure, or something like that. That's of course quite hand-wavy—I don't think anyone has a really satisfying account.

Here's a remark from Francois Kammerer, who thinks that moral status cannot be about consciousness (which he thinks does not exist), who argues that it should instead be about desire, and who nicely lays out the 'scale' of desires of various levels of demandingness:

On the one extreme, we can think of the most basic way of desiring: a creature can value negatively or positively certain state of affairs, grasped in the roughest way through some basic sensing system. On some views, entities as simple as bacteria can do that (Lyon & Kuchling, 2021). On the other hand, we can think of the most sophisticated ways of desiring. Creatures such as, at least, humans, can desire for a thing to thrive in what they take to be its own proper way to thrive and at the same time desire their own desire for this thing to thrive to persist – an attitude close to what Harry Frankfurt called “caring” (Frankfurt, 1988). Between the two, we intuitively admit that there is some kind of progressive and multidimensional scale of desires, which is normatively relevant – states of caring matter more than the most basic desires. When moving towards an ethic without sentience, we would be wise to ground our ethical system on concepts that we will treat as complex and degreed, and even more as “complexifiable” as the study of human, animal and artificial minds progresses.


Small correction: Jonathan Birch is at LSE, not QMUL. Lars Chittka, the co-lead of the project, is at QMUL.

You’re correct, Fai - Jeff is not a co-author on the paper. The other participants - Patrick Butlin, Yoshua Bengio, and Grace Lindsay - are.

What's something about you that might surprise people who only know your public, "professional EA" persona?

I suggest that “why I don’t trust pseudonymous forecasters” would be a more appropriate title. When I saw the title I expected an argument that would apply to all or most forecasting, but this worry applies only to a particular subset.


Unsurprisingly, I agree with a lot of this! It's nice to see these principles laid out clearly and concisely.

You write:

AI welfare is potentially an extremely large-scale issue. In the same way that the invertebrate population is much larger than the vertebrate population at present, the digital population has the potential to be much larger than the biological population in the future.

Do you know of any work that estimates these sizes? There are various places that people have estimated the 'size of the future' including potential digital moral patients in the long run, but do you know of anything that estimates how many AI moral patients there could be by (say) 2030?

Hi Timothy! I agree with your main claim that "assumptions [about sentience] are often dubious as they are based on intuitions that might not necessarily ‘track’ sentience", shaped as they are by potentially unreliable evolutionary and cultural factors. I also think it's a very important point! I commend you for laying it out in a detailed way.

I'd like to offer a piece of constructive criticism if I may. I'd add more to the piece that answers, for the reader:

  1. what kind of piece am I reading? What is going to happen in it?
  2. why should I care about the central points? (as indicated, I think there are many reasons to care, and could name quite a few myself)
  3. how does this piece relate to what other people say about this topic?

While getting 'right to the point' is a virtue, I feel like more framing and intro would make this piece more readable, and help prospective readers decide if it's for them.

[meta-note: if other readers disagree, please do of course vote 'disagree' on this comment!]

Hi Brian! Thanks for your reply. I think you're quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn't entail much about LLMs. I'm going to update my comment above to make that clearer, and sorry for running together your view with those others.

Ah, thanks! Well, even if it wasn't appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don't entail AI sentience :)
