Derek Shiller

Lead Web Developer @ The Humane League
415 · Joined Mar 2019 · Derekshiller.com

Bio

The views expressed here are my own.

Comments (59)

Perhaps I oversold the provocative title. But I do think that affective experiences are much harder, so even if there is a conscious AI, it is unlikely to have the sorts of morally significant states we care about. While I think it is plausible that current theories of consciousness might be relatively close to complete, I'm less sympathetic to the idea that current theories of valence are relatively complete accounts. There has been much less work in this direction.

I guess this is a matter of definitions.

I agree that this sounds semantic. I think of illusionism as a type of error theory, but people in this camp have always been somewhat cagey about what they're denying, and there is a range of interesting theories.

At any rate, whether consciousness is a real phenomenon or not, however we define it, I would count systems that have illusions of consciousness, or specifically illusions of conscious evaluations (pleasure, suffering, "conscious" preferences), as moral patients and consider their interests in the usual ways.

Interesting. Do you go the other way too? E.g. if a creature doesn't have illusions of consciousness, then it isn't a moral patient?

For example, a single neuron to represent an internal state and another neuron for a higher-order representation of that internal state.

This requires an extremely simplistic theory of representation, but yeah, if you allow any degree of crudeness you might get consciousness in very simple systems.

I suppose you could put my overall point this way: current theories present very few technical obstacles, so it would take little effort to build a system which would be difficult to rule out. Even if you think we need more criteria to avoid getting stuck with panpsychism, we don't have those criteria and so can't wield them to do any work in the near future.

"everything special about human consciousness was coded in there" sounds like whole brain emulation

I mean everything that is plausibly relevant according to current theories, which is a relatively short list. There is a big gulf between everything people have suggested is necessary for consciousness and a whole brain emulation.

Generally, though, I think Graziano and other illusionists would want to test whether it treats its own internal states or information processing as having or seeming to have properties consciousness seems to have, like being mysterious/unphysical/ineffable.

It has been a while since I've read Graziano -- but if I recall correctly (and as your quote illustrates) he likes both illusionism and an attention schema theory. Since illusionism denies consciousness, he can't take AST as a theory of what consciousness is; he treats it instead as a theory of the phenomena that lead us to puzzle mistakenly about consciousness. If that is right, he should think that any artificial mind might be led by an AST architecture, even a pretty crude one, to make mistakes about mind-brain relations, and that this isn't indicative of any further interesting phenomenon. The question of the consciousness of artificial systems is settled decisively in the negative by illusionism.

I was under the impression that we still don’t know what the necessary conditions for consciousness are

We definitely don't, and I hope I haven't committed myself to any one theory. The point is that the most developed views provide few obstacles. Those views tend to highlight different facets of human cognitive architecture. For instance, it may be some form of self-representation that matters, or the accessibility of representations to various cognitive modules. I didn't stress this enough: of the many views, we may not know which is right, but it wouldn't be technically hard to satisfy them all. After all, human cognitive architecture satisfies every plausible criterion of consciousness.

On the other hand, it is controversial whether any of the developed views are remotely right. There are some people who think we've gone in the wrong direction. However, these people generally don't have specific alternative proposals that clearly come down one way or another on AI systems.

Some theories of consciousness are basically theories of human (and sometimes animal) consciousness: they really just explain which neural correlates predict subjective report, but they do not claim that those minimal neural correlates generate consciousness in any system. So building a computer or writing software that just meets their minimally stated requirements should not be taken as generating consciousness.

From the 50s to the 90s, there was a lot of debate about the basic nature of consciousness and its relation to the brain. The theory that emerged from that debate as the most plausible physicalist contender suggested that it was something about the functional structure of the brain that matters for consciousness. Materials aren't relevant, just abstract patterns. These debates were very high level and the actual functional structures responsible for consciousness weren't much discussed, but they suggested that we could fill in the details and use those details to tell whether AI systems were conscious.

From the 90s to the present, there have been a number of theories developed that look like they aim to fill in the details of the relevant functional structures that are responsible for consciousness. I think these theories are most plausibly read as specific versions of functionalism. But you're right, the people who have developed them often haven't committed fully either to functionalism or to the completeness of the functional roles they describe. They would probably resist applying them in crude ways to AIs.

The theorists who might resist the application of modern functionalist theories to digital minds are generally pretty silent on what might be missing in the functional story they tell (or what else might matter apart from functional organization). I think this would undercut any authority they might have in denying consciousness to such systems, or even raising doubts about it.

Suppose that Replika produced a system that had perceptual faculties and a global workspace, that tracked its attention and utilized higher-order representations of its own internal states in deciding what to do. Suppose they announced to the media that they had created a digital person, and charged users $5 an hour to talk to it. Suppose that Replika told journalists that they had worked hard to implement Graziano's theory in their system, and yes, it was built out of circuits, but everything special about human consciousness was coded in there. What would people's reactions be? What would Graziano say about it? I doubt he could come up with many compelling reasons to think it wasn't conscious, even if he could say that his theory wasn't technically intended to apply to such systems. This leaves the curious public in the "who can really say?" camp, reminiscent of solipsism or doubts about the consciousness of dogs. I think they'd fall back on problematic heuristics.

Double or Nothing (Would you accept a 49% chance of extinction for a 51% chance of doubling the number of individuals for all time?): Doubling an already infinite payoff isn't worth jeopardizing it.

This seems deeply problematic to me. I gather you would rather reduce the level of welfare of the infinite number of existing people from very high to just barely positive to avoid a .000...0001% chance that they wouldn't exist at all, even though every single person would much prefer that you take that chance.

Toby Ord argues that this is incoherent because there are no natural units in which to measure happiness and suffering, and therefore it's unclear what it even means to put them on the same scale.

One problem might be that there are no natural units on which to measure happiness and suffering. Another is that there are too many. If there are a hundred thousand different ways to put happiness and suffering on the same scale and they all differ in the exchange rate they imply, then it seems you've got the same problem. Your example of comparisons in terms of elementary particles feels somewhat arbitrary, which makes me think this may be an issue.

It seems equally valid that there's an "anti-mugger" out there who is thinking "if Pascal refuses to give the mugger the 10 livres, then I will grant him 100 quadrillion Utils". There is no reason to privilege the mugger who is talking to you, and ignore the anti-mugger whom you can't see.

Two thoughts:

1.) You really need the probabilities of the mugger and anti-mugger to be exactly equal. If there is a slight edge to believing the mugger rather than the hypothetical anti-mugger, that is enough to get the problem off the ground. There is a case to be made for giving a slight edge to what the mugger says: some smart philosophers think testimony is a basic source of evidence, such that if someone actually says P, that is a (possibly very small) reason to believe P. Even if these philosophers are almost certainly wrong, you shouldn't be 100% confident that they are. The right way to respond to your uncertainty about the epistemic significance of testimony is to give some small edge to the truth of actual testimony you hear vs. hypothetical testimony you make up. That is enough to lead standard decision theory to tell you to hand over the money (a rough calculation after these two points illustrates why).

2.) Pascal's mugger issues seem most pressing in cases where our reasons don't look like they might be perfectly balanced. I've suggested some examples. You don't consider any cases where we clearly do have asymmetric reasons supporting very very small probabilities for very very very high expected utilities.
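To make the arithmetic behind the first point explicit (this is just my own illustrative sketch: the symbols p, ε, U, and c don't come from the original exchange, and I'm treating both promised payoffs as the same size for simplicity): let p be your credence in the hypothetical anti-mugger's promise, p + ε your slightly higher credence in the actual mugger's promise, U the promised payoff in utils, and c the utility cost of handing over the 10 livres. Then

$$\mathrm{EV}(\text{hand over}) - \mathrm{EV}(\text{refuse}) \approx \big[(p + \varepsilon)U - c\big] - pU = \varepsilon U - c,$$

which is positive whenever U > c/ε. Since the mugger can name an arbitrarily large U, any nonzero edge ε, however tiny, is enough for standard expected utility maximization to say you should pay.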

I don’t think you’re forced to say that if a life with x utility is neutral, a life with x - 1 utility is bad. It seems to me that the most plausible version of the OP's approach would have a very wide neutral band.
