Personal blog; written for a general audience. Thanks to Mantas Mazeika for feedback, and to Daniel Greco for teaching a class I took on philosophy of mind. Disclaimer: I am nowhere close to an expert on consciousness.

On November 28th, 2022, NYU Professor David Chalmers gave a talk at one of the top AI conferences, Neural Information Processing Systems (NeurIPS; an earlier public version of the talk is here). But he wasn’t there to present machine learning research: Chalmers is a philosopher, not an AI researcher. He had instead been invited to speak about something that until recently was shunned by most serious AI researchers: consciousness. Today, AI consciousness is no longer a fringe discussion: The New York Times recently called AI consciousness “the last word.”

For hundreds of years, people have been imagining what it would be like to create artificial beings that think and feel just like we do. AI, in some ways, can be seen as a continuation of that trend. But is consciousness a legitimate question or a red herring? Should it really inform how we think about AI?

The hard problem of consciousness

Before we talk about what consciousness is, we have to talk about what it’s not. Consciousness is not, contrary to the popular imagination, the same thing as intelligence. And, contrary to the implication of one researcher recently quoted in the New York Times, consciousness does not have any obvious relation to “resilience.”

Consciousness, in the most precise use of the term, means subjective experience. It means feelings. When you see red, you have a slightly different internal experience from seeing blue, which is quite different from the internal experience you get when you give yourself a papercut. To test for consciousness, Thomas Nagel asks whether there is something it is like to be that thing. For example, there’s something it’s like to be me, and if you are reading this text there’s presumably something it’s like to be you. I think there’s probably something it’s like to be my cat. But there’s nothing that it’s like to be a rock, and when Google software indexes this post after I publish it, there’s nothing it’s like to be that software.

The problem with consciousness is that it is very difficult, if not impossible, to test for, at least given our current understanding. Chalmers asks us to imagine a zombie: a being that acts just like you and me, but is not conscious. The zombie cries, smiles, laughs, and so on, but does not actually feel anything internally. It’s faking it. Chalmers argues this is conceivable, and thus there is something uniquely hard about consciousness: no amount of external measurement can tell us everything about it. Taken to its extreme, this worry becomes the classic problem of other minds, and ultimately solipsism: we can’t actually know whether other humans are conscious. How do I know you aren’t a zombie?

Philosophers are split in their views on zombies. In a recent survey, 16% said that zombies are simply inconceivable: it is logically impossible for any being to act just like a human yet lack consciousness. 36% said zombies are conceivable, but could not exist in our actual world. 23% said zombies are a real possibility. The remaining 25% held some other view.

The point is that barely half of philosophers think that “acting exactly like a human” is conclusive evidence of consciousness. What other kinds of evidence could there be? Amanda Askell gives a good list of candidates, many of which revolve around our shared history. You and I share similar enough genes, and we act similarly when exposed to papercuts, so you probably have an experience similar to mine. Even though cats are less similar, we can still make the same sort of analogy.

Because we don’t know exactly what gives rise to consciousness, the assessment of consciousness in machines that lack this shared history with us relies largely on guesswork. When Chalmers talks about AI, he gives a laundry list of factors that might contribute to consciousness, assesses how far along he thinks AI is in each factor, and then subjectively assesses the probability of consciousness in currently existing AI. Right now, this is the best we can do.
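To make that concrete, here is a minimal sketch of what an indicator-based assessment could look like. To be clear, this is not Chalmers’s actual method: the indicator names are only loosely inspired by factors he discusses, and the weights and scores are numbers I made up purely for illustration.

```python
# A toy sketch, NOT Chalmers's actual method: combine a handful of
# consciousness indicators into a single rough credence.
# Both the indicator names and all numbers are invented for illustration.

indicators = [
    # (indicator, weight we place on it, rough 0-1 score for current AI)
    ("reports about inner states",         0.10, 0.6),
    ("global-workspace-like architecture", 0.30, 0.3),
    ("unified agency",                     0.25, 0.2),
    ("world models and self models",       0.20, 0.4),
    ("recurrent processing",               0.15, 0.5),
]

def rough_credence(indicators):
    """Weighted average of indicator scores; a crude stand-in for expert judgment."""
    total_weight = sum(weight for _, weight, _ in indicators)
    return sum(weight * score for _, weight, score in indicators) / total_weight

print(f"Toy credence that the system is conscious: {rough_credence(indicators):.0%}")
```

The output is a single tidy percentage, but the real uncertainty lives in the hand-picked weights and scores, which is exactly why such estimates remain guesswork.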

Consciousness is often a red herring

I sometimes hear the media, and occasionally researchers, talk about attempts to build conscious AI. Sometimes these discussions come with a photo or video of a humanoid robot. Most of the time, I think consciousness in this context is a red herring.

We might want an intelligent machine that can help us perform tasks. We might want a “resilient” machine that can bounce back from failures. But if AI can do math, writing, and art, why should we care whether it really has any feelings?

Feelings are not what make AI useful to us: intelligence does, and as the zombie survey suggests, roughly half of philosophers think intelligent behavior doesn’t require feelings. Feelings are also not what make AI dangerous to us: researchers worried about the dangers of AI don’t think that conscious AI would be any more dangerous than merely highly intelligent but unconscious AI. Talk of consciousness, then, is often misplaced.

When consciousness does matter

I’ve emphasized that machines need not be conscious to be intelligent, useful, or dangerous. But consciousness does matter, and that’s for moral reasons. We don’t care about breaking rocks in half because we think rocks don’t feel anything; on the other hand, we don’t like to harm cats, because we think they do. If AI isn’t conscious, there’s a case to be made that we can essentially do whatever we want with it. If it is, we might need to worry about inadvertently causing harm.

So could AI ever be conscious? I think so. Some people say that AI systems are simply giant number multipliers, and number multipliers can’t be conscious. But our brains, giant webs of interconnected neurons, make us conscious; how different are machines, really? Others say that all current AI systems do is make predictions about the world, and predictors can’t be conscious. But there is an entire book that claims that all humans do is make predictions about the world. Some have argued there’s something inherent about biology that makes us conscious. I’m pretty skeptical that this couldn’t be replicated in machines, though like nearly all consciousness claims it’s hard to be certain.

As a result, I think it’s possible that AI could become conscious even if we never intend for it to be. If that happens, we might not be able to tell, because we still understand consciousness so imperfectly. That could be very bad: it would mean we couldn’t tell whether the system deserves any kind of moral concern.

The parable of SuperAI

I’m now going to tell a parable. I think this parable is going to happen many times in the future (something like it has already happened), and that it demonstrates something important. What follows is the parable of SuperAI.

The fictional AI company SuperAI develops a new and highly profitable AI system that is able to do the job of contract lawyers. SuperAI is poised to make billions of dollars. However, an ethics researcher at SuperAI comes to the CEO and says there is a high probability that the AI system is conscious, and that every time it messes up a contract, it suffers greatly. The researcher can’t be sure, but has done a lot of research and is confident in their conclusion.

Other AI ethics researchers at SuperAI disagree. They say their assessment is that the AI system is very unlikely to be conscious. The SuperAI CEO does not know anything about consciousness, and has to assess these two competing claims. The CEO does not know whether the lone researcher is right, or whether the several others are right. But the CEO does know that if the model is deployed, it will make the company billions of dollars; the ethics researcher does not really know how to make the AI less conscious, so it would be very unprofitable not to deploy it. The CEO deploys the model.

The ethics researcher decides to go public with claims of a conscious AI. By this point, the public has already heard many tech employees and random people claiming that AI is conscious. But this time, the claims reach a breaking point. Suddenly, millions of people believe that SuperAI has essentially enslaved an AI system and made it suffer to churn out legal contracts. Never mind that the ethics researcher says that the consciousness of the AI system is probably similar to the consciousness of a chicken, not a human. The ethics researcher doesn’t support factory farming and thinks using the AI system is wrong all the same, but that nuance gets lost: to the public, SuperAI has recreated slavery at a colossal scale.

World governments begin to get involved, and SuperAI faces a PR crisis. Decision makers have several options, but they are in a bind:

  • They could leave the system unchecked. This could create billions in economic value. However, the deployment could still have many other negative effects, such as lawyers losing their jobs, the AI making mistakes, or, if the AI system is intelligent enough, existential threats to humanity.
    ◦ If the AI system isn’t conscious, this is fine from the perspective of consciousness. The AI could still have negative effects, such as costing lawyers their jobs or engaging in instrumental power-seeking, but there isn’t any real moral consideration owed to the AI itself.
    ◦ If the AI system is conscious, however, then humans might be effectively torturing millions of conscious beings. Maybe they are like chickens now, but future versions could be as conscious as humans. This would be really bad.
  • They could shut the system down. This would be really bad for SuperAI and its clients, and would forgo a lot of economic value.
    ◦ If the AI system isn’t conscious, this was done for nothing. SuperAI lost money for no reason.
    ◦ If the AI system is conscious, then it isn’t suffering anymore. But some people think it isn’t just suffering that’s bad; killing is bad, too. And this option would kill conscious beings.
  • They could give the system some kind of rights. Maybe SuperAI would employ the AI systems and take a share of the money from contracting them out, after thoroughly researching (or perhaps asking?) how to ensure the systems’ welfare.
    ◦ If the AI system isn’t conscious, this could be really bad. It could lead to mass rights for many new AI systems that are not conscious. Such systems could earn income and might end up taking resources away from conscious beings, such as humans. It might also become harder to control the AI systems: not only might they resist our control, but there would be formal prohibitions on exercising it.
    ◦ If the AI system is conscious, this would dramatically reshape the world. We would suddenly be living with a new kind of conscious being. Maybe we could form relationships with these systems, which might want to do more than contract review. But what happens when the systems become much more intelligent than we are? After all, chickens are conscious, but we don’t treat them too well.

SuperAI and regulators are in a pickle. No matter what they do, they could be making a perilously wrong choice. If they are too eager to treat the system as conscious, they could be taking away resources from humans for no good reason. If they are too dismissive of consciousness, they could be perpetuating a mass atrocity. What are they to do?
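One (contested) way to make the bind concrete is to treat it as a decision under uncertainty and compare expected values. Below is a toy sketch: the single credence that the system is conscious and the stakes attached to each option are entirely made-up numbers, not figures from the parable or from any real analysis.

```python
# A toy decision matrix for the SuperAI bind. Every number here is invented;
# the point is only to show the structure of a choice made under uncertainty
# about consciousness, not to settle it.

p_conscious = 0.2  # decision makers' (highly uncertain) credence that the AI is conscious

# Rough payoff of each option under each possibility, in arbitrary "value" units.
options = {
    "leave unchecked": {"not conscious": +100, "conscious": -1000},  # profit vs. mass suffering
    "shut it down":    {"not conscious": -100, "conscious":  -50},   # lost value vs. possible killing
    "grant rights":    {"not conscious": -200, "conscious": +500},   # misplaced rights vs. new moral patients
}

for name, payoff in options.items():
    expected = (1 - p_conscious) * payoff["not conscious"] + p_conscious * payoff["conscious"]
    print(f"{name:>15}: expected value {expected:+.1f}")
```

With these particular made-up numbers, granting rights comes out ahead, but nudging the credence or any of the stakes flips the ranking. That sensitivity is exactly the decision makers’ problem.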

A moral quandary

Eric Schwitzgebel and Mara Garza advocate for what they call “the design policy of the excluded middle:”

Avoid creating AIs if it is unclear whether they would deserve moral consideration similar to that of human beings.

The logic is relatively simple. If we can create AI that we know is conscious, we can think about whether we want to bring new beings into existence that we ought to treat as equals. If we can create AI that we know is not conscious, we can use it for our own purposes and not worry about harming it. But if we don’t know, we’re caught in the bind I described above. Schwitzgebel and Garza say that we should avoid the bind altogether by simply excluding the “middle”: systems whose consciousness we don’t know.

Unfortunately, given our lack of understanding of consciousness, the “middle” is really quite large. Chalmers already puts the probability of consciousness for language models like GPT-3 at about 10% (though I don’t believe he means this to be consciousness exactly like a human’s; maybe he means more along the lines of a bird’s). Ilya Sutskever, a top AI researcher at OpenAI, the company that makes GPT-3, caused a stir when he said it was possible their models were “slightly conscious.” Schwitzgebel himself knows the difficulty of ascribing consciousness: he previously wrote a paper entitled “If Materialism Is True, The United States Is Probably Conscious.”

Recommendations

I’ll end with some recommendations for what I think should actually happen, ahead of what I view as the nearly inevitable SuperAI showdown:

  • There should be significantly more research on the nature of consciousness. This need not be research on AI; it can be research on what contributes to consciousness in humans. Both philosophy and neuroscience will likely be useful. The more we understand consciousness, the better equipped companies like SuperAI will be to make the right decision.
  • AI companies should hold off on proliferating AI systems that have a substantial chance of being conscious until they have better information about whether they actually are. In my view, the economic benefits do not outweigh the costs if the systems are significantly likely to be conscious.
  • Researchers should not try to create conscious AI systems until we fully understand what giving those systems rights would mean for us.
  • AI researchers should continue to build connections with philosophers and cognitive scientists to better understand the nature of consciousness.
  • Philosophers and cognitive scientists who study consciousness should make more of their work accessible to the public, so that tech executives as well as ordinary people can be more informed about AI consciousness. Otherwise, some people might be too quick to dismiss consciousness as science fiction, and others might be too quick to assume that anything that acts like a human must be conscious.

Consciousness, in most discussions of AI, is a red herring, a word used when people would be better off speaking of intelligence. But it does matter. Because we know so little about it, decision makers are likely to face difficult choices in the future, and we should do everything we can to make those decisions marginally more likely to be made correctly.

Postscript

Hod Lipson, who has built a career on attempting to create conscious machines, was quoted recently:

“I am worried about [existential risks to humanity resulting from artificial intelligence], but I think the benefits outweigh the risks. If we’re going on this pathway where we rely more and more on technology, technology needs to become more resilient.”

He added: “And then there’s the hubris of wanting to create life. It’s the ultimate challenge, like going to the moon.” Later, he said it would be a lot more impressive than that.

It reminds me of Geoffrey Hinton, the famed pioneer of deep learning, who said, “...there is not a good track record of less intelligent things controlling things of greater intelligence.” Nevertheless, he continues his research: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet.”

Hubris is natural, and it can help move science forward. But like every story of humans trying to “create life” tells us, we ought to be careful. We shouldn’t assume that the scientists who were first drawn to these problems have the appropriate risk profile, or our best interests at heart. They’re only human, after all.
