
One of the core questions regarding the moral status of AI concerns their consciousness. Is there anything it’s like to be them? Contemporary AI systems are widely regarded as clearly not conscious, but there seems to be growing concern among experts that we may see conscious AI systems in the not-too-distant future.

Understanding our duties to the AI systems we create will involve assessing the nature of their minds, and thus their moral status. There are many important questions about AI minds that bear on their moral status, but whether they are conscious has a clear and widely recognized role. In addition, perceived consciousness may be important in securing (or denying) AIs the public's moral consideration.

Existing consciousness research revolves first and foremost around human beings. The physical bases (or neural correlates) of consciousness in humans remain uncertain. Leading proposals are both vague and highly controversial. Extending theories of consciousness to AIs will require careful thought about how to generalize beyond the human case.

Alternatively, we might look to identify behavioral indicators of consciousness. Behavior has a much more salient role in swaying our attitudes than abstract considerations of architecture. But modern AIs are carefully trained to behave like us, and so it is not easy to tell whether their behaviors indicate anything beyond mimicry.

Therefore, we see a variety of kinds of uncertainty at play: there is methodological uncertainty, uncertainty regarding the underpinnings of human consciousness, uncertainty regarding the significance of behavioral evidence, uncertainty about how AIs work, etc. Coming up with any concrete estimate of the probability of consciousness in AI systems will require mapping, measuring, and aggregating these uncertainties.

Rethink Priorities has overcome similar challenges before. Our Moral Weight Project wrangled patchy evidence about behavioral traits and cognitive capacities across the animal kingdom through a Monte Carlo framework that output probabilistic estimates of welfare ranges for different species. We learned a lot from this work and we are eager to apply those lessons to a new challenge.
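To make the approach concrete, here is a minimal sketch of the kind of Monte Carlo aggregation the Moral Weight Project used: uncertain evidence about whether a species has each welfare-relevant trait is resolved sample by sample, yielding a distribution over a welfare-range score rather than a point estimate. All trait names and probabilities below are invented placeholders, not the project's actual inputs or scoring rules.

```python
import random

# Illustrative (made-up) probabilities that a species has each
# welfare-relevant trait, given the patchy behavioral evidence.
TRAIT_EVIDENCE = {
    "nociception": 0.95,
    "learned_avoidance": 0.7,
    "anxiety_like_behavior": 0.4,
}

def sample_welfare_score(rng: random.Random) -> float:
    """One Monte Carlo draw: resolve each uncertain trait to present/absent,
    then score the resulting trait profile."""
    traits = {t: rng.random() < p for t, p in TRAIT_EVIDENCE.items()}
    # Simple additive scoring; a real model would weight traits differently.
    return sum(traits.values()) / len(traits)

rng = random.Random(1)
samples = sorted(sample_welfare_score(rng) for _ in range(10_000))
median = samples[len(samples) // 2]
print(f"median welfare-range score: {median:.2f}")
```

Repeating the draw many times propagates the evidential uncertainty into the output, so the result is a credible range of scores instead of a single number that hides how patchy the evidence is.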

We are now turning to the question of how best to assess the probability of AI consciousness. Over the coming months, we plan to carry out a project encompassing the following tasks:

  1. Evaluating different modeling approaches to AI consciousness estimation. What different paradigms are worth exploring? What are the pros and cons of each?
  2. Identifying some plausible proxies for consciousness to feed into these models. What are the challenges in pinning down values for these proxies? Where might future technical work be most fruitful?
  3. Producing a prototype model that translates uncertainty about different sources of evidence into probability ranges for contemporary and hypothetical future AI models. Given our uncertainties, what should we conclude about the overall probability of consciousness?

Having such a model is valuable in a few different ways. First, we can produce an overall estimate of the probability that a given system is conscious—an estimate that’s informed by, rather than undermined by, our uncertainty about the correct theory of consciousness. Second, because the inputs to the process can be updated with new information as, say, new capabilities come online, we can readily update our overall estimate of the probability of consciousness. Third, because we can repeat this process based on the capabilities that were present at earlier dates, we can also model the historical rate of change in the probability of digital consciousness. In principle, we can use that to make tentative projections about the likely changes in that probability going forward. This information may be useful to labs, policymakers, and other stakeholders who want to set thresholds for precautionary measures.
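A bare-bones sketch of what "informed by, rather than undermined by, our uncertainty about the correct theory" could look like: sample a credence for each theory of consciousness, sample the probability that the system meets that theory's indicators, and average. Every theory name, credence range, and conditional probability below is an illustrative placeholder, not a claim about what the prototype model will actually contain.

```python
import random

# Hypothetical theories with uncertain credences (weights) and uncertain
# probabilities that a given AI system satisfies each theory's indicators.
# All ranges are illustrative placeholders.
THEORIES = {
    # theory: (credence range, P(system meets indicators | theory) range)
    "global_workspace": ((0.10, 0.40), (0.20, 0.60)),
    "higher_order":     ((0.05, 0.30), (0.10, 0.40)),
    "iit_like":         ((0.05, 0.20), (0.00, 0.10)),
}

def sample_probability(rng: random.Random) -> float:
    """One Monte Carlo draw of P(conscious): sample theory credences,
    normalize them, and mix the sampled conditional probabilities."""
    credences = {t: rng.uniform(*c) for t, (c, _) in THEORIES.items()}
    total = sum(credences.values())
    return sum(
        (credences[t] / total) * rng.uniform(*cond)
        for t, (_, cond) in THEORIES.items()
    )

def estimate(n: int = 10_000, seed: int = 0) -> tuple[float, float, float]:
    rng = random.Random(seed)
    samples = sorted(sample_probability(rng) for _ in range(n))
    return samples[int(0.05 * n)], samples[n // 2], samples[int(0.95 * n)]

lo, med, hi = estimate()
print(f"P(conscious): median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```

Because the inputs are explicit ranges, updating on new evidence (say, a capability coming online) is just narrowing or shifting a range and rerunning the simulation; rerunning with the ranges that were appropriate at earlier dates gives the historical trajectory described above.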

We would welcome support for this work.


Acknowledgments

This post was written by the Worldview Investigations Team at Rethink Priorities. Rethink Priorities is a global priority think-and-do tank that aims to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.

Comments

Thanks for this great project! Do you also plan to estimate the welfare range conditional on consciousness, or the probability of positive or negative experiences conditional on consciousness?

Not at the moment. Consciousness is tricky enough as it is. The field is interested in looking more closely at valence independently of consciousness, given that valence seems more tractable and you could at least confirm that AIs don't have valenced experience, but that lies a bit outside our focus for now.

Independently, we're also very interested in how to capture the difference between positive and negative experiences in alien sorts of minds. It is often taken for granted based on human experience, but it isn't trivial to say what it is.

The field is interested in looking more closely at valence independently of consciousness

Could you link to the most relevant piece you are aware of? What do you mean by "independently"? Under hedonism, I think the probability of consciousness only matters to the extent it informs the probability of valenced experiences.

you could at least confirm that AIs don't have valenced experience

Interesting! How?

Independently, we're also very interested in how to capture the difference between positive and negative experiences in alien sorts of minds. It is often taken for granted based on human experience, but it isn't trivial to say what it is.

Makes sense. Without that, it would be very hard to improve digital welfare.

I'm very excited about this work, congratulations on the launch!

Super exciting! 

I just wanted to share a random perspective here: Would it be useful to model sentience alongside consciousness itself? 

If you read Daniel Dennett's book Kinds of Minds or take some of the Integrated Information Theory stuff seriously, you will arrive at this view of a field of consciousness. This view is similar to Philip Goff's or to more Eastern traditions such as Buddhism. 

Also, even in theories like Global Workspace Theory, the amount of localised information at a point in time matters alongside the type of information processing that you have. 

I'm not a consciousness researcher or anything, but I thought it would be interesting to share. I wish I had better links to research here and there, but if you look at Dennett, Philip Goff, IIT or Eastern views of consciousness, you will surely find some interesting stuff.

Exciting news! I don't know whether we should prioritise Digital Consciousness, but I think it's important for there to be de-confusion work happening in this space.
