Quick context:

  • The potential development of artificial sentience seems very important; it presents large, neglected, and potentially tractable risks.
  • 80,000 Hours lists artificial sentience and suffering risks among its "similarly pressing but less developed areas", alongside its list of the top 8 most pressing world problems.
  • There's some relevant work on this topic by Sentience Institute, Future of Humanity Institute, Center for Reducing Suffering, and others, but room for much more. Yesterday someone asked on the Forum, "How come there isn't that much focus in EA on research into whether / when AIs are likely to be sentient?"
  • A month ago, people got excited about the FLI open letter: "Pause giant AI experiments".

Now, researchers from the Association for Mathematical Consciousness Science have written an open letter emphasising the urgent need to accelerate research in consciousness science in light of rapid advances in artificial intelligence. (I'm not affiliated with them in any way.)

It's quite short, so I'll copy the full text here:

This open letter is a wakeup call for the tech sector, the scientific community and society in general to take seriously the need to accelerate research in the field of consciousness science.

As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are living through an exciting and uncertain time in the development of artificial intelligence (AI) and other brain-related technologies. The increasing computing power and capabilities of the new AI systems are accelerating at a pace that far exceeds our progress in understanding their capabilities and their “alignment” with human values.

AI systems, including Large Language Models such as ChatGPT and Bard, are artificial neural networks inspired by neuronal architecture in the cortex of animal brains. In the near future, it is inevitable that such systems will be constructed to reproduce aspects of higher-level brain architecture and functioning. Indeed, it is no longer in the realm of science fiction to imagine AI systems having feelings and even human-level consciousness. Contemporary AI systems already display human traits recognised in Psychology, including evidence of Theory of Mind [2].

Furthermore, if achieving consciousness, AI systems would likely unveil a new array of capabilities that go far beyond what is expected even by those spearheading their development. AI systems have already been observed to exhibit unanticipated emergent properties [3]. These capabilities will change what AI can do, and what society can do to control, align and use such systems. In addition, consciousness would give AI a place in our moral landscape, which raises further ethical, legal, and political concerns.

As AI develops, it is vital for the wider public, societal institutions and governing bodies to know whether and how AI systems can become conscious, to understand the implications thereof, and to effectively address the ethical, safety, and societal ramifications associated with artificial general intelligence (AGI).

Science is starting to unlock the mystery of consciousness. Steady advances in recent years have brought us closer to defining and understanding consciousness and have established an expert international community of researchers in this field. There are over 30 models and theories of consciousness (MoCs and ToCs) in the peer-reviewed scientific literature, which already include some important pieces of the solution to the challenge of consciousness.

To understand whether AI systems are, or can become, conscious, tools are needed that can be applied to artificial systems. In particular, science needs to further develop formal and mathematical tools to model consciousness and its relationship to physical systems. In conjunction with empirical and experimental methods to measure consciousness, questions of AI consciousness must be tackled.

The Association for Mathematical Consciousness Science (AMCS) [4], is a large community of over 150 international researchers who are spearheading mathematical and computational approaches to consciousness. The Association for the Scientific Study of Consciousness (ASSC), [5], comprises researchers from neuroscience, philosophy and similar areas that study the nature, function, and underlying mechanisms of consciousness. Considerable research is required if consciousness science is to align with advancements in AI and other brain-related technologies. With sufficient support, the international scientific communities are prepared to undertake this task.

The way ahead
Artificial intelligence may be one of humanity’s greatest achievements. As with any significant achievement, society must make choices on how to approach its implications. Without taking a position on whether AI development should be paused, we emphasise that the rapid development of AI is exposing the urgent need to accelerate research in the field of consciousness science.

Research in consciousness is a key component in helping humanity to understand AI and its ramifications. It is essential for managing ethical and societal implications of AI and to ensure AI safety. We call on the tech sector, the scientific community and society as a whole to take seriously the need to accelerate research in consciousness in order to ensure that AI development delivers positive outcomes for humanity. AI research should not be left to wander alone.

Footnotes and signatories are at the original post. There's been some news coverage, e.g. by the BBC, which is where I heard about it. I expect it would be positive if people working at relevant or adjacent research institutes also signed the open letter.


Notably, it seems that Yoshua Bengio is one of the signatories (he is an extremely prominent AI researcher, one of the three who won a Turing Award for their work on deep learning).
