
Or, why do we care if a system is conscious?

I think the answer lies in a basic instinct of ours: we just care more about systems that are conscious like us than about systems that are not. Take “care” to mean “are interested in”, or take it to mean “give moral relevance to”, but the outcome is that conscious systems matter to us.

And where does this come from? I suspect that it’s a kind of double inference: first, we experience the world, and we infer that systems similar to us (other humans) also experience the world; second, we care about ourselves, so we have an intuitive feeling that we should also care about other beings that experience the world (perhaps a convoluted way to say that we have empathy).

However, each person has a different belief about how other beings experience the world as compared to themselves, and this belief will shape their moral attitudes with regard to such beings, leading to all sorts of behavior towards them.

I should mention that, precisely because what we do here is an inference, the exact definition of “conscious” doesn’t really matter for now. We care about systems that are like us in the sense of experiencing the world. Since this value is rooted in an intuition rather than in a rationalized thought, the exact meaning of “being conscious as we are” is not critical here.


Solving the mystery of consciousness is an exciting endeavour in its own right, but in today’s world, the moral angle of this discussion is probably the most pressing one. We should care about which systems are conscious because we have a moral duty with respect to those systems. We want to know which systems are conscious and to what extent so that our actions are aligned with our intuitive value of caring about conscious systems like us.

As a simple example, imagine two people in a coma in the hospital, only surviving thanks to advanced life support. The resources in the hospital are scarce and the doctor needs to choose which one will keep the support and which one won’t. All things being equal, if the doctor has access to a future “test for consciousness” (an improved version of the command-following test), and this test comes back positive for one subject but not for the other, the choice seems pretty straightforward. Clearly, consciousness matters here.

In the recent debate about AI consciousness, why do we care if an AI system can be conscious or not? I think the main concern here is our moral obligation towards AI systems. There are two scenarios that we want to avoid:

  • if some AI systems of the future are conscious in a way that is similar to us but we treat them as if they were not, we might bring about another moral catastrophe, as was the case with slavery or the denial of women’s rights,
  • if some AI systems of the future are not conscious in a relevant way but we treat them as if they were, we will eventually make decisions that protect AI systems and harm other beings (including humans). After all, there are always choices to be made (more on this later).

In both cases, having more certainty about the conscious status of such systems would allow us to make choices that are aligned with our values. And this is something we strive for, hence the importance of a science of consciousness.

Now, it is clear that the degree of moral duty that humans feel towards other conscious systems (including other humans) is as diverse as it can be, and has a strong cultural and contextual component. We are far from having reverence for consciousness. Instead, we seem to intuitively prioritize some systems over others on the basis of personal (and probably culturally inherited) heuristics, such as intelligence, affinity with a species, and so on.

The question here is whether those cultural intuitions can be replaced by more principled, science-based arguments. Recent efforts have been made in this direction (see the recent Rethink Priorities report), and I think that a better scientific understanding of consciousness is fundamental to improving our moral intuitions towards systems different from us.

This will possibly lead to a “gradation” in our moral responsibilities, with some beings deserving more attention than others. This is especially important when one considers that we have limited resources and must in some way prioritize our actions (as awful as it sometimes looks). The result is an expanded moral circle where the systems inside the circle are not given equal relevance. Given our limited resources and inevitable footprint in the world, the question is not whether or not we prioritize beings in our moral circle, but how we do it.


So far, I have purposely employed the term consciousness in a loose way, hoping it would align with most people’s intuitions. I have sometimes used the C-word as something that can be present or absent, other times as something that comes in degrees. But I will try to be more rigorous from now on, aware that the definition of consciousness is a controversial affair and that each person who has reflected enough on it has a favorite ontological status for consciousness.

In particular, the idea of a “gradation” in consciousness (as in the phrase “as conscious as we are”) might be well understood in everyday language, but becomes less attractive when inspected through a philosophical lens. Indeed, for many people, consciousness is a feature of particular systems (“I’m conscious, the table is not”), an idea popular among physicalists. Others think about consciousness as an all-pervading feature of the universe that is always there, either as the only substance that exists (idealism), or as another fundamental property of our universe (panpsychism). Therefore, the idea of developing our moral intuitions towards other systems based on “how conscious they are” will not be compatible with most philosophical views about consciousness.

The concept of structured consciousness, borrowed from the Kolmogorov theory of consciousness (KT), will come in handy for the rest of the argument, since it’s a quantifiable property (in principle) and can be easily accommodated in most ontological views of consciousness. In brief, our experience has an organization to it, and this organization spans many dimensions (temporal, spatial, hierarchical, counterfactual). We can call this the structure of experience. Whether you think of consciousness as an emergent feature of some specific systems, or as a ubiquitous property of the universe, there is a good case to be made that not all conscious experiences have the same structure to them.

KT proposes that the fact that living beings like us have developed such a structure stems from our limitations as bounded systems having to model a complex environment in order to survive. The structure of the experience is related to the mathematical (algorithmic) properties of the models that an “agent” needs to run to predict the external environment. This idea is strongly aligned with predictive coding, and with similar theories like the Free Energy principle, where hierarchical generative models shape the experience of the agent.

It is worth noting that I’m treating this property, the structure of experience, as a feature that can be quantified, which is essential for the development of a solid scientific foundation for our moral priorities. Here I will not delve into the details of how one could quantify structure formally; I will just mention that Kolmogorov complexity has been proposed in KT as a way to compute the amount of structure in a particular model (related to a particular experience), and that recent work has tried to formalize this further using other mathematical tools.
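To give a flavor of what such a quantification could look like, here is a minimal, purely illustrative sketch in Python of the standard trick of approximating Kolmogorov complexity, which is itself uncomputable, by the length of a lossless compression. The function name, the toy byte strings, and the reading of the ratio are assumptions of mine for illustration only; this is not KT’s actual formalism, which concerns the models an agent runs rather than raw strings.

    import os
    import zlib

    def compression_ratio(data: bytes) -> float:
        """Compressed length divided by original length: a crude,
        computable stand-in for Kolmogorov complexity (which is
        uncomputable). Lower ratios mean more exploitable regularity."""
        if not data:
            return 0.0
        return len(zlib.compress(data, 9)) / len(data)

    # Two toy byte strings of equal length: one highly regular, one
    # essentially random. The regular one admits a much shorter
    # description, which the compression-based proxy picks up.
    regular = b"up-down-left-right-" * 50
    random_like = os.urandom(len(regular))

    print(compression_ratio(regular))      # small ratio: very compressible
    print(compression_ratio(random_like))  # ratio near 1.0: barely compressible

How to apply anything of this sort to the generative models underlying an experience, rather than to strings, is precisely the open formalization problem referred to above.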

If it’s true that we care about beings that are “as conscious as we are”, then the above ways of quantifying the structure of experience are relevant for advancing our moral intuitions about other beings.


Some movements in animal advocacy (and also recently in AI research) have used the capacity for well-being or suffering as a proxy for our moral obligation towards conscious beings (this is the case in the Rethink Priorities report mentioned above). Indeed, this seems like a straightforward way to prioritize our moral circle, probably built upon another inference: suffering is bad for me and is better avoided, so if another being can suffer, its suffering is better avoided too. This proxy relies on the assumption that systems that are different enough have different potential for positive valence (well-being) or negative valence (suffering).

[One can be reluctant to accept that assumption, embracing the possibility that all beings have equal possibilities in terms of positive and negative valence. This would mean that an ant has the same potential for well-being as a chimpanzee. I think there are good reasons to believe that this is not the case, as I argue later.]

One of the main limitations of prioritizing systems based on their positively or negatively valenced states is that our intuitions about such states seem to be unavoidably based on our own human ideas of well-being or suffering, as is apparent in many of the metrics used in the Rethink Priorities report (anxiety-like behavior, parental care, communication). Luckily, I think there are alternatives that are less prone to anthropocentric biases and that are still scientifically sound and quantifiable, and I propose that structured experience is a good candidate for such a metric.

In particular, I suggest that the more structured the experience, the greater the capacity for positive/negative valence. As a corollary, finding out the amount of structured experience supported by different systems is key to developing better-informed moral attitudes towards those systems. How valence and structure are precisely related remains to be elucidated, but I have sketched elsewhere some intuitive ways in which this may happen.


Some may not be convinced about the existence of any relationship between the structure of consciousness and the capacity for suffering. For example, one could contend that only the experience of pain really matters (independently of the structure of such an experience), and that this is ultimately linked to anatomical features such as the number of nociceptors. I would say that even in this case we would need a robust science of consciousness to understand the relationship between the anatomical substrate and the experience of pain, especially when moving far from us in the phylogenetic tree.

Indeed, the main argument here is that a science of consciousness is important because we care about which systems have certain types of experiences, whether that means having experiences with a similar structure to ours, or feeling pain. In fact, I think that being honest about the limitations of a science of consciousness is crucial for its success, in particular by acknowledging that the science of consciousness we need today should try to address our ethical uncertainties and perhaps cannot do more than that (and perhaps cannot even do that). This means accepting that, despite our technological advances, the “mystery of consciousness” in its full grandeur, which has been a mystery for thousands of years, will probably remain so.

In closing, I will mention that there is something very “provincial” in some of the arguments presented here, e.g., “we care about systems that are conscious like us”, “we have to understand which systems can suffer under our definition of suffering”. I’m not sure how much we can do about our evolved empathy: we just seem to care more about systems that are conscious like us or that experience what we call “suffering”. But we can surely do something about our intuitions about how much other systems suffer or experience, grounding them in something less provincial than human-like overt behavior. And we have another important value, which is honesty. We can acknowledge that we will always care more about systems like us, but if we find out that other systems are more like us than we thought, we know we ought to care about them. A science of consciousness made by humans will always be about humans and will always use humans as a starting point (in fact, each of us will use “me” as a starting point). But this science can help us align our actions with our values, changing the way we treat other systems beyond our cultural and anthropocentric intuitions.
