What do we mean by the word ‘Intelligence’? Can it even be Artificial?
Cross-post from Substack: https://open.substack.com/pub/alexbaxter1/p/a-different-kind-of-smart?r=7m9mmg&utm_campaign=post&utm_medium=web
In 1964, when asked to define obscenity, US Supreme Court Justice Potter Stewart declined to offer a strict legal definition, saying instead, “I know it when I see it.” Though the case concerned censorship, the standard has broader application. Where intuition and shared understanding are doing the real work, precise definitions may obscure as much as they reveal. In the current discourse around artificial intelligence, the word “intelligence” has become one such concept: widely used, rarely examined.
What is meant by that pairing of words? When we speak of intelligence in the ordinary sense, we tend to mean something cognitively rich, nuanced, and creative, something embodied and experiential. The term “artificial intelligence” appears to refer to something quite different, something far more technical. The word “intelligence” is shared, but the underlying phenomenon may not be.
This distinction is illuminated by a basic adaptation of a famous thought experiment known as Mary’s Room, proposed by the philosopher Frank Jackson in 1982. In the original scenario, Mary is a brilliant scientist who has spent her entire life in a black-and-white room. She knows everything there is to know about colour from a scientific perspective: the wavelengths of light, the neurobiology of vision, the physics of perception. But she has never actually experienced colour. When she finally leaves the room and sees red for the first time, does she learn something new? Intuitively, yes. She learns what it is like to see red.
In some respects, artificial intelligence resembles Mary in her room, but in a more radical sense. An AI system does not merely lack experience of colour; it lacks experience altogether. It does not encounter the world but receives information about it indirectly, through data, text, images, and human inputs. It can accumulate vast quantities of information without ever undergoing experience. In this sense, it can “know” a great deal. But is this consistent with what we ordinarily mean by “intelligent”?
At this point a clarification is necessary. Intelligence and consciousness are not the same thing, and conflating them too quickly risks undermining the argument. It is possible to imagine a system that is conscious but not particularly intelligent, and the inverse: something highly capable without any accompanying inner life. The question being pressed here is a narrower one: whether intelligence in its fullest and most meaningful sense can be cleanly separated from experience at all. Not whether capability requires consciousness, but whether what we ordinarily mean when we call something intelligent already contains an experiential dimension we rarely make explicit. If it does, then a system that lacks experience entirely may be capable, sophisticated, even maximally useful, without being intelligent in the sense we actually care about. The distinction matters because the word carries moral and conceptual weight that the technical definition quietly strips away.
Even granting AI systems their increasingly sophisticated capacities, their mode of operation remains fundamentally different from our own. Contemporary large language models break a prompt into tokens, process those tokens through a neural network trained on vast quantities of text, and generate, one token at a time, an output that statistically resembles the kinds of responses humans tend to produce. A contested but useful heuristic, coined by Emily Bender and colleagues in 2021, describes them as stochastic parrots. While their outputs can be impressively fluent and insightful, they are best understood as advanced pattern completion rather than as understanding.
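To make “pattern completion” concrete, consider a deliberately toy sketch: a hypothetical bigram model, vastly simpler than any real LLM but structurally analogous in one respect, in that it generates continuations purely by sampling from observed statistics, with no representation anywhere of what any word means. The corpus and function names here are illustrative, not drawn from any real system.

```python
import random
from collections import defaultdict, Counter

# Toy "training": count which word follows which in a tiny corpus.
# Nothing about meaning is stored anywhere, only co-occurrence counts.
corpus = "the cat sat on the mat the cat chased the ball".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=5):
    """Autoregressively sample a statistically plausible continuation."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat"
```

Note that the same sampling can just as readily produce “the cat chased the mat”, a perfectly fluent sequence that never appeared in the corpus and corresponds to nothing, which anticipates the point about confabulation below.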
This difference becomes visible in AI confabulation, commonly called hallucination. While humans misremember and confabulate too, as with the well-documented unreliability of eyewitness testimony, in AI systems confabulation is a structural feature, not a failure. A neural network does not understand the world but predicts likely outputs based on patterns derived from its training data. When it generates a false citation or fabricates a plausible-sounding fact, it is not failing in the way a human fails. It is producing precisely the output most statistically consistent with its inputs. The gap between pattern prediction and world-directed understanding becomes visible at exactly these moments.
This points toward a deeper question about sentience. Consider a simple analogy. When I throw a ball, physical forces act upon it: gravity, air resistance, the aerodynamic effects of its spin. The ball moves through the air according to these forces. But there is nothing it is like to be the ball. It undergoes physical processes without experience. Now consider a child chasing that same ball. The same physical forces act upon the child, yet there is something it is like to be the child. The child has sensations, perceptions, and emotions, what philosophers call qualia. An inner life accompanies the physical motion.
Sentience, in this broad sense, refers to the capacity for experience and the existence of a subjective point of view. It is the bedrock of what we typically mean by consciousness. Whatever else consciousness may involve, it begins with there being something that it is like to be the entity in question.
Artificial intelligence, as we currently understand and construct it, lacks this dimension entirely. There is nothing it is like to be a language model. No inner life accompanies its computations. It does not encounter the world but processes symbols. In this respect, it is closer to the ball than to the child.
The obvious rebuttal is that this distinction is beside the point. If a system produces intelligent outputs, the question of inner experience is irrelevant to the question of intelligence. But this response overreaches. By the same logic, a thermostat that registers temperature and adjusts a heating system is, in some minimal sense, processing information and responding to its environment. We do not call this intelligent because we recognise, intuitively, that something is missing. The rebuttal simply moves the threshold rather than resolving the question. As systems grow more sophisticated, the outputs become more convincing, but the structure remains the same. We are being asked to infer an inner life from external behaviour. The philosopher John Searle illustrated this in 1980 with his Chinese Room thought experiment: a system can manipulate symbols according to rules and produce correct outputs without understanding a single thing those symbols mean. Fluency is not comprehension.
Sentience grants a form of knowledge that is embodied, experiential, and irreducibly first-person. An AI system, by its very construction, cannot access it. Just as Mary learns something new upon seeing red, there are aspects of reality that cannot be transmitted through indirect description alone. They must be lived.
If intelligence, in its fullest sense, includes this embodied dimension, then AI lacks something essential. And if Justice Stewart’s standard is applied here, one might reasonably conclude: I don’t see it.
The outputs of AI systems may convincingly mimic the forms of human thought and speech. They may surpass human performance across many cognitive domains. Yet the underlying mode of operation remains alien to our own. The word “intelligence” implies conceptual continuity where the reality may be radical discontinuity. If the difference proves as profound as it appears, we may eventually require a different term altogether, one that captures what these systems genuinely are rather than what they appear to resemble. If we do consider AI to be a form of intelligence, it is a fundamentally different kind. The distinction is not merely academic. What we call a thing shapes what we expect it to become.
