Cross-posted from my blog: https://statsanddata.wordpress.com/2026/01/28/is-machine-consciousness-impossible/
I was reading with great fascination an article on AI consciousness by the neuroscientist Anil Seth. The article challenges the assumption that consciousness can be reproduced by implementing the computations inside the brain, a hypothesis known as computational functionalism that forms the basis of most discussions of consciousness in artificial agents. Instead, Prof. Seth argues that consciousness is substrate-dependent: it is not merely the computations that matter but how various structures in the brain implement them, and the relevance of the other functions that these structures perform.
There is a lot in there to unpack, and that's what I intend to do with this post. First, though, I want to note that as someone who has maintained a long-standing interest in the unresolved questions around consciousness, I was very surprised to be encountering this substrate-dependent view for the first time. It may sit outside the current consensus, but its logic is so inherently sound that it immediately felt like the missing piece of the puzzle.
In fact, Seth lays out three different arguments to make the case: first, that the analogy of the brain as a computer is inaccurate; second, that there are computations besides the ones that computers can do (Turing computations) that may be relevant for consciousness; and third, that life may be a prerequisite for consciousness. In my reading, there is some overlap across these arguments, some ideas have been mixed up, and there are quite a few claims I don't agree with. I'll try to explain, interpret and analyze these positions based on my own understanding and what I have learned independently.
Limitations of the brain-as-a-computer metaphor
One of the recurring statements I came across in discussions on this topic was the emphasis on treating a metaphor as just that and, conversely, the dangers of regarding it as a genuine equivalence. We've all heard the analogy of the brain as a computer or an information processing system. Instead of circuits, memory chips and bit manipulations, the brain uses neurons and action potentials to carry out complex computations on its input, be that from the optic nerve, the auditory nerves or the peripheral muscles. If we probe the brain and determine all the interactions between neurons (in effect, infer the connectome) and combine that with our understanding of the dynamics of signal processing across the various components, then we should, in principle at least, be able to implement the same in a machine. The standard argument goes on to claim that reproducing the sequence of processing steps (aka computations) of the brain would lead to the emergence of consciousness in artificial systems[1].
In fact, it is this computational functionalist perspective that has led people to believe that we will eventually be able to attain consciousness in something like artificial neural networks (ANNs), because we can simulate the interactions and the computations, both of which are finite in number.
Yet, this assumes a massive leap of faith—namely, the unproven insistence that consciousness is a guaranteed byproduct of the computations. We know for a fact that there is a whole lot going on in the brain (in fact even in a single neuron) that is unrelated to a simplistic picture of discrete computational steps:
Brain activity patterns evolve across multiple scales of space and time, ranging from large-scale cortical territories down to the fine-grained details of neurotransmitters and neural circuits, all deeply interwoven with a molecular storm of metabolic activity. Even a single neuron is a spectacularly complicated biological machine, busy maintaining its own integrity and regenerating the conditions and material basis for its own continued existence.
Unlike ANNs, where there is a clear separation between the computations (and the model architectures) and the underlying hardware that implements them, there are no clear boundaries in the brain demarcating the processing of signals from the machinery that executes it.
In fact, it's a bit reductive even to talk about how the brain functions in terms of simple input-output patterns. Yes, there are sensory and motor neurons that carry signals from various tissues to the brain and vice versa, but that is not an isolated, standalone process that can be abstracted with a black-box conceptual model of input-output pairs. At the very least there are such things as brain states, physical and mental (presumably the latter determined entirely by the former), and it would be naive to assume that conscious experience is independent of the interaction between the processing of sensory signals and those specific states. These states have no direct analogue in an algorithm that aims to reproduce the relationship between an input-output pair. In other words, even if we were to implement a machine that could simulate these input-output neuronal signals in isolation, it is far from obvious that the machine would be conscious or have any intentionality.
This actually tracks our intuition quite well. To cite a common example: if we find ourselves on a basketball court with ball in hand and take aim at the net, we are certainly not calculating the angles and projectile kinematics to determine the direction and force needed to land the ball in the net. Nonetheless, if we had to develop an algorithm to achieve the same outcome, the most direct approach would be to use the projectile equations to obtain the required parameters. And that distinction in process would apply to a whole range of tasks on which a machine can reach human or superhuman performance. Thinking along these lines lends support to the view that how the computation occurs, as opposed to merely its final outcome, is relevant to consciousness.
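To make the contrast concrete, here is a rough sketch (in Python, with made-up free-throw-like numbers) of what the "algorithmic" route would look like: solving the projectile equation for the launch speed, given a horizontal distance, a height gain and a launch angle. No human shooter consciously does anything remotely like this.

```python
import math

def required_speed(d, h, theta_deg, g=9.81):
    """Initial speed needed for a projectile launched at angle theta_deg
    to pass through a point a horizontal distance d away and h above the
    release point. Derived from the trajectory equation
    y = x*tan(t) - g*x^2 / (2 * v^2 * cos^2(t))."""
    t = math.radians(theta_deg)
    denom = 2 * math.cos(t) ** 2 * (d * math.tan(t) - h)
    if denom <= 0:
        raise ValueError("target unreachable at this launch angle")
    return math.sqrt(g * d ** 2 / denom)

# Illustrative numbers only: rim ~4.2 m away, ~1.05 m above the release point.
v = required_speed(d=4.2, h=1.05, theta_deg=50)
```

The point is not the formula itself but that the machine's path to the outcome (explicit kinematics) is nothing like the shooter's (learned sensorimotor prediction).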
This claim, that consciousness depends on the details of the computation and not just the final output, significantly complicates the project of developing consciousness in artificial systems. Part of the problem is not knowing exactly what is required. As long as we worked with the assumption that recreating the brain's responses to a specific set of inputs would see the emergence of consciousness, there was some definiteness about what needed to be achieved. But once we discard that and concede that the type of computational process is a critical component of consciousness, the only way to achieve artificial consciousness is to understand how exactly the brain works. And our knowledge of that remains fairly limited, certainly at the level of detail needed to translate it into an algorithm.
Is the brain a Turing machine?
But we need to back up a little here. We have been throwing around the word computation as though it were a universally well-understood concept. While most people have some degree of intuition for it, some of the arguments made in the article hinge on its precise definition.
By computation we mean being able to algorithmically compute some function f(x). According to the Church-Turing thesis, there is a universal class of functions, known as Turing-computable functions, that can be computed by some computational system, with the Turing machine being the classic example. The Universal Turing Machine (UTM) is a conceptual abstraction that can simulate any other Turing machine; hence any algorithm that can be implemented by a computer can be mapped onto one that can be solved using a UTM.
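To make "a Turing machine computes a function" concrete, here is a minimal sketch in Python: a one-state machine with an explicit tape, head and transition table that inverts a binary string. The machine and its encoding are my own toy example, not anything from Seth's article.

```python
def run_tm(tape_str, blank="_"):
    """Minimal Turing machine: in state 'flip', swap 0 <-> 1 and move
    right; halt when the blank symbol is reached."""
    rules = {
        ("flip", "0"): ("1", +1, "flip"),   # (write, move, next state)
        ("flip", "1"): ("0", +1, "flip"),
        ("flip", blank): (blank, 0, "halt"),
    }
    tape = list(tape_str) + [blank]
    head, state = 0, "flip"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip(blank)
```

Everything a laptop does can, in principle, be re-expressed as a (vastly larger) table of this kind; that is the sense in which all ordinary programs are Turing computable.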
Now, by definition, all algorithms that can be run on a computer are Turing computable. What about the brain? Can all the processing that leads from input to output inside the brain be represented as a Turing-computable function? Strictly speaking, the answer is probably no, because Turing-computable functions deal with discrete, countable inputs, whereas the dynamics of the brain involve continuous signals (an uncountable set).
However, that is more of a technical objection, because in practice we can always simulate everything that happens in continuous space to an arbitrary degree of approximation using a countable set. One of Seth's claims is that this gap between the processing in the brain and what can be achieved using a computer is germane to consciousness. I don't quite agree with that objection.
In fact, the broader point made by Seth is the possibility that there are other, non-Turing computations occurring inside the brain. It is not entirely clear what he means by that, but he cites dynamical systems, electromagnetic fluxes and neurochemical processes that may all be relevant for consciousness. As we've already seen, these processes may play a critical role in consciousness, but the question of whether they are Turing computable is tied to the question of whether the laws of physics are Turing computable.
And that question is hotly debated, but much of it boils down to the limitation of Turing computability to discrete, countable input sets, whereas physical laws and variables are continuous in nature. But, as described earlier, we can in principle approximate the continuous, differentiable laws of physics to any desired degree of accuracy in terms of Turing-computable functions, and while that may leave behind some residual stochasticity[2], it seems a stretch to argue that consciousness hinges on that.
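The approximation point can be made concrete with the simplest possible example: forward-Euler integration of the continuous law dx/dt = -x. The discrete, countable update rule approaches the continuous solution as the step size shrinks; for this first-order method, ten times as many steps cuts the error by roughly a factor of ten. (A toy sketch of discretization, not a claim about brain dynamics.)

```python
import math

def euler_decay(x0, t_end, n_steps):
    """Forward-Euler integration of dx/dt = -x: a continuous physical
    law replaced by a countable sequence of discrete update steps."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += h * (-x)
    return x

exact = math.exp(-2.0)                              # true solution at t = 2
err_coarse = abs(euler_decay(1.0, 2.0, 100) - exact)
err_fine = abs(euler_decay(1.0, 2.0, 1000) - exact)
```

Nothing about the dynamics being continuous prevents a Turing-computable procedure from tracking it as closely as we care to pay for.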
Computation vs Dynamical simulation of the brain
There is another important distinction here, one that is touched upon but slightly muddled in the article. When we draw an analogy between the brain and a computer, we are usually referring to the specific sequence of steps that transforms an input signal. Much of the discussion above, and the computational functionalist position in particular, assumes that perspective. Yet there is a more fundamental and more direct way of recreating all the processes of the brain: simulate the entire system.
Here, simulating the brain means that we start at the level of cells or organelles, represent all the dynamical properties as variables (dendritic features, cell membrane configurations, ion channels, transport dynamics and so on) and let them evolve according to physical laws, in a manner similar to what we do when predicting weather patterns.
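For a flavour of what "represent the dynamical properties as variables and let them evolve" means in practice, here is a deliberately crude sketch of a leaky integrate-and-fire neuron, about the simplest dynamical neuron model there is and already a brutal simplification of real biophysics. The parameter values are illustrative textbook-style numbers, not measurements.

```python
def lif_spike_count(i_amp, t_total=100.0, dt=0.1):
    """Toy leaky integrate-and-fire neuron.
    Membrane potential in mV, time in ms, input current i_amp in nA."""
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
    tau, r_m = 10.0, 10.0   # membrane time constant (ms), resistance (MOhm)
    v, spikes = v_rest, 0
    for _ in range(int(t_total / dt)):
        # Evolve the state variable by one discrete time step (Euler)
        v += dt * (-(v - v_rest) + r_m * i_amp) / tau
        if v >= v_thresh:   # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes
```

Even this caricature makes the scaling problem obvious: a faithful simulation would need this kind of state evolution, at far finer grain, for every compartment of every one of ~86 billion neurons, plus glia, chemistry and metabolism.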
There are, however, two problems with this. The first is that the number of variables involved would be so large that it is computationally infeasible to do this with any meaningful degree of accuracy. The second, and more fundamental, is that the simulation is not the real thing: simulating the chain reaction of a nuclear explosion does not cause an actual catastrophic blast. Everything from velocity and energy to pressure, temperature and radiation is just a variable in a computer program, updated according to physical laws.
It is, however, possible that we could synthetically engineer a brain-like system, one that replicates the structure and functioning of the brain at a microscopic level. That could be an avenue for consciousness to emerge, but we are very far from doing so for even the simplest of organisms.
Consciousness and the passage of time
Another distinction between the brain and computing systems that really stood out for me is the temporal dimension of computation. As far as an algorithm goes, there is no time scale attached to it (I'm not referring to complexity here but to actual physical time); the duration of execution depends on the hardware and other considerations such as parallel processing. Yet for an organism, time is an essential part of physical reality, and every segment of neuronal processing is associated with some relatively predictable duration. It is entirely conceivable that consciousness depends on that characteristic: if we could hypothetically speed up our brain by 100x, would we still have consciousness as we understand it?
More importantly, as Seth might argue: is the question of speeding up our brains by 100x even well-posed? The processing inside the brain is inseparable from the basic properties and functioning of its constituent elements, like neurons, and these are determined by the neurobiological context in which they operate and the biochemical and electrical configurations they assume. If it is physically impossible to speed up such processes, the question does not make a whole lot of sense. And even if we somehow overcame this limitation, it would seem that first-person subjective experience would change considerably.
All of this further strengthens the idea of the inseparability of the substrate from the function inside the brain.
Is life necessary for consciousness?
Could it be that being alive is a prerequisite for any type of consciousness? This point is raised in the article, and it is well worth considering, even if the author seems more uncertain about it than about the other arguments.
Part of the problem here is that we find consciousness only in (some) living organisms, and all living organisms have assumed their current forms through biological evolution from a common ancestor. As our current definition of life[3] is restricted to this well-defined set, we can only speculate about what life could mean in more general terms, and unfortunately we are likely to be biased by extrapolating the distinctive characteristics of the only life we are familiar with on our planet. For example, if we are to extend the definition, why should we not call LLMs alive? I imagine the argument against that claim would inevitably refer to our present definition in one way or another!
Still, that does not invalidate the point raised about the role of what we understand to be life in the emergence of consciousness. The central argument here is the hypothesis that our brains work not as a standard computational model that determines the output for a given input, but rather as a sequence of error corrections, prompted by incoming sensory signals deviating from the predictions that the brain is constantly generating.
On its own, I don't find this argument very convincing, because error correction, while certainly distinct from simplistic notions of input-output functional mapping, can also be represented as a sequence of computations. That in and of itself is not sufficient to necessitate life. However, there is more implied here, and it revolves around the suggestion that this type of error correction is inextricably tied to our subjective experiences, which in turn depend not just on the brain in isolation but on the entire body, and in particular its role in generating sensory signals.
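To underline why error correction alone doesn't seem to necessitate life: a prediction-error-minimisation loop can be written down as a perfectly ordinary computation. Here is a toy sketch (my own, not Seth's formulation) in which a scalar "prediction" is repeatedly nudged toward noisy observations by a fraction of the prediction error.

```python
import random

def predictive_update(signal, n_steps=200, lr=0.1, noise_sd=1.0, seed=0):
    """Toy prediction-error-minimisation loop: the prediction is moved
    toward each noisy observation by lr times the prediction error."""
    rng = random.Random(seed)   # seeded for reproducibility
    pred = 0.0
    for _ in range(n_steps):
        obs = signal + rng.gauss(0.0, noise_sd)
        error = obs - pred      # prediction error
        pred += lr * error      # error-driven correction
    return pred

final = predictive_update(signal=5.0)   # converges toward 5.0
```

Nothing here requires metabolism or a body; the extra work of the argument has to be done by the claim that, in organisms, this loop is inseparable from the material processes of staying alive.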
Here he describes this in a quasi-poetic way:
This drive to stay alive doesn’t bottom out anywhere in particular. It reaches deep into the interior of each cell, into the molecular furnaces of metabolism. Within these whirls of metabolic activity, the ubiquitous process of prediction error minimization becomes inseparable from the materiality of life itself. A mathematical line can be drawn directly from the self-producing, autopoietic nature of biological material all the way to the Bayesian best-guessing that underpins our perceptual experiences of the world and of the self.
This is by no means a water-tight argument; if anything, it is a little vague. Despite that, I find it rather persuasive (even if I would be hard pressed to justify why). Perhaps its power has less to do with the quality of the evidence in support of it and more with the implication that there is a whole dimension of considerations that may be relevant for consciousness, one that most discourse on this topic seems to avoid entirely.
After all, we have not managed to reproduce anything resembling even the simplest form of life in an artificial setting. The connectome of a nematode (302 neurons) was fully mapped in 1986, and the same was done for the fruit fly (~140k neurons) in 2024. And yet we have not created any artificial agent that can simulate the behavior and life processes of these organisms. That is hardly a slam-dunk argument against artificial consciousness, but it is a reminder that there is a lot more complexity involved than we may glibly assume in our present quest for superhuman AI or, worse, the Black Mirror-esque ambition of having our brains exported onto a chip.
- ^
The entire discussion here (and the article by Anil Seth) assumes a strictly physicalist/materialist conception of consciousness. In other words, we are not entertaining any supernatural notions as to the origins of consciousness in humans and/or higher organisms.
- ^
Classical/non-quantum stochasticity. There is of course randomness from quantum mechanics, and there are arguments (not convincing, in my view) about its role in consciousness, but that's a different discussion altogether.
- ^
NASA definition: “a self-sustaining chemical system capable of Darwinian evolution”
