Cognitive capacity in the age of AI

Crosspost from Substack: https://open.substack.com/pub/alexbaxter1/p/the-doorman-fallacy?r=7m9mmg&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
The narrative of AI optimisation promises a future in which artificial intelligence absorbs our drudgery and returns us, liberated, to the things that matter. The vision is compelling. We offload the routine and mundane, freeing ourselves to pursue richer, more meaningful lives. Time becomes abundant, and cognitive resources once consumed by the trivial are redirected toward creativity, innovation, and personal fulfilment. It is an appealing narrative, one that aligns with humanity's deep-seated desire to transcend the ordinary.

But this narrative conceals something that deserves closer scrutiny. Is the removal of basic tasks necessarily an unqualified good? What if these activities serve a deeper function, not merely as obstacles to creativity, but as part of the very conditions that make creativity and meaning possible?

Basic, everyday tasks such as cleaning, organising, and maintaining may appear cognitively simple, but they are not neurologically neutral. They structure attention, provide rhythm to daily life, and anchor individuals in a sequence of tangible, embodied actions. These activities often require low-level, sustained engagement rather than intense focus, creating a mental space in which thoughts can wander, recombine, and incubate.

In neurological terms, this state of low-level engagement activates the Default Mode Network (DMN), a network of interconnected brain regions that comes online during rest and during internal thought processes such as daydreaming, reminiscing, planning, and reflecting. Broadly speaking, it is the neural opposite of hyper-focusing on an external task. It is precisely in these DMN-activated moments, while washing dishes or folding clothes, that creative insights often emerge. This is not a neural space we can afford to consider expendable. To remove such tasks entirely may be to disrupt a subtle but vital cognitive ecology.

This concern becomes clearer when viewed through what Rory Sutherland calls the Doorman Fallacy. In his 2019 book Alchemy, Sutherland describes a pattern of thinking in which a function is defined too narrowly in the pursuit of cost efficiency, leading to the unintended destruction of value. He offers the example of a consultant observing a doorman at a luxury hotel. The consultant identifies that the primary function of the role is to open and close the door and accordingly recommends replacing the doorman with an automatic sliding door. On this narrow definition, the recommendation appears rational. Yet it eliminates a host of less measurable but deeply significant functions, among them security and deterrence, hospitality, status signalling, and the cultivation of a personalised customer experience.

Two biases underpin this fallacy. The first is measurability. In the face of complexity, we tend to privilege what can be easily measured, using it as a proxy for the whole, and in doing so undervalue or entirely exclude aspects that resist quantification. The second is reductionism. We are drawn to reduce human functions to discrete, singular tasks, when in reality they are composed of layered, interdependent processes. The reduction, in this sense, leads to destruction, not because efficiency is misguided, but because it is pursued with an incomplete account of value.

When this framework is applied to artificial intelligence, the parallels become difficult to ignore. At a surface level, we can see how roles might be reductively defined. A software engineer becomes a producer of code outputs. A therapist becomes a system of call-and-response clarification. But at a deeper level, the concern extends beyond professions to cognition itself.

Increasingly, AI systems are being used to think, plan, summarise, and strategise, effectively outsourcing elements of our intellectual labour. In doing so, we risk reducing thinking to its narrowest functional form, a kind of cognition stripped of the wandering, associative processes that give it depth. By delegating our routine cognitive labour to AI, we inadvertently remove the conditions under which the DMN activates, and the quiet generative work it performs.

The concern, then, is not that optimisation is inherently harmful, but that it may carry hidden trade-offs. Just as physical inactivity can erode bodily strength, cognitive passivity may erode certain forms of mental agility. Creativity, often imagined as a high-level, almost mystical faculty, may in fact depend on a foundation of more mundane processes, among them attention, memory, pattern recognition, and the slow accumulation of experience through repeated engagement with the world.

Seen in this light, the Doorman Fallacy offers more than a critique of business efficiency. It becomes a lens through which to examine the trajectory of human cognition in an age of intelligent machines. If we define thinking too narrowly, if we reduce it to outputs, answers, or optimised solutions, we may inadvertently strip away the very processes that give it depth, flexibility, and originality.

We may then find ourselves in a paradoxical position where we possess more time to be creative yet lack the underlying cognitive conditions that make creativity possible. It would be akin to hiring a cleaner to save time for writing, only to discover that the act of cleaning was precisely what allowed one's thoughts to settle enough to write.

This is not a case against AI, nor a rejection of optimisation. It is a case for knowing which tasks are genuinely expendable, and which, despite their apparent simplicity, constitute the formative conditions of human cognition.

In the end, the promise of optimisation must be balanced against the preservation of capacity. The cognitive labour that appears most expendable, that which quietly scaffolds our thinking, may be precisely what we cannot afford to lose. What we lose without noticing, we may not recover.
