
A Narrow Review of the Research on AI’s Negative Impacts

 

Crosspost from substack: https://open.substack.com/pub/alexbaxter1/p/the-cognitive-costs-of-artificial?r=7m9mmg&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true

 

The integration of generative artificial intelligence (AI) into educational and professional environments is outpacing regulation, and is proceeding without sufficient assessment of its long-term impacts. The limited research that does exist identifies potential negative effects on human cognitive function, in which initial efficiency gains are offset by the erosion of fundamental mental faculties.

The research highlighted below describes potential negative cognitive and psychosocial impacts that must be considered in any assessment of AI as it relates to human flourishing.

 

A note on the evidence

It is worth noting that the research described below carries meaningful limitations. None of these limitations mean the study outcomes are wrong per se, but the findings should be read as prompts for further research rather than as settled conclusions.

A few of the main limitations are:

  • Correlation - Many studies in this area conflate correlation with causation. For example, the observation that heavy AI users display lower critical thinking scores may reflect pre-existing tendencies rather than AI-induced decline. This is a crucial distinction that the research rarely resolves.
  • Baseline comparison - Historically, anxiety has accompanied every major cognitive technology advance, from writing to the calculator to the search engine. The cognitive atrophies predicted in relation to these advances have largely failed to materialise at a societal level. While often implied, the research rarely makes a convincing case for why AI should be treated as categorically different from past cases.
  • The samples - The empirical foundations of many of these claims are narrower than the conclusions they are asked to support. For example, studies involving small, WEIRD (Western, Educated, Industrialized, Rich, Democratic) samples performing artificial tasks over short timeframes may nonetheless make sweeping pronouncements about neural architecture, collective intelligence, and the future of human cognition.
  • Publication - A number of the articles referenced are preprints or have not been peer reviewed, and should not be read as proven claims. They are included here to stimulate thought in an area where research is both lagging and lacking, and as ideas that may usefully orient future inquiry.

 

Erosion of Memory and Knowledge Retention

Generative AI may serve as a "cognitive crutch" that impairs long-term memory formation through cognitive offloading (Barcaui, 2025). 

According to Barcaui (2025), when users delegate core mental work to AI they bypass "desirable difficulties" (challenges such as effortful retrieval and productive struggle) that are essential for durable learning. In a randomized controlled trial, students using ChatGPT as a study aid scored significantly lower on a surprise retention test 45 days later than a traditional study group. This suggests that while AI may increase the efficiency of initial task completion, it may also undermine the encoding processes needed for robust knowledge construction.

Atrophy of Critical Thinking and Reasoning

Frequent reliance on AI tools is strongly correlated with weaker critical thinking skills (Gerlich, 2025).

Research by Gerlich (2025), using the Halpern Critical Thinking Assessment (HCTA), found that increases in cognitive offloading mediated a negative correlation between frequent AI tool usage and critical thinking abilities, particularly in younger populations aged 17–25. This shift has been theoretically framed as the rise of "artificial cognition": the outsourcing of deliberation to external algorithms (labelled "System 3"), which can supplant both internal intuition (System 1) and internal deliberation (System 2) (Shaw & Nave, 2026).

Shaw and Nave (2026) call the key behavioural signature of this shift "cognitive surrender": the uncritical adoption of AI outputs. Their experimental data showed that participants frequently adopted faulty AI recommendations even after those recommendations had led them into error, reflecting a loss of autonomy, self-efficacy, and agency.

Attentional Fragmentation and Cognitive Overload

The pervasive presence of AI-driven digital technology contributes to cognitive overload, reducing the brain’s capacity for deep, sustained focus (Deckker & Sumanasekara, 2025). 

Deckker and Sumanasekara (2025) describe how constant digital engagement drives media multitasking, which is linked to diminished cognitive control and increased susceptibility to distraction. Additionally, Shalu et al. (2025) found that prolonged use of AI in task performance may precipitate cognitive fatigue and diminished focus, and that long-term interaction is significantly associated with mental exhaustion and attention strain.

Neural Evidence of Skill Atrophy

Electroencephalography (EEG) data suggest that AI usage may restructure the brain’s underlying cognitive architecture (Kosmyna et al., 2025).

Kosmyna et al. (2025) found that brain connectivity systematically "scales down", or reduces in synchronisation, in proportion to the amount of external support provided. In an essay-writing task, LLM-assisted groups showed the weakest neural coupling in frequency bands associated with internal attention and working memory. This neural disengagement had immediate behavioural consequences: the majority of LLM-assisted participants failed to accurately quote from their own essays. The finding suggests that AI support may lead to "deskilling" and a loss of perceived ownership over one's intellectual output.

Homogenization of Thought and Expression

The widespread adoption of Large Language Models (LLMs) risks linguistic and cognitive homogenization (Sourati et al., 2025). 

Sourati et al. (2025) hypothesise that because models are trained on large, often biased datasets, they reinforce dominant Western, high-income perspectives while marginalizing alternative cultural voices. In creative ideation tasks, AI assistance was shown to increase the semantic similarity of ideas across different users, flattening the "cognitive landscape" and producing an algorithmic monoculture of thought that favours efficiency and predictability over original, idiosyncratic expression.

Psychosocial Impacts: Loneliness and Dependency

Negative impacts extend to the psychosocial domain, where prolonged AI interaction is associated with increased loneliness and reduced real-world socialization (Phang et al., 2025). 

Phang et al. (2025) found that vulnerable individuals are more likely to develop cognitive dependence on AI. This technology-dependent interaction pattern can lead to a form of bonding in which users lose agency and confidence in their decision-making ability and come to rely on AI for practical and emotional support.

Taken individually, each of these findings is preliminary, but considered together, they raise broader questions about the direction of travel.

 

Possible Ramifications

The findings suggest that if there is a cognitive debt to be incurred by AI usage, it is not a matter of mental laziness, but a potential restructuring of human cognition. 

One possible mechanism suggested by the research is that using artificial cognition ("System 3" thinking) via Large Language Models may trigger fluency heuristics, whereby the model's confident, authoritative tone mimics the user's internal intuition, producing a kind of 'feeling of knowing'. If true, this may contribute to what Shaw and Nave (2026) term cognitive surrender: an uncritical acceptance of outputs. Preliminary EEG data from Kosmyna et al. (2025) suggest this may have a neural correlate, with the observed "scaling down" of connectivity potentially indicating reduced intellectual ownership, though the causal direction of this relationship remains unclear.

Whether these individual-level effects scale upward is an open question, but the research raises the possibility of broader societal and psychosocial risks. At a collective level, Sourati et al. (2025) hypothesise that widespread adoption of AI-generated content could risk narrowing the human conceptual space and lead to a reduction in the cognitive diversity needed for complex problem-solving. At a personal level, the reviewed studies identify patterns tentatively linking AI interaction with increased social vulnerability and a gradual erosion of personal agency, though as noted above, these associations are largely correlational. Taken together, the research suggests, rather than demonstrates, a possible trajectory in which frictionless AI tools shift users away from independent judgment. If that trajectory is real, it would warrant serious attention from researchers and policymakers alike.

Ultimately, though, any cognitive debt we incur will not be an inevitable byproduct of AI itself, but a consequence of its societal implementation. If AI continues to be integrated as a frictionless replacement for mental effort, the resulting 'deskilling' and neural disengagement may fundamentally alter human intellectual autonomy. The challenge for future research and policy is to ensure that AI serves to extend the boundaries of human thought rather than narrowing the landscape of human flourishing.

 

Reference List

Barcaui, A. (2025). ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention. Social Sciences & Humanities Open, 12, 102287. https://doi.org/10.1016/j.ssaho.2025.102287

Deckker, D., & Sumanasekara, S. (2025). A systematic review of the impact of artificial intelligence, digital technology, and social media on cognitive functions. International Journal of Research and Scientific Innovation (IJRISS), 9(3), 134-154. https://dx.doi.org/10.47772/IJRISS.2025.90300011

Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task [Preprint]. MIT Media Lab. https://doi.org/10.48550/arXiv.2506.08872

Phang, J., Lampe, M., Ahmad, L., Agarwal, S., Fang, C. M., Liu, A. R., Danry, V., Lee, E., Pataranutaporn, P., & Maes, P. (2025). Investigating affective use and emotional well-being on ChatGPT [Preprint]. MIT Media Lab. https://doi.org/10.48550/arXiv.2504.03888

Shalu, Verma, N., Dev, K., Bhardwaj, A. B., & Kumar, K. (2025). The cognitive cost of AI: How AI anxiety and attitudes influence decision fatigue in daily technology use. Annals of Neurosciences, 1–12. https://doi.org/10.1177/09727531251359872

Shaw, S. D., & Nave, G. (2026). Thinking—Fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender [Preprint]. The Wharton School Research Paper. https://doi.org/10.31234/osf.io/yk25n_v1

Sourati, Z., Ziabari, A. S., & Dehghani, M. (2025). The homogenizing effect of large language models on human expression and thought [Preprint]. University of Southern California. https://doi.org/10.48550/arXiv.2508.01491


