
On the asymmetry in how we discuss AI futures and why that matters for policy and public engagement.

Crossposted from my Substack.

AI Doom and Gloom

Catastrophe makes for a good story. Movies and TV shows about plane crashes, asteroids heading for Earth, crime sprees, global pandemics, and devastating floods grab our attention. It is probably uncontroversial to say that a movie about a plane landing safely, or new hospitals being built to improve local healthcare services, or a massive improvement in employment opportunities, would not draw the same crowd. Stories are built on conflict; it sits at the heart of everything that has ever kept us on the edge of our seats. With that in mind, it stands to reason that almost every TV show or movie about AI futures portrays such futures as bleak at best and as a significant existential threat at worst.

In Terminator, Skynet has thrown mankind into a nuclear apocalypse. In The Matrix, the machines farm humans as living batteries. In Transcendence, ASI represents an unstoppable and profound existential threat. The AI in I, Robot is dangerous in its inability to understand human values. Ex Machina’s Ava is manipulative in her exploitation of human empathy. Even films that don’t lean as hard into the “AI as threat” narrative generally aren’t especially optimistic. In Her, the focus is on how the inauthentic and one-sided nature of a human-AI relationship can expose human vulnerabilities: not existentially threatening, but hardly cheerful either.

What about documentaries? One of the most watched documentaries on AI-driven systems is Netflix’s The Social Dilemma, which was watched by 38 million households within 4 weeks of its release. The film argues that AI systems that are optimised to maximise profit can unintentionally manipulate the behaviour of human users at a global scale. Although the documentary does not explore AI futures in the same way as the movies mentioned above, it is worth considering how the opinions of its substantial viewership might have been shaped when it comes to trust in technology and hope in a technologically advanced future.

The same trend towards pessimism - or at best skepticism - about AI futures can be seen in popular non-fiction, from James Barrat’s Our Final Invention to the recent If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares. This is completely understandable. We need to take seriously the prospect that AI could represent a formidable existential threat over the coming years. I, for one, am grateful that people are doing research on how we can minimise the existential risk from AGI/ASI. We need more of this. But notice what's largely absent from this landscape: compelling visions of what we're working toward, not just what we're working to avoid.

To be clear, I am not saying we should ignore the threat looming ahead of us. I am saying that our ability to tackle that threat would be strengthened by giving due weight to the possibility of an AI future in which we manage to get everything - or at least all of the crucial things - right.

 

The Motivation Problem

What is the result of this asymmetry in the AI narrative? The Pew Research Center’s spring 2025 survey found that 34% of those surveyed were more concerned than excited about AI, with just 16% saying they were more excited than concerned (42% felt equally excited and concerned). More than twice as many people lean towards concern rather than excitement. People are right to be concerned, but the important point in this context is this: people are also right to be excited, and that message isn’t being heard as clearly. How much this lagging excitement matters depends on what motivates people. Are they more motivated by concern or excitement?

Prospect Theory, which includes the concept of loss aversion, describes a cognitive bias whereby people experience a loss as considerably more painful than an equivalent gain is pleasurable. This implies that people are more motivated by avoiding losses than by maximising gains. If this applies to the case of AI futures, then we would be right to invest most of our focus in doom narratives, aiming to motivate public interest, support, and engagement in AI safety through fear and loss aversion. If the prospect of loss is what pushes people to act, then perhaps amplifying all of the potential losses that a misaligned advanced AI could make real is our best shot. So the question remains: does loss aversion apply here?
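To put a rough number on that asymmetry: in Tversky and Kahneman’s 1992 formulation (a sketch here, using their cumulative prospect theory value function and their median parameter estimates, which vary across studies), the subjective value of a gain or loss of size x is modelled roughly as:

```latex
v(x) =
\begin{cases}
  x^{\alpha}             & \text{if } x \ge 0 \quad \text{(gains)} \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
```

With a loss-aversion coefficient λ of roughly 2.25, a loss weighs a little over twice as heavily as an equivalent gain - which is the asymmetry a doom-first communication strategy implicitly counts on.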

Kahneman and Tversky’s original 1979 paper on Prospect Theory looked at hypothetical gambling scenarios with known probabilities. There are crucial differences to consider when we compare this to AI existential risk.

Prospect Theory Research

  • Small, comparatively trivial stakes
  • Outcomes are immediately observed
  • Known probabilities
  • Personal decisions
  • Concrete outcome

AI Risk Context

  • Existential risk - huge stakes
  • Long-term future
  • Probabilities are uncertain and difficult to estimate
  • Decisions impact everyone
  • More abstract outcome

When faced with potential losses and gains that are both deeply uncertain and so huge as to be, to some degree, incomprehensible, it is plausible that loss aversion does not apply as readily as we might initially think. Weighing the end of human civilisation against human flourishing beyond anything we can currently imagine is quite different from considering how happy or sad you might be about losing or winning $20. What we can say, based on loss aversion research, is that in ordinary scenarios people are more motivated by potential losses than by gains. But AI existential risk scenarios are not ordinary, and when stakes become incomprehensibly large and timelines stretch over decades, different psychological mechanisms may come into play. Research on climate communication and health behaviour offers further insight into what motivates behaviour in the face of larger, longer-term threats.

In What We Think About When We Try Not to Think About Global Warming, psychologist and economist Per Espen Stoknes warns that humans are averse to messages of doom, and that the “apocalypse of climate hell” story has been contributing to the stalemate of what he calls the climate paradox. A study by the Oxford Institute of Journalism found that more than 80% of climate-related news stories used a disaster framing, and psychological research indicates that overuse of fear-inducing narratives prompts guilt, denial, and disengagement. Climate doom messaging, in other words, can backfire. Stoknes suggests that inspiring stories of hope could counteract some of the impact of the doom narrative. The way the story of climate change is told mirrors, to some extent, the way we are telling the story of an AI future, so we should give due consideration to how the climate story affects readers’ and viewers’ willingness to act.

A 2012 meta-analysis of health message framing tells a similar story: gain-framed messages are more effective than loss-framed messages at promoting prevention behaviours such as quitting smoking or taking up exercise. Messages that emphasised the benefits of healthy behaviours did more to encourage action than those that dwelt on the harms of an unhealthy lifestyle.

Loss aversion doesn’t need to be discounted, but these examples urge us to consider the value of telling positive stories when it comes to inspiring people to engage with AI risk. Let’s consider what stories we might amplify in our efforts.

 

What Positive Futures Could Look Like

Positive visions of AI futures do exist, but they face a structural disadvantage: hopeful narratives are inherently less dramatic than stories of crisis and chaos. The entertainment industry is unlikely to abandon apocalyptic narratives - they're too profitable. This places responsibility on those working in AI safety and development to articulate potential benefits more prominently. What might an AI future look like if we get things right? There are many points I could touch upon here; the potential benefits of AI in the future are extensive and span many issues and industries. With that in mind, here are just a few stories we could amplify in the hope that people who are minimally familiar with AI safety and risk might be inspired to get involved.

Disease Treatment/Drug Development

Advanced AI could revolutionise healthcare in multiple ways. Through accelerated drug and vaccine development, we could prevent or treat most natural infectious diseases. Diagnostic accuracy could improve dramatically. Individualised cancer treatment regimes, which are currently impossible to scale, could become feasible. Early prediction and diagnosis of Alzheimer's disease could become reality, and human lifespan could increase significantly. And by automating much of the paperwork and administration that currently consumes healthcare workers' time, AI could free them to focus on patient care and reduce burnout.

Food Production Efficiency

In agriculture, AI applications span the entire production cycle. AI could enable us to monitor soil health and the health and behaviour of livestock, allowing us to predict disease outbreaks and tackle them quickly when they do happen. In aquaculture, it could help us boost sustainability, prevent overfeeding, and minimise waste. In crop farming, AI could help us monitor weather and soil composition and manage pests. It could offer support with marketing and sales through pricing and margin management, sales growth, and customer service. Drone technology could provide guidance on which crops to grow, when, and where. AI-driven indoor farming could provide 20 times more yield per acre than traditional farming. In these ways, among others, advanced AI could help to end world hunger.

Educational Tools

What if every student had access to a personal tutor? AI in education could analyse student data and offer personalised tools that best support each student's needs and learning style. It could continually monitor student performance and deliver real-time feedback, enhance accessibility for students with additional needs, and reduce the workload on teachers by automating administrative tasks, allowing educators to focus on learning objectives and meaningful interactions with students.

Reduced Animal Suffering

Billions of animals suffer in food production, on roads, and in the wild. AI offers pathways to reduce this harm. It could enable advances in the production of alternative proteins by optimising the extrusion process, mapping combinations of plant proteins, and tailoring the optimisation of plants for specific processes (see more on alternative protein production in this post by Max Taylor). It could monitor injuries and disease in livestock, pets, and wild animals through precision livestock farming (PLF) and surveillance. By collecting data on body language, behaviour, weight, digestive health, and vocalisations, AI could enable us to take better care of companion animals. Self-driving cars, dash cams, and thermal sensors could reduce animal deaths on our roads (in the US alone, an estimated one million animals are killed on roads every day).

These are just a few brief examples of the potential benefits of an AI future. These examples may be familiar to those working in AI safety, but the Pew data suggests they haven't reached the broader public. That gap represents both a communication failure and an opportunity.

 

Why It Matters

If we want AI to go well, then public support matters. The core of my argument is this: we can build stronger public support for AI safety by emphasising not just what we stand to lose, but what we stand to gain. Positive visions encourage people to invest resources in bringing that future about. Public pressure and support influence policy decisions, and the priorities of politicians are shaped by what matters most to their constituents. If AI safety is a top public priority, the chances of creating and implementing effective AI safety regulations improve. If doom narratives leave the public disengaged and disconnected from the issue, they may either ignore the topic altogether or push for overly strict regulations that impede beneficial AI development.

Public enthusiasm for positive AI futures translates into resource allocation. Public support influences both private and state investment in safety-focused AI development. The climate communication research mentioned earlier suggests that people are more motivated to invest in solar panels and green initiatives than to adopt austerity measures. When people are inspired to dream about bright AI futures, they are likely to support that dream by allocating resources to it.

If the AI narrative is dominated by doom and fear, it becomes harder to attract talented people to the field. Avoiding catastrophe is certainly one incentive, but avoiding catastrophe and building something extraordinary is a stronger sell. We want the best people to feel inspired to work on making AI go well, and a positive vision maximises the chances that they will. It is also likely to improve global cooperation on building responsible AI systems - a shared dream can create common ground in a way that a threat does not.

 

Finding a Balance

AI risks are serious and deserve to be treated as such. Work on AI safety and alignment is absolutely crucial, and so are the doom narratives. Things really could go badly, and it is important that people understand the risks should we develop misaligned advanced AI systems. Even as this post argues for narratives about bright AI futures, we must continue speaking clearly and loudly about catastrophic risks; people need to hear those stories. My argument is not that we need fewer of them, but that we need more of the other kind: more writing - both fiction and non-fiction - about how incredible things could be if we take the right path, and more public discussion that takes seriously both the catastrophic risks and the extraordinary possibilities of an AI future.

As Stoknes says about climate communication: “I don’t think there is just one right type of climate story to tell to get people to understand the urgency of the issue and move them to action. Rather, a plurality of stories is needed, each creating meaning and engagement for different groups of people.” So it is for AI: there isn’t just one right story to tell; we need a plurality of stories covering AI possibilities from the very worst to the very best and everything in between. Making AI go well will require both the skeptics and the dreamers.

Cover photo by Benjamin Davies on Unsplash
