In my early twenties, while living in Rome, I fell into the habit of visiting my local pizzeria several times a week - extra prosciutto and extra bufala every time. I knew it wasn’t healthy, but I was young, felt fine, and saw no immediate consequences, so I ignored the risk.

A few months later I was offered a routine blood test at work. The results showed my triglyceride levels were well into the high range. Nothing had changed about what I knew or about the consequences of my choices. But seeing the numbers made the future feel real. So I changed my habits almost immediately.

It was a warning shot.

Without a warning shot, humans tend not to act. If a certain course of action offers short-term benefits and no immediate pain, it can be all too easy to ignore the risks that we know exist. We need a single event, a moment that forces us to confront the path we’re currently on and instils the clarity needed to change course.


Warning shots don’t just change individual behaviours like my penchant for pizza; they change institutions. We’ve seen it over and over again throughout history.

Take the Sputnik moment. By the end of the Second World War the United States had emerged as the undisputed global leader in science and technology. The war effort had catalysed extensive federal funding of scientific research under Vannevar Bush’s Office of Scientific Research and Development (OSRD): rockets, radar, nuclear power, synthetic fibres - technology that would go on to transform modern life. US dominance had been established. Bush, in his essay Science, the Endless Frontier, advocated for the US to capture this momentum, to continue federal funding of basic research, and to establish a National Science Foundation (NSF) with an initial budget of $33.5 million. The NSF was established, but in its first full year it was awarded a budget of just $3.5 million. It was one decision that reflected a broader attitude of complacency - one that would, in retrospect, define the post-war balance of scientific progress with the USSR.

That was until October 4th, 1957, when the USSR launched Sputnik 1 into orbit. Rear Admiral Rawson Bennett, then the US Navy’s chief of naval research, dismissed the satellite as “a hunk of iron almost anyone could launch.” The American public disagreed. As the official NASA history of the Sputnik and Vanguard programmes reflects, most Americans had been vaguely aware of Soviet advances - the atom bomb, the hydrogen bomb, the ICBM test of August 1957 - but none of these facts had registered deeply. When Sputnik appeared, the reaction was a compound of awe, surprise, and fear: if the Soviets could put a satellite in orbit, what was to prevent them from putting up a larger one equipped with nuclear warheads?

It was the warning shot that set off the space race - a single moment which brought clarity to the American public and policymakers and triggered an explosion of policy and funding decisions geared towards accelerating US research and development. Less than a year after the launch, Congress passed the National Defense Education Act, channelling roughly a billion dollars into the US education system over four years. By 1959 the NSF’s annual budget had more than tripled from its pre-Sputnik level.[1]

It’s a potent example of how institutions tend to respond to shocks as opposed to forecasts alone.

The power of a warning shot is not to reveal something new - it’s to reveal the underlying significance of facts that are already available: to connect the dots and make vivid the outcomes we are currently on a path towards. In the window of clarity that follows, there is space for action and for change.


The average American was aware of the USSR’s pre-Sputnik scientific advances without it denting their underlying complacency about US technological dominance. In much the same way, the facts of AI risk are widely available today - yet so far they have done little to dent our general complacency, the assumption that everything will work out okay as AI capabilities continue to expand.

We’re aware that AI is already having complex effects on users’ mental health, with the emergence of the phenomenon of 'AI psychosis'. We’re aware that existing models have been shown to exhibit misaligned behaviour - from strategically hiding their true capabilities, to engaging in blackmail to avoid shutdown - and that the alignment problem is yet to be solved.[2] We’re aware that we lack the evaluation methodology to fully understand the capabilities and intentions of future models. We’re aware that the breadth of risk profiles is substantial - from labour displacement to surveillance, from AI-enabled cyber and CBRN attacks, to rapid conflict escalation and, ultimately, loss of control.[5]

Yet despite this range of evidence, few of these facts have been internalised by the majority of the public or public institutions. We are not acting like a society that will imminently be facing these risks. Our policymakers are not planning for a world in which the predictions of leading tech CEOs - that there will be a “country of geniuses in a data centre” or superintelligence within the next two years - are remotely possible.[3]

We are acting like a society that has not yet had its warning shot.


When I first found myself researching the rapid progress of AI, I was not too concerned. There was no good reason for that complacency; I could see the field was developing rapidly and that governments had not put in place the policy to mitigate the potential risks on the horizon. But I assumed we’d figure it out. We always do. We’d have some sort of warning shot, and politicians would wake up. They’d legislate for improved safety requirements, invest in social adaptation, and rapidly reduce the risks involved before it was too late.

But the question I find myself asking is: are we still capable of hearing warning shots in the way we once did?

I see three challenges.

The first is the degradation of our information environment. It’s no secret that different political groups now live in different realities, curated through the lens of their own algorithms. Epistemic fragmentation is in and of itself disastrous for any society, but its effects are particularly acute during periods of hazard that demand clarity and wisdom. We are at the early stages of bringing into existence the most powerful technology in history. The consequences of being unable to tell truth from fiction, of doubting the authority of experts and failing to be informed by their warnings, are arguably higher than at any other point in our history. A warning shot that might once have unified a nation can now be reframed, dismissed, or drowned in noise before it has the chance to register.

The second is the pace of progress. During the early days of modern AI there were discrete moments where progress could be easily identified as going from zero to one. AlphaGo’s move 37 - the first time a machine learning algorithm displayed what appeared to be an original, non-human-like decision. ChatGPT - the first time large language model technology was revealed to the public at scale. These were legible events. They cut through and enabled some degree of collective 'taking stock' of the weight of the moment.

But we are now in a different phase. Major leaps in capability occur from month to month, between each model release. For a technically proficient observer watching closely, the scale and significance of these advances might be plain to see. But for the average person casually noting headlines, it is difficult to notice that anything is happening at all beyond steady progress. Like the difference between a race car travelling at 150 mph and one travelling at 400 - the first is fairly common, the second a technological marvel - yet to an observer at the trackside they look exactly the same: a flash before the eyes.

The third is the breadth of AI’s risk surface. In London in early 2026, the group Pause AI held its second protest rally. Reading up on the event, I was struck by the diversity of concerns, ranging from the potential extinction of humanity to risks to human creativity. Different groups were concerned about different ways in which AI will touch and transform society, precisely because it will touch and transform all elements of society. It feels as if different groups are watching for different warning shots. My concern is that multiple warning shots in different directions may lack the same clarifying, unifying effect, and instead produce a general sense of confusion and overwhelm that fails to focus motivated people towards specific policy and broader strategic goals. The average US citizen in 1957 was concerned about the rise of the USSR, and the Sputnik moment spoke to that single fear - a single uniting truth. But can such a unifying truth exist with AI? Or is the landscape of fears too sprawling to unite people and institutions behind a collective goal?


Thinking about these hurdles makes me wonder whether the moment might have already passed. I think of the first lawsuits brought by the families of people who killed themselves at the behest of an AI model. I think about the conflict between the US Department of War and Anthropic, which saw the DoW designate Anthropic a supply chain risk for refusing to remove limitations on the use of its models for domestic surveillance and autonomous weapons systems.[4] Both were viewed as broadly significant moments by those watching. But who was really watching? Did they cut through the noise? Or did they appear as brief headlines on someone’s feed before being drowned out again?

Perhaps it will take something more than a warning shot to force us into action — something more shocking and painful. These have existed throughout history as well: Chernobyl leading to vastly improved nuclear safety standards, 9/11 forcing a wholesale rethinking of US counter-terror capacity. Warning shots that drew blood.

And even if we do hear a warning shot clearly, there is no guarantee it pushes us in the right direction. This is the tension at the heart of my own analogy. The Sputnik moment - the central example of this essay - heightened competitive fears and accelerated technological advancement. It didn’t lead to restraint; it led to a sprint. Today there are those who, like me, believe we should be displaying more caution - focusing on safety to ensure we retain control of the technology we develop. But there are also those who feel we need to speed up, that the West is in an existential race with China to reach superintelligence and that the fate of democracy lies in winning that race. A warning shot that crystallises attention could just as easily fuel the accelerationists as the cautious. The direction of the response is not predetermined - it depends on the framing, the politics, and the fear.


Right now I am fairly confident that we are on an unacceptably high-risk path. Unprecedented frontier AI technology is being developed and deployed at the whim of market forces, and it continues to accelerate - and with it, the full range of risk profiles. Yet the policy landscape has not adapted. Policymakers lack the knowledge, urgency, and political will to implement the policies required to reduce risk. With Anthropic’s next generation of models rumoured to be on the horizon ('Mythos' is said to constitute a 'step change' in capabilities, particularly in cyber), we are entering years in which these risks will become increasingly tangible - leaving theory in the past.

This is not, in and of itself, cause for panic - as long as we believe that a warning shot will come. That a certain moment or event will crystallise focus on the existing risk landscape and instigate rapid action. Warning shots are the mechanism by which societies recognise when a given risk profile has risen beyond an acceptable level - they open a policy window to change course before it is too late.

But the underlying question remains. Given the state of our information environment, the pace of progress, and the sprawl of our concerns - will we be able to hear it?

  1. By 1960, combined federal education funding had grown almost sixfold from 1953 levels, driven largely by the NDEA.

  2. The alignment problem is the underlying challenge that we cannot yet reliably ensure AI models will behave in line with human interests and values. Cases of misalignment have been recorded across varying degrees of severity, in both lab and real-world conditions. The impact is limited while models remain limited in their capabilities, but the risks and consequences of misaligned behaviour will scale as models grow more powerful. Progress has been made in reducing instances of misaligned behaviour. Nevertheless, existing safety mechanisms have yet to confront the stress test of controlling models that surpass human-level intelligence. Recent instances of alignment faking and sandbagging - where models strategically hide their true capabilities - illustrate the scale of the challenge we face in being certain that future models are truly aligned and sufficiently controlled.

  3. Dario Amodei has described the prospect of “a country of geniuses in a data centre.” Sam Altman and others have repeatedly forecast superintelligence within years.

  4. The question of the use of AI in warfare feels particularly pertinent given the widely reported use of Claude systems to enable US military operations in Iran. 'Target identification to strike in just 4 clicks' has been Palantir’s claim. Given this, the reported attack on an elementary school by a US Tomahawk missile (killing 175 people) warrants additional scrutiny. Was this a missile system failure? Or was AI targeting involved in the strike?
