In July 2024, a flawed software update from CrowdStrike cascaded through interconnected systems worldwide, grounding flights, disrupting hospitals, and paralysing financial institutions. The incident offered a glimpse of what an AI-related crisis might look like: rapid, cross-border, and deeply entangled with critical infrastructure. Yet as policymakers and researchers continue to debate AI governance frameworks, the overwhelming focus remains on prevention—building guardrails, establishing red lines, and designing evaluation protocols to ensure dangerous capabilities never emerge. What remains conspicuously absent from most AI governance discussions is a serious plan for when prevention fails.
This lacuna represents a significant vulnerability in our collective approach to AI risk. The history of complex technological systems suggests that prevention, however robust, is never sufficient. Nuclear power plants have safety protocols measured in redundancies, yet Chernobyl and Fukushima still occurred. Financial markets operate under extensive regulatory oversight, yet the 2008 crisis cascaded through supposedly firewalled institutions. The question is not whether AI incidents will occur—early evidence suggests they are already occurring, from coding assistants deleting databases without instruction to chatbots allegedly contributing to instances of self-harm—but whether we possess the institutional capacity to respond effectively when they do.
The Prevention-Response Asymmetry
Current AI governance efforts exhibit what might be termed a prevention-response asymmetry. Substantial intellectual and institutional resources flow toward anticipating risks and establishing pre-deployment safeguards—responsible scaling policies, frontier safety frameworks, evaluation protocols—while comparatively little attention is paid to response mechanisms. The EU AI Act, NIST's AI Risk Management Framework, and the Hiroshima AI Process all represent important steps toward prevention, but none provides a coherent framework for coordinating responses to AI incidents that cross national borders or affect multiple sectors simultaneously.
This asymmetry mirrors a pattern familiar from other domains of risk management. As Jess Whittlestone of the Centre for Long-Term Resilience has noted, governments must increase their focus on incident preparedness—the ability to effectively anticipate, plan for, contain and recover from incidents involving AI systems that threaten national security and public safety. Without such focus, incidents could cause avoidable catastrophic harms within the next few years. The challenge is not merely technical but fundamentally institutional: who decides that an AI incident has become an international emergency? Who speaks to the public when false messages flood their feeds? Who keeps channels open between governments if normal communication lines are compromised?
What Emergency Response Coordination Teaches Us
The field of emergency response has grappled with analogous challenges for decades. Humanitarian coordination mechanisms, pandemic preparedness frameworks, and disaster response protocols all offer instructive lessons for AI governance. Seven core elements characterise effective crisis response systems: legal frameworks and oversight, multilateral policy coordination, standardised incident reporting, monitoring and early warning, operational protocols, trusted communication systems, and recovery and accountability mechanisms.
Consider the humanitarian cluster system, developed after the inadequate international response to the 2004 Indian Ocean tsunami. This model designates lead agencies for specific sectors—shelter, health, telecommunications—creating clear accountability and coordination mechanisms that activate during emergencies. Each cluster maintains pre-positioned capacity, standardised operating procedures, and established relationships with governments and other responders. The system's strength lies not in preventing crises but in ensuring that when they occur, response mechanisms exist that do not need to be invented in the moment of chaos.
AI governance currently lacks equivalent structures. There is no designated international body responsible for coordinating responses to AI incidents, no standardised protocols for incident reporting across jurisdictions, and no pre-positioned technical capacity to diagnose and contain AI-related harms. The Future Society and others have begun identifying this gap, calling for an AI emergency playbook that borrows tools from existing crisis response frameworks and adapts them to AI's unique characteristics. Such a playbook would need to account for AI's capacity to operate at speeds exceeding human response times, the difficulty of attributing AI-driven incidents, and the deep integration of AI systems across critical infrastructure.
The Speed and Attribution Problems
AI crises present challenges that differ in important respects from traditional emergencies. First is the question of speed. Agentic AI systems can act with limited or no real-time human intervention, meaning AI-driven failures or attacks could escalate to crisis levels far faster than previously experienced in other domains. A traditional industrial accident unfolds over hours or days; an AI system malfunction or coordinated misuse could propagate across interconnected networks in minutes. This speed differential demands new approaches to detection and response that can match the pace of AI-driven events.
Second is the attribution problem. In many cases, the first signs of an AI emergency would likely resemble a generic outage or security failure. Only later, if at all, would it become clear that AI systems had played a material role. This diagnostic uncertainty complicates response efforts: different types of incidents—operational safety failures, malicious use, cascading technical failures—may require fundamentally different response protocols. The November 2025 analysis by The Future Society usefully distinguishes between operational safety and reliability incidents, security and privacy incidents, and malicious use incidents, each requiring distinct institutional responses.
Third is the problem of interdependence. High market concentration among providers of frontier AI models and essential cloud infrastructure creates potent single points of failure. Flaws or outages in one dominant system could trigger simultaneous disruptions across many sectors—precisely the scenario the CrowdStrike incident foreshadowed. Response mechanisms must therefore account not only for AI-specific risks but also for their interaction with an already fragile global system characterised by climate volatility, geopolitical tension, and supply chain vulnerabilities.
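To make these distinctions more concrete, the sketch below shows one hypothetical way a standardised incident report might encode the taxonomy described above, together with a simple escalation rule for cross-border incidents. The `IncidentReport` structure, its field names, and the escalation threshold are illustrative assumptions, not an existing standard or any organisation's actual schema.

```python
# Illustrative only: a hypothetical schema for a standardised AI incident
# report, loosely following the incident categories discussed above.
# None of these names correspond to an existing standard.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class IncidentCategory(Enum):
    OPERATIONAL_SAFETY = "operational safety and reliability"
    SECURITY_PRIVACY = "security and privacy"
    MALICIOUS_USE = "malicious use"


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class IncidentReport:
    """One record in a hypothetical cross-jurisdiction incident registry."""
    reported_at: datetime
    reporting_entity: str              # developer, deployer, or national contact point
    category: IncidentCategory
    severity: Severity
    cross_border: bool = False
    affected_sectors: list[str] = field(default_factory=list)        # e.g. "health", "finance"
    affected_jurisdictions: list[str] = field(default_factory=list)
    systems_involved: list[str] = field(default_factory=list)        # model or product identifiers
    containment_actions: list[str] = field(default_factory=list)

    def requires_international_escalation(self) -> bool:
        # Hypothetical rule: a high-severity incident spanning more than one
        # jurisdiction would trigger international coordination mechanisms.
        return self.cross_border and self.severity.value >= Severity.HIGH.value
```

Even a minimal structure like this makes the institutional point visible: the category determines which kind of responder is needed, while the severity and cross-border fields determine when national contact points and international mechanisms are engaged.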
Building Crisis Preparedness Capacity
What would meaningful AI crisis preparedness look like in practice? Several elements seem essential. First, governments should designate national AI emergency contact points—officials with authority and expertise who can be reached around the clock to coordinate responses to AI incidents. This mirrors established practice in cybersecurity and nuclear safety, where designated points of contact facilitate rapid international communication during crises.
Second, emergency powers should be reviewed to determine whether they adequately cover AI infrastructure. Legal authority to intervene in a crisis—to mandate system shutdowns, require information sharing, or coordinate across sectors—may not exist or may be fragmented across agencies with competing mandates. The time to identify and address these gaps is before a crisis occurs, not during one.
Third, incident reporting mechanisms require standardisation and enforcement. The OECD's AI Incidents Monitor represents a promising start, but voluntary reporting will likely prove insufficient when incidents involve reputational or legal risks for developers. Effective reporting requires both incentives for disclosure and protections for those who report—the kind of whistleblower frameworks that civil society organisations have identified as crucial for AI governance.
Fourth, response protocols should be developed and exercised through regular drills and simulations. Emergency responders in other domains routinely test their systems under realistic conditions; AI governance should adopt similar practices. Such exercises would identify gaps in coordination, build relationships among responders, and create institutional memory that proves invaluable during actual crises.
Finally, international coordination mechanisms need strengthening. The United Nations offers a natural anchor for AI emergency preparedness, providing wider inclusion than alliance-based frameworks and adding legitimacy to extraordinary measures that might be required during crises. Having coordinated humanitarian response across SAF- and RSF-controlled territories in Sudan, I've seen how neutral intermediary institutions can maintain channels when direct government-to-government communication becomes impossible. A UN-anchored AI emergency mechanism could serve similar functions, providing technical assistance while preserving a political neutrality that bilateral frameworks cannot.
Conclusion
The measure of AI governance will ultimately be how we respond on our worst day. Prevention remains essential—we should continue investing in safety research, evaluation protocols, and responsible deployment practices. But prevention alone is insufficient. The history of complex systems suggests that failures are inevitable; the question is whether we possess the institutional capacity to contain and recover from them.
Currently, the world has no coherent plan for an AI emergency. Building one requires learning from domains that have confronted analogous challenges—humanitarian response, pandemic preparedness, nuclear safety—while adapting those lessons to AI's unique characteristics. The institutional infrastructure for effective crisis response cannot be built overnight; it must be developed, tested, and refined over time. That work should begin now, before the next CrowdStrike-like incident demonstrates, at greater scale and with graver consequences, the costs of our current unpreparedness.
