Post 0.0: 5-Page Overview and Introduction
Tl;dr:
Despite unprecedented investment in AI safety, we face a critical coordination failure that systematically increases existential risk. The window to establish effective coordination is closing rapidly as AI capabilities advance ever faster. Technical solutions alone cannot address competitive dynamics or coordination problems. This coordination gap is not merely inefficient: it creates structural vulnerabilities across a wide range of AI-related risks.
We attempt to characterize and explain that gap and argue that:
- A comprehensive "grand strategy" framework is necessary to integrate efforts across technical, governance, and policy domains, and should be a common goal shared by AI governance researchers, EAs, and other allies.
- Public discourse should be established as the primary medium for coordination, information processing, and the development of such a grand strategy for AI.
- Computational tools that can enable new forms of strategic coordination should be funded and explored (akin to our AMTAIR project, which we will introduce in a subsequent post).
Through theoretical investigation, historical analysis and practical tools, this series explores such frameworks, with a particular focus on facilitating cross-domain collaboration.
We are aiming for this to be a joint effort and are actively looking for others, with a range of perspectives, to get involved.
Acknowledgements:
We briefly want to say thank you to Matthew Genztel, Seán Ó hÉigeartaigh, Haydn Belfield, Shahar Avin, Thomas Porter, Johanne Meyer and everyone else who has supported us in our process of putting together this series. Thank you for believing in us, critically examining our ideas, and providing warm, effective feedback over these past months. We would also like to thank the members of the original MTAIR team, especially Sammy Martin and Aryeh Englander for their encouragement and support surrounding the AMTAIR portion of our project. Thank you for passing the baton, and for your avid support and enthusiasm.
The Coordination Crisis: What's Actually Missing
We face a paradox in AI governance that threatens to undermine even our most sophisticated safety initiatives: unprecedented investment in AI safety research and policy exists alongside a fundamental coordination failure. Despite millions in funding, rapidly growing awareness, and proliferating frameworks, we lack the strategic "operating system" needed to align these disparate efforts as AI capabilities advance at an accelerating pace. This coordination gap is not merely inefficient: it represents the vast counterfactual loss of value that could be realized by coordinating around the mitigation of existential risk.
The problem can be understood through two complementary frameworks:
The Narrative Fragmentation: Humans make sense of complex, rapidly evolving phenomena through grand narratives—coherent frameworks that organize disparate events into comprehensible patterns. The World Wars became navigable through the "Allies versus Axis" framework. America's Cold War containment policy provided a unifying strategic framework across military, economic, diplomatic, and cultural domains. In stark contrast, our discourse on AI lacks such a unifying framework. Each community operates with different terminologies, priorities, and implicit theories of change. When specialists can't agree on fundamental questions—like whether to accelerate or decelerate AI development—coordinated action becomes impossible.
The Distributed Computing Failure: From a technical perspective, current AI governance resembles an uncoordinated distributed computing system. Each organization functions as an independent processor executing its own algorithms without reference to the broader system. This distributed system lacks the equivalent of an operating system that would allocate resources efficiently, ensure information flows to where it's most needed, prevent redundant work, manage dependencies, and coordinate responses to emerging risks. The consequences include fragmentation between technical and governance communities, duplicative research efforts, misaligned incentives, and inconsistent standards—all growing more severe as complexity increases.
When organizations and individuals function as independent processors without shared protocols, we inevitably generate duplicative work, create inconsistent approaches to interdependent problems, and, worst of all, leave critical gaps unaddressed.
Technical safety researchers develop solutions without implementation pathways; policy specialists craft frameworks without technical grounding; ethicists articulate principles without operational specificity. Meanwhile, critical questions about deployment oversight remain unaddressed by any of these communities. As each stakeholder optimizes locally, collective safety deteriorates globally.
From Observations to Implications: The Logic of Coordination
Our argument proceeds through a structured, five-layer progression from empirical observations to necessary conclusions:
Key Observations: Four empirical patterns define the current landscape:
- AI capabilities are advancing at an accelerating pace, with compression from decades to months between significant milestones and emergent capabilities appearing at scale thresholds.
- Technical alignment efforts face substantial challenges including specification problems, robustness limitations, interpretability bottlenecks, and uncertain scalability of current approaches.
- AI governance efforts remain fragmented, with frameworks proliferating without converging, institutional silos, competing jurisdictional claims, and governance gaps where no entity has both the legitimacy and the capability to coordinate a global response.
- Global coordination mechanisms have consistently struggled with analogous challenges from climate change to nuclear security to pandemic response, suggesting existing institutions are poorly suited to rapid technological development with distributed creation capability.
Core Premises: From these observations, three key premises emerge:
- We face a narrowing window for effective intervention due to technological lock-in, institutional inertia, and path dependency that makes certain governance choices increasingly difficult as capabilities advance.
- The risk of catastrophic outcomes is strongly indicated by several lines of evidence: historical precedent with transformative technologies, expert assessment from pioneers including Russell and Hinton, and theoretical arguments about instrumental convergence and principal-agent problems.
- Governance interventions represent crucial leverage points, as technical solutions alone cannot address competitive dynamics, verification challenges, or coordination problems that emerge between developers, nations, and other stakeholders.
Important Implications: These premises lead to three crucial implications:
- Current coordination failures significantly increase risk through safety gaps where different groups work with incompatible assumptions, resource misallocation toward less critical problems, negative-sum dynamics from locally optimized decisions, and scaling risk magnitude as capabilities grow.
- Time-sensitivity creates a coordination imperative due to effectiveness decay where identical coordination efforts become less impactful as capabilities advance, increasing coordination complexity with more actors entering the field, and capability acceleration compressing available response windows.
- Effective coordination requires both depth and breadth to overcome epistemic limitations affecting all stakeholder groups in isolation while preserving specialized expertise and harnessing stakeholder diversity for more robust solutions.
Central Claims and Community Action: The Path Forward
Based on this analysis, we’ve seen three central claims emerge:
1. Implicit coordination is fundamentally insufficient. While implicit coordination can sometimes emerge in domains with simple objectives and clear feedback mechanisms, AI development lacks these conditions. History shows this clearly: early nuclear governance relied on implicit coordination with devastating consequences; only after explicit mechanisms emerged—test ban treaties, verification protocols—did risks stabilize. In AI, incentive misalignment is structural, epistemic limitations prevent convergence on shared understandings, and feedback loops lack corrective power when dealing with potentially irreversible risks.
2. The absence of a grand strategy creates predictable failure patterns. These include strategic gaps persisting despite tactical progress (focusing on visible problems like bias while neglecting fundamental questions about long-term control), lowest-common-denominator solutions prevailing (ethics principles that achieve consensus by remaining abstract and non-binding), and tactical successes masking strategic failures (celebrating interpretability improvements while the governance-capabilities gap widens).
3. Robust information processing requires community coordination. No single organization, discipline, or stakeholder group possesses sufficient knowledge to address AI risks effectively. Specialized communities develop sophisticated analyses but struggle to integrate insights across boundaries. Distributed cognition consistently outperforms even expert individuals in complex assessments, but only when properly structured.
To address these failures, we propose three complementary approaches:
- Develop an explicit, shared grand strategy framework that provides comprehensiveness across domains, explicit prioritization among competing risks, temporal structure across timeframes, flexibility to incorporate new information, and operational specificity connecting goals to actions.
- Establish public discourse as the primary development medium, where transparency improves strategy quality through enhanced error correction, reduced information asymmetries, improved incentive alignment, and broader participation.
- Create structured tools for strategic coordination, including formal representations that enable better synthesis of perspectives, technological tools that amplify collective intelligence, and balanced approaches to formalization that enable participation without sacrificing rigor.
Why the EA and Rationalist Communities?
These communities offer valuable capabilities for initiating strategic discourse, including a track record of predicting AI developments (e.g., Gwern's scaling-laws predictions, which preceded industry recognition), demonstrated commitment through career reallocation and institution building, and existing infrastructure for productive disagreement and coordination.
At some point we will discuss the benefits and drawbacks of migrating AI governance discourse to a separate platform; the Alignment Forum exemplifies how a community can accelerate progress through structured discourse on a dedicated medium.
Architecture and Imperative: Defining What's at Stake
Grand strategies represent comprehensive frameworks for coordinating multiple actors across different domains toward achieving overarching, shared objectives. Technically, a grand strategy is a set of strategies for multiple agents that specifies what each should do across all possible scenarios they may encounter.
Unlike normal strategies that map situations to actions for a single agent, a grand strategy coordinates the actions of a coalition of agents to achieve outcomes that would be impossible through uncoordinated individual optimization or simple, ad-hoc cooperation. We will provide a formalization and more rigorous explanation in piece 4 of the series.
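As a minimal sketch of this distinction (anticipating the fuller formalization in piece 4, and using purely illustrative notation): let each agent $i$ have a strategy $s_i$ mapping possible scenarios to actions; a grand strategy is then a jointly chosen profile for a coalition, evaluated against a shared objective rather than each agent's local one:

$$
s_i : \mathcal{S} \to A_i, \qquad \sigma_C = (s_i)_{i \in C}, \qquad \mathbb{E}\left[U(\sigma_C^{*})\right] \;\ge\; \mathbb{E}\left[U(s_1^{\dagger}, \dots, s_n^{\dagger})\right],
$$

where $\mathcal{S}$ is the set of scenarios, $A_i$ the actions available to agent $i$, $U$ a shared objective (for instance, expected reduction in existential risk), $\sigma_C^{*}$ the jointly optimized profile for coalition $C$, and $s_i^{\dagger}$ the strategy each agent would choose when optimizing locally.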
While often associated with nation-states, non-governmental actors have successfully strategized on similarly grand scales in domains from disease eradication to biodiversity preservation. The smallpox eradication campaign, Linux ecosystem development, and conservation initiatives demonstrate how distributed actors can maintain strategic coherence through shared perception of challenges, explicit prioritization frameworks, nested governance structures, and dedicated integration institutions.
For AI governance specifically, strategic coordination becomes essential due to:
Unprecedented speed, scope, and stakes: The compression of development timelines creates a fundamental mismatch with traditional governance timeframes, while AI's cross-domain impact means no single regime can address all challenges. Unlike climate change where partial solutions yield partial benefits, AI alignment may have critical thresholds below which solutions prove entirely inadequate.
Technical complexity requiring integration: Effective governance requires synthesizing expertise from technical AI alignment, game theory, international relations, regulatory design, organizational governance, ethics, and cognitive science—domains that develop in parallel without strategic coordination.
Coordination requires international cooperation: Effective AI governance demands cooperation across different cultural, economic, and strategic contexts. Our approach acknowledges these complexities and seeks to develop frameworks that can accommodate diverse goals across coalitions while identifying crucial areas for aligning incentives and actions. The series will also address how national strategic positions shape AI governance and explore opportunities for international coordination despite challenging competitive dynamics in the geopolitical landscape.
Mixed competitive and cooperative incentives: Labs compete for talent and market share, and nations compete for technological leadership, yet all share interests in avoiding catastrophic outcomes—creating a classic stag hunt with tragedy-of-the-commons characteristics.
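To make the stag hunt structure concrete, here is a toy payoff matrix (the numbers are illustrative only, not estimates) for two actors choosing between investing in safety coordination (Cooperate) and racing ahead (Defect):

$$
\begin{array}{c|cc}
 & \text{Cooperate} & \text{Defect} \\ \hline
\text{Cooperate} & (4,\,4) & (1,\,3) \\
\text{Defect} & (3,\,1) & (2,\,2)
\end{array}
$$

Mutual cooperation is the best outcome for both, but cooperating alone is the worst, so both (Cooperate, Cooperate) and (Defect, Defect) are equilibria; which one obtains depends on trust and assurance, which is precisely what explicit coordination mechanisms are meant to supply.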
The benefits of coordination extend beyond theoretical necessity to concrete advantages:
- Efficiency gains through reduced duplication and division of labor
- Information advantages through cross-domain integration and blindspot identification
- Impact amplification through aligned action reaching effectiveness thresholds
- Psychological benefits providing clarity of purpose and defined contribution pathways
- Community benefits enabling rapid mobilization for emergent challenges
From Analysis to Action: Practical Steps Forward
To bridge theory and practice, we're developing three complementary initiatives:
The Grand Strategy for AI Risk Series will represent our primary contribution to bridging theory and practice in AI governance. This multipart series systematically builds an intellectual foundation for strategic coordination around AI risks—beginning with the coordination crisis, overview and motivation (Pieces 0.1-0.5), exploring historical grand strategies and their emergence (Pieces 1-2), analyzing national approaches and candidate AI strategies (Pieces 3-4), and introducing practical tools through the AMTAIR project (Piece 5 plus technical supplements).
Rather than simply diagnosing problems, we're developing concrete solutions to create an adaptive, systemic grand strategy for AI and to aid coordination in governance work. We hope to develop computational tools that make implicit models explicit, identify cruxes of disagreement, evaluate intervention impacts across worldviews, and facilitate strategic alignment among disparate actors.
With the AMTAIR Project, we aim to develop both the theoretical foundations and these practical tools for collaborative sense-making about AI governance challenges.
We will announce and introduce this project in more detail in a later post. At its core, it aims to extend the MTAIR framework through automated content extraction from AI safety literature, formal representation of causal relationships and uncertainty, and Bayesian network quantification. By systematically making implicit models explicit, we hope to create an infrastructure for identifying genuine agreements and disagreements across different perspectives. As planned, the resulting tools will include a Risk Factor Explorer for visualizing interactions between hypotheses and interventions, an Intervention Analyzer for understanding the cross-domain effects of proposed policies, a Worldview Comparator for identifying common ground between apparently opposed perspectives, and more. Rather than providing definitive answers, these tools are meant to serve as coordination mechanisms that enhance collective reasoning about complex, uncertain futures.
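To give a flavor of what Bayesian network quantification can look like in this context, here is a minimal, self-contained sketch using the pgmpy library. The node names, structure, and probabilities are purely illustrative placeholders, not part of AMTAIR's actual model or codebase; the point is only to show how explicit structure lets one ask how a disagreement about one assumption (here, governance strength) propagates to a downstream risk estimate.

```python
# Illustrative toy model only: names and numbers are placeholders, not AMTAIR's.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two upstream hypotheses feed one downstream risk node.
model = BayesianNetwork([
    ("RapidCapabilities", "MisalignedDeployment"),
    ("WeakGovernance", "MisalignedDeployment"),
])

model.add_cpds(
    TabularCPD("RapidCapabilities", 2, [[0.4], [0.6]]),  # P(False), P(True)
    TabularCPD("WeakGovernance", 2, [[0.5], [0.5]]),
    TabularCPD(
        "MisalignedDeployment", 2,
        # Columns: (RapidCapabilities, WeakGovernance) = (F,F), (F,T), (T,F), (T,T)
        [[0.95, 0.80, 0.70, 0.30],   # P(False | parents)
         [0.05, 0.20, 0.30, 0.70]],  # P(True  | parents)
        evidence=["RapidCapabilities", "WeakGovernance"],
        evidence_card=[2, 2],
    ),
)
assert model.check_model()

# How much does the risk estimate change if governance is assumed strong vs. weak?
infer = VariableElimination(model)
print(infer.query(["MisalignedDeployment"], evidence={"WeakGovernance": 0}))
print(infer.query(["MisalignedDeployment"], evidence={"WeakGovernance": 1}))
```

Even in toy form, the payoff is that the crux becomes inspectable: two people who disagree about the bottom-line risk can see whether they differ over the graph structure, the priors, or the conditional probabilities.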
The AMTAIR Project, along with additional tools for improving AI risk coordination, is meant to stand on its own as well as to work hand in hand with the third initiative of our overall project: our community-building initiatives.
With our Community Building Initiatives, we advocate for the social infrastructure necessary for developing and implementing a grand strategy: readers' guides to key concepts, discussion groups focused on strategic integration, policy review boards, and strategic forecasting and war-gaming exercises. As the project continues, we aim to move from advocating for these community-building initiatives to facilitating high-expected-impact events, such as the aforementioned forecasting and war-gaming exercises. A portion of our efforts will also take the form of lectures and workshops aimed at communicating our work at EAG events and other speaking engagements with various stakeholders and potential members of an AI risk grand strategy coalition.
We invite you to join this effort not as passive readers but as active participants in what may be the defining coordination challenge of our century—whether through engaging with the analytical frameworks, contributing to tool development, providing financial support to this project, or applying these approaches within your domain of expertise.
How You Can Engage Today:
- If you're a technical researcher: Map your work to broader strategic objectives, participate in cross-domain working groups, and contribute technical constraints to governance frameworks.
- If you're a governance specialist: Engage with formalization tools, identify technical validation requirements for governance proposals, and join strategic synthesis efforts.
- If you're a funder: Evaluate your portfolio for strategic coherence, allocate resources to integration functions, and support shared epistemic infrastructure.
- If you're new but concerned: Begin with introductory guides, join structured discussions, and contribute to the collaborative analysis of key documents.
The coordination gap in AI governance is real, growing, and dangerous—but not insurmountable. The window for establishing effective governance frameworks narrows as capabilities advance, making this not merely an important project but an urgent one. By creating even small improvements in our capacity for strategic coordination, you contribute to what may be the most consequential collective action challenge of our time.
Thank you for reading, and may our work prove useful, even in the smallest way, for helping to coordinate humanity's efforts to stay on The Narrow Path and preserve human life for decades to come.
Call for Help/Collaborators:
To advance this work, we are actively seeking:
- Collaborators with expertise in research, software development, technical alignment, and organizational administration
- Funding partners to support 12 months of full-time research
- Constructive critics to strengthen our approach and analysis
- Community participants to engage with upcoming pieces and discussions
- Interview guests for a podcast interview series on AI governance and strategy, through Coleman Snell's podcast On What Matters
If you can contribute in any of these ways, please contact us at cjs386@cornell.edu or via direct message on the EA Forum or LessWrong.
Further Acknowledgments:
We would also like to conclude this piece by thanking some of the thinkers who inspired both of our writing styles, especially The Zvi and Holden Karnofsky. The validity of your arguments and the clarity of your insights contributed to both of our paths into AI risk, and to the thinking that moved us toward this project in the first place. Furthermore, thank you to Andrea Miotti, Tolga Bilge, Dave Kasten, and James Newport for writing A Narrow Path; both Coleman and Valentin found profound meaning, will, and rational reasons for hope in reading it while in the midst of writing the Grand Strategy for AI Risk series. To everyone mentioned and anyone we are forgetting, thank you.