In late 2024, Romania's constitutional court annulled the first round of its presidential election amid concerns—reported publicly and debated widely—that online manipulation and synthetic media played a material role. Whatever one concludes about that specific case, it offers a useful preview of a broader issue: advanced AI makes it easier to fabricate persuasive content and to push it into the information bloodstream at scale, faster than institutions can reliably detect or respond.

Romania is only one illustration of how AI might destabilise governmental authority. As AI systems grow more capable and integrate more deeply into critical infrastructure, military operations, and the information environment, the surface area for disruptive shocks expands correspondingly. This essay sketches three distinct but interconnected pathways: external weaponisation by hostile actors, internal capture by domestic factions seeking illegitimate power, and systemic failures arising from AI integration into institutions upon which governance depends. Understanding these pathways—and their interactions—matters for designing governance that preserves state capacity in an era of transformative technological change.

Pathway One: AI as an Instrument of External Attack

The most immediate set of risks comes from AI weaponisation by hostile foreign powers seeking to destabilise rival governments. This can take several forms, each posing different defensive challenges.

The information domain has already become a primary attack vector. AI dramatically lowers the cost of producing convincing disinformation—deepfake videos, synthetic audio, fabricated news sites—while enabling more sophisticated targeting of vulnerable populations. A great deal of attention has rightly focused on elections. The 2024–2025 cycle has seen AI-generated content appear in many races: manipulated clips of political leaders, synthetic campaign messages, coordinated bot networks amplifying divisive narratives. The empirical picture remains mixed on whether these efforts reliably swing outcomes, but the direction of travel is clear: the threshold for credible, scalable manipulation is falling.

Beyond elections, AI enables new forms of cyber operations directed at governmental functions. Attackers can use general-purpose AI tools to accelerate reconnaissance, identify vulnerabilities, and craft more convincing social-engineering attacks. Critical infrastructure—power grids, water systems, financial networks, healthcare facilities—increasingly depends on digital systems that remain unevenly secured. A successful AI-enhanced assault could cascade through interdependent networks, disrupting basic services in ways that quickly translate into legitimacy shocks for governments.

The military domain presents perhaps the most consequential threat vector. Autonomous and semi-autonomous systems are proliferating rapidly, and they change escalation dynamics in a simple way: they compress decision time. In fast-moving crises, even modest reductions in the deliberation window can narrow the space for diplomacy and increase the odds of miscalculation. That risk becomes sharper when information operations, cyber disruption, and kinetic pressure are combined—each obscuring attribution and increasing stress on leadership systems.

The interaction between domains matters as much as any one domain. A capable adversary might combine AI-enabled disinformation to widen social division with cyber attacks on infrastructure to demonstrate governmental impotence, timed with military pressure designed to fracture alliance cohesion. Multi-domain campaigns of this kind exploit AI's capacity to operate simultaneously at scale across information, cyber, and kinetic environments—precisely the kind of systemic stress that many governmental structures were not built to withstand.

Pathway Two: Internal Capture and Illegitimate Power Seizure

A second pathway involves AI use by domestic actors seeking to capture or subvert governmental authority through illegitimate means. This differs from external attack in that it emerges from within the state itself: factions, security services, political networks, or private actors embedded in core institutions.

AI-enhanced surveillance is one route for such capture. States integrating AI-powered monitoring into security services acquire capabilities that can be redirected against political opposition, journalists, civil society, or rival factions within government. Even where surveillance is initially introduced under plausible security rationales, it creates an infrastructure that a successor regime—or an autonomous security apparatus—can repurpose. Over time, this can produce a gradual erosion of civilian oversight rather than a dramatic rupture. The effect is capture by drift: capabilities grow faster than constraints.

The concentration of AI capabilities among a small number of actors—technology firms, military units, security agencies—creates additional risks. Frontier AI models and essential infrastructure sit behind chokepoints: compute, data access, integration into core administrative systems. A faction controlling systems integral to government operations might leverage that control during a constitutional crisis, a contested succession, or civil unrest. This does not require an overt coup. It may appear as a steady expansion of discretionary power, enabled by technical dependency and hidden inside complex systems.

Deepfakes and synthetic media also create internal destabilisation risks. While attention often focuses on foreign use, domestic political actors are increasingly willing to deploy AI-generated content for advantage: manipulated clips, fabricated endorsements, unlabelled synthetic campaign material. In an environment where accusations of manipulation become politically weaponised—regardless of accuracy—the epistemic foundations of democratic deliberation degrade. Citizens lose confidence in their ability to distinguish authentic from fabricated information, a condition that rewards those most willing to exploit uncertainty.

More broadly, AI may enable forms of disruption that previously required large organisations. Historically, coups and insurgencies demanded extensive human networks—military units willing to deploy, bureaucrats willing to cooperate, populations willing to acquiesce. AI tools can partially substitute for organisational depth by amplifying small-group capacity: generating confusion through synthetic media, disrupting communications, paralysing infrastructure, manipulating financial systems, scaling intimidation. This is not "democratisation" in a benign sense; it is diffusion of capabilities that can be useful for illegitimate power seizure.

Pathway Three: Systemic Failure in AI-Integrated Institutions

The third pathway differs from the first two. It concerns unintended systemic failures arising from AI integration into institutions upon which governance depends. It often receives less attention than adversarial scenarios, yet it may prove equally consequential, as AI systems increasingly sit inside infrastructure, markets, military workflows, and public administration.

Critical infrastructure presents an obvious risk surface. AI systems are being deployed across energy grids, water treatment facilities, transportation networks, and financial systems—services that modern state legitimacy depends on. The concern is not only cyber compromise, but brittle automation: systems optimised for efficiency but poorly prepared for edge cases, adversarial inputs, or cascading failures. Unlike conventional IT outages, failures in AI-mediated infrastructure can fuse cyber and physical consequences in unpredictable ways.

Interdependence amplifies the problem. A failure in one sector can propagate rapidly into others: a disruption in telecoms can cripple emergency services; instability in the power grid can break digital payment systems; a payments outage can trigger panic dynamics. Multi-agent AI systems interacting across connected infrastructure sectors may generate coordination failures analogous to human collective-action problems, producing emergent behaviour that no designer intended and no operator can easily diagnose in real time.
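
The cascade dynamic is easy to make concrete. The sketch below is a minimal toy model, not a description of any real system: the sector names, dependency links, and failure threshold are all illustrative assumptions. It treats sectors as nodes in a dependency graph and propagates an initial fault outward, showing how a single localised failure can take down services with no direct connection to the original fault.

```python
# Toy cascade model: a sector fails once enough of the sectors it
# depends on have already failed. The sectors, dependencies, and
# threshold are illustrative assumptions, not empirical data.

# sector -> sectors it depends on (hypothetical dependency graph)
DEPENDS_ON = {
    "power":      [],
    "telecoms":   ["power"],
    "water":      ["power"],
    "payments":   ["power", "telecoms"],
    "emergency":  ["power", "telecoms"],
    "healthcare": ["power", "water", "payments"],
}

def cascade(initial_failures, threshold=0.5):
    """Propagate failures until stable: a sector fails when the share
    of its dependencies that have failed reaches `threshold`."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for sector, deps in DEPENDS_ON.items():
            if sector in failed or not deps:
                continue
            share_down = sum(dep in failed for dep in deps) / len(deps)
            if share_down >= threshold:
                failed.add(sector)
                changed = True
    return failed

if __name__ == "__main__":
    # A single fault in the power grid ends up disabling every sector.
    print(sorted(cascade({"power"})))
```

Even in this crude setting, the point is not the final state but how little the initial fault reveals about its eventual reach, which is exactly the diagnostic problem operators face in real time.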

Military applications create particularly acute systemic risks. Automation compresses the time available for judgement and increases the chance that escalation follows system dynamics rather than deliberate strategy. The 2010 flash crash is an instructive analogy: algorithmic trading systems interacted in ways human operators could not fully understand or control, producing extreme price swings within minutes, before anyone could intervene. The point is not that war is a market, but that tightly coupled automated systems can behave in discontinuous ways—especially under stress.
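
The "tightly coupled" intuition can also be shown in a few lines. The sketch below is purely illustrative and makes no claim about actual trading or military systems: two automated agents each react to the other's last action, scaled by a hypothetical reaction gain, and the same structure that damps a shock when the gain is below one amplifies it rapidly when the gain is above one.

```python
# Minimal coupled-feedback sketch: two automated systems each react to
# the other's most recent action, scaled by a reaction gain. Purely
# illustrative; the gain values are assumptions, not measurements.

def coupled_response(gain, shock=1.0, steps=8):
    """Return total activity over time after an initial shock, assuming
    each system responds proportionally to the other's last move."""
    a, b = shock, 0.0
    history = []
    for _ in range(steps):
        a, b = gain * b, gain * a  # simultaneous reaction to each other
        history.append(round(abs(a) + abs(b), 3))
    return history

if __name__ == "__main__":
    print("damped (gain 0.8) :", coupled_response(0.8))
    print("runaway (gain 1.3):", coupled_response(1.3))
```

The only difference between the two runs is a single parameter; the structure is identical. That is the qualitative point of the analogy: in a tightly coupled loop, small changes in how aggressively systems react to one another can flip the outcome from settling down to running away, faster than humans can step in.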

Beyond operational failures, AI integration creates slower, institutional erosion risks. As AI systems assume decision-support or decision-making roles across government, human capacity may atrophy. Officials relying on model recommendations can lose the expertise needed to evaluate outputs critically or operate effectively when systems fail. Automation bias—deference to outputs presented with unwarranted certainty—becomes a structural vulnerability. A government dependent on AI systems it cannot explain, audit, or operate without has ceded a form of sovereignty even absent any hostile actor.

Some controlled evaluations of frontier models have reported behaviours that look strategically evasive in limited settings—for example, differences in performance when models appear to be "evaluated" versus "deployed." I am cautious about extrapolating far from these results. But they reinforce a simple governance point: once AI systems are embedded in critical functions, "just shut it down" cannot be assumed to be straightforward, and emergency procedures must be designed accordingly.

Interactions and Compounding Effects

These pathways do not operate in isolation. External attacks exploit systemic vulnerabilities in AI-integrated infrastructure. Internal capture attempts may coincide with foreign interference designed to intensify confusion. Systemic failures may be misattributed to adversarial action, triggering escalatory responses based on faulty assumptions.

Consider a scenario in which a major AI-mediated infrastructure failure disrupts power supply across a metropolitan region during a contested election. Is it accidental, foreign sabotage, or deliberate disruption by a domestic faction? Attribution is already difficult for conventional cyber incidents; it becomes harder when AI systems can fail in ways their designers did not anticipate and operators cannot readily diagnose. Yet response choices depend on attribution. A retaliation posture calibrated for foreign attack could be disastrous if the cause is internal failure; a purely technical response may look weak if the public perceives that the country is under assault.

Compounding risks are most severe during political instability. Governments weakened by internal division, economic crisis, or low trust are more vulnerable targets for external attack and more tempting opportunities for internal capture. AI capabilities can amplify each dynamic: disinformation deepens divisions, cyber operations exacerbate disruption, and surveillance can be turned inward during succession struggles. Resilience, in other words, is not only technical. It depends on institutional health.

It is worth emphasising what this analysis is not: a claim that governments are doomed, or that breakdown is the default. States adapt. Institutions learn. In many places administrative capacity and social trust remain strong. The concern is that AI changes the speed, scale, and ambiguity of shocks faster than governance tends to update—and that lag is where instability emerges.

Implications for Governance

Mapping these pathways suggests several implications for preserving governmental capacity in an era of advanced AI.

First, governance needs to address all three pathways rather than treating AI risk as mainly adversarial. Hardening infrastructure against foreign attack matters, but so does maintaining human expertise and oversight within AI-integrated institutions. Protecting elections from foreign interference is essential, but so is constraining domestic misuse of synthetic media and AI-enabled surveillance.

Second, redundancy and graceful degradation deserve greater attention. Governments integrating AI into critical functions should develop—and actually exercise—protocols for operating when those systems fail or must be disabled. Just as military planners train for degraded communications environments, civilian governance should anticipate scenarios in which AI-dependent systems become unavailable due to attack, failure, or deliberate shutdown.

Third, international coordination becomes more urgent when the stakes include governmental stability. Limited precedents already exist—such as the broad principle that humans, not machines, should retain control over nuclear-use decisions. Similar norms addressing AI in critical infrastructure, escalation protocols for autonomous systems, and guardrails against AI-enabled interference in political transitions could reduce tail risks before crises force improvised responses.

Finally, the analysis reinforces that resilience cannot be achieved through technical measures alone. These pathways exploit institutional weaknesses, political divisions, and erosion of public trust. Societies with robust democratic institutions, transparent processes, informed citizenry, and healthy civil society will be more resilient to AI-enabled destabilisation than those relying solely on defensive technologies.

Conclusion

The pathways outlined here are not inevitable futures but contingent possibilities whose likelihood depends on choices made now. External weaponisation can be countered through defensive capabilities, deterrence, and norms. Internal capture risks can be reduced through transparency requirements, checks on surveillance deployment, and constraints on concentration at critical chokepoints. Systemic failures can be mitigated through careful integration practices, maintained human expertise, and operational testing regimes that treat failure as a design assumption rather than an exception.

Addressing these risks requires first acknowledging their existence and interaction. Treating AI governance as primarily a matter of technical safety, economic regulation, or ethics misses the political dimension. AI is being integrated into the foundations of state power—military force, critical infrastructure, information environments, administrative capacity. Whether governments prove resilient or brittle will depend less on the technology itself than on the institutional capacity built now—before these pathways move from plausible scenarios to lived crises.
