A shorter, more conversational version of my AISES Winter 2025 final submission, an academic paper on independence within alignment.
Why I'm sharing this
As a former USAID diplomat in Eastern Europe, now hoping to transition into AI governance, I've been watching a pattern emerge that worries me. Countries trying to align with European AI standards, or to distance themselves from the Silicon Valley "bros" as fast as possible, are sometimes making decisions that look like good governance on the surface but create hidden dependencies underneath. This isn't about bad actors or failed policies - it's about how rational choices, made under pressure, can accumulate into strategic vulnerabilities over time.
I wrote an academic paper on this for the AISES Fellowship (full version linked at the end). My goal in sharing it on the EA Forum is to find support in establishing a place for myself in the broader international AI governance conversation. I believe in the power of humanity and democracy, and I am currently working to transition my career to AI policy outside the USA. My dream is to find a place where I can help countries retain their voice and remind policymakers that humans are an important resource too, one that should be valued over profits.
**********
The QWERTY problem in AI governance
Start with a keyboard. QWERTY wasn't designed to be optimal - it was designed in the 1800s to prevent typewriter jams. But once millions of people learned it, once manufacturing tooled around it, once whole systems assumed it, switching became effectively impossible. The cost of change exceeded any benefit. We're locked in.
This same dynamic - what economists call path dependence - is now playing out in AI governance. And unlike keyboards, this one has national security implications.
What I'm seeing on the ground
Take two countries I've been tracking:
Albania is an EU candidate state under intense pressure to demonstrate modernization and alignment. In 2025, they launched "Diella," an AI-powered "virtual minister" meant to enhance transparency. It looked impressive - forward-thinking, tech-savvy, aligned with European digital transformation goals.
But here's what concerned me: instead of building internal capacity to audit and oversee AI systems, they essentially outsourced accountability to the AI itself. The regulators who are legally responsible for oversight don't have access to the underlying models, the training data, or the procurement logic. They have documentation showing compliance, but no actual ability to interrogate the system independently.
This isn't corruption or incompetence. It's what happens when you need to demonstrate alignment fast, but you don't have the staff, technical expertise, or time to build real oversight capacity. The consultants come in, set up compliance structures, and then... never leave. Because the internal capacity to replace them never develops.
Moldova shows a different path to the same problem. Unlike Albania, Moldova has genuine political will and careful governance. Their 2024 AI policy documents are disciplined and thoughtful. But Moldova is also dealing with sustained Russian hybrid warfare - disinformation campaigns, cyber operations, economic pressure - all while trying to implement ambitious AI governance frameworks.
The result isn't hollow institutions; it's exhausted institutions. Every crisis requires another layer of governance response. Cybersecurity coordination. Election integrity monitoring. AI oversight. Each individually necessary, but collectively overwhelming for a small civil service already stretched thin.
Here's the thing: Moldova is governing. But they're doing it while running on fumes. And when you're that depleted, you become dependent on external support not because you want to be, but because you literally don't have the capacity to do it alone.
Why this matters more than it seems
These aren't just implementation hiccups. They're structural vulnerabilities that compound over time through three mechanisms:
Institutional lock-in: When regulators lack technical capacity, they rationally outsource to consultants and compliance vendors. But once embedded, these dependencies are hard to unwind. The internal expertise that should have developed never does. Responsibility stays with the state; practical control migrates elsewhere.
Technological lock-in: AI governance isn't abstract - it runs on physical infrastructure. Cloud platforms, data centers, proprietary software. Once these are deployed, they determine what oversight is even possible. You can't audit what you can't access. You can't modify what's controlled by someone else's architecture.
Cognitive lock-in: Governance becomes performative. Laws exist, registries are created, ethics boards meet - but the actual interrogation of systems never happens. Compliance is demonstrated through documentation rather than substantive oversight.
The gray-zone problem
This vulnerability gets weaponized in what security analysts call "gray-zone" operations - the space below actual warfare but above normal competition. Disinformation, cyber operations, economic leverage, technological influence.
Think about Europe's energy dependence on Russia pre-2022. Those pipelines didn't get built maliciously - they were rational economic decisions at the time. But once in place, they became strategic leverage. Political leaders could change their minds quickly about Russian energy; the physical infrastructure couldn't change at all.
AI procurement is becoming the new energy pipeline. Decisions made under crisis conditions - to secure compute capacity, demonstrate compliance, or access expertise - embed dependencies that persist long after the political situation that justified them has shifted.
And here's what makes AI governance particularly exploitable: decisions get made fast (often under political pressure), but their consequences unfold slowly and resist reversal. That temporal mismatch creates windows for influence that gray-zone actors actively exploit.
What would better governance look like?
I'm calling it independence within alignment - the idea that countries should fully participate in European AI standards while preserving the capacity to govern what they adopt.
This isn't about abandoning coordination or going it alone. It's about ensuring that alignment builds institutions instead of substituting for them.
Concretely, I think this requires:
Optionality: Governance systems must preserve real exit pathways. Can you change vendors? Can you modify oversight mechanisms? Can you audit the actual code? If not, you don't have governance - you have a dependency.
Voice: States need to participate in rule interpretation and evolution, not just rule adoption. Otherwise "harmonization" becomes extraction of compliance rather than co-production of governance.
Continuity: Human governance capacity needs to be treated as sovereign infrastructure. Auditors, technical regulators, procurement specialists - these can't be temporary contractor positions that exist only during pilot phases. If capacity disappears when funding does, you never had capacity.
One practical mechanism I'm proposing is a Digital Capacity & Safeguards Corps. Think of them as time-bound, deployable teams with a clear mandate to build indigenous oversight capability - and explicit exit guarantees. They'd train domestic auditors, establish model evaluation capabilities, embed audit rights into contracts, and design governance systems that remain functional after external support withdraws.
Crucially, their success would be measured by their ability to leave, not by how long they stay.
Why this matters beyond Eastern Europe
Europe's margins aren't peripheral to AI governance - they're stress tests for the entire system. They reveal what happens when ambitious regulatory frameworks meet institutional asymmetry, time pressure, and adversarial conditions.
If governance frameworks can't hold at the edges, they won't hold at the core either. The vulnerabilities we're seeing in Albania and Moldova today could emerge anywhere that faces similar pressures tomorrow.
The stakes
We're at a choice point. The EU AI Act establishes crucial protections and standards - I'm not arguing against harmonization. But if we implement it in ways that prioritize speed and uniformity over building actual oversight capacity, we risk creating systems that look strong at adoption but become brittle under pressure.
Alignment without capacity isn't governance. It's dependency in a formal wrapper.
The world isn't going to stand still. Political alignments will shift. New security threats will emerge. Technologies will evolve in ways we can't predict. In that context, the question isn't whether to coordinate, but whether to coordinate in ways that preserve the ability to adapt when conditions inevitably change.
Governance systems designed for permanence in a volatile world don't endure - they harden and then fracture. Systems designed for adaptation do endure.
That's what independence within alignment tries to capture. Not resistance to European frameworks, but insistence that participation preserve agency across time.
_______________________________________________________
The complete paper with citations, theoretical framework, and detailed case analysis is available here: Independence Within Alignment: Avoiding Governance Irreversibility in European AI Policy
I welcome feedback, challenges, and extensions of this thinking. This is an evolving argument, and I'm particularly interested in hearing from people working on AI governance implementation, democratic resilience, or security policy in contexts facing similar pressures. This is my first post on the EA Forum, so please be kind. 🙂 Thank you.
