
Cross-posted from Alygn. We are building what this post describes. Feedback welcome, especially pushback.


The Gap We Keep Circling

The AI safety community has produced extraordinary work on alignment, interpretability, and existential risk. What it has not produced, and what nobody has produced, is the institutional infrastructure that sits between that research and the actual deployment decisions happening right now, at scale, across thousands of organizations that will never read an alignment paper.

This post is about that gap, why it is structurally harder to close than it looks, and what we think an honest attempt to close it actually requires.


Two Problems That Look Separate But Are Not

Problem One: Operators are deploying AI without governance frameworks.

Every week, companies put autonomous AI agents into production to handle customer calls, route sensitive decisions, and process personal data, without documented incident response protocols, without vendor risk assessments, and without any framework they could show a regulator, a lawyer, or an affected person if something went wrong.

This is not malice. It is the predictable result of a world where deployment speed is rewarded and governance infrastructure does not exist at the scale and accessibility operators need. The companies doing this are not frontier labs. They are logistics companies, financial services firms, legal practices, municipalities. They do not have safety teams. They have engineers shipping products.

The governance gap at the operator level is not a future problem. It is happening now, and the absence of documented frameworks means that when incidents occur (and with autonomous systems operating at scale, they will), there is no institutional record of what was known, what was decided, and why.

Problem Two: The court system cannot handle AI incidents.

When an AI system causes harm, the affected person's primary recourse is civil litigation. This is a serious problem for three reasons.

First, speed. Civil litigation moves in years. AI incidents compound in days. By the time a court resolves anything, the technical context is unrecognizable, the affected party has exhausted their resources, and any deterrent effect on the deploying organization has long since dissipated.

Second, technical competence. Courts were not designed to adjudicate questions about model behavior, training data, vendor dependency chains, or the relationship between a system prompt and an autonomous agent's decision. The expertise required does not exist in the legal system at the volume AI incidents will demand.

Third, incentive structure. Litigation is adversarial by design. What AI incidents often require is not an adversarial process but a rapid, technically informed determination of what happened, who bears responsibility, and what remediation is owed. Those are coordination problems, not contest problems.

The result is that the people most likely to be harmed by AI failures (the customers of operators, not researchers following the frontier) have the weakest institutional protection.


Why No Single Actor Can Fix This

The obvious response is that labs should build better governance. Some are trying. But labs face a structural credibility ceiling: an AI company cannot independently certify its own safety, cannot credibly arbitrate disputes involving competitors, and cannot coordinate with rivals on shared standards without an independent institutional intermediary. Any governance infrastructure a lab builds for itself is, by definition, not neutral.

Operators cannot build this either. They lack the technical expertise, the institutional standing, and the scale to develop frameworks that would hold up under external scrutiny. They need something to adopt, not something to invent.

Regulators can mandate governance requirements, and increasingly they are. But regulation is jurisdiction-bound, reactive by design, and consistently slower than the deployment decisions it is meant to govern. By the time a statutory framework arrives, the practices it regulates are already established.

The pattern here is familiar. Finance built clearinghouses because no single bank could credibly clear its own trades. Aviation built independent safety bodies because no airline could credibly investigate its own accidents. Nuclear built international oversight regimes because no state could credibly verify its own compliance with nonproliferation commitments.

Every complex, high-risk industry has eventually built a neutral institution to solve exactly this coordination failure. The institution that gets built before the crisis defines the terms. The institution built after carries the compromises of the moment.

AI has not built its equivalent yet.


What We Are Building

Alygn is an independent AI governance institution. We are not a lab, a regulator, a compliance software vendor, or a policy advocate. We are building the coordination infrastructure that fills the gap between safety research and deployment reality.

In practice, this means three things:

Governance readiness for operators. Structured assessments that identify where an organization's AI deployment practices create undocumented risk, followed by the documentation and protocols that close those gaps. Incident response frameworks, vendor risk assessments, customer consent registries, governance statements for external stakeholders. The things that sound administrative until the moment they are the only thing standing between an organization and a very bad situation. A sketch of what one such artifact might look like appears after this list.

AI dispute arbitration. A specialized framework for resolving AI-related incidents that moves at the speed the situation demands, with adjudicators who understand the technical context. Not a replacement for courts; a complement that handles the volume and complexity courts cannot absorb. We believe access to technically competent, expedited arbitration is a meaningful protection for people affected by AI failures, not just a risk management tool for organizations.

Inter-lab coordination. A neutral forum for governance dialogue between competing AI laboratories. Labs cannot coordinate on safety standards, incident response, or governance practices without a credible neutral party facilitating that process. The competitive pressure that drives the race to the bottom on safety is not something any individual lab can resolve from the inside. Shared standards enforced through a neutral institution are the only mechanism that changes the incentive structure without requiring a regulator to impose it.
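
To make the first of these concrete, here is a minimal sketch of what one governance artifact, an incident record, might look like as a data structure. Every field name and example value is illustrative, not our actual assessment schema; the point is the kind of institutional record whose absence Problem One describes.

```python
# Illustrative only: a minimal incident record an operator might keep.
# Field names and example values are hypothetical, not Alygn's framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentRecord:
    """What was known, what was decided, and why, captured at the time."""
    incident_id: str
    occurred_at: datetime
    system: str                      # the deployed AI system involved
    vendors: list[str]               # upstream vendor dependency chain
    what_was_known: str              # facts available at decision time
    decision: str                    # action taken (rollback, escalate, ...)
    rationale: str                   # why that action, given what was known
    affected_parties_notified: bool
    remediation: str = ""            # remediation owed or delivered
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


record = IncidentRecord(
    incident_id="2025-014",
    occurred_at=datetime(2025, 3, 2, 14, 7, tzinfo=timezone.utc),
    system="claims-triage-agent-v2",
    vendors=["model-api-provider", "telephony-platform"],
    what_was_known="Agent misrouted 31 calls that were flagged as urgent.",
    decision="Rolled back to v1; urgent queue manually reviewed.",
    rationale="Misrouting rate exceeded the documented escalation threshold.",
    affected_parties_notified=True,
    remediation="Callback within 24 hours for every affected caller.",
)
```

The code is beside the point; what matters is that "what was known, what was decided, and why" becomes a record written at the time, not a reconstruction assembled under legal pressure later.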


The Dignity and Liberty Protection Standard

One contribution we are developing publicly is a reference framework for evaluating AI deployments against human dignity and individual liberty: the Dignity and Liberty Protection Standard.

It is not a certification scheme or a compliance checklist. It is a structured set of questions any organization can use to evaluate whether its AI systems preserve human agency, avoid manipulative design patterns, and treat human safety as a non-negotiable constraint rather than a variable to be optimized.

The three domains (a sketch of how they might be encoded follows the list):

Agency preservation. Does the system expand human choice or constrain it? Are the options it presents to users genuinely open, or are they shaped to produce a specific outcome regardless of user preference?

Cognitive sovereignty. Does the system inform or manipulate? There is a meaningful distinction between providing relevant information and exploiting psychological patterns to bypass deliberate decision-making. The standard asks which side of that line a given deployment sits on.

Physical integrity. Does the system treat human safety as a hard constraint or a variable? In any conflict between operational efficiency and human physical safety, the standard requires the latter to win without exception.
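
As an illustration only, here is a minimal sketch of how an organization might encode the three domains as a self-assessment checklist, with physical integrity treated as a hard constraint. The question wording is compressed from the descriptions above; the published standard, not this sketch, is the reference.

```python
# Illustrative only: the three domains as a self-assessment checklist.
# Question wording is compressed and hypothetical, not the standard's text.
from dataclasses import dataclass


@dataclass
class Question:
    domain: str                     # which of the three domains it probes
    text: str                       # the question asked of the deployment
    hard_constraint: bool = False   # must pass; cannot be traded off


CHECKLIST = [
    Question("agency_preservation",
             "Are the options presented to users genuinely open, rather "
             "than shaped to produce a specific outcome?"),
    Question("cognitive_sovereignty",
             "Does the system inform users rather than exploit "
             "psychological patterns to bypass deliberate decision-making?"),
    Question("physical_integrity",
             "In any conflict between operational efficiency and human "
             "physical safety, does safety win without exception?",
             hard_constraint=True),
]


def evaluate(answers: dict[str, bool]) -> list[str]:
    """Return the domains where the deployment does not pass.

    A failed hard constraint is flagged explicitly: it fails the
    deployment regardless of how the other domains score.
    """
    failures = []
    for q in CHECKLIST:
        if not answers.get(q.domain, False):
            suffix = " (hard constraint)" if q.hard_constraint else ""
            failures.append(q.domain + suffix)
    return failures


# A deployment that preserves agency and informs users but treats
# safety as a variable still fails, on the hard constraint.
print(evaluate({"agency_preservation": True,
                "cognitive_sovereignty": True,
                "physical_integrity": False}))
# -> ['physical_integrity (hard constraint)']
```

Treating physical integrity as a hard constraint in the data model, rather than as one score among several, mirrors the standard's requirement that safety is not a variable to be optimized.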

We are developing this as a public institutional contribution. Our intention is for it to become a reference point for industry self-regulation, regulatory frameworks, and due diligence benchmarks as AI governance matures, not a proprietary tool Alygn charges to apply.


What We Are Not Claiming

We are not claiming that governance infrastructure solves the alignment problem. It does not. A well-governed deployment of a misaligned system is still a misaligned system.

We are claiming that the absence of governance infrastructure makes every other safety effort harder to translate into protection for the people actually affected by AI deployment today. Alignment research that never reaches operators, safety frameworks that never change operator behavior, and incident response that relies on a court system designed for a different era all point at the same missing layer: institutional coordination infrastructure.

We are also not claiming that Alygn has solved this problem. We are at the beginning of what is, by design, a long institutional build. The institutions that function today were not built quickly.


Why This Is Relevant to the EA Community Specifically

The existential risk framing that motivates much of this community's work implies a transition period: a period during which AI systems are becoming significantly more capable but have not yet reached the threshold where the most serious risks materialize. That transition period is also the period during which governance infrastructure needs to be built.

Institutions formed after a crisis are captured, politicized, or imposed. The actors who build the neutral coordination layer before the moment of urgency are the ones whose frameworks survive that moment. The actors who wait inherit whatever gets assembled under pressure.

The EA community has invested heavily in technical safety research and in policy advocacy. The institutional infrastructure layer (the coordination mechanisms between labs, between operators, and between technical safety work and deployment reality) is underdeveloped relative to its importance.

We think that is worth discussing. Pushback on the framing, the approach, or the specific mechanisms we are building is welcome and useful.


Tania Lea Aizenman Sanchez is Founder and CEO of Alygn, an independent AI governance institution incorporated in Texas. Alygn can be reached via LinkedIn.
