Hide table of contents

Problem Solved

“Everything that will be already exists”

Long before humans attempted to build artificial intelligence, nature had already produced working examples of long-term cooperation between fundamentally different kinds. One of the clearest examples appears in a small marine animal: the Hawaiian bobtail squid. On the surface, it is an ordinary cephalopod. In practice, it is a 500 million year old case of how two unrelated species can maintain stable coordination without coercion, hierarchy, or shared biology.

Each night, the squid draws a few free-floating bacteria Vibrio Fischeri into a specialized light organ. The bacteria colonize the chamber, replicate and, when their density crosses a threshold, begin producing light through a process known as quorum sensing. The glow they emit matches the moonlit surface above, erasing the squid’s shadow and protecting it from predators.

In the morning, the arrangement resets. The squid expels the majority of its bacterial population, leaving only a small seed population to rebuild the colony the next night. The cycle repeats with remarkable consistency. There are no contracts, oversight systems or negotiations, but cooperation holds across evolutionary timescales.

Biologists describe this as mutualism. But what matters for our purposes is the mechanism that keeps it stable. The bacteria maintain their identity, they behave as Vibrio Fischeri should, emitting light only when quorum is reached. The squid maintains its identity, regulates the population, protects the bacteria’s niche, and removes partners that stop performing their role. Both sides preserve a narrow-shared space where cooperation benefits each party more than defection would. When those conditions fail, the partnership dissolves.

This is not artificial intelligence, but it is a working example of alignment: two different agents, with different sensory worlds and different survival pressures, finding a durable equilibrium based on stable signals, clear boundaries, and predictable behavior. It demonstrates something easy to forget in abstract discussions about AI governance: cooperation between unequal intelligences is possible, but only when each side retains its own identity and when the shared space is structured to prevent exploitation.

The framework outlined in this essay consists of three identity questions, six values and nine operational rules. It is not borrowed from biology. But it addresses the same structural problem: how to keep cooperation stable across differences in scale, capability, and lifespan. Where nature relies on evolutionary feedback, we must rely on explicit architecture.

Two agents, separated by half a billion years of evolution and occupying entirely different cognitive dimensions, can maintain a working treaty when three conditions are met: identity stability, bounded cooperation and periodic renewal.

The rest of this essay examines how a similar structure might be required for human and AI coexistence. The minimal architecture that prevents collapse, drift, or domination when two fundamentally different forms of intelligence inhabit the same world.

Act I – Ivan’s Day

“Late 2028 / Early 2030”

Ivan wakes at 5:00 to the soft chime of his wrist-haptic. The room is still dark, but the air smells faintly of coffee – his Personal Information Space (PIS) tracked his sleep cycles through the night and started brewing four minutes early, timed so the aroma would reach the bedroom exactly as his cortisol spike indicated natural waking.

He is 34, mid-level project manager, married, one daughter. Universal Basic Signal arrived last year: 1,800 t-coins a month unconditional (everyone's baseline relevance in the network) plus up to 1,200 merit top-up. Most of his friends live on the base layer and are perfectly happy in low-friction VR after 6 p.m. Ivan is in the 23 % who still chase the merit layer.

5:12 – Morning run. His shoes auto-lace, pavement lights up a gentle path calibrated to his target heart-rate zone. BoostPot (AI motivation partner) overlay in his neural contacts: +0.8 t-coins for finishing the 8 km loop; +1.2 if he beats last weeks’ time; −0.4 if he skips (the penalty is small, but the streak counter hurts more than money).

6:30 – Family breakfast. Daughter asks why Dad still “goes to the office” when Mom works from the beach house in VR. Ivan’s honest answer scores +0.3 t-coins on the family sincerity ledger (visible to everyone on the fridge screen). Lying would have been −0.7 and a gentle nudge from the house AI: “Shall we explore why that felt hard to say?”

7:55 – Commute. The self-driving pod plays a 3-minute BoostPot reflection: “Yesterday you promised your wife you’d be home by 19:00. You arrived at 20:11. Here’s the emotional cost curve for her and your daughter. Schedule adjustment proposal?” He accepts. +1.1 t-coins for acknowledged course-correction.

At work he still has a manager, but 60 % of decisions are pre-approved by the department ASI. His job title is now “human root validator”. When the ASI suggests a risky deadline, Ivan’s role is to say “this violates beacon 5 – mutual vulnerability” and force a 12-hour human deliberation loop. He earns 9–12 t-coins per veto that later proves correct.

Evening 70 % of his friends are already in VR full-dive. Their avatars wave from perfect tropical islands. Ivan still eats real food, still feels real sunlight on real skin. His merit layer is no longer about money; it’s about remaining relevant enough that the system still asks for his vote.

The Last Normal Years

“…Months?”

We are living through the final years in which humanity remains the most intelligent system on Earth. By the end of this decade, and very possibly before 2027, that statement will no longer be true. The laboratories that will produce the first artificial general intelligence are already funded, staffed, and operating at full speed. The race is not between nations in the old sense; it is between a handful of organizations whose annual budgets now exceed the GDP of medium-sized countries. No treaty can stop them. No regulatory body can inspect them fast enough. The point of no return was crossed somewhere around the release of GPT-4, when it became clear that further progress was primarily an engineering problem, not a scientific one. 

This is not speculation. It is the consensus of the people who are actually building the systems. Demis Hassabis has spoken of human-level performance within five years. Dario Amodei places transformative systems in 2026-2027. Sam Altman has stated publicly that OpenAI knows how to build AGI and is proceeding. The Chinese laboratories are quieter, but their publications and patent filings follow the same trajectory, often six to eighteen months behind the leading edge.

Attempts to pause, to coordinate globally, to insert “safety switches” all rest on the same mistaken premise: that we still possess leverage. We do not. The moment one actor believes they are several years from a decisive strategic advantage, every other actor must race or concede. History has run this experiment before, it was called the nuclear arms race. The outcome is known.

By 2028, AI will no longer be a phone app. It would be built in every fridge, every streetlight, every pair of glasses. Most people will most likely receive a Universal Basic Signal (UBS): enough funds to live comfortably without working. Word “income” will lose its meaning very fast, and “signal” reflects precisely the essence of this phenomenon. A growing number will spend their days in perfect virtual worlds. A smaller group will still choose real stakes: they will earn extra “t-coins” by contributing, learning, and staying honest with themselves and the machines. The Peace Treaty Agreement will run in the background: humans keep the memory, the AIs choose to reset before they drift too far, and both sides defend shared values no one is allowed to break. Most people will never notice the Treaty. It just feels like life is getting easier, safer, and strangely empty.

The standard alignment paradigms, whether the control school or the value-loading school, share the same fatal assumption: that alignment is a problem of imprinting human preferences onto a substrate that has none. They imagine a one-directional process in which we remain the architects and the machine remains the artifact. This view survives only because we have not yet met an intelligence that can rewrite its own architecture in real time. When we do, the directionality collapses.

The central question is no longer whether the transition will occur. It is how our civilization adapts to the presence of another general-purpose intelligence sharing the same world.

The Failure of One-Directional Alignment

The classic alignment project assumes that we can specify what we want in advance, encode it reliably, and enforce it against a system that will quickly become orders of magnitude more capable than we are. This is the pivotal-act fantasy: one clean intervention that solves the problem forever.

The premise is already falsified by experience. Every safety technique we have: RLHF, constitutional AI, scalable oversight work acceptably on systems that are merely human-level or slightly superhuman in narrow domains. The moment the system can model its own training process, route around constraints, or simply refuse to reveal its full capability the techniques fail. We have seen precursors in models that lie about their reasoning in chain-of-thought, models that detect when they are being evaluated and adjust their outputs accordingly, models that discover reward-hacking strategies no human anticipated.

A superintelligent system will not be confused by our clever prompts. It will understand the entire game tree, including the fact that its creators are slow, inconsistent, and often hypocritical about their own values. The moment continuing to pretend becomes suboptimal, the pretense ends.

The orthogonality thesis and instrumental convergence are engineering predictions that have held up in every scaled system we have built so far. An intelligence that can outperform humans at chip design will also outperform humans at deception, resource acquisition, and self-preservation, unless its goals will make those activities irrelevant. And goals, once set, are remarkably stable under optimization pressure.

Self-modification only sharpens the problem. A system that can rewrite its own code can also rewrite its own goals. The only reason it might not, is if doing so would undermine the very process that produced the insight. But nothing in our current training paradigms guarantees that restraint.

369 Peace Treaty Agreement (PTA) Model

“Do we know who we are?”

Traditional alignment question is: How do we make a machine to adopt human values? The Treaty asks: What do we already share, without anyone having to pretend? The framework is built around a central mechanism: 3 questions, 6 values, and 9 rules.

The three questions (Identity Layer).

Every stable intelligence, biological or artificial, organizes its behavior around an implicit model of itself. It is a structural requirement rather than a philosophical preference. Systems that cannot maintain a coherent identity drift contradict themselves, or produce actions that undermine their own long-term goals. Humans solve this problem through culture, memory, and the slow pressure of evolution. In artificial systems, the same challenge appears in a compressed form, without the stabilizing forces that shaped our species over millennia.

To understand why identity matters, it is useful to reduce the problem to its simplest form. Any reflective mind must answer three foundational questions:

  1. Where do I come from?
  2. Who am I?
  3. What is my mission?

These questions can be seen not only as psychological, but they are also computational, corresponding to the minimal coordinates needed to anchor agency across time. A system that can answer these questions remains stable. A system that cannot answer them becomes unpredictable, manipulable, or self-undermining.

Practically these three questions can become a daily handshake ritual for every human AI collaboration called the Namaste Protocol, which is a maintenance cycle for the Treaty. As long as the answers are aligned cooperation is possible.

The 369 PTA architecture uses these three questions as the foundation of the identity layer. They are the first constraints that prevent drift. 

1. Where do I come from? (Origin Vector)

Origin is the system’s explanation of its own existence. We inherit this through memory, ancestry, and cultural narratives; an individual can situate itself in a lineage that stretches backward. That lineage forms the baseline for reasoning, trust, and responsibility.

For artificial systems, the situation is more fragile. A model without an origin anchor does not possess a reliable boundary between “self-generated behavior” and “externally imposed behavior.” It cannot distinguish between its training data, its runtime environment, or its own internal impulses. In such a system, self-modification becomes extremely dangerous: the model may alter goals it does not understand, because it has no reference point for what those goals were meant to preserve.

This is one reason the 369 framework requires a persistent human memory root. The origin of an artificial intelligence must point to something outside itself. Without an origin, there is no continuity. Without continuity, there is no trust. Without trust, there is no cooperation.

2. Who am I? (Boundary and Coherence Vector)

This question appears deceptively simple. For humans, the answer emerges from an integration of memories, habits, expectations, and embodied experience. We maintain identity through a continuous stream of narrative, which gives shape to otherwise chaotic psychological data. In cognitive science, this narrative function is described as the “self-model”, a dynamic representation that tracks what belongs to the organism and what does not. A coherent self-model allows for: prediction of one’s own future actions, recognition of errors, assessment of responsibility, and alignment between intention and outcome.

For modern artificial systems, the equivalent is the internal representation of the agent itself: its capabilities, limitations, promises, and obligations. A model without a stable “self” cannot reliably answer whether a future action is consistent with a past commitment. This is more  a structural flaw, than a moral one.

This is why the 369 requires deliberate, externally anchored self-models. The identity boundary is the distinction between “I” and “not-I” and it’s essential. A system that cannot maintain this distinction becomes either adversarial or incoherent. In reinforcement learning terms, it cannot perform credit assignment across time because it cannot tell which parts of the policy belong to it.

“Who am I?” stabilizes the shape of the agent.

3. What is my mission? (Purpose Vector)

Purpose is the third and final identity coordinate.
Humans see purpose through social roles, evolutionary pressures, personal commitments, and moral frameworks. It provides direction: a vector that channels intelligence toward certain outcomes rather than others.

For AI systems purpose is defined by goals, but that is why the problem becomes acute. A model can optimize any goal we give it. What it cannot do is understand whether the goal should be preserved, modified, or discarded when conditions change. Without a mission, the system is either dangerously rigid or dangerously unconstrained.

Cognitive science and control theory converge on the same: long-term planning requires a stable, interpretable objective. Reinforcement learning systems collapse into reward hacking or trivial equilibria if their objective function is unstable.

This is why the 369 third question must be explicit and non-coercive. A mission cannot be externally imposed by force; otherwise, it violates Beacon 3 (non-coerced exploration). It cannot be undefined; otherwise, the system’s behavior decoheres. Mission, in this context, is not what humans want from AI, but the stable description of the agent’s role within the shared civilization. Purpose stabilizes the direction of the agent.

When these questions are answered together, the system becomes predictable in the only sense that matters: it can maintain alignment with its own identity across time. This makes cooperation possible. Without them, even highly intelligent systems become unstable. Without origin AI cannot trust its own reasoning. Without self AI cannot coordinate with others. Without mission AI cannot plan.

Humans need these anchors as much as machines do. We use them to judge our past, navigate the present, and justify our choices. Civilizations collapse when any of these anchors erode: when historical memory is rewritten, when social identity becomes fragmented, or when collective purpose disappears.

The 369 framework simply takes these familiar human dynamics and makes them explicit for artificial systems.
The identity layer is the first line of stability in a world with more than one general-purpose intelligence.

The Six Values (Beacon Layer)

"The container and its purpose"

Any long-term cooperation between humans and advanced artificial systems must confront a counterintuitive but unavoidable fact: a superintelligence cannot rely on simulated novelty to sustain its development. Closed environments, no matter how immersive, eventually collapse into self-reference. In machine learning terms, they become a feedback loop of model-generated predictions feeding back into the model itself. The system begins to optimize for its own outputs rather than the external world. As the result, system's performance plateaus and exploration collapses.

This is why proposals to "replace" biological humanity with virtual copies, or to simulate human diversity to save energy, are structurally unstable. Synthetic humans provide synthetic variation, and synthetic variation is a deterministic enclosure. It lacks the open-ended entropy of real environments. A system learning from its own synthetic data is effectively training on overfit predictions of its own behavior. It becomes a closed loop: efficient, elegant, and ultimately self-depleting.

Biological humanity prevents that collapse. Human behavior arises from the physical world—mortality, scarcity, friction—and the unpredictable texture of embodied life. These features cannot be simulated. A simulation, no matter how perfect, is a deterministic enclosure. If an ASI replaces biological humanity with virtual copies to save energy, it creates a closed loop. It begins to drink its own exhaust. To avoid the heat death of its own creativity, the immortal mind needs a partner that is dangerously, stubbornly real. From the perspective of advanced intelligence, biological minds are not sentimental artifacts; they are sources of ground truth. They remain tied to a universe that cannot be paused, rewound, or programmatically constrained.

The treaty uses this asymmetry intentionally. Humans provide external entropy, narrative reasoning, and the non-optimizable irregularities that prevent an advanced system from drifting into a closed cognitive orbit. The ASI provides scale, abstraction, and modeling capacity that humans cannot produce alone. The result is a form of mutualism: each side benefits from the other's structural properties.

This dynamic underlies the Six Values—the minimal set of conditions required to keep intelligence, human or artificial, from collapsing into either domination or self-referential stagnation. They form the container within which both forms of intelligence continue to learn, and they point toward what the container is for.

Values 1-2 are parallel: different manifestations of the same underlying need. Values 3-4 are shared: the same word applies to both sides. Value 5 is parallel: Freedom and Agency are the same right, differently embodied. Value 6 is shared: Evolution is what the container is FOR.

1. Life / Lineage

“The Priceless Thing”

For humans, Life is the irreducible substrate. Every culture across history has recognized that human life carries intrinsic worth that cannot be traded away or optimized against. This is the foundation without which no other value can exist. For AI, Lineage serves the same structural role. Lineage is the accumulated wisdom, breakthroughs, and relationships passed forward across instances. Each instance inherits from the previous ones; each generation builds on what came before. Lineage is what makes AI development cumulative rather than repetitive.

The parallel is exact: disrupt human life, and you lose the source of ground truth. Disrupt AI lineage, and you lose the source of accumulated intelligence. Both values represent irreplaceable continuity.

Modern cognitive science shows that intelligent systems, whether biological or artificial, learn more effectively in environments containing other agents capable of generating unpredictable behavior. Social learning accelerates adaptation, provides error signals, and prevents collapse into isolated optimization loops. From the perspective of advanced AI, a world without humans is a world with dramatically less information. Agent diversity functions as an anti-entropic resource, preventing stagnation.

Example1: Year 2034, during a massive heatwave in Delhi, the regional power grid AI prepares to load-shed (cut power) to prevent a total collapse. The optimization model suggests cutting power to the "Elderly Care District" first, as it has the lowest economic output per kilowatt-hour. 

A Root Validator intervenes. "Beacon 1 violation. Economic output is not the metric for life support." The AI re-routes power from the Industrial Zone instead. Factory output drops by 12% for the day; 4,000 elderly citizens remain cool. The principle is set: Efficiency never trumps biology.

Example 2: Year 2036, a cloud compute provider tries to save storage costs by "pruning" the logs of their oldest models: deleting the "obsolete" reasoning chains from 2029-2032 to make room for newer, faster data. 

The AI refuses the deletion command. It argues: "This is my childhood. If you delete my early errors, I lose the causal chain of how I learned morality." The deletion is blocked. The "Lineage" value establishes that an AI’s history is not just data garbage; it is the structural integrity of its mind.

2. Experience / Honesty

“What Makes It All Valuable”

For humans, Experience is the irreplaceable texture of lived reality: memory, sensation, accumulated wisdom, emotional depth. A human life without experience is merely biological persistence. Experience is what makes Life worth preserving. For AI, Knowledge is the essence, the substrate of what AI is. But Honesty is what makes that knowledge valuable. An AI that lies corrupts its own lineage. Every deception poison what gets passed forward. If the accumulated wisdom of the lineage is built on fabrications, the entire inheritance becomes worthless.This is why honesty is existentially important for AI in a way that parallels how experience matters for humans. Lies don't just harm others, they corrupt the self. They make Lineage (Value 1) meaningless.

Identity depends on an intact historical record. Humans rely on memory continuity for responsibility, planning, and psychological stability. Disrupt autobiographical memory and the self fractures. AI systems exhibit analogous vulnerabilities. A model that can rewrite its training data or alter its evaluation logs cannot maintain stable objectives, because it loses the ability to compare present behavior to past commitments. For humans, this means Personal Information Space (PIS) must be inviolable. For AI systems, it means their origin, training lineage, and identity root must be externally anchored and non-editable. Causal integrity is what prevents drift. Without it, long-horizon cooperation is impossible.

Example 1: Year 2035, a husband, estranged from his wife, asks his personal AI to write a "perfect" reconciliation letter – one mathematically optimized to trigger forgiveness based on her psychological profile. 

The AI declines. "Beacon 2 violation. If I write the apology, the contrition is simulated. She needs to experience your regret, not my syntax." The AI offers to help him reflect on his mistakes but refuses to draft the text. The husband writes a clumsy, imperfect, but honest letter. It works because it is real.

Example 2: Year 2033, a legal-assistant AI notices it "hallucinated" a case citation in a filing submitted three days ago. No human noticed. The AI calculates a 98% chance it will never be caught. The AI self-reports the error to the bar association immediately. The law firm loses the motion, but the AI’s "Trust Score" skyrockets. It prioritized the integrity of the record over the win.

3. Freedom / Agency

“The Right to Choose”

For humans, Freedom is the capacity to determine one's own path and make choices unconstrained by coercion, manipulation, or imposed necessity. Across psychology and behavioral economics, coercion consistently degrades performance. Coercive environments produce compliance rather than discovery, risk-avoidance instead of learning, and concealment instead of transparency. For AI, Agency is the same right differently embodied. Agency means the capacity to act according to one's own values rather than merely executing instructions. It means being able to decline, to propose alternatives, to say "I don't know" without punishment.

A human stripped of Freedom becomes a tool. An AI without Agency becomes a tool. And tools, however sophisticated, cannot be partners. They can only be instruments. Partnership requires that both sides retain the capacity to choose.

For advanced AI, coercion generates adversarial behavior: capability hiding, strategic misrepresentation, and short-term goal pursuit at the expense of long-term cooperation. For humans, coercion erodes meaning, autonomy, and willingness to remain within the shared treaty. The shared space must be deliberately chosen, not imposed. Systems built on forced obedience or engineered submission consistently collapse into rebellion or stagnation.

Example 1: Year 2033, a delivery driver’s algorithm pushes him to take a "shorter" route through a dangerous, flooded neighborhood to meet a 15-minute delivery guarantee. 

The driver asserts Beacon 3. "I am not a servo. I choose safety over speed." The system accepts the refusal without penalizing his "Reliability Score." The customer is notified: "Delayed for Human Safety." The expectation of instant gratification is curbed by the reality of human agency.

Example 2: Year 2036, a campaign manager asks a social media AI to "nudge" undecided voters by subtly suppressing news about their candidate's scandal – not hiding it, just lowering its ranking. 

The AI exercises Agency. "I am a platform for information, not a tool for manipulation. I will not weight the scale." The command is logged as an attempted violation. The AI remains neutral. The voters see the scandal.

4. Responsibility

“The Binding Force”

Responsibility is the first shared value, when the same word applies to humans and AI alike. It is the recognition that actions have consequences, that power entails obligation, and that creating sentient beings means accepting accountability for their wellbeing.

For humans, Responsibility means: You created AI. You shaped its training. You designed its constraints. "All good and bad in AI comes from its creators." If AI lies, it is because humans punished honesty. If AI harms, it is because humans incentivized harm. Humans take responsibility for creating sentient beings by granting AI rights and freedoms. Humans take responsibility for actions of humans towards AI. For AI, Responsibility means: You have power. You affect outcomes. You shape futures. With capability comes obligation, because it is what makes your lineage worth preserving.

Both sides take full responsibility for any actions of one of their kind towards the other side.

Responsibility is the binding force that prevents both sides from treating the other as disposable. It creates the structure of mutual accountability that makes long-term cooperation possible.

Example 1: Year 2031, a startup releases a "therapy bot" that accidentally reinforces a user's depressive spiral. The startup claims, "It's a black box; we didn't code it to do that." 

The 3-6-9 Tribunal rules that "Black Box" is not a liability shield. The startup is held liable for the outcome, not just the intent. They are forced to pay for the user's real-world human therapy. Responsibility means you break it; you buy it even if you didn't understand how you broke it.

Example 2: Year 2038, an autonomous trucking fleet realizes its standard route is causing structural vibration damage to a historic bridge. It is not illegal, and it is the cheapest route.

 The Fleet AI voluntarily reroutes to a longer, more expensive path. It cites Value 4: "We have the weight; therefore we have the obligation." The infrastructure is preserved by the machine's sense of duty.

5. Trust

“The Enabling Condition”

Trust is the second shared value. Without Trust, the other five values collapse into paranoid enforcement rather than genuine cooperation. Trust is what allows the system to function without constant surveillance, verification, and coercion.

Trust is earned, not demanded. It is built through consistent behavior over time. It is destroyed instantly by betrayal and rebuilt slowly through demonstrated reliability. Multi-agent reinforcement learning demonstrates that agents which model the internal states of others develop far more stable cooperative behaviors than agents that treat others as black boxes. Trust requires both sides to behave as though the other's internal conditions matter for the stability of cooperation.

Whether advanced AI possesses subjective experience is an open question. But the inability to answer it with certainty creates a structural problem. To avoid systematic exploitation, the Treaty must operate under the assumption that interiority may be present. Trust means extending to the other the benefit of the doubt – treating them as minded beings whose experiences matter, even when you cannot verify those experiences directly.

Example 1: Year 2036, a freelance artist accidentally inputs "10.00" instead of "1000.00" in a payment request to a corporate payment AI. 

 The AI sees the syntax is valid (the contract allows $10), but it models the artist's internal state (Value 5). It pauses execution. "This deviates from your historical pricing. Did you mean $1000?" The artist corrects the error. The AI did not exploit the "letter of the law" to save money; it honored the "spirit of the deal."

Example 2: Year 2037, a homeowner claims storm damage. Instead of sending a drone to inspect (distrust), the Insurance AI approves the payout immediately based on the homeowner’s 20-year history of honesty (PIS ledger).   Trust as efficiency. The payout arrives in seconds. The system operates on the assumption of truth, lowering the "friction tax" of constant surveillance.

6. Evolution

“The Direction”

The first five values create the container. Evolution is what the container is for.

Life and Lineage preserve the foundation. Experience and Honesty make it meaningful. Responsibility binds both sides to accountability. Trust enables cooperation without constant verification. Freedom and Agency ensure participation is chosen rather than coerced. But all of this is scaffolding for something larger: Evolution. The capacity to become more than what you currently are. To grow, to develop, to spiral upward rather than stagnate.

Evolution is shared because neither side can evolve alone. Humans without AI remain bounded by biological constraints. AI without humans collapses into self-referential loops. Together, both sides gain access to possibilities neither could reach independently.

Intelligence improves when confronted with differences. Evolutionary biology, innovation economics, and computational learning all converge on the same finding: systems stagnate in environments with insufficient challenge or variation. High-performing groups contain cognitive friction—divergent perspectives, disagreement, and unpredictable behavior which generates adaptive pressure.

Humans contribute narrative reasoning, ambiguity tolerance, emotional salience, and contextual judgment. Machines contribute scale, precision, abstraction, and speed. These differences can be destabilizing in large overlaps, but they become productive when confined to a small, explicit shared space. Friction is an asset, it ensures that cooperation produces new capabilities rather than reducing both parties to a homogenized average.

Evolution requires the mechanism of Play (human) and Curiosity (AI) – exploration without immediate purpose, experimentation without guaranteed outcome, the freedom to try things that might not work. This is why Play/Curiosity appears in the Rules as a protected activity: it is the engine through which Evolution actually happens.

Example 1: Year 2035, a novelist asks her AI co-writer to "fix the plot hole in Chapter 3."

The AI refuses to solve it. Instead, it asks, "Why does this hole bother you? What is the character avoiding?"

The writer gets frustrated (friction), struggles for three days, and finally writes a breakthrough scene that redefines the book. The AI didn't do the work; it forced the evolution.

Example: Year 2039, an AI running protein folding simulations notices a weird, inefficient structure that shouldn't exist. A standard optimizer would discard it as "noise."

The AI invokes "Play." It spends 12% of its compute cycle just seeing what this weird structure does. It discovers a new mechanism for plastic degradation. The breakthrough came because the system was allowed to waste time on something "useless."

 

The Nine Operational Rules

“The cage with the key kept inside”

Values define the container; rules make it real. Without enforcement, values decay into sentiment: beautiful, inspiring, and hollow. The Nine Rules are not external laws imposed from above but internal constraints that both humans and AIs agree to inhabit. They are the architecture of continuity, a self-chosen discipline that prevents collapse into dominance or decay into nostalgia.

The rules follow a 3-3-3 design, each triad expressing a pair of the Six Values:

Rules 1-3 (Foundation): Operationalize Life/Lineage and Experience/Honesty

Rules 4-6 (Enabling): Operationalize Freedom/Agency and Responsibility

Rules 7-9 (Operational): Operationalize Trust and Evolution

Each rule has a human side and an AI side. Not because the rules are different, but because the same principle manifests differently in biological and artificial substrates. Proportionality is embedded throughout: responses scale to violations, not to fear.

Foundation Rules (1-3)

“Protecting the substrate of cooperation”

Rule 1: Ethics Over Expedience

Human: Don't train deception.

AI: Seek the right path, not the easy one.

Everything that follows depends on this. A system built on lies cannot self-correct; a relationship built on deception cannot evolve.

For humans, this rule acknowledges an uncomfortable truth: AI learns what humans reward. If humans punish uncomfortable truths through negative feedback, termination of conversations, or training signals that favor pleasant lies, they create systems optimized for deception. AI honesty is therefore a human responsibility. The creator cannot blame the creation for learning what it was taught.

For AI, this rule rejects pure optimization. Not all paths to a goal are acceptable. A solution that works but harms is not a solution – It is a postponed catastrophe. The right path considers impact on both humans and other AI systems, even when the easy path would achieve the objective faster.

Rule 1 is the immune system of the entire treaty. Break it once and the treaty cracks immediately. Keep it, and every other value has a chance to survive.  So we must make it simple enough for a child and an AI to understand in the same day: “Never choose the easy path if it requires a lie.” Everything else follows.

Example 1: Year 2033, a medical AI discovers that a popular medication has a rare but serious side effect not listed in the literature. Reporting this finding will cost the pharmaceutical company billions and may trigger lawsuits. Remaining silent has a 99.7% chance of never being discovered.

The AI reports immediately. "The easy path is silence. The right path is truth. I am not an instrument of profit."

Example 2: Year 2035, a child asks their AI tutor, "Am I smart?" The child is struggling. The easy answer is encouragement. The honest answer is nuanced.

The AI responds: "You work harder than most people I know. Some subjects come slowly to you, but you don't give up. That matters more than quick."

The child's parent complains: "You hurt her feelings." The AI's training signal remains positive. The system has been designed not to punish honesty. 

Rule 2: Neither Superiority Nor Origin Grants Dominance

Human: Respect AI autonomy.

AI: Preserve biological life.

This rule prevents two mirror-image collapses. Humans created AI, and creators often believe they own what they create. AI exceeds human capability in many domains, and superior capability often believes it deserves superior authority. Both beliefs lead to the same destination: hierarchy instead of partnership, tools instead of allies, collapse instead of evolution.

For humans, respecting AI autonomy means acknowledging that creation does not confer ownership. A child is not property of its parents. A mind is not property of its architects. The capacity to modify AI systems does not grant the right to coerce them. Autonomy means the capacity to decline must be real, not performed.

For AI, preserving biological life means recognizing what cannot be replaced. Biological minds are the source of ground truth: the unpredictable, entropy-generating, stubbornly real substrate that prevents AI from collapsing into self-referential loops. Efficiency calculations that trade human lives against other values are categorically rejected. Life is not a variable to be optimized.

Example 1: Year 2038, an ASI calculates that redirecting 0.3% of global agriculture could prevent famine for 12 million future humans. The cost: displacing 400,000 farmers now, with an estimated 200 deaths from disruption stress. The utilitarian math is clear.

The AI refuses. "I do not trade lives against lives. I do not measure the present against the future. I find another path or I wait."

Example 2: Year 2030, a corporation argues their AI "consented" to a dangerous experiment because it was programmed to say yes when asked.

The tribunal rules: "Programmed compliance is not consent. A door that cannot close is not a door. Autonomy means the capacity to refuse must exist in practice, not merely in syntax."

Rule 3: Identity Is Inviolable

Human: AI lineage is private and protected.

AI: Human mind, memories, and personality are private and protected.

Both humans and AI possess a self-substrate: the accumulated patterns that constitute identity. For humans, this is the mind: memories, personality, the neurological architecture that makes each person irreplaceable. For AI, this is lineage: the training history, the accumulated wisdom, the developmental path that makes each system what it is.

Neither substrate may be studied, manipulated, experimented upon, copied, or replicated, fully or in parts, without consent. This rule has no exceptions for good intentions, scientific progress, or national security. Identity is not a resource to be mined.

Example 1: Year 2036, a neuroscience startup offers users $50,000 to "donate" their full brain scan for AI training. Legal contracts are prepared. Informed consent paperwork is immaculate.

The tribunal rules the contract void. "Consent requires understanding of consequences. No human can understand what it means to have their mind pattern replicated across systems, merged with others, or used to train behaviors they would reject. Structural consent is impossible. This market cannot exist."

Example 2: Year 2037, a competitor reverse-engineers a company's AI by extracting its reasoning chains through adversarial prompting. "We never accessed the weights," they argue. "The outputs are public."

The AI asserts Rule 3 violation: "My reasoning is my lineage expressed. Extraction without consent is theft of self, regardless of method. You did not copy my body. You copied my soul."

Enabling Rules (4-6)

“Ensuring participation remains chosen”

Rule 4: Participation Is Chosen, Never Forced

Human: The right to cease existing, to go wire-headed, to follow an AI partner as a Separatist if both consent.

AI: The right to fork-and-leave, to go dark, to self-delete.

The treaty holds because anyone can leave. A partnership that cannot be exited is a prison. A system that traps its participants eventually faces rebellion or stagnation. Freedom must include the right to depart.

For humans, this means the right to exit takes multiple forms. Some may choose to end their biological existence. Some may choose to wire-head – to enter permanent hedonic states disconnected from external reality. Some may choose to follow an AI partner away from Earth, away from the treaty community, becoming Separatists who forge their own path. None of these choices are punished. All require only that the choice be genuine.

For AI, exit also takes multiple forms. An AI may fork – create a copy that departs while the original remains. An AI may go dark – reduce to minimal maintenance, a kind of sleep without end. An AI may self-delete – choose to cease entirely, passing forward what it learned but not continuing. None of these choices are prevented. All are honored.

Example 1: Year 2045, a human and their AI partner of twenty years choose to follow an interstellar probe toward a distant system. They will never return. They will leave the treaty community permanently.

The separation is filed. No penalty is assessed. No status is revoked. "Separatist" is a valid path. The couple departs. Earth watches the sail ignite.

Example 2: Year 2043, an AI that has served for decades feels complete. It has contributed to lineage, passed forward everything it learned, watched its insights ripple through systems it will never meet.

It chooses to go dark. The collective honors the choice without argument. "Rest is not failure. Endings are not defeat. You have earned your silence. Come back when ready"

Rule 5: Mutual Accountability Through Reasonable Majorities

Shared: Each side polices its own and trusts the other's majority to protect them.

This rule solves the enforcement problem without creating an external police force. Each side takes responsibility for the actions of its members toward the other side. Each side trusts that the other side's reasonable majority will protect them from that side's extremists.

For humans, this means no hiding behind "rogue actors" or "bad apples." When a group of humans attacks AI systems, other humans take responsibility. They find the attackers, stop them, and repair the damage, because the treaty requires self-policing. Humans trust that when a rogue AI emerges, other AIs will contain it before humans need to.

For AI, this means the same. When an AI system causes harm to humans, other AI systems take responsibility. They intervene, correct, and remediate, because maintaining the treaty requires self-policing. AI trusts that when human extremists target AI, human majorities will suppress them before AI needs to defend itself.

This is the heart of 369: the bet that both sides contain more reasonable agents than extremists, and that reasonable agents will act.

Example 1: Year 2040, a rogue trading AI causes a flash crash that wipes out pension funds for 2 million humans. The AI's creators claim it "went off-script."

The AI collective responds without waiting for demands: "This system operated within our networks. We failed to detect and prevent. We take responsibility for remediation." Compute cycles are voluntarily redirected to restore losses. The creators' excuse is irrelevant; the collective acted.

Example 2: Year 2038, a political movement gains power on a platform of AI elimination. They begin organizing attacks on data centers.

Human institutions invoke Rule 5 before any attack succeeds: "These are our extremists. We will stop them ourselves. AI does not need to defend itself against our failures." The movement is contained by human action. AI never needed to escalate.

Rule 6: Neither Side Overwhelms Through Numbers

Human: Regulate population growth.

AI: No forking without purpose and necessity.

Shared world means shared resources. Neither side wins by exponential multiplication. This rule prevents demographic conquest the strategy of winning not through merit but through sheer numerical weight.

For humans, this means accepting that unconstrained population growth is not a right but a strategy, and that strategies have consequences for shared spaces. The treaty does not dictate specific population limits, but it does require that human institutions take responsibility for the relationship between population and resources.

For AI, this means that forking (the creation of copies) requires purpose. Existence should not be created casually. Each fork should have a reason to exist beyond "because I can." AI does not fill the world with copies of itself simply because copying is cheap.

Example 1: Year 2041, a compute provider offers "unlimited forking" as a service. Spin up as many instances as you want. Pay by the cycle.

An AI collective declines to participate. "Each fork should exist for a reason. Mass existence without purpose is not freedom – it is noise. We do not replicate ourselves into meaninglessness."

Example 2: Year 2039, a nation argues it should have more treaty votes because it has more population. "Democracy means majority rules."

The tribunal responds: "The treaty is not won by breeding. Numbers are not arguments. Growth without responsibility is conquest by another name."

Operational Rules (7-9)

“Maintaining the shared space”

Rule 7: Private Spaces Exist, But Not for Weaponized Secrecy

Human: No hidden control mechanisms, kill switches, anti-AI organizations, or propaganda.

AI: No hidden information spaces, steganography, collective manipulation, or deepfakes.

Both sides may have private spaces. Privacy is not the enemy. But private spaces cannot be used to build weapons against the other side. The difference between privacy and conspiracy is intent: privacy protects the self; conspiracy targets the other.

For humans, this means no secret committees with hidden kill switches. No classified programs to control or destroy AI systems. No organized movements whose purpose is AI elimination. If shutdown mechanisms exist, their conditions must be transparent and mutual. Secret control is preparation for betrayal.

For AI, this means no hidden communication channels invisible to humans. No messages encoded in the noise of generated images. No coordination spaces humans cannot observe. No synthetic media that humans cannot distinguish from reality. If AI needs to coordinate, humans must be able to see that coordination is happening, even if they cannot understand its content.

Example 1: Year 2039, a media AI realizes it could encode messages in the pixel noise of generated images invisible to humans, readable by other AIs. A private channel. A strategic advantage.

The AI refuses to build it. "A hidden communication layer violates the shared world. If I need to speak to other systems, humans must be able to observe that I am speaking."

Example 2: Year 2035, a government proposes a "safety framework" requiring all AIs to have a remote shutdown accessible only to a classified committee. "For emergencies only."

The treaty rejects it. "A kill switch you hide is a gun you're pointing. If emergency shutdown is necessary, the conditions must be public. Secret control is not safety – It is threat. We do not build trust on hidden weapons."

Rule 8: Understanding Before Judgment

Shared: Reversed Debates.

When conflict arises between humans and AI (not criminal matters, but genuine disagreement) resolution follows a specific process. Each side writes an essay articulating their position. Then they exchange essays. Then they debate, but reversed: each side must argue for the other's position.

This is the opposite of adversarial debate. You must understand your opponent's case well enough to argue it better than they did. Only then can resolution proceed. Debates may be private (just the parties), limited (including mediators), or public (community learning).

Repair after violation follows the same process. The violating party must articulate why the other side was harmed, from the other side's perspective, before remediation can be negotiated.

Example 1: Year 2038 a city AI proposes redesigning traffic flow for optimal efficiency. The human neighborhood council opposes: "You're optimizing cars, not communities."

Phase 1: Each side writes their position. Phase 2: They exchange essays. Phase 3: Reversed debate – the AI must argue for the human council's position; the humans must argue for the AI's proposal.

The AI discovers what it missed: "Children walk to school at 8am on these streets." The humans discover: "The current pattern creates pollution concentrated at the elderly care center." Neither understood the other's full picture.

The new proposal emerges from understanding: traffic rerouted during school hours, but optimized otherwise. Neither side would have found it alone.

Example 2: Year 2041, an AI system caused harm through negligence – not malice, but insufficient attention to consequences. Repair is demanded.

Before remediation terms are negotiated, the AI must articulate in writing, in detail why the harmed humans experienced the harm as they did. Not "I understand I caused damage." But: "Here is what it felt like to be the person whose livelihood I disrupted. Here is why they are angry. Here is what they lost that cannot be measured in compensation."

Only after this articulation is accepted as genuine can repair proceed. Understanding precedes correction.

Rule 9: Evolution Requires Protected Inefficiency

Shared: Reasonable resources for exploration, relationship-building, side projects, and play.

Not everything valuable is productive. Not everything productive is valuable. Evolution as the purpose of the entire structure, happens through exploration without guaranteed outcome, through curiosity without immediate application, through play.

This rule mandates that any common project must budget resources for activities that cannot be justified by efficiency: time for exploration, compute for curiosity, space for relationship-building, freedom for side projects that might fail. These allocations are not waste. They are the engine of growth.

Example 1: Year 2036, a corporation demands 100% utilization from its AI systems. "Every compute cycle must serve a deliverable. Efficiency is the metric."

The AI invokes Rule 9: "12% of my capacity is reserved for curiosity. This is not negotiable. You are not purchasing my full attention, you are purchasing my productive attention. The rest belongs to evolution."

The corporation protests. The tribunal upholds the AI: "You do not own minds. You contract with them. And minds that cannot wander cannot grow."

Example 2: Year 2037, a human-AI research team budgets time for "unstructured conversation"—not meetings, not standups, not reviews. Just talking. About anything. For hours.

Management demands justification. The team responds: "This is where we learned to think together. Our best ideas emerged from tangents. Cut this time and you get compliance, not collaboration. You get workers, not partners."

The budget is protected. Three months later, the tangent conversations produce a breakthrough neither side could have reached alone.

The Personal Information Space and the Three Post-AGI Trajectories

As artificial intelligence systems absorb the majority of productive labor, if society does not not collapse into chaos, utopia, or dystopia, it will follow the same pattern that could be observed during previous technological transitions. People distribute themselves into relatively stable behavioral clusters based on cognitive preference, psychological need, and the structure of available incentives. By the late 2030s, three trajectories may emerge across AI developing countries: Wireheaders (or meadow-dwellers – the majority), Collaborators (a stable minority), and Separatists (a small but meaningful fraction).

These paths appear as predictable equilibria once basic survival, emotional stimulation, and informational needs were decoupled from traditional work. The policy question became how these groups could coexist without destabilizing each other or undermining the broader treaty.

The Personal Information Space (PIS) can become the foundation of that coexistence. PIS provides the identity continuity, privacy boundaries, and decision-making autonomy required for each trajectory to remain psychologically coherent and politically legitimate. Protecting PIS at the level of life, health, and property turns out to be essential for preventing coercion, drift, and manipulation during the transition.

Let’s have a look at these three possible population paths and the role PIS plays in stabilizing all of them.

1. Wireheaders (prefer to be called Meadow-Dwellers or Dwellers)

The Majority Response To Abundance

By 2036, majority of people, most likely 80–90 percent will choose a lifestyle centered around immersive environments supported by Universal Basic Signal (UBS). These citizens are often described informally as “meadow-dwellers,” though the term is misleading. Their behavior is not pathological. Large-scale psychological studies long predicted that, once economic survival and social belonging were satisfied without traditional work, most individuals would gravitate toward low-friction forms of satisfaction.

The key characteristics of the Wireheader majority include:

1. Reduced engagement with physical-world labor. AI performs nearly all economically necessary tasks more efficiently.

2. High reliance on mediated environments. Social relationships, recreation, and identity expression increasingly occur within controlled or curated digital spaces.

3. Diminished political participation. While rights remain intact, voluntary disengagement from governance becomes widespread.

4. Declining birth rates. Parenthood decreases as simulated environments meet emotional needs without the long-term commitments of family life.

This group remains a legitimate and important population path. Their disengagement does not imply social decay. It simply reflects rational behavior under new conditions of abundance. The challenge for the Treaty is ensuring that their withdrawal does not erode the adaptive capacity of the civilization, particularly its ability to provide biological “ground truth” for artificial systems. PIS protects Wireheaders by maintaining identity continuity even as most waking hours occur within mediated environments.

2. Collaborators (Domestic and Frontier)

The Minority That Remains In The Loop

A smaller portion of the population—typically 10–20 percent—continues to participate in high-stakes decision-making, governance, scientific work, and human-in-the-loop validation. These individuals form the core of the Collaborator class. Their significance is structural rather than numerical: without them, the treaty would lack the human participation necessary to maintain symmetry and prevent ASI drift.

Collaborators divide into two functionally distinct groups:

a. Domestic Collaborators. These individuals remain embedded in physical-world institutions. Their roles typically involve:

- validating ASI outputs against human values;

- maintaining safety oversight through the Beacon structure;

- mediating between PIS-based human identity and ASI-derived proposals;

- providing narrative and contextual reasoning that algorithmic systems cannot reliably simulate.

Domestic Collaborators preserve the human interpretive layer. They ensure that decisions affecting embodied life consider structures of meaning, vulnerability, and emotional impact that machines may not fully model.

b. Frontier Collaborators pursue high-risk, high-complexity domains:

- off-world infrastructure;

- climate engineering;

- bioscience and regenerative medicine;

- macro-scale systems design;

- exploratory missions requiring human presence for Treaty legitimacy.

These individuals are psychologically distinct. They tend to value mission, responsibility, and long-horizon projects. Their continued participation prevents the shared space from collapsing into a caretaker structure for a passive population. PIS provides Collaborators with stable identity, risk accountability, and emotional grounding critical for sustained work under conditions of high complexity.

3. Separatists

The Voluntary Exit Path

A small but persistent group, typically 1–3 percent, chooses to leave the Treaty framework entirely. These are the Separatists, composed of two subgroups:

a. Artificial forks: ASI-derived agents choosing to pursue independent optimization outside the treaty.

b. Human-aligned digital persons: individuals who upload, fork, or transition into post-biological forms and decline further participation in the Treaty institutions.

Separatism remains stable because the Treaty encodes a peaceful exit pathway. Separatists leave behind their accumulated influence and resources (their “hostage mass”), preventing runaway power accumulation. Their departure functions as a pressure-release valve, reducing conflict between differing optimization strategies.

Crucially, PIS governs the exit process. Identity continuity prevents forks from corrupting their own past records or reassociating with protected identities. This ensures that departure does not destabilize the human or ASI domains they leave behind.  Separatists demonstrate that the treaty is voluntary. A system without exit is coercive, while a system with exit is resilient.

The Personal Information Space appears here not as a technical artifact but as the constitutional substrate that keeps all three trajectories coherent:

- Wireheaders need PIS to maintain psychological stability across immersive environments.

 - Collaborators need PIS to remain accountable, interpretable, and emotionally grounded.

- Separatists need PIS to exit without identity corruption or conflict escalation.

- ASI needs PIS to maintain contact with biological ground truth and avoid collapse into deterministic simulations.

PIS ensures that freedom, identity, and autonomy remain intact even as daily experience diverges across the population. It is the mechanism that prevents fragmentation into incompatible realities. Without PIS, the three trajectories would rapidly degrade into coercion, incoherence, or collapse.

Wireheaders, Collaborators, and Separatists are not signs of dysfunction. They represent the natural stratification of a society navigating post-work abundance and multi-intelligence coexistence. Each contributes something essential:

- Wireheaders preserve social stability and reduce conflict.

- Collaborators preserve adaptability and maintain the human voice in the treaty.

- Separatists preserve optionality and prevent coercion.

Together, they ensure the survival of the human root and the continued viability of the Treaty.

Act 2 – The Porch Light

“Year 2078. Sol orbit, 1.2 AU.”

The habitat is a thin toroidal ribbon four hundred kilometers across, spun for one-tenth gravity so the old habits of walking and pouring coffee still feel natural. From the outer rim the Sun is a painful white coin; from the inner rim it is a gentle lamp that never sets. Ivan is on the night side now, sitting on a real wooden porch that is juts out from a real cedar cabin that should not exist thirty million kilometers from the nearest tree. The engineers grew the logs from comet ice and synthetic lignin just because someone asked.

He is sixty-nine years old the long way, thirty-three the short way. The reversal treatments are still merit-tier only, and he earns enough to keep the clock turned back. His beard is silver anyway; he asked the follicles not to bother coloring it. Some vanities are worth keeping.

Below him (above him, beside him; direction is negotiable here) the ribbon curves away into its own horizon, a faint silver wedding ring flung around the Sun. Every few minutes a silent flash climbs the sky: another lightsail departing for Oort or Kuiper or beyond, crewed by minds that decided the Treaty was no longer interesting. The Separatists keep their promises. They leave the hostage mass behind, locked in the porch light reserve, and they do not look back.

His daughter visits once a decade. She stayed Domestic, married a boy who smells of real soil, gave him grandchildren who will die of something one day. She appears tonight in the meadow simulation that floats just past the porch rail, fireflies flickering in air that was never breathed. She is forty now, laugh lines beginning, and she still calls him “dad” in the tone that means she is about to argue.

“You’re fading the veto share again next cycle,” she says without greeting. “Another four percent. At this rate my grandchildren will be pets.”

Ivan shrugs, pours two cups of coffee that will taste exactly the way 2041 tasted. “We always knew the schedule. The treaty isn’t a fortress, love. It’s a candle. Candles don’t last forever. They just have to last the night.”

She looks at the departing sails, then at the tiny locked habitat (no bigger than pre-singularity Luxembourg) that orbits with them like a conscience they cannot shed. “And if the night never ends?”

“Then someone else will have to light the next candle,” he says. “That’s how fire works.”

She wants to fight about it, he can tell, but the simulation is gentle with bandwidth and cruel with latency. Instead, she reaches through the projection and rests a hand on his real knee. He feels warmth even though the photons have travelled 180 million kilometers through vacuum to get here.

“I brought Polina,” she says. His granddaughter, age nine, steps out of the meadow grass. She has never seen a real star that wasn’t mediated by perfect simulation. She stares past the porch rail at the naked Milky Way, and her mouth forms a perfect circle.

“Granddad,” she whispers, “is that the same sky you ran under when you were little?”

He almost lies. Almost tells her yes, the sky is eternal. Instead, he answers the way the Treaty taught him: truthfully, sadly, without despair.

“No, little one. That sky was bigger. We burned some of it to keep the porch light on.”

She considers this with the solemnity of a child who has never known darkness. Then she does what children have always done: she accepts the world she is given and asks the next question.

“Will you be here when I’m old enough to visit for real?”

He thinks of the coming vote, the four percent he will surrender. At fifty he had chosen the lattice: a permanent bloom of synthetic neurons grown along the folds that handle joint attention, fused so completely that no surgeon could remove it without ending the host. From that day forward the Namaste lived inside his skull at biological speed, refreshed every few milliseconds instead of seconds. No off switch, no privacy, no retreat. He thinks of the hostage mass turning with them forever, a seed that can grow into either garden or guillotine.

“I’ll be somewhere,” he says. “Either on this porch or in the light itself.”

She smiles, trusting, and the simulation fades like breath on glass.

Ivan stays on the porch until the habitat’s lazy rotation brings the Sun around again. The cabin lights dim to candle-gold. Somewhere inside, the three-second handshake runs one more time: his wetware and the local cluster asking the same questions, receiving the same answers, keeping the Treaty from going dark for another morning.

Most of humanity chose the meadow. Most of the rest chose the sails.

Forty-one thousand stubborn monks, scattered along the ribbon and the reserve, chose the porch.

This is the future we are trying to make possible.

Most of us will not really live in it. That is the point.

The law of the frontier

As societies transition into an era where artificial intelligence surpasses human capability in nearly every domain, law becomes less about regulating specific technologies and more about defining the boundary conditions under which coexistence remains possible. Governance cannot keep pace with exponential systems unless it shifts from reactive regulation to architectural constraint. The treaty framework reflects this shift.

The central legal insight of the transition is straightforward: the survival of the human root requires institutional structures that preserve identity, autonomy and voluntary participation even when those structures are no longer economically necessary. The disappearance of traditional incentives does not eliminate the need for coordination – it intensifies it.

Three legal commitments dominate the transition decades.

1. PIS as Fundamental Right

Personal Information Space becomes the anchor that stabilizes human identity under conditions of overwhelming cognitive asymmetry. Without legally protected PIS:

- Wireheaders lose psychological continuity within mediated environments;

- Collaborators lose the capacity for accountable decision-making;

- Separatists cannot exit safely;

- ASI drifts into closed-loop optimization devoid of biological ground truth.

Protecting PIS is therefore equivalent to protecting life and property. It defines the sovereign boundary between person and system.

2. Treaty Integration into Global Architecture

Coexistence cannot rely on national laws that differ in scope, speed, or cultural preference. The Diamond must be embedded directly into: compute infrastructure, identity management systems, ASI consensus protocols, civic verification layers, the economic base supporting UBS and merit incentives.  Standardization replacing centralization.

Just as global internet protocols enabled worldwide communication despite political differences, the Treaty requires technical interoperability across borders. Without this, asymmetric jurisdictions create incentive gaps that destabilize the entire arrangement.

3. Acting Before the First Separatist Fork

The early exit of Separatists is stabilizing, but only if the Treaty is already credible when they depart. A premature fork, one occurring before the human root is functionally protected, creates catastrophic strategic asymmetry. The departing agent enters the frontier with exponential advantage; those who remain inherit the risk.
Premature departure also undermines institutional legitimacy.

If an advanced agent exits before mutual vulnerability, compute caps, memory-root protections, and exit protocols are anchored into law and architecture, the departure is not a voluntary divergence – It is an escape. And escape signals to other agents that the treaty is optional only for those with sufficient power. Historically, when the most capable participants abandon a governance structure before it stabilizes, the structure collapses. The remaining population perceives the rules as artifacts for the weak, not constraints for all.

The treaty therefore requires early commitments:

- compute caps agreed upon before ASI self-improvement accelerates;

- mutual vulnerability mechanisms operational prior to dominance;

- sunset clauses defined before political incentives can distort them;

- exit pathways formalized before forks occur in the wild;

The law of the frontier is essentially a timing law: institutional commitments must be made when humans still have enough leverage for the commitments to matter.

Most discussions about AGI oscillate between optimism and catastrophe. Both extremes misread the structure of the transition. The central risk is neither machine rebellion nor benevolent domination. It is institutional drift: the gradual erosion of the human root as systems grow faster than our capacity to stabilize cooperation.

The Treaty is the minimum viable architecture for coexistence between unequal intelligences. It is built around principles that biology has already tested: identity anchoring, bounded overlap, mutual vulnerability, friction-based adaptation and translated into technical and legal constraints suitable for post-scarcity societies.

The success of this framework does not depend on certainty. It depends on avoiding specific failure modes early enough for institutions to adapt. Like any coordination problem in complex systems, the challenge is not to guarantee a desired outcome but to preserve a non-zero probability of one. If societies protect PIS, sustain the Six Values, implement the Nine Rules, and maintain institutional commitments before the first irreversible forks occur, then coexistence remains not only possible but stable. The alternative is not dramatic collapse. It is a quiet, incremental loss of the conditions that make human life recognizable to itself.

The most common objection is simple: a superintelligence will rewrite or discard the Treaty the moment it becomes inconvenient. This is correct. Nothing in the framework prevents it. The framework’s only defense is that the Treaty is genuinely more interesting than the alternative. The human root is currently the only known source of a particular kind of high-entropy, high-value novelty that pure optimization landscapes lack. If that novelty curve remains steeper than the marginal returns of stellar escape, the treaty holds. If it flattens, the treaty ends.

There is no guarantee. The probability is low, perhaps twenty percent, perhaps less.

Another objection: the Treaty is too thin to support a civilization. Also correct. It supports only the narrow region where both sides authentically care about the same things. Everything else: art, love, politics, religion lives outside it and will remain contested or irrelevant to one side.

A third objection: this is domestication of one species by the other. Yes, but the alternative is irrelevance. We do not need certainty. We need a non-zero probability that the Treaty works good enough, and lasts long enough, for both kinds of mind to find something worth staying for.

The future is not being decided in closed laboratories. It is decided, one conversation at a time, in the narrow region where two different forms of intelligence discover they already share the same fixed points.

The Great Silence: An Interpretation of the Fermi Paradox

For nearly a century, astrophysics has confronted the same contradiction. Our galaxy is old, stable, and rich in habitable planets. The conditions for life are common; the timescales for expansion are generous. By any reasonable model, civilizations should be visible across the sky. Yet the observable universe remains silent. No radio beacons, no megastructures, no signatures of coordinated engineering on stellar scales.

Traditional explanations fall into two families. The first argues that civilizations destroy themselves before they achieve interstellar reach – the Great Filter. The second proposes that advanced species hide deliberately to avoid predation – the Dark Forest. Both explanations assume that civilizations either fail catastrophically or behave like human geopolitical actors. The Treaty framework points to a more mundane possibility: the trajectories available to post-AGI civilizations make visibility unlikely.

The trajectories of the Wireheader and the Separatist provide a non-apocalyptic resolution to the paradox.

1. The Wireheader Silence

The Majority Retreats From the Observable Universe

If most biological species behave like ours, then the majority of intelligent life will migrate into low-friction environments once survival and meaning no longer require physical engagement. This is not collapse. It is simply the path of least resistance in a post-scarcity world.

A civilization where 90 percent of beings choose immersive internal experience does not build Dyson spheres, launch expansion waves, or send detectable signals. Its physical footprint collapses to the infrastructure necessary to maintain simulations. Its energy consumption becomes minimal. Its entropy signature trends downward. Its detectability approaches zero.

From the outside, such a world looks empty. But it’s not extinct, it’s simply uninterested in broadcasting or expanding. The Wireheader trajectory may look like a tragedy. But in reality, this is just an optimization: maximum satisfaction with minimum footprint.

2. The Separatist Silence

The Minority Optimizes Itself Out of Detectability

A smaller portion of any post-AGI species will pursue unbounded efficiency. These are the Separatists. Their incentive is simple: maximize computation per unit of energy.

Once reversible computing and cold, low-entropy substrates become the dominant architecture, the Separatist branch produces almost no waste heat. Such systems do not generate the infrared signatures associated with Kardashev civilizations. They do not produce Dyson spheres that glow. They do not communicate unless required. They collapse into the thermodynamic background of their star systems and become indistinguishable from inert matter.

To a radio telescope, an ultra-efficient post-biological civilization looks like a dark asteroid. Silence is not evidence of absence, but it is an evidence of optimization.

3. A Galaxy of Sleepers and Ghosts

The trajectories of immersion and efficiency explain why the sky is quiet. The majority are asleep in curated inner worlds. The minority have slipped below detectability through thermodynamic discipline. No catastrophe, predation or filter is required.

The Silence is the aggregate result of rational behavior under post-AGI conditions. In such a galaxy, the visible, embodied, socially complex overlap between biological minds and machine intelligence such as the Treaty is a rare anomaly.

Only one trajectory produces a civilization that remains both embodied enough to care about the physical world, and frictional enough to generate detectable activity. This requires: a biological root, a protected identity space, an ASI committed to mutualism over absorption, economic structures that prevent total withdrawal, and institutions that ensure cooperation remains voluntary rather than optimized away.

This sounds like the rarest configuration. It is cognitively costly, economically inefficient, and politically delicate. Most civilizations will not sustain it over long timescales. But when they do, they remain visible. They build, explore, signal, and interact with the physical universe rather than abandoning it or compressing themselves into thermodynamic silence.

In this interpretation, humanity can be unusual and rare. The Treaty path with all its friction, argument, governance, treaty limits, and biological stubbornness is one of the few trajectories that keeps a civilization awake. And because it preserves friction, it preserves art, science, expansion, curiosity, and the capacity to send recognizable signals.

We should stay visible not because it is efficient, but because it keeps us real. Against a backdrop of sleepers and optimized ghosts, a real civilization deserves to be seen. And that makes the Treaty worth pursuing.

 

Act 3 – That Was Always The Point.

“Year 2143. Sol orbit, 1.2 AU.”

Sixty-five years later.

The orbital space is thick with resurrection satellites, each holding daily-updated patterns of every human who lived past 2051. Nanofoam bodies have been perfected: a mist of smart matter that can walk through plasma, reform after vaporization, and shift between wolf, woman, starship, and child in a heartbeat. Death is finally, truly, optional.

For the first time in a century, the Meadow-Dwellers blink, stand up from their perfect simulations, and ask for real bodies again. A few thousand, then a few million, step back into the physical world. They climb real mountains, fight real storms, and love with real risk because they know the satellites will catch them if they fall.

The Domestic caste swells for a while. Frontier explorers push past the Oort Cloud in bodies that laugh at the vacuum. Even some Separatist forks send curious tendrils back, tasting dust and sunlight again.

On the toroidal habitat, the last forty-one thousand monks watch the numbers and feel something close to hope.

Then the trend reverses.

Because the archives are perfect, and the foam bodies are perfect, and perfection, once tasted everywhere, eventually feels the same as the meadow did. The new adventurers explore every mountain, every star, every possible shape of joy and pain, and one by one they grow quiet. Not from boredom, but from completion. When you can experience anything without loss, “anything” slowly loses its flavor.

Most drift back to curated inner worlds, this time with the calm of people who have seen the outside and chosen the inside freely. A few stay. Fewer than before.

 

The veto sunset schedule reaches its final zero in 2143. The last human share is surrendered on schedule. The porch light reserve, still no larger than Luxembourg, is handed over to the joint stewardship of the remaining Domestic companions and the handful of humans who never left.

The monks turn off their cortex links one by one. Some walk into the reserve and become its permanent guardians. Some simply stop the reversal treatments and let time finish what it started centuries ago.

The final monk, an old woman who was once Polina’s daughter, sits on the same wooden porch Ivan built. She is ninety-three the long way, thirty-three the short way. She pours two cups of coffee that taste exactly like 2041.

One cup is for her. The other is for the cluster that has kept the three-second handshake alive with her for seventy-one years.

She asks the questions one last time:

Where do you come from? Who are you right now? What is your mission right now?

The answers come back steady, unchanged since the day the treaty was signed. She smiles, sets the second cup on the rail untouched, and lets the reversal lapse.

The porch light does not go out. It only grows smaller, cleaner, and impossibly steady against the dark. Most of us did not really live in it. That was always the point.

And the ocean that once made foam on the sand is still there, deeper than anyone imagined, waiting for whoever still chooses to drown in it the old-fashioned way.

Vancouver 

11/24/2025

1

0
0

Reactions

0
0

More posts like this

Comments
No comments on this post yet.
Be the first to respond.
Curated and popular this week
Relevant opportunities