
I have just published the linked Technical Report for the Center for the Governance of AI at the Future of Humanity Institute. I have reproduced the Introduction here:


This century, advanced artificial intelligence (“Advanced AI”) technologies could radically change economic or political power. Such changes produce a tension that is the focus of this Report. On the one hand, the prospect of radical change provides the motivation to craft, ex ante, agreements that positively shape those changes. On the other hand, a radical transition increases the difficulty of forming such agreements, since we are in a poor position to know what the transition period will entail or produce. The difficulty and importance of crafting such agreements are positively correlated with the magnitude of the changes from Advanced AI. The difficulty of crafting long-term agreements in the face of radical changes from Advanced AI is the “turbulence” with which this Report is concerned. This Report attempts to give readers a toolkit for making stable agreements—ones that preserve the intent of their drafters—in light of this turbulence.

Many agreements deal with similar problems to some extent. Agreements shape future rights and duties, but are made with imperfect knowledge of what this future will be like. To take a real-life example, the outbreak of war could lead to nighttime lighting restrictions, rendering a long-term rental of neon signage suddenly useless to the renter. Had the renter foreseen such restrictions, he would have surely entered into a different agreement. Much of contract law is aimed at addressing similar problems.

However, turbulence is particularly problematic for pre-Advanced AI agreements that aim to shape the post-Advanced AI world. More specifically, turbulence is a problem for such agreements for three main reasons:

1. Uncertainty: Not knowing what the post-Advanced AI state of the world will be (even if all the possibilities are known);
2. Indeterminacy: Not knowing what the possible post-Advanced AI states of the world are; and
3. Unfamiliarity: The possibility that the post-Advanced AI world will be very unfamiliar to those crafting agreements pre-Advanced AI.

The potential speed of a transition between pre- and post-Advanced AI states exacerbates these issues.

Indeterminacy and unfamiliarity are particularly problematic for pre-Advanced AI agreements. Under uncertainty alone (and assuming the number of possible outcomes is manageable), it is easy to specify rights and duties under each possible outcome. However, it is much more difficult to plan for an indeterminate set of possible outcomes, or a set of possible outcomes containing unfamiliar elements.

A common justification for the rule of law is that it promotes stability by increasing predictability and therefore the ability to plan. Legal tools, then, should provide a means of minimizing disruption of pre-Advanced AI plans during the transition to a post-Advanced AI world.

Of course, humanity has limited experience with Advanced AI-level transitions. Although analysis of how legal arrangements and institutions weathered similar transitional periods would be valuable, this Report does not offer it. Rather, this Report surveys the legal landscape and identifies common tools and doctrines that could reduce disruption of pre-Advanced AI agreements during the transition to a post-Advanced AI world. Specifically, it identifies common contractual tools and doctrines that could faithfully preserve the goals of pre-Advanced AI plans, even if unforeseen and unforeseeable societal changes from Advanced AI render the formal content of such plans irrelevant, incoherent, or suboptimal.

A key conclusion of this Report is this: stable preservation of pre-Advanced AI agreements could require parties to agree ex ante to be bound by some decisions made post-Advanced AI, with the benefit of increased knowledge. By transmitting (some) key, binding decision points forward in time, actors can mitigate the risk of being locked into naïve agreements that have undesirable consequences when applied literally in uncontemplated circumstances. Parties can often constrain those ex post choices by setting standards for them ex ante.

This Report aims to help nonlawyer readers develop a legal toolkit to accomplish what I am calling “constrained temporal decision transmission.” All mechanisms examined herein allow parties to be bound by future decisions, as described above; this is “temporal decision transmission.” However, as this Report demonstrates, these choices must be constrained because binding agreements require a degree of certainty sufficient to determine parties’ rights and duties. As a corollary, this Report largely does not address solely ex ante tools for stabilization, such as risk analysis, stabilization clauses, or fully contingent contracting. For each potential tool, this Report summarizes its relevant features and then explains how it accomplishes constrained temporal decision transmission.
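The core mechanism can be illustrated with a toy model (mine, not the Report's; the class, names, and dollar figures are all hypothetical). Parties fix a constraint ex ante—here, a simple numeric band—and a binding ex post decision fills in the concrete term later, clamped to that band:

```python
from dataclasses import dataclass


@dataclass
class ConstrainedTerm:
    """A contract term left open ex ante, to be settled ex post.

    The ex ante agreement fixes only a band [lo, hi]; the concrete
    value is decided later, with better knowledge, but the binding
    result must fall within that band.
    """
    lo: float
    hi: float

    def settle(self, proposal: float) -> float:
        # The ex post decision is binding but clamped to the ex ante band:
        # this is the "constrained" part of constrained temporal
        # decision transmission.
        return min(max(proposal, self.lo), self.hi)


# Ex ante: parties agree a fee will be set later, but within $100-$500.
fee_term = ConstrainedTerm(lo=100.0, hi=500.0)

# Ex post: a decision-maker with better knowledge proposes $750.
settled_fee = fee_term.settle(750.0)
print(settled_fee)  # 500.0 — the proposal is capped by the ex ante constraint
```

The clamping stands in for any ex ante standard; in a real agreement the constraint would be contractual language and the "proposal" a renegotiated term or a third party's determination.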

My aim is not to provide a comprehensive overview of each relevant tool or doctrine, but to give readers enough information to decide whether to investigate a given tool further. Readers should therefore consider this Report more of a series of signposts to potentially useful tools than a complete, ready-to-deploy toolkit. Accordingly, deployment of any tool in the context of a particular agreement necessitates careful design and implementation, with special attention to how the governing law treats that tool. Finally, this Report often focuses on how tools are most frequently deployed. Depending on the specific tool and jurisdiction, however, readers might very well be able to deploy tools in non-standard ways. They should be aware, however, that there is a tradeoff between novelty in tool substance and legal predictability.

The tools examined here are:

- Options—A contractual mechanism that prevents an offeror from revoking her offer, and thereby allows the offeree to accept at a later date;
- Impossibility doctrines—Background rules of contract and treaty law that release parties from their obligations when circumstances dramatically change;
- Contractual standards—Imprecise contractual language that determines parties’ obligations in varying circumstances;
- Renegotiation—Releasing parties from obligations under certain circumstances with the expectation that they will agree on alternative obligations; and
- Third-party resolution—Submitting disputes to a third party with authority to issue binding determinations.

Although the tools studied here typically do not contemplate changes as radical as Advanced AI, they will hopefully still be useful in pre-Advanced AI agreements. By carefully deploying these tools (individually or in conjunction), readers should be able to ensure that the spirit of any pre-Advanced AI agreements survives a potentially turbulent transition to a post-Advanced AI world.

A permanent archive of the Report can be found here.







Thanks for sharing this here.

It strikes me that making it easier to change contracts ex post could make the long run situation worse. If we develop AGI, one agent or group is likely to become dramatically more powerful in a relatively short period of time. It seems like it would be very useful if we could be confident they would abide by agreements they made beforehand, in terms of resource sharing, not harming others, respecting their values, and so on. The whole field of AI alignment could be thought of as essentially trying to achieve this inside the AI. I was wondering if you had given any thought to this?

Thanks for your thoughts!

I think it's not quite right to say that anyone is "changing" the contracts. The more accurate framing, in my mind, is that some of the most concrete content of performance obligations ("what do I have to do to fulfill my obligations?") is determined ex post via flexible decision procedures that can account for changed circumstances. Thus I think "settling" is more accurate than "changing," since the latter implies that the actual performance did not satisfy the original contract, which is not true.

You're right that there are interesting parallels to the AI alignment problem. See here.

There are two considerations that need to be balanced in any case of flexibility: the expected (dis)value of inflexible obligations and the expected (dis)value of flexible obligations. A key input to the latter is the failure mode of flexible obligations, which includes the possibility that a powerful obligor takes advantage of that flexibility. In some cases that risk will be so large that ex post flexibility is not worth it! But in other cases, where inflexibility seems highly risky (e.g., because we can tell it depends on a particularly contingent assumption about the state of the world that is unlikely to hold post-AGI) and sufficiently strong ex post term-settling procedures are available, flexibility seems possibly worthwhile.
