This post is written in my personal capacity, and does not necessarily represent the views of OpenAI or any other organization. Cross-posted to the Alignment Forum.
In the previous post of this sequence, I argued that intent-aligned AIs would, by default, have incentives to break the law. This post goes into one particularly bad consequence of that incentive: the increased difficulty of making credible pre-AGI commitments about post-AGI actions.
Image by OpenAI's DALL·E
In AGI policy and strategy, it would often be useful to adopt credible commitments about what various actors will do post-AGI. For example, it may be desirable for two leading nations in AGI to agree to refrain from racing to build AGI (at the potential cost to AGI safety) and instead split the economic upside from AGI, thereby transforming a negative-sum dynamic into a positive-sum one. Nations might agree to forego their own development of militarily useful AI systems if they receive security assurances from states that do choose to pursue such systems, thus reducing the number of actors pursuing potentially dangerous military AI development, and therefore reducing the risk of a catastrophic accident. One can imagine similar types of pre-AGI agreements concerning space colonization post-AGI, the welfare of digital minds, democratic control of AGI, and many more important issues in the post-AGI world.
Lawless AI poses an enormous problem for such agreements. Agreements between entities, such as states or companies, require performance by agents of the parties: a state or corporation only acts through its agents. But if one party cannot convincingly show that their agents will carry out orders to comply with such agreements, the other party will be very reluctant to enter into an agreement with them. All the more so if the principal cannot prevent its agents from working against aspects of the agreement that are detrimental to the principal.
I will use the following as a toy problem to show what I mean more concretely: X and Y are two leading AGI development firms considering an agreement to split the profits from AGI development, provided they both implement common minimum safety measures to prevent racing. How might this play out?
X might worry that Y will develop an AGI agent, A(Y), with the objective of maximizing the value of Y's shares. A(Y) can do this in ways that would undermine X's recourse to Y's assets. For example, A(Y) could create a subagent that surreptitiously earns money and directly mails checks to the shareholders of Y, as a sort of pseudo-dividend. That money might never pass through Y, so X would have a hard time reaching it, even if they detected this. X might also worry that A(Y) would corrupt the legal process in Y's favor, so that even if X had a good legal claim, they could not vindicate it in court at any reasonable cost.
With these concerns in mind, the deal is unlikely to happen.
How might law-following AI (LFAI) improve the situation? X and Y could both agree that:
- All sufficiently advanced AI agents ("Qualifying Agents") that they train, use, benefit from, or deploy will be LFAI.
- The other party may, through a neutral third-party bound by confidentiality, inspect Qualifying Agents and their development to verify that the trained agent will be LFAI when deployed.
- Qualifying Agents will be specifically instructed to assume that they are bound by the agreement.
- The parties will generally cause all of their agents to give full effect to the agreement.
- In any proceeding where the parties are adverse, neither party shall use any artificial intelligence technology to its legal or strategic benefit, unless the other party has free and equal access to the same technology.
- In case of any ambiguity in the terms of the agreement, the ambiguity shall be resolved in favor of more constraints on Qualifying Agents and their principal(s).
The resulting agreement would probably not be watertight, but the parties would have much greater assurances of post-AGI performance than they would absent LFAI. Referring back to the example above, A(Y) could not surreptitiously earn money outside of Y and pass it on to Y's shareholders, nor could A(Y) illegally corrupt adjudications of the agreement. LFAI, then, could make the parties much more likely to be willing to enter stabilizing pre-AGI agreements that require post-AGI performance. This seems like a substantial improvement.
Cf. Amanda Askell et al., The Role of Cooperation in Responsible AI Development (2019) (preprint), https://arxiv.org/abs/1907.04534. ↩︎
Of course, this could be analogized to similar agreements regarding nuclear disarmament, such as Ukraine's fateful decision to surrender its post-Soviet nuclear arsenal in exchange for security assurances (which have since been violated by Russia). See, e.g., Editorial, How Ukraine Was Betrayed in Budapest, Wall St. J. (Feb. 23, 2022), https://www.wsj.com/articles/how-ukraine-was-betrayed-in-budapest-russia-vladimir-putin-us-uk-volodymyr-zelensky-nuclear-weapons-11645657263?reflink=desktopwebshare_permalink. Observers (especially those facing potential conflict with Russia) might reasonably question whether any such disarmament agreements are credible. ↩︎
We will ignore antitrust considerations regarding such an agreement for the sake of illustration. ↩︎
So that this inspection process cannot be used for industrial espionage. ↩︎
This may not be the case as a matter of background contract and agency law, and so should be stipulated. ↩︎
This is designed to guard against the case where one party develops AI super-lawyers, then wields them asymmetrically to their advantage. ↩︎
This is a really valuable idea and is certainly an area we should research more heavily. I have some brief thoughts on the 'pros' and some ideas that aren't so much 'cons' as 'areas for further exploration (AFFE)'. The AFFE list will be longer due to the explanation necessary, not because there's more AFFE than Pros :)
All in all this is a really well thought out idea for AI alignment and I am very hopeful it gets more exploration in the future. I've often felt that much current AI policy research is 'all jaw, no teeth' in that much of the focus is on getting AI aligned in a simple lab or thought environment instead of a messy, complex human one.
Potential benefits also include drawing many more legal scholars into EA, a talent pool our community is sorely lacking and one that many other areas and projects would also benefit from in the future.
Thanks a ton for your substantive engagement, Luke! I'm sorry it took so long to respond, but I highly value it.
Yeah, definitely agree that this is tricky and should be analyzed more (especially drawing on the substantial existing literature about the moral permissibility of lawbreaking, which I haven't had the time to fully engage with).
Yeah, I do think there's an interesting thing here where LFAI would make apparent the existing need to adopt some jurisprudential stance about how to think about the evolution of law, and particularly about predicted changes in the law. As an example of how this already comes up in the US, judges sometimes regard a higher court's precedent as bad law, notwithstanding the fact that the higher court has not yet overruled it. The addition of AI into the mix—as both a predictor of and possible participant in the legal system, as well as a general accelerator of the rate of societal change—certainly threatens to stretch our existing ways of thinking about this. This is also why I'm worried about asymmetrical use of advanced AI in legal proceedings. See footnote 6.
(And yes, the US is also common law. :-) )
Definitely agree. I think the practical baby step is to develop the capability of AI to interpret and apply any given legal system. But insofar as we actually want AIs to be law-following, we obviously need to solve the jurisdictional and choice of law questions, as a policy matter. I don't think we're close to doing that—even many of the jurisdictional issues in cyber are currently contentious. And as I think you allude to, there's also a risk of regulatory arbitrage, which seems bad.
Except the civil law of Louisiana, interestingly. ↩︎
No problem RE timescale of reply! Thank you for such a detailed and thoughtful one :)