This is a link to my PhD dissertation, ‘Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks’. I originally submitted it to the University of Copenhagen’s Faculty of Law in September 2020, and defended it in April 2021 (see also defense presentation slides & handout).
TLDR: this dissertation discusses approaches and choices in regime design for the global governance of advanced (and transformative) AI. To do so, it draws on concepts and frameworks from the fields of technology regulation (‘sociotechnical change’; ‘governance disruption’), international law, and global governance studies (‘regime complexity’).
In slogan form, the project explores how we may govern a changing technology, in a changing world, using governance systems that may themselves be left changed.
In slightly more detail: the project discusses how AI governance regimes and institutions may need to be adapted to take into account three facets of ‘change’ that will characterize, or at least influence, (T)AI governance in the coming few decades:
- the ways in which (especially under continual multipolar deployment scenarios) ongoing changes in AI capabilities and downstream applications will tend to create constant adaptation pressure on earlier AI-focused treaties, institutions, or legal norms, and uncertainty over _when_ and _why_ new instruments are needed (lens: sociotechnical change)
- the ways in which AI systems may be increasingly used within international law, to support (or to contest) the establishment, monitoring, enforcement and/or arbitration of international treaty regimes, thereby shifting the possibility frontier on global cooperation (lens: governance disruption)
- the ways in which the architecture of global governance has undergone structural changes over the last two decades, in ways that affect what kinds of instruments (e.g. comprehensive formal treaty regimes vs. fragmented ecologies of informal institutions) will be more or less viable in governing (T)AI (lens: regime complexity).
Sections that may be of particular interest to this audience include:
- Chapter 2.1 (a balanced argument on the likely stakes of AI, written for an unfamiliar–and likely skeptical–audience);
- Chapter 2.2 (characterization of AI as a global governance problem) & 2.3 (overview of AI governance avenues and instruments under international law);
- Chapter 4.4 (overview of the distinct ‘problem logics’ of six types of AI governance challenges);
- Chapters 5.1 & 5.2 (history & taxonomy of the ways in which new technologies can disrupt, automate, or augment international law, with application to AI);
- Chapters 6.1 & 6.2 (primer on the ‘regime complexity’ framework and its approach to fragmented global governance);
- Chapter 7 (a step-by-step framework for analyzing an (emerging or proposed) AI governance regime in terms of its: (1) origins and foundations; (2) topology and organisation; (3) evolution over time; (4) effects of its organization; and (5) strategies to maintain AI governance regime efficacy, resilience and coherence).
Why post this now? Obviously, a tremendous amount has happened in the two years since I submitted this: in technical AI progress, in global AI governance developments, and in community-internal debates over Transformative AI timelines, risks, and governance. Nonetheless, I felt I should share it now, as I believe that:
most of the dissertation’s analysis, especially its AI governance-specific claims, has held up well;
the project introduces a set of more general governance design frameworks, with links to existing bodies of work on institutional design and legal automation, which remain relevant and underexplored for transformative AI governance, and
I’ve been told by a number of people over the years that they found the manuscript a useful overview, both for understanding the range of governance tools available and for communicating the stakes of TAI governance to academics from other disciplines who are new to the field.
I hope it is of interest or use, and welcome feedback!
That said, looking back, there are a number of conclusions I might adapt or refine further. This coming year, I will also spend some time updating the project as I rewrite it into an upcoming Oxford University Press book on long-term AI governance (link). I would therefore welcome input on any sections or claims that you believe have not held up well.