Lessons for AI governance from the Biological Weapons Convention

by Aryan Yadav · 6 min read · 8th Sep 2021

Summary

The Implementation Support Unit, the institution that organises the Biological Weapons Convention’s (BWC) annual meetings and encourages the universal adoption of the convention, has a budget of $1.4 million. In The Precipice, Toby Ord points out that this paltry budget is lower than that of an average McDonald’s restaurant.

Having a central body like the BWC has both upsides and downsides. An analysis of both can help draw insights into how we want to structure the international AI governance architecture. Here are my main takeaways:

  1. We should be particularly concerned about ‘lock-in’ effects under a centralised structure. It’s often hard to dismantle a central body once we have set it in motion, and it’s even more troubling if the institution is flawed from the outset, as the BWC was.
  2. A central AI governance architecture would have a hard time producing very specific policy recommendations at the multilateral level, and this could prove counterproductive when it comes to governing a powerful technology.
  3. This post only scratches the surface of what it’s trying to achieve, and there is potentially some value in expanding it further. The comparisons here may, and probably will, eventually fall apart with enough investigation. But the investigation is important given how young the field of AI governance is.

Introduction

There are two types of AI governance architecture I’m going to look at here: centralised and fragmented. In a centralised structure, governance of a particular area is undertaken by a single body; in a fragmented structure, distinct institutions, each with their own scope and rules, interact to govern a particular area. The BWC can help us understand what we stand to gain and lose with a central structure, and what advantages a fragmented structure may hold.

I’ve picked the BWC over other institutions because I think it will particularly interest the EA community, since biosecurity is seen as one of the more pressing problems we should work on. But I should note that there are also lessons to be drawn from elsewhere (e.g. other multilateral treaties such as the Chemical Weapons Convention), and I think it would be amazing to see more posts like this focussing on governance in other fields — especially since the field of AI governance is so nascent and there is a lot to figure out and structure.

An Overview of the BWC

The BWC was the first international agreement to ban the development, production, stockpiling and acquisition of an entire class of weapons of mass destruction. It took advantage of a rare moment of alignment between states during the Cold War to get countries on board and ratify the convention. However, the circumstances of the Cold War also meant that it was “politically unacceptable” to incorporate a verification system into the BWC to ensure compliance. And so, from the very start, the BWC suffered from a major defect.

After the Cold War, there was greater agreement among BWC member states about the need for an effective verification system. In 1994 they agreed to create a so-called “Ad Hoc Group” to evaluate potential BWC verification measures and draft an additional protocol to implement them. The draft protocol negotiated by the Ad Hoc Group would have created an international organisation (an “Organization for the Prohibition of Biological Weapons”) to conduct “routine on-site visits to declared facilities” and “challenge inspections of suspect facilities and activities”.

This would have been a major improvement to the BWC. But in the last stages of the negotiation, things didn’t go well: the United States decided not to sign on, owing to its suspicions about the legitimacy of on-site verification. Verifying illicit biological weapons work is technically challenging: it is hard to tell, prima facie, whether a given piece of biological research is good for the world or bad for it. This is hardly ever the case with nuclear weapons, for example.

The BWC currently faces issues with the relevance of its review conferences. There has been a lack of discussion of more pressing issues like gain-of-function research and gene drives. And while proposals to improve the BWC are discussed at Review Conferences every five years (as well as at annual Meetings of Experts and Meetings of States Parties), persistent disagreements about the fundamentals of the BWC have resulted in little progress.

BWC and AI Governance Architecture

It's important to point out that the BWC has also had significant successes. The BWC has strengthened the international norm against developing biological weapons. (See more here, under “Ramifications for the future”.) As a result, any offensive bioweapons research needs to be done in secrecy, which is detrimental if you’re trying to get serious work done in the life sciences. The BWC therefore makes it more difficult and less appealing for a country to work on bioweapons. The negative consequences of being found to have broken international law (e.g. reputational harm, potential trade embargoes, or even military action) provide an additional disincentive.

So the mere presence of the BWC is important in deterring biosecurity threats. This could be a very useful insight for AI governance work: a central AI governance architecture could create or strengthen beneficial norms and encourage proactive thinking about the harms of AI.

But a central structure like the BWC risks creating a ‘lock-in’ effect. Trying to change the way the BWC works now would be incredibly difficult. And since any change needs to be consensus-based, profound change becomes nearly impossible. So it is unlikely that the institution’s deficiencies will be remedied anytime soon.

When setting up a central AI governance structure, I think it’s crucial that we think about ‘lock-in’ effects and how they may pan out over the years to come. If we want an equivalent to the BWC for AI governance, it should not start off on the wrong foot. (To clarify: the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI, i.e. a complete ban on any “AI weapons”, whatever that means.)

A key argument for having a patchwork of organisations (a fragmented structure) is specificity. A central organisation that tries to encompass nearly 200 countries will, out of necessity, have broad policies and rules, because countries have their own dynamics and workings that aren’t always compatible with each other. A fragmented structure, by contrast, could mean that governance gets tailored to particular regions. This architecture, to me, would make it a lot easier to get the international landscape of AI governance right. (See here for some arguments against a fragmented structure.)

We also don’t have to treat this as a binary decision. Instead of choosing between the two structures, we could create a network of both centralised and fragmented regimes that gets us the best of both worlds. (We can see similar ideas in “A Web of Prevention”.) But I haven’t done enough thinking so far to draw any valuable insights here.

Acknowledgements

This post owes a lot to helpful discussions with/feedback from Darius Meissner, Suzanne Van Arsdale, Simon Grimm, Aaron Gertler, Jonas Schuett, Tessa Alexanian and Nuño Sempere. Their help doesn't imply their agreement with what I've written. All mistakes remain my own.

Comments

Hi Aryan,

Cool post, very interesting! I'm fascinated by this topic - the PhD thesis I'm writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I'm very into this, and want to see more work done on this. Really excellent to see you exploring the parallels. A few thoughts:

  • Your point on 'lock-in' seems crucial. It currently seems to me that there are 'critical junctures' (Capoccia) in which regimes get set, and then it's very hard to change them. So e.g. the failure to control nukes or cyber in early years. ABM is a complex example - very, very hard to get back on the table, but Rumsfeld and others managed it after 30 years of battling.
  • My impression is that the BWC (and CWC) - the meetings/conferences etc - are often seen as arms control regimes that are pretty good at keeping up with technical developments - maybe a point in favour of centralisation.
  • Just on the details of the BWC, seems worth mentioning a few things. (Nitpicky: when the UK proposed a BWC, it said verification wasn't technically possible at the time [1]). First, the Nixon Administration thought BW were militarily useless and had already unilaterally disarmed, so verification was less of a priority [2]. Second, one of the reasons to want a Verification Protocol in the 90s was the revelation that the Soviets cheated over the 70s-80s, building the biggest BW program ever. Third, the Bush Admin rejected the Verification Protocol in 2001 (pre 9/11!), its first year - at the same time as it was ripping up START III, Kyoto, and the ABM Treaty. This is all to suggest that state interest, and elites' changing conceptions of state interest, can create space for change.

[1] http://www.cbw-events.org.uk/EX1968.PDF 

[2] https://www.belfercenter.org/publication/farewell-germs-us-renunciation-biological-and-toxin-warfare-1969-70

https://wmdcenter.ndu.edu/Publications/Publication-View/Article/627136/president-nixons-decision-to-renounce-the-us-offensive-biological-weapons-progr/ 

This isn't central to the post, but I'm interested in this parenthetical:

(To clarify: the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI, i.e. a complete ban on any “AI weapons”, whatever that means.)

At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.

Can you say more about why this seems implausible from your point of view?

Hey Kerry!

Good question. I included this disclaimer because it seems very hard to define what exactly we mean by an "AI weapon", which makes a complete ban, like the one the BWC has, implausible.

I think I still don't quite get why this seems implausible. (For what it's worth, I think your view is pretty mainstream, so I'm asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)

It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban which distinguishes that technology from nearby civilian technology doesn't seem much more difficult than distinguishing biological weapons from civilian uses of biological technology.

Of course we're mostly interested in AGI, not narrower AI technology. I agree that society doesn't think of AGI development as a weapons technology and so banning "AGI weapons" seems strange to contemplate, but it's not too difficult to imagine that changing! After all, many of the proponents of the technology are clear that they think it will be the most powerful technology ever invented, granting its creators unprecedented strength. Various components of the US military and intelligence services certainly seem to think AGI development has military implications, so the shift to seeing it as a dual-use weapons technology doesn't seem to be too big of a leap to imagine.