
“Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”.

2023 Update to the US National Artificial Intelligence Research and Development Strategic Plan.

Introduction 

The United States has yet to take serious steps to govern the licensing, setting up, operation, security and supervision of AI. In this piece I suggest that this could put it in violation of its obligations under Article 6(1) of the International Covenant on Civil and Political Rights (ICCPR). By most accounts, the US is the key country in control of how quickly we get artificial general intelligence (AGI), a goal that companies like OpenAI have been very open about pursuing. The fact that AGI could carry risk to human life has been detailed in various fora and I won’t belabor that point. I present this legal argument so that those trying to get the US government to take action have additional armor to call on.

A. Some important premises 

The US ratified the ICCPR on June 8, 1992.[1] While it has not ratified the Optional Protocol allowing for individual complaints against it, it did submit to the competence of the Human Rights Committee (the body charged with interpreting the ICCPR) where the party suing is another state. This means that although individuals cannot bring action against the US for ICCPR violations, other states can. As is the case for domestic law, provisions of treaties are given real meaning when they’re interpreted by courts or other bodies with the specific legal mandate to do so. Most of this usually happens in a pretty siloed manner, but international human rights law is famously non-siloed. The interpretive bodies determining international human rights law cases regularly borrow from each other when trying to make meaning of the different provisions before them. This piece is focused on what the ICCPR demands, but I will also discuss some decisions from other regional human rights courts because of the cross-fertilization I’ve just described. Before getting to my argument, there are a few crucial premises you have to appreciate. I will discuss them next.

          (i) All major human rights treaties, including the ICCPR, impose on states a duty to protect life  

In addition to the ICCPR, the African Charter, the European Convention and the American Convention all impose on states a duty to protect life.[2] As you might imagine, the existence of the duty itself is generally undisputed. It is when we get to the specific content of the duty that things become murky.

          (ii) A state’s duty to protect life under the ICCPR can extend to citizens of other countries

The Human Rights Committee (quick reminder: this is the body with the legal mandate to interpret the ICCPR) has made it clear that this duty to protect under the ICCPR extends not only to activities which are conducted within the territory of the state being challenged but also to those conducted in other places – so long as the activities could have a direct and reasonably foreseeable impact on persons outside the state’s territory. The fact that the US has vehemently disputed this understanding[3] does not mean it is excused from abiding by it.

          (iii) States’ duties to protect life under the ICCPR require attention to the activities of corporate entities headquartered in their countries 

Even though the US protested the move,[4] the Human Rights Committee has been clear that the duty to protect extends to protecting individuals from violations by private persons or entities,[5] including activities by corporate entities based in their territory or subject to their jurisdiction.[6] Other regional bodies that give meaning to international human rights law agree: the European Court of Human Rights (European Court) has said states have to keep an eye on acts of third parties and non-State actors,[7] while the African Commission has stated even more directly that a state can be held liable for violations by non-State actors, including corporations.[8]

          (iv) The duty to protect life can be violated without death occurring 

There seems to be consensus among international human rights bodies (including the Human Rights Committee) that for a violation of the duty to protect life to be established, the risk does not need to have materialized. The part that follows will have more on this. 

B. How the duty to protect life has been interpreted, and how the US government could be in violation of Article 6(1) of the ICCPR 

The Human Rights Committee has interpreted Article 6(1) of the ICCPR as establishing a positive obligation to protect life, and specifically a duty to take adequate preventive measures to protect individuals from reasonably foreseeable threats.[9] As such, to convincingly demonstrate that the US (quick reminder: it is a State Party to the ICCPR) is in violation of this provision, each element in Article 6(1) has to be satisfied, step-by-step. This is what I’m going to show you next. 

          (i) It is reasonable for us to expect the US government to be aware that AGI could cause existential catastrophe

For a state to be held liable under Article 6, the threat has to be reasonably foreseeable. The Human Rights Committee hasn’t really told us what that means, but its peer institutions like the European Court have held that this question of reasonableness must be answered in light of all the circumstances of each case.[10] In many other fields of law, the reasonableness of conduct depends on what an ordinary actor would do if they had the information and resources available to the actor facing legal challenge. Now to our specific situation. Let’s assume that we can’t show that the US government has some secret information about the development of AI towards AGI. If this is so, I think the Human Rights Committee would probably consider: (a) whether the state has previously indicated knowledge that the activity in question could cause death; and, if not, (b) whether there is widespread agreement among reasonable people, or (c) among experts, that current AI development could indeed lead to AGI and that AGI could cause death.

For those of us prosecuting the AI-could-cause-existential-risk argument, this standard of reasonableness may actually not be a very difficult one to meet. It is true that there is no large-scale consensus among ordinary people that mature AGI would carry a risk to life. It is also true that we are far away from a consensus among experts. But it does seem that the US Government appreciates the possibility that current AI development could lead to AGI and that AGI could cause death. Here is the smoking gun that I came across recently in the White House’s 2023 Update to the US National AI Strategic Plan. The update noted that “Long-term risks remain, including the existential risk associated with the development of artificial general intelligence through self-modifying AI or other means”. This surely means that it is reasonable for us to expect the US Government to be aware of x risk via AGI. I suppose this argument would fail if: (a) one can show that this isn’t the official position of the US government, or (b) “existential risk” as used in the report was meant to mean something less than death. I think counterargument (a) just can’t fly given that it is literally a report written by a White House office. I’m not entirely certain about counterargument (b), but I think the UK government’s understanding (which seems to be that x risk = death) gives us circumstantial evidence that the US government’s understanding is likely to be similar.

          (ii) The x risk that AGI carries is foreseeable to the US government 

For my argument to make sense, the next legal element that has to be satisfied is that of “foreseeability”. Unfortunately, the Human Rights Committee has not told us what exactly “foreseeable” means. Still, I would suggest that once we’ve demonstrated that the US Government appreciates that AGI could lead to a significant number of deaths (again, see the quote in the section right before this one), and given that it is not in doubt that the government knows OpenAI and other entities are building AI with the goal of creating AGI, the foreseeability of the threat has already been shown.

                    PS: even the “real and immediate” standard offers a path

For the sake of argument (and because we are not very clear on what “foreseeable” means to the Human Rights Committee), let us take up an even more difficult but better-elaborated standard. Other international human rights courts and commissions seem to have understood the duty to protect life as imposing a positive duty to take preventive measures to protect an individual’s or a group of individuals’ lives from a real and immediate/imminent risk.[11] On a literal reading of Article 6, the “foreseeable threat” standard seems less exacting to meet than a standard that requires proof of “real and immediate” risk. I think this means that if we can prove real and immediate risk then we will most likely have satisfied the “foreseeable threat” standard as well. Let’s go there then.

According to the European Court, the real and immediate standard requires that we prove: (i) there is a real and immediate risk to an identified individual or individuals from the acts of a third party, (ii) the authorities knew or ought to have known of the existence of that risk, and (iii) the authorities failed to take reasonable measures to avoid that risk.[12] The European Court has also said that this obligation exists whether the risk is to an identified individual or individuals or to society in general.[13]

But what is a real and immediate risk? Well, the European Court has previously found a risk to be immediate despite the risk having been in existence long before it materialized.[14] Beyond that, one dissenting opinion described a real and immediate risk as one which is ‘substantial or significant’, ‘not remote or fanciful’ and ‘real and ever-present’.[15] No decisions have given us detail about what ‘real’, ‘substantial’ and ‘significant’ entail.

Satisfying this standard will probably come down to how ‘real’ we can show x risk from AI to be. This is a complex test to meet. I think it carries both subjective and objective elements. That is, the risk to life needs to be (a) self-evident to ordinary people but also (b) widely recognized by experts. I’m confident that the arguments that AGI could easily be misaligned and could pursue goals antithetical to the survival of humanity are very powerful and, when explained carefully in a step-by-step manner, would meet this standard of ‘realness’ and ‘significance’. However, the fact that some respected experts consider this argument to be a crackpot idea means the ‘realness’ of the threat isn’t self-evident. One development that I think really helps to fortify my claim that the ‘realness’ requirement has been satisfied is the rise of large language models. It was the capabilities of ChatGPT that made the possibility of AGI seem more real to ordinary people, policymakers and experts. For this reason, I would argue that LLMs help to prove the legal standard of ‘realness’. In other words, I’m claiming that under this legal standard the US’s obligations under Article 6(1) were triggered when ChatGPT was released, because it was then that the realness threshold was met.

          (iii) There are adequate preventive measures that the US government could take to stem x risk from AI 

The final legal element to satisfy is whether there are adequate preventive measures that the US could take in light of the circumstances. The Human Rights Committee hasn’t elaborated on the precise meaning of “adequate preventive measures” as used in Article 6(1) of the ICCPR. However, we can find useful guidance in some of the European Court’s decisions. In a 2020 case, the Court noted that once an activity is found to carry a risk to human life, states must create regulations “geared towards the special features of the activity in question” and with “special attention to the level of potential risk to human lives”. Even more interestingly, the Court said that regulations are expected to govern the licensing, setting up, operation, security and supervision of the activity in question and “must make it compulsory for all those undertaking the activity to ensure the effective protection of citizens whose lives might be endangered by the inherent risks”.

At a minimum, I think the Human Rights Committee would endorse this understanding of adequate preventive measures, if only because these are anodyne, easy-to-take steps within the reach of any government. Regulation around licensing and the like is surely well within the capability of the US government. For that reason, it has an obligation, at a minimum, to create regulation touching on these aspects of AI development insofar as it relates to AGI. But it’s not just that. I think the specific content of the regulation has to match the level of threat in question, and so the specific regulation the US adopts should itself be open to challenge. But that’s a story for another day.

C. Conclusion 

If you agree with the premises I started from and endorse the interpretations I’ve adopted, then you can see how the US could be found in violation of Article 6(1) if it takes no robust regulatory action on AI development that’s focused on creating AGI.

D. Possible headwinds for my argument 

Experts in international human rights law may be skeptical about what I’m proposing because, even if you were to get a bully-proof country to take such a case before the Human Rights Committee, the Committee cannot impose sanctions on a state; it is instead limited to making recommendations. I also imagine that some people will claim that the US government would simply scoff at any international law argument about why it should act a certain way. In response to both points, it’s worth reiterating what scholars far more accomplished than I am have written: international embarrassment can in fact have a significant galvanizing effect on domestic action. And just as importantly, I think pushing this argument would probably make more people alive to the risks surrounding AI being developed in the US, and that on its own may be a big win for us.

To me, the most compelling reason why this may not be a good argument to push is strategic. If you favor playing nice, pushing this argument would wreak havoc on that approach and perhaps antagonize American policymakers. There is also the chance that robust AI regulation gets framed as “what foreigners who don’t live here and haven’t built this country want us to do”. I have to say I’m not sure which strategic approach is better, playing nice or being more pugilistic. For now, I present this argument on the assumption that both approaches can and should be pursued.
 


[1] United Nations, UN Treaty Body Database, https://tbinternet.ohchr.org/_layouts/15/TreatyBodyExternal/Treaty.aspx?CountryID=187&Lang=EN (accessed May 21, 2023). See also Joseph S and Castan M, International Covenant on Civil and Political Rights: Cases, Materials and Commentary, 3rd edition, Oxford University Press, 2013, page 8.

[2] Article 4 of the African Charter, Article 4(1) of the American Convention, the first sentence of Article 2(1) of the European Convention, and Article 6(1) of the International Covenant on Civil and Political Rights.

[3] Observations of the United States of America on the Human Rights Committee’s Draft General Comment No. 36 on Article 6 – Right to life, October 6, 2017, para. 13 and 15. See also CCPR, Concluding observations on the fourth periodic report of the United States of America, CCPR/C/USA/CO/4, April 23, 2014, para. 4; and Fourth Periodic Report of the United States of America to the United Nations Committee on Human Rights Concerning the International Covenant on Civil and Political Rights, December 30, 2011, para. 504-505.

[4] Observations of the United States of America on the Human Rights Committee’s Draft General Comment No. 36 on Article 6 – Right to life, October 6, 2017, para. 31 and 33.

[5] Annakkarage Suranjini Sadamali Pathmini Peiris v Sri Lanka, CCPR Comm No. 1862/2009, April 18, 2012. See also CCPR General Comment 36, para. 18.

[6] CCPR General Comment 36, para. 22.

[7] Osman v The United Kingdom, ECHR, para. 116 and Kurt v Austria, ECHR, para. 156.

[8] African Commission on Human and Peoples’ Rights, General Comment 3, para. 38.

[9] CCPR General Comment No. 36, Article 6: Right to life, September 3, 2019, para. 18 and 21.

[10] Osman v The United Kingdom, ECHR, para. 116.

[11] See, for example, in the ECHR: Osman v The United Kingdom, Judgment of October 28, 1998, para. 116; Kurt v Austria, Judgment of June 15, 2021, para. 156; and Kotilainen and others v Finland, Judgment of September 17, 2020, para. 69. In the Inter-American Court: Valle Jaramillo et al v Colombia, Judgment of November 27, 2008, para. 78; Pueblo Bello Massacre v Colombia, Judgment of January 31, 2006, para. 123; and Luna Lopez v Honduras, Judgment of October 10, 2013, para. 120 and 124.

See as well African Commission, General Comment No. 3 on the African Charter on Human and Peoples’ Rights: The right to life (Article 4), November 18, 2015, para. 38 and 41.

[12] Osman v The United Kingdom, ECHR, para. 116.

[13] Mastromatteo v Italy, ECHR Judgment of October 24, 2002, para. 69 and 74.

[14] Öneryıldız v Turkey, ECHR Judgment of November 30, 2004, para. 100.

[15] Hiller v Austria, ECHR Judgment of November 22, 2016, page 21.


Comments

If there really were such an obligation, it would seem to be very onerous. Does the US have an obligation to prevent every death in the world that it possibly could? It’s not surprising to me that the US seems to reject several of the premises.
