
The following is an assignment I submitted for my Cyber Operations class at Georgetown, regarding the risk of large AI model theft and what the US Cybersecurity and Infrastructure Security Agency (CISA) should/could do about it. 

Ultimately I was fairly rushed with this memo and realized less than halfway through that perhaps I shouldn't have chosen CISA as my client, but it was too late to change. I don't confidently endorse all of the claims and recommendations in this memo (although I do not think I actively disagree with any of them), especially given my lack of familiarity with the field, tight length constraints, and lack of time to do as much research as I wanted. Thus, I was hesitant about sharing this as a normal post. However, I've decided to post it to potentially help spark ideas/discussion among others who might be interested. I'll just add a few extra thoughts:

  • It seems plausible that someone could try to classify large/foundation AI models as IT Critical Infrastructure and this would open up a number of policy avenues, but I did not have the time or expertise to figure out if and how this could be done and then what all of the effects (good and bad) would be. For more details, see footnote 26.
  • If large AI models cannot be classified as critical infrastructure, it's not clear CISA should be the primary policymaking audience for this problem (as opposed to the NSA/FBI, Congress, OSTP, etc.); CISA seems more focused on dealing with the downstream effects of AI model theft, such as use by cybercriminals to attack critical infrastructure—but there may still be relevant policy levers regarding how the federal government protects its own models, etc.
  • Despite the previous point, I would loosely guess that CISA should be somewhat involved in the policy process, if only as an extra voice pushing for action (e.g., "here's how future model proliferation could seriously impact federal networks and critical infrastructure").
  • At the time I wrote this memo, there was very little research focusing on how to prevent large AI model theft; most research focused instead on issues like model sabotage. However, this may be rapidly changing.

(A few minor edits have been made post-submission; the memo was originally submitted on 4/15/2023)

-------------

Memorandum for the Cybersecurity and Infrastructure Security Agency (CISA)

SUBJECT: Supporting Security of Large AI Models Against Theft

Recent years have seen a rapid increase in the capabilities of artificial intelligence (AI) models, as GPT-4 demonstrates. However, as these large models become more capable and more expensive to train, they also become increasingly attractive targets for theft. In the hands of malicious actors, these models could increasingly pose security risks to critical infrastructure (CI), for example by enhancing those actors’ cyber capabilities. Rather than focusing strictly on the downstream effects of powerful AI models, CISA should also work to reduce the likelihood (or at least the pace) of large AI model theft. This memo explains some of the threats to and from powerful AI models, briefly describes relevant market failures, and concludes with recommendations for CISA to mitigate the risk of AI model theft.


There are Strong Incentives and Historical Precedent for China and Other Actors to Steal AI Models

There are multiple reasons to expect that hackers will attempt to exfiltrate large AI model files:

  1. Current large models have high up-front development (“training”) costs/requirements but comparatively low operational costs/requirements after training.[1] This makes theft of AI models attractive even for non-state actors and distinct from many instances of source code theft.[2] Additionally, recent export controls on semiconductors to China could undermine China’s ability to develop future large models,[3] which would further increase Beijing’s incentive to steal trained models.
  2. China and other actors have repeatedly stolen sensitive data and intellectual property (IP) in the past.[4]
  3. Someone leaked Meta’s new large language model (LLaMA) within days of Meta providing model access to researchers.[5]


AI Model Theft/Proliferation May Threaten Critical Infrastructure—and Far More 

Theft of powerful AI models—or the threat thereof—could have significant negative consequences beyond straightforward economic losses:

  1. Many powerful AI models could be abused: 
    1. Content generation models could enhance disinformation and spear phishing campaigns.[6]
    2. Image recognition models could empower semi-autonomous weapons or authoritarian surveillance.[7]
    3. Simulation models could facilitate the design of novel pathogens.[8]
    4. Agent models could soon (if not already) semi-autonomously conduct effective offensive cyber campaigns.[9]
  2. The mere threat of theft/leaks could discourage improvements in model reliability and interpretability that require providing more access to powerful models.[10] This is especially relevant to CISA as consumers and industries (including CI) increasingly rely on AI.[11]
  3. If China steals powerful models that enhance its ability to conduct AI research, this could increase catastrophic “structural” risks.[12] For example, future AI systems may disrupt strategic stability by undermining confidence in nuclear second-strike capabilities.[13] Furthermore, intensified racing dynamics could drive China and the US to deprioritize safety measures, increasing the risk of permanent human disempowerment or even extinction from the creation of powerful or self-improving but misaligned systems.[14] Although such powerful systems may not seem near, they may reasonably be attainable over the next two decades.[15]

Ultimately, the capabilities of future systems are hard to predict, but the consequences of model proliferation could be severe.


Traditional Market Incentives Will Likely Fail to Minimize These Risks

Many companies will have some incentives to protect their models. However, there are market failures and other reasons to expect that their efforts will be suboptimal:

  1. The risks described in the previous section are primarily externalities, and companies that do not appropriately guard against these risks may out-compete companies that do.
  2. Unauthorized use of models may be limited to foreign jurisdictions where the companies did not expect to have substantial market access (e.g., China). Thus, IP theft may not have a significant impact on a company’s profitability.
  3. Market dynamics could disincentivize some prosocial actions such as cybersecurity incident disclosures.[16]


Recommendations for CISA: Assess Risks While Developing Expertise, Partnerships, and Mitigation Measures

Thus far, there has been minimal public research on how policymakers should mitigate the risks of model theft. CISA has an important role to play at this early stage of the policy pipeline, especially to facilitate information flows while spurring and supporting other actors (e.g., Congress) who have more resources or authority to address the upstream problems. The following subsections provide four main categories of recommendations.

Assess the Risks of Model Theft and AI-Enabled Attacks

  • As part of CISA’s objective to “continually identify nascent or emerging risks before they pose threats to our infrastructure,”[17] CISA should assess the risk of malicious actors using current or near-future large AI models to conduct cyberattacks against CI.[18]
  • CISA should work with law enforcement to populate datasets[19] and/or case study compilations of cybersecurity incidents regarding sabotage or theft of large models, as well as cyberattacks utilizing large models.[20]
  • CISA should disseminate the findings of these assessments among relevant policymakers and stakeholders (as appropriate) to inform policymaking and countermeasure development. This is especially relevant for policy regarding the National AI Research Resource (NAIRR).[21]

This analysis—even in the very preliminary stages—should also inform CISA’s implementation of the remaining recommendations.

Build CISA Subject Matter Expertise and Analytical Capacity

To improve its assessments and preparations, CISA could employ Protective Security Advisors (PSAs) with experience relevant to AI model security and/or the AI supply chain, including cloud computing, semiconductors, and other CI sectors.[22] Alternatively, CISA could create a dedicated working group or task force to deal with these issues.[23] Depending on the findings of CISA’s risk assessments, CISA could also seek additional funding from Congress. The following category of recommendations could also help improve CISA’s knowledge and analytical capacity.

Develop Partnerships with AI Labs and Facilitate Information Sharing

Given that partnerships are the “foundation and the lifeblood”[24] of CISA’s efforts, it should invite AI labs into cyber information sharing and security tool ecosystems.[25] Specifically, CISA should ensure that large AI labs are aware of relevant programs, and if they are not participants, determine why and whether CISA can/should modify its policies to allow participation.[26] [EA Forum Note: this footnote contains a potentially interesting/impactful suggestion about designating some AI tools/labs as "IT critical infrastructure." I could not fully explore this recommendation in my memo due to space and time constraints, but it could be the most important takeaway/suggestion from this memo.]

Help Develop and Implement Security Standards and Methods

CISA should work with the National Institute of Standards and Technology (NIST) and other federal research and development agencies[27] to develop cybersecurity methods and standards (e.g., hardware-based data flow limiters) that CISA and other agencies could mandate for federal agencies that produce/house large AI models (including the potential NAIRR[28]).


References

Abdallat, A. J. 2022. “Can We Trust Critical Infrastructure to Artificial Intelligence?” Forbes. July 1, 2022. https://www.forbes.com/sites/forbestechcouncil/2022/07/01/can-we-trust-critical-infrastructure-to-artificial-intelligence/?sh=3e21942e1a7b.

“About ISACs.” n.d. National Council of ISACs. Accessed April 15, 2023. https://www.nationalisacs.org/about-isacs.

Adi, Erwin, Zubair Baig, and Sherali Zeadally. 2022. “Artificial Intelligence for Cybersecurity: Offensive Tactics, Mitigation Techniques and Future Directions.” Applied Cybersecurity & Internet Governance Journal. November 4, 2022. https://acigjournal.com/resources/html/article/details?id=232841&language=en.

Allen, Gregory, Emily Benson, and William Reinsch. 2022. “Improved Export Controls Enforcement Technology Needed for U.S. National Security.” Center for Strategic and International Studies. November 30, 2022. https://www.csis.org/analysis/improved-export-controls-enforcement-technology-needed-us-national-security.

“Automated Indicator Sharing (AIS).” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/automated-indicator-sharing-ais.

Brooks, Chuck. 2023. “Cybersecurity Trends & Statistics for 2023: More Treachery and Risk Ahead as Attack Surface and Hacker Capabilities Grow.” Forbes. March 5, 2023. https://www.forbes.com/sites/chuckbrooks/2023/03/05/cybersecurity-trends--statistics-for-2023-more-treachery-and-risk-ahead-as-attack-surface-and-hacker-capabilities-grow/?sh=2c6fcebf19db.

Calma, Justine. 2022. “AI Suggested 40,000 New Possible Chemical Weapons in Just Six Hours.” The Verge. March 17, 2022. https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx.

“CISA Strategic Plan 2023–2025.” 2022. CISA. September 2022. https://www.cisa.gov/sites/default/files/2023-01/StrategicPlan_20220912-V2_508c.pdf.

Cottier, Ben. 2022. “The Replication and Emulation of GPT-3.” Rethink Priorities. December 21, 2022. https://rethinkpriorities.org/publications/the-replication-and-emulation-of-gpt-3.

———. 2023. “Trends in the Dollar Training Cost of Machine Learning Systems.” Epoch. January 31, 2023. https://epochai.org/blog/trends-in-the-dollar-training-cost-of-machine-learning-systems.

Cox, Joseph. 2023. “How I Broke into a Bank Account with an AI-Generated Voice.” Vice. February 23, 2023. https://www.vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice.

Dickson, Ben. 2020. “The GPT-3 Economy.” TechTalks. September 21, 2020. https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/.

“Enhanced Cybersecurity Services (ECS).” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/programs/enhanced-cybersecurity-services-ecs.

Feldstein, Steven. 2019. “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace. September 17, 2019. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.

Geist, Edward. 2018. “By 2040, Artificial Intelligence Could Upend Nuclear Stability.” RAND Corporation. April 24, 2018. https://www.rand.org/news/press/2018/04/24.html.

Grace, Katja. 2023. “How Bad a Future Do ML Researchers Expect?” AI Impacts. March 8, 2023. https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/.

“Guaranteeing AI Robustness against Deception (GARD).” n.d. DARPA. Accessed March 11, 2023. https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception.

Hill, Michael. 2023. “NATO Tests AI’s Ability to Protect Critical Infrastructure against Cyberattacks.” CSO Online. January 5, 2023. https://www.csoonline.com/article/3684730/nato-tests-ai-s-ability-to-protect-critical-infrastructure-against-cyberattacks.html.

Humphreys, Brian. 2021. “Critical Infrastructure Policy: Information Sharing and Disclosure Requirements after the Colonial Pipeline Attack.” Congressional Research Service. May 24, 2021. https://crsreports.congress.gov/product/pdf/IN/IN11683.

“ICT Supply Chain Risk Management Task Force.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/groups/ict-supply-chain-risk-management-task-force.

“Information Technology Sector.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors/information-technology-sector.

Johnson, James. 2020. “Artificial Intelligence: A Threat to Strategic Stability.” Strategic Studies Quarterly. https://www.airuniversity.af.edu/Portals/10/SSQ/documents/Volume-14_Issue-1/Johnson.pdf.

“Joint Cyber Defense Collaborative.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/partnerships-and-collaboration/joint-cyber-defense-collaborative.

Kahn, Jeremy. 2023. “Silicon Valley Is Buzzing about ‘BabyAGI.’ Should We Be Worried?” Fortune. April 15, 2023. https://fortune.com/2023/04/15/babyagi-autogpt-openai-gpt-4-autonomous-assistant-agi/.

LaPlante, Phil, and Ben Amaba. 2021. “CSDL | IEEE Computer Society.” Computer. October 2021. https://www.computer.org/csdl/magazine/co/2021/10/09548022/1x9TFbzhvTG.

Laplante, Phil, Dejan Milojicic, Sergey Serebryakov, and Daniel Bennett. 2020. “Artificial Intelligence and Critical Systems: From Hype to Reality.” Computer 53 (11): 45–52. https://doi.org/10.1109/mc.2020.3006177.

Lawfare. 2023. “Cybersecurity and AI.” Youtube. April 3, 2023. https://www.youtube.com/watch?v=vyyiSCJVAHs&t=964s.

Longpre, Shayne, Marcus Storm, and Rishi Shah. 2022. “Lethal Autonomous Weapons Systems & Artificial Intelligence: Trends, Challenges, and Policies.” Edited by Kevin McDermott. MIT Science Policy Review 3 (August): 47–56. https://doi.org/10.38105/spr.360apm5typ.

MITRE. n.d. “MITRE ATT&CK.” MITRE. Accessed April 15, 2023. https://attack.mitre.org/.

Murphy, Mike. 2022. “What Are Foundation Models?” IBM Research Blog. May 9, 2022. https://research.ibm.com/blog/what-are-foundation-models.

Nakashima, Ellen. 2015. “Chinese Breach Data of 4 Million Federal Workers.” The Washington Post, June 4, 2015. https://www.washingtonpost.com/world/national-security/chinese-hackers-breach-federal-governments-personnel-office/2015/06/04/889c0e52-0af7-11e5-95fd-d580f1c5d44e_story.html.

National Artificial Intelligence Research Resource Task Force. 2023. “Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource.” https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf.

“Not My Problem.” 2014. The Economist. July 10, 2014. https://www.economist.com/special-report/2014/07/10/not-my-problem.

“Partnerships and Collaboration.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/partnerships-and-collaboration.

“Protective Security Advisor (PSA) Program.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/programs/protective-security-advisor-psa-program.

Rasser, Martijn, and Kevin Wolf. 2022. “The Right Time for Chip Export Controls.” Lawfare. December 13, 2022. https://www.lawfareblog.com/right-time-chip-export-controls.

Roser, Max. 2023. “AI Timelines: What Do Experts in Artificial Intelligence Expect for the Future?” Our World in Data. February 7, 2023. https://ourworldindata.org/ai-timelines.

Sganga, Nicole. 2022. “Chinese Hackers Took Trillions in Intellectual Property from about 30 Multinational Companies.” CBS News. May 4, 2022. https://www.cbsnews.com/news/chinese-hackers-took-trillions-in-intellectual-property-from-about-30-multinational-companies/.

Stein-Perlman, Zach, Benjamin Weinstein-Raun, and Katja Grace. 2022. “2022 Expert Survey on Progress in AI.” AI Impacts. August 3, 2022. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/.

“TrojAI: Trojans in Artificial Intelligence.” n.d. IARPA. Accessed March 11, 2023. https://www.iarpa.gov/research-programs/trojai.

Vincent, James. 2023. “Meta’s Powerful AI Language Model Has Leaked Online — What Happens Now?” The Verge. March 8, 2023. https://www.theverge.com/2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misuse.

Zwetsloot, Remco, and Allan Dafoe. 2019. “Thinking about Risks from AI: Accidents, Misuse and Structure.” Lawfare. February 11, 2019. https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure.




[1] For example, a single successful training run of GPT-3 reportedly required dozens of terabytes of data and cost millions of dollars of GPU usage, but the trained model is a file smaller than a terabyte in size and actors can operate it on cloud services that cost under $40 per hour. Sources: 
Cottier, Ben. 2022. “The Replication and Emulation of GPT-3.” Rethink Priorities. December 21, 2022. https://rethinkpriorities.org/publications/the-replication-and-emulation-of-gpt-3; and
Dickson, Ben. 2020. “The GPT-3 Economy.” TechTalks. September 21, 2020. https://bdtechtalks.com/2020/09/21/gpt-3-economy-business-model/.

Additionally, one report suggested that by 2030, state-of-the-art models may cost hundreds of millions or even more than $1 billion to train (although the report highlights that these estimates could significantly change). Source: Cottier, Ben. 2023. “Trends in the Dollar Training Cost of Machine Learning Systems.” Epoch. January 31, 2023. https://epochai.org/blog/trends-in-the-dollar-training-cost-of-machine-learning-systems.

[2] The following podcast explains this point in more detail: Lawfare. 2023. “Cybersecurity and AI.” Youtube. April 3, 2023. https://www.youtube.com/watch?v=vyyiSCJVAHs&t=964s (starting mainly at 16:04).

[3] For discussion regarding this claim, see: Allen, Gregory, Emily Benson, and William Reinsch. 2022. “Improved Export Controls Enforcement Technology Needed for U.S. National Security.” Center for Strategic and International Studies. November 30, 2022. https://www.csis.org/analysis/improved-export-controls-enforcement-technology-needed-us-national-security; and

Rasser, Martijn, and Kevin Wolf. 2022. “The Right Time for Chip Export Controls.” Lawfare. December 13, 2022. https://www.lawfareblog.com/right-time-chip-export-controls.

[4] Nakashima, Ellen. 2015. “Chinese Breach Data of 4 Million Federal Workers.” The Washington Post, June 4, 2015. https://www.washingtonpost.com/world/national-security/chinese-hackers-breach-federal-governments-personnel-office/2015/06/04/889c0e52-0af7-11e5-95fd-d580f1c5d44e_story.html; and

Sganga, Nicole. 2022. “Chinese Hackers Took Trillions in Intellectual Property from about 30 Multinational Companies.” CBS News. May 4, 2022. https://www.cbsnews.com/news/chinese-hackers-took-trillions-in-intellectual-property-from-about-30-multinational-companies/.

[5] Vincent, James. 2023. “Meta’s Powerful AI Language Model Has Leaked Online — What Happens Now?” The Verge. March 8, 2023. https://www.theverge.com/2023/3/8/23629362/meta-ai-language-model-llama-leak-online-misuse.

[6] Brooks, Chuck. 2023. “Cybersecurity Trends & Statistics for 2023: More Treachery and Risk Ahead as Attack Surface and Hacker Capabilities Grow.” Forbes. March 5, 2023. https://www.forbes.com/sites/chuckbrooks/2023/03/05/cybersecurity-trends--statistics-for-2023-more-treachery-and-risk-ahead-as-attack-surface-and-hacker-capabilities-grow/?sh=2c6fcebf19db; and
Cox, Joseph. 2023. “How I Broke into a Bank Account with an AI-Generated Voice.” Vice. February 23, 2023. https://www.vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice.

[7] Feldstein, Steven. 2019. “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace. September 17, 2019. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847;
Longpre, Shayne, Marcus Storm, and Rishi Shah. 2022. “Lethal Autonomous Weapons Systems & Artificial Intelligence: Trends, Challenges, and Policies.” Edited by Kevin McDermott. MIT Science Policy Review 3 (August): 47–56. https://doi.org/10.38105/spr.360apm5typ (p. 49).

[8] Calma, Justine. 2022. “AI Suggested 40,000 New Possible Chemical Weapons in Just Six Hours.” The Verge. March 17, 2022. https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx.

[9] For further discussion of this topic, see: Kahn, Jeremy. 2023. “Silicon Valley Is Buzzing about ‘BabyAGI.’ Should We Be Worried?” Fortune. April 15, 2023. https://fortune.com/2023/04/15/babyagi-autogpt-openai-gpt-4-autonomous-assistant-agi/; and
Adi, Erwin, Zubair Baig, and Sherali Zeadally. 2022. “Artificial Intelligence for Cybersecurity: Offensive Tactics, Mitigation Techniques and Future Directions.” Applied Cybersecurity & Internet Governance Journal. November 4, 2022. https://acigjournal.com/resources/html/article/details?id=232841&language=en.

[10] The example of Meta’s LLaMA, mentioned earlier, provides both some support and rebuttal for this concern: Meta has insisted it plans to continue sharing access despite the leaks, but there are good reasons to think this event will discourage other companies from implementing similar access rules. Source: Vincent, “Meta’s Powerful AI Language Model Has Leaked Online.”

[11] Laplante, Phil, Dejan Milojicic, Sergey Serebryakov, and Daniel Bennett. 2020. “Artificial Intelligence and Critical Systems: From Hype to Reality.” Computer 53 (11): 45–52. https://doi.org/10.1109/mc.2020.3006177: “The use of artificial intelligence (AI) in critical infrastructure systems will increase significantly over the next five years” (p. 1);
LaPlante, Phil, and Ben Amaba. 2021. “CSDL | IEEE Computer Society.” Computer. October 2021. https://www.computer.org/csdl/magazine/co/2021/10/09548022/1x9TFbzhvTG;
Hill, Michael. 2023. “NATO Tests AI’s Ability to Protect Critical Infrastructure against Cyberattacks.” CSO Online. January 5, 2023. https://www.csoonline.com/article/3684730/nato-tests-ai-s-ability-to-protect-critical-infrastructure-against-cyberattacks.html;
Abdallat, A. J. 2022. “Can We Trust Critical Infrastructure to Artificial Intelligence?” Forbes. July 1, 2022. https://www.forbes.com/sites/forbestechcouncil/2022/07/01/can-we-trust-critical-infrastructure-to-artificial-intelligence/?sh=3e21942e1a7b.

Additionally, autonomous vehicles could constitute critical infrastructure.

[12] Zwetsloot, Remco, and Allan Dafoe. 2019. “Thinking about Risks from AI: Accidents, Misuse and Structure.” Lawfare. February 11, 2019. https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure.

[13] “Some observers have posited that autonomous systems like Sea Hunter may render the underwater domain transparent, thereby eroding the second-strike deterrence utility of stealthy SSBNs. [...] However, irrespective of the veracity of this emerging capability, the mere perception that nuclear capabilities face new strategic challenges would nonetheless elicit distrust between nuclear-armed adversaries—particularly where strategic force asymmetries exist.” Source: Johnson, James. 2020. “Artificial Intelligence: A Threat to Strategic Stability.” Strategic Studies Quarterly. https://www.airuniversity.af.edu/Portals/10/SSQ/documents/Volume-14_Issue-1/Johnson.pdf.

See also: Geist, Edward. 2018. “By 2040, Artificial Intelligence Could Upend Nuclear Stability.” RAND Corporation. April 24, 2018. https://www.rand.org/news/press/2018/04/24.html.

[14] Although this claim may be jarring for people who are not familiar with the progress in AI over the past decade or with the AI safety literature, the threat of extinction (or functionally equivalent outcomes) as a result of developing a very powerful system is non-trivial. Notably, in one 2022 survey of machine learning researchers, nearly half (48%) of the respondents believed there is at least a 10% chance that AI would lead to “extremely bad” outcomes (e.g., human extinction). Source: Grace, Katja. 2023. “How Bad a Future Do ML Researchers Expect?” AI Impacts. March 8, 2023. https://aiimpacts.org/how-bad-a-future-do-ml-researchers-expect/.

[15] Surveys of machine learning researchers provide a mixed range of forecasts, but in the aforementioned 2022 survey (notably conducted prior to the public release of ChatGPT), >75% of respondents said there was at least a 10% chance that humanity would develop “human-level AI” (roughly defined as a system that is better than humans at all or nearly all meaningful cognitive tasks) in the next 20 years. Additionally, >35% of the respondents said there was at least a 50% chance of this outcome. Notably, however, some types of highly autonomous cyber systems may not even require “human-level AI.” For data and further discussion regarding these forecasts, see Roser, Max. 2023. “AI Timelines: What Do Experts in Artificial Intelligence Expect for the Future?” Our World in Data. February 7, 2023. https://ourworldindata.org/ai-timelines.
For the original survey, see: Stein-Perlman, Zach, Benjamin Weinstein-Raun, and Katja Grace. 2022. “2022 Expert Survey on Progress in AI.” AI Impacts. August 3, 2022. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/.

[16] For sources on this claim, see: “Not My Problem.” 2014. The Economist. July 10, 2014. https://www.economist.com/special-report/2014/07/10/not-my-problem; and 
Humphreys, Brian. 2021. “Critical Infrastructure Policy: Information Sharing and Disclosure Requirements after the Colonial Pipeline Attack.” Congressional Research Service. May 24, 2021. https://crsreports.congress.gov/product/pdf/IN/IN11683.

[17] “CISA Strategic Plan 2023–2025.” 2022. CISA. September 2022. https://www.cisa.gov/sites/default/files/2023-01/StrategicPlan_20220912-V2_508c.pdf.

[18] As part of this, CISA should work with other agencies such as the Office of Science and Technology Policy (OSTP), National Security Agency (NSA), and the broader Department of Homeland Security (DHS) to forecast future AI models’ capabilities and proliferation.

[19] This could potentially build on or be modeled after datasets such as MITRE’s ATT&CK. See: MITRE. n.d. “MITRE ATT&CK.” MITRE. Accessed April 15, 2023. https://attack.mitre.org/.

[20] This should apply to large models regardless of whether they were stolen, developed for malicious purposes, etc. The overall dataset or case study compilation should probably cover more than just critical infrastructure targets, but CISA could just be a primary contributor for incidents involving critical infrastructure.

[21] National Artificial Intelligence Research Resource Task Force. 2023. “Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource.” https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf. Note that some mock legislation briefly specifies CISA (not by acronym) on page J-12.

[22] For details about the PSA program, see: “Protective Security Advisor (PSA) Program.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/programs/protective-security-advisor-psa-program.

[23] See for example: “ICT Supply Chain Risk Management Task Force.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/groups/ict-supply-chain-risk-management-task-force.

[24] “Partnerships and Collaboration.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/partnerships-and-collaboration.

[25] CISA has stated in its strategy: “We will use our full suite of convening authorities and relationship management capabilities to expand and mature partnerships with stakeholders and facilitate information sharing.” Source: “CISA Strategic Plan 2023–2025.”
Some of the relevant CISA programs include Automated Indicator Sharing (AIS), Enhanced Cybersecurity Services (ECS), and possibly even the Joint Cyber Defense Collaborative (JCDC). 

“Automated Indicator Sharing (AIS).” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/cyber-threats-and-advisories/information-sharing/automated-indicator-sharing-ais;
“Enhanced Cybersecurity Services (ECS).” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/resources-tools/programs/enhanced-cybersecurity-services-ecs;
“Joint Cyber Defense Collaborative.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/partnerships-and-collaboration/joint-cyber-defense-collaborative.
This also includes external organizations such as sector-based Information Sharing and Analysis Centers (ISACs). For more details about ISACs, see: “About ISACs.” n.d. National Council of ISACs. Accessed April 15, 2023. https://www.nationalisacs.org/about-isacs. While CISA may not be able to control participation in such organizations, it should at least determine whether AI labs are participating in such collaborations, if only to inform the development of policies to address unmet needs (or, if necessary, to impose mandatory disclosure requirements).

[26] Perhaps one of the more drastic possible options here is to categorize labs producing so-called “foundation models” (e.g., GPT-4) as part of the information technology critical infrastructure sector. It is unclear from an outsider perspective how legally feasible or politically desirable this categorization would be, but as GPT-4 and related models increasingly become the basis for other software applications, this designation should become more logical and/or acceptable.
For more information about foundation models, see: Murphy, Mike. 2022. “What Are Foundation Models?” IBM Research Blog. May 9, 2022. https://research.ibm.com/blog/what-are-foundation-models.
For information about the IT critical infrastructure sector designation, see: “Information Technology Sector.” n.d. CISA. Accessed April 15, 2023. https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors/information-technology-sector.

[27] This particularly includes the Defense Advanced Research Projects Agency (DARPA) and the Intelligence Advanced Research Projects Activity (IARPA), both of which are already working on some projects related to the integrity and reliability of AI models, including GARD at DARPA and TrojAI at IARPA. Sources: “Guaranteeing AI Robustness against Deception (GARD).” n.d. DARPA. Accessed March 11, 2023. https://www.darpa.mil/program/guaranteeing-ai-robustness-against-deception; and
“TrojAI: Trojans in Artificial Intelligence.” n.d. IARPA. Accessed March 11, 2023. https://www.iarpa.gov/research-programs/trojai.

[28] National Artificial Intelligence Research Resource Task Force, “Strengthening and Democratizing the U.S….”
