The National Institute of Standards and Technology (NIST) is seeking public comment until April 29 on its Draft AI Risk Management Framework.  NIST will produce a second draft for comment, as well as host a third workshop, before publishing AI RMF 1.0 in January 2023. Please send comments on this initial draft to AIframework@nist.gov by April 29, 2022.

I would like to see places like ARC, OpenAI, Redwood Research, MIRI, Centre for the Governance of AI, CHAI, Credo AI, OpenPhil[1], FHI, Aligned AI, and any other orgs make efforts to comment. Without going deeply into the reasons here on a public forum, I think influencing the development of NIST's AI Risk Management Framework could be high impact. The framework is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems. NIST standards are often added to government procurement contracts, so they shape what the federal government does or does not purchase through acquisitions. This in turn impacts industry and how companies develop their products, services, and systems to meet government standards so they can get those sweet, sweet federal dollas. For example, the IRS issued a Request for Proposals (RFP) soliciting a contract with a company that would meet NIST SP 800-63-3 requirements for facial recognition technology. Another way NIST is influential is through commercial off-the-shelf (COTS) items: companies benefit from making products, services, and systems that can be easily adapted aftermarket to meet the needs of the U.S. government, letting them reach both commercial and governmental markets.

I have been somewhat disheartened by the lack of AI alignment and safety orgs making comments on early-stage things, where it would be very easy to move the Overton window and/or (in the best-case scenario) put some safeguards in place against worst-case scenarios for things we clearly know could be bad, even if we don't know how to solve alignment problems just yet. The NIST Framework going forward (it will go through several iterations and updates) will be a great place to add AI safety standards that we KNOW would at least allow us to avoid catastrophe.

This is also a good time to beg and plead for more EAs to go into NIST for direct work. If you are thinking this might be a good fit for you and want to try it out, please consider joining Open Phil's Tech Policy Fellowship the next time applications open (probably late summer?).  

I am heartened that at least some orgs that at least sometimes, if not always, contemplate AI alignment and safety have recently provided public comment on AI work the U.S. government is doing. E.g., Anthropic, CSET, Google (not sure if it was DeepMind folks), and Stanford HAI (kind of) commented on the recent NAIRR Task Force Request for Information (RFI). Future of Life Institute has also been quite good at making comments of this type and has partnered with CHAI in doing so. But there is room for improvement, and sometimes these comments can be quite impactful (especially for formal administrative rulemaking, but we will leave that aside). In the NAIRR Task Force example above, there were only 84 responses. Five additional EA orgs saying the same thing with a unified voice could be marginally impactful in influencing the Task Force.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI RMF.

Please go forth and do good things for the world, AI Orgs :-)

  1. ^

    Uncertain whether OP should actually be on this list, but including it for completeness.

Comments

Thanks for encouraging involvement with the NIST AI Risk Management Framework (AI RMF) development process. Currently my main focus is the AI RMF and related standards development, particularly on issues affecting AI safety and catastrophic risks. Colleagues at UC Berkeley and I previously submitted comments to NIST, available at https://www.nist.gov/system/files/documents/2021/09/16/ai-rmf-rfi-0092.pdf and https://cltc.berkeley.edu/2022/01/25/response-to-nist-ai-risk-management-framework-concept-paper/ . We are also preparing comments on the AI RMF Initial Draft, which we plan to submit to NIST soon.

If any folks working on AI safety or governance are preparing comments of their own and want to discuss, I'd be happy to: email me at anthony.barrett@berkeley.edu.  

Update: GovAI has recently submitted comments on the Initial Draft of the NIST AI Risk Management Framework: https://www.governance.ai/research-paper/submission-to-the-nist-ai-risk-management-framework.

Our key recommendations are:

  • Put more emphasis on low-probability, high-impact risks, especially catastrophic risks to society.
  • Create a new Socio-Technical Characteristic on “Misuse/Abuse”.
  • Create a new Guiding Principle on “Alignment with Human Values and Intentions”.
  • Recommend organizations set up an internal audit function to continually assess whether their AI RMF implementation has improved their ability to manage AI risks.
  • Review and update the AI RMF frequently.

And I think a few other EA-aligned orgs have also submitted comments. I expect NIST to publish all submissions soon.


I completely agree with the urgency and the evaluation of the problem.

In case begging and pleading doesn't work, a complementary method is to create a prestige differential between AI safety research and AI capabilities research (analogous to the differential between green-energy research and fossil-fuel research), with the goal of convincing people to move from the latter to the former. See my post for a grand strategy.

How do we recruit AI capabilities researchers to transition into AI safety research? It seems that "it is relatively easy to persuade people to join AI safety in 1-on-1s." I think it's most likely worth brainstorming methods to scale this.