pcihon

41 karma · Joined Apr 2019

Bio

I work on AI policy and governance research.

Comments (3)

This is a useful overview, thank you for writing it. It's worth underlining that international governance has seen considerably more discussion of late.

OpenAI, UN Secretary-General António Guterres, and others have called for an IAEA for AI. Yoshua Bengio and others have called for a CERN. UK PM Sunak reportedly floated both ideas to President Biden, and has called an international summit on AI safety, to be held in December.

There is some literature as well: GovAI affiliates have written on the IAEA and CERN proposals, and Matthijs Maas, Luke Kemp, and I wrote a paper on design considerations for international AI governance that recommended a modular treaty approach.

It would be good to see further research and discussion ahead of the December summit.

It's great to see this renewed call for safety standardization! A few years after my initial report, I continue to view standardization of safety processes as an important way to spread beneficial practices and as a precursor to regulation, as you describe. A few reactions to move the conversation forward:

1. It's worth underlining a key limitation of standards, in my view: it's difficult for them to influence the vanguard. Standards are most useful for disseminating best practices (from the vanguard, where they're developed, to everyone else) and thus raising the safety floor. This poses obvious challenges for standards' use in alignment. Though not insurmountable, effective use of standards here would be a deviation from the common path in standardization.

2. Following from (1), a dedicated SSO for AI safety that draws in actors concerned about alignment could well make sense. One possible vehicle is the Joint Development Foundation.

3. I appreciate the list of best practices worth considering for standardization. These are promising directions, though it would be helpful to understand whether there is much buy-in from safety experts. A useful near-term intervention: create a recurring expert survey that measures the perceived maturity of candidate best practices and their priority for standardization.

4. I agree that AI safety expertise should be brought to existing standardization venues, and also with your footnote 14 caveat that the opportunity cost of researchers' time should not be treated lightly. In practice, leading AI labs would benefit from emulating large companies' approaches: dedicated staff (or even teams) who monitor developments at SSOs and channel expertise (whether by inviting an expert researcher to one SSO meeting or by circulating SSO submissions internally for AI safety researcher feedback) in a way that does not overburden researchers. At the community level, individuals may be able to fill this role, as Tony Barrett has with NIST (as Evan Murphy linked, his submission is worth a close read).

5. I appreciate your identification of transparency as a pitfall of many SSOs and a point to improve. Open availability of standards should be encouraged. I'd go further and encourage actors to be transparent about their own engagement in standardization: publish blog posts and specifications for wider scrutiny. Transparency also increases the data available to researchers trying to measure the efficacy of standards engagement (itself a challenging question).

6. It's worth underlining the importance of standards to implementing the EU AI Act as currently envisioned. Even if the incentives are not such that we see a Brussels Effect for AI, the standards themselves may well be used beyond the single market. This would mean prioritizing engagement with CEN-CENELEC to inform the standards that will support conformity assessment.