U.S. Secretary of Commerce Gina Raimondo announced today additional members of the executive leadership team of the U.S. AI Safety Institute (AISI), which is housed at the National Institute of Standards and Technology (NIST). Raimondo named Paul Christiano as Head of AI Safety, Adam Russell as Chief Vision Officer, Mara Campbell as Acting Chief Operating Officer and Chief of Staff, Rob Reich as Senior Advisor, and Mark Latonero as Head of International Engagement. They will join AISI Director Elizabeth Kelly and Chief Technology Officer Elham Tabassi, who were announced in February. The AISI was established within NIST at the direction of President Biden, including to support the responsibilities assigned to the Department of Commerce under the President’s landmark Executive Order.
...
Paul Christiano, Head of AI Safety, will design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern. Christiano will also contribute guidance on conducting these evaluations, as well as on the implementation of risk mitigations to enhance frontier model safety and security. Christiano founded the Alignment Research Center, a non-profit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research. He also launched a leading initiative to conduct third-party evaluations of frontier models, now housed at Model Evaluation and Threat Research (METR). He previously ran the language model alignment team at OpenAI, where he pioneered work on reinforcement learning from human feedback (RLHF), a foundational technical AI safety technique. He holds a PhD in computer science from the University of California, Berkeley, and a B.S. in mathematics from the Massachusetts Institute of Technology.
Following up on a previous news post:
https://forum.effectivealtruism.org/posts/9QLJgRMmnD6adzvAE/nist-staffers-revolt-against-expected-appointment-of
Raimondo and the Department of Commerce seem to have been remarkably effective on AI/China issues during the Biden administration. Is there any detailed reporting on how governance became (seemingly) so good there?
This piece might have some of what you're looking for: https://www.washingtonpost.com/opinions/2023/10/31/ai-gina-raimondo-is-steph-curry/
What are some of your favorite examples of their effectiveness?
What are some failure modes of such an agency that Paul and others should look out for? (I shared one anecdote with him: a NIST standard for "crypto modules" made my open source cryptography library less secure. One of its requirements had the side effect that the library could only be certified as standard-compliant if it was distributed in executable form, forcing people to trust me not to have inserted a backdoor into the executable binary. NIST would not budge when we tried to get an exception to this requirement.)
Were you prohibited from also open sourcing it?
The source code was available, but if someone wanted to claim compliance with the NIST standard (in order to sell their product to the federal government, for example), they had to use the pre-compiled executable version.
I guess it's possible that someone could verify the executable by setting up an exact duplicate of the build environment and recompiling from source. I don't remember how closely I looked into that possibility, or whether it was infeasible or merely inconvenient. (It might have been the former; I seem to recall the linker randomizing some addresses in the binary.) I do know that I never documented a process for recreating the executable, and nobody asked.
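For concreteness, here is a minimal sketch of that verification idea: rebuild the module from source in a pinned environment and compare a cryptographic hash of the result against the distributed binary. The build command, file names, and paths below are all hypothetical placeholders (I can't reconstruct the actual build process here), and as noted above, any nondeterminism introduced by the linker would defeat a naive hash comparison like this one.

```python
"""Hypothetical sketch: check a distributed binary against a rebuild from
source. Paths and the build command are illustrative assumptions, not the
actual build process for any real crypto module."""
import hashlib
import subprocess
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def rebuild(source_dir: Path, output: Path) -> None:
    """Rebuild the module from source. The make invocation is a placeholder;
    a genuinely reproducible build would also pin the exact toolchain
    version and eliminate linker nondeterminism (e.g. randomized
    addresses or embedded timestamps)."""
    subprocess.run(
        ["make", "-C", str(source_dir), f"OUT={output}"],
        check=True,
    )


def verify(distributed: Path, source_dir: Path) -> bool:
    """Rebuild and compare hashes. If the build is not deterministic,
    the hashes will differ even for an honest binary -- exactly the
    obstacle recalled in the parent comment."""
    rebuilt = Path("rebuilt.bin")
    rebuild(source_dir, rebuilt)
    return sha256_of(distributed) == sha256_of(rebuilt)


if __name__ == "__main__":
    ok = verify(Path("crypto_module.bin"), Path("./src"))
    print("binary matches source rebuild" if ok else "MISMATCH: do not trust")
```

A byte-for-byte hash match is only meaningful if the build is deterministic; otherwise a verifier would have to fall back on comparing disassembly or other fuzzier methods.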
Is this a use case for Reproducible Builds?
(I feel a little awkward just pushing news, but I feel some obligation to completeness on this subject.)