
Tony Barrett

47 karma · Joined December 2021

Bio

Visiting Scholar, UC Berkeley Center for Long-Term Cybersecurity (CLTC); Senior Policy Analyst, Berkeley Existential Risk Initiative (BERI); Co-Founder & Director of Research, Global Catastrophic Risk Institute (GCRI)

Comments (3)

Yes, I expect that the government would aim to protect the reported information (or at least key sensitive details) as CUI or in another way that would be FOIA-exempt.

Operations management and project management often have substantial overlap in activities and methods. However, one key distinction is that "operations" typically represent ongoing or repeated activities (such as running a factory or a manufacturing line within a factory), while "projects" are temporary and finite (such as conducting a research study on whether the factory should make a particular new product). See, e.g., https://www.northeastern.edu/graduate/blog/project-management-vs-operations-management/.  (I used to work at a consulting firm where there were many project managers running short-term research or software-development projects for clients, and a few operations managers responsible for continual business functions such as invoicing clients and paying employees of the consulting firm.)

Thanks for encouraging involvement with the NIST AI Risk Management Framework (AI RMF) development process. Currently, my main focus is the AI RMF and related standards development, particularly on issues affecting AI safety and catastrophic risks. Colleagues at UC Berkeley and I previously submitted comments to NIST, available at https://www.nist.gov/system/files/documents/2021/09/16/ai-rmf-rfi-0092.pdf and https://cltc.berkeley.edu/2022/01/25/response-to-nist-ai-risk-management-framework-concept-paper/. We are also preparing comments on the AI RMF Initial Draft, which we plan to submit to NIST soon.

If any folks working on AI safety or governance are preparing comments of their own and want to discuss, I'd be happy to talk; email me at anthony.barrett@berkeley.edu.