Next week I'm interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and previously a Research Engineer at DeepMind.
Before that, he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the question "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"
He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.
Richard is also a prolific contributor to online discussion of AI safety in a range of places, for instance:
- Moral strategies at different capability levels on his blog Thinking Complete
- The alignment problem from a deep learning perspective on the EA Forum
- Some conceptual alignment research projects on the AI Alignment Forum
- Richard Ngo and Eliezer Yudkowsky politely debating AI safety on LessWrong
- The AGI Safety from First Principles education series
- And on his Twitter
What should I ask him?
I'd be particularly curious to hear Richard's thoughts on non-governmental approaches to governance: How robust does he consider the corporate governance approaches within labs like OpenAI? Are there any corporate governance ideas he finds particularly promising? And does he see potential in private-sector collaboration or consortia on self-governance, or in non-profit / NGO attempts at monitoring and risk mitigation?