Next week I'm interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and previously a Research Engineer at DeepMind.

Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.

Richard is also a highly prolific contributor to online discussions of AI safety across a range of venues.

What should I ask him?


11 Answers

anea

Sep 29, 2022

141
  1. What made him choose to work full time on governance rather than technical AI alignment?
  2. What does he think about working on improving the value of the future conditional on survival, versus reducing AI x-risk?
  3. What is the theory of change of OpenAI's Futures team?
  4. What policy areas or proposals in AI policy seem either promising or underexplored?
  5. Thoughts on various AI governance proposals (live monitoring of hardware use, chip shutdown mechanisms, regulating the legality of large training runs, international agreements on semiconductor trade, restricting semiconductor exports to certain countries, a windfall clause, spreading good norms at top labs, etc.)?

HaydnBelfield

Sep 29, 2022

124

I think he actually quit his PhD. So you could ask him why, and what factors people should consider when deciding whether to start a PhD, or whether to leave one partway through.

 

> Before that he did a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

Ben

Sep 29, 2022

100

Sounds interesting! I'd like to hear:

  1. Could Richard summarize his conversation with Eliezer, and say on what points he agrees and disagrees with him?
  2. (Perhaps this has been covered somewhere else.) Could Richard give a broad overview of different approaches to AI alignment, and which ones he thinks are most promising?

Thanks!

nmulani

Sep 29, 2022

60

I'd be particularly curious to hear Richard's thoughts on non-governmental approaches to governance: How robust does he consider the corporate governance approaches within labs like OpenAI? Does he find any corporate governance ideas particularly promising? Additionally, does he see any potential in private-sector collaboration or consortia on self-governance, or in non-profit / NGO efforts at monitoring and risk mitigation?

Greg_Colbourn

Oct 05, 2022

51

Does he agree with FTX Future Fund's worldview on AI? If his probabilities (e.g. for "P(misalignment x-risk | AGI)" or "P(AGI by 2043)") are significantly different, will he be entering their competition?

Ben_West

Oct 03, 2022

42

What does he think about rowing versus steering in AI safety? I.e., does he think we are basically going in the right direction and just need to do more of it, or do we need to think more about the direction in which we are heading?

Guy Raveh

Sep 29, 2022

40
  1. How does he view the relationship between AI safety researchers and AI capabilities developers? Can they work in synergy while sometimes having opposing goals?

  2. What does he think the field of AI safety is missing? What kinds of people does it need? What kinds of platforms?

Ofer

Sep 29, 2022

30

What are the upsides and downsides of doing AI governance research at an AI company, relative to doing it at a non-profit EA organization?

Geoffrey Miller

Sep 30, 2022

20

Some AI applications may involve AI systems that need to be aligned with the interests, values, and preferences of non-human animals (e.g. pets, livestock, zoo animals, lab animals, endangered wild animals, etc.) -- in addition to being aligned with the humans involved in their care.

Are AI alignment researchers considering how this kind of alignment could happen? 

Which existing alignment strategies might work best for aligning with non-human animals?

Quadratic Reciprocity

Sep 30, 2022

20

Besides (or after) completing his AGI Safety Fundamentals program (and the potential future part 2 / advanced version of the curriculum), what does he recommend that university students interested in AI safety do?

JoshYou

Sep 29, 2022

10

What makes someone good at AI safety work? How does he get feedback on whether his work is useful, makes sense, etc.?