
Next week I'm interviewing Richard Ngo, currently an AI (Safety) Governance Researcher at OpenAI and previously a Research Engineer at DeepMind.

Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

He is focused on making the development and deployment of AGI more likely to go well and less likely to go badly.

Richard is also a highly prolific contributor to online discussions of AI safety across a range of venues.

What should I ask him?

11 Answers

  1. What made him choose to work full time on governance rather than technical AI alignment?
  2. What does he think about working on improving the value of the future conditional on survival versus reducing AI x-risk?
  3. What's the OpenAI Futures theory of change?
  4. What policy areas or proposals in AI policy seem either promising or underexplored?
  5. Thoughts on various AI governance proposals (live monitoring of hardware use, chip shutdown mechanisms, regulating large training runs, international agreements on semiconductor trade, restricting semiconductor exports to certain countries, a windfall clause, spreading good norms at top labs, etc.)?

I think he actually quit his PhD. So you could ask him why, and what factors people should consider when choosing whether to do a PhD or whether to leave one partway through.

 

> Before that he was doing a PhD in the Philosophy of Machine Learning at Cambridge, on the topic of "to what extent is the development of artificial intelligence analogous to the biological and cultural evolution of human intelligence?"

Sounds interesting! I'd be interested in:

  1. Could Richard give a summary of his conversation with Eliezer, and explain on which points he agrees and disagrees with him?
  2. (Perhaps this has been covered elsewhere.) Could Richard give a broad overview of different approaches to AI alignment and which ones he thinks are most promising?

Thanks!

I'd be particularly curious to hear Richard's thoughts on non-governmental approaches to governance: How robust does he consider the corporate governance approaches within labs like OpenAI? Does he believe any corporate governance ideas are particularly promising? Additionally, does he see potential in private-sector collaboration or consortia on self-governance, or in non-profit / NGO attempts at monitoring and risk mitigation?

Does he agree with FTX Future Fund's worldview on AI? If his probabilities (e.g. for "P(misalignment x-risk | AGI)" or "P(AGI by 2043)") are significantly different, will he be entering their competition?

What does he think about rowing versus steering in AI safety? I.e., does he think we are basically heading in the right direction and just need to do more of the same, or do we need to think more about the direction in which we are heading?

  1. How does he view the relationship between AI safety researchers and AI capabilities developers? Can they work in synergy despite sometimes having opposing goals?

  2. What does he think the field of AI safety is missing? What kinds of people does it need? What kinds of platforms?

What are the upsides and downsides of doing AI governance research at an AI company, relative to doing it at a non-profit EA organization?

Some AI applications may involve AI systems that need to be aligned with the interests, values, and preferences of non-human animals (e.g. pets, livestock, zoo animals, lab animals, endangered wild animals) -- in addition to being aligned with the humans involved in their care.

Are AI alignment researchers considering how this kind of alignment could happen? 

Which existing alignment strategies might work best for aligning with non-human animals?

Besides (or after) completing his AGI Safety Fundamentals Program (and the potential future part 2 / advanced version of the curriculum), what does he recommend that university students interested in AI safety do?

What makes someone good at AI safety work? How does he get feedback on whether his work is useful, makes sense, etc.?
