What topics, points, or resources would be especially valuable to get across to incoming graduate students in Artificial Intelligence?
Context
On October 18, I will be giving a ≤1.5-hour lecture to the incoming class of 50+ graduate students in the AI program at my university. It will be one of ≤10 lectures making up the required introductory AI course. I can also assign ≤4 hours of required reading, to be completed after the lecture and before the class discussion the following week.
The theory of change here is pretty simple: share the basics of AI safety with grad students so that they can at least engage in informed conversations about it. In the best case, they'd take the concerns seriously, engage more deeply with the field, reach out to me to talk more, and keep safety in mind when choosing a research topic.
Asks
I'd specifically appreciate:
- Recommendations for readings to assign (up to 4 hours of content)
- Pointers to resources that break down the bullet points in my outline below, e.g. an overview of the governance ideas that are most discussed right now
- Pointers to where these discussions happen. I expect quite a few people are giving similar presentations!
Don't worry about this if you're busy. I'm confident I have access to plenty of information to build a strong presentation, especially thanks to the lovely folks running AI Safety Fundamentals. But if you have absolutely awesome resources you want to make sure I don't miss, please send them along!
My basic outline
- Emerging capabilities lead to new risks
- Risks
- Misuse: releasing pandemics, mounting cyberattacks, etc.
- Collective action problems: eroding human self-determination (as in "What failure looks like")
- Misalignment: systems developing misaligned goals (as in "The alignment problem from a deep learning perspective"), or maybe not (as in "Reward is not the optimization target")
- What we can do about it
- Governance ideas
- Technical research agendas