This curriculum, a follow-up to the Alignment Fundamentals curriculum (the ‘101’ to this ‘201’ curriculum), aims to give participants enough knowledge about alignment to understand the frontier of current research discussions. It assumes that participants have read through the Alignment Fundamentals curriculum, taken a course on deep learning, and taken a course on reinforcement learning (or have an equivalent level of knowledge).
Although these are the basic prerequisites, we recommend that most people who intend to work on alignment only read through the full curriculum once they have significantly more ML experience than listed above, since upskilling via their own ML engineering or research projects should generally be a higher priority for early-career alignment researchers.
When reading this curriculum, it’s worth remembering that the field of alignment aims to shape the goals of systems that don’t yet exist, so alignment research is often more speculative than research in other fields. You shouldn’t assume that there’s a consensus about the usefulness of any given research direction; instead, it’s often worth developing your own views about whether the techniques discussed in this curriculum might plausibly scale up to help align AGI.
The curriculum was compiled, and is maintained, by Richard Ngo. For now, it’s primarily intended to be read independently; once we’ve run a small pilot program, we’ll likely extend it to a discussion-based course.
Curriculum overview
Week 1: Further understanding the problem
Week 2: Decomposing tasks for better supervision
Week 3: Preventing misgeneralization
Week 4: Interpretability
Week 5: Reasoning about Reasoning
Weeks 6 & 7 (Track 1): Eliciting Latent Knowledge
Weeks 6 & 7 (Track 2): Agent Foundations
Weeks 6 & 7 (Track 3): Science of Deep Learning
Weeks 8 & 9: Literature Review or Project Proposal
See the full curriculum here. Note that the curriculum is still under revision, and feedback is very welcome!
Richard -- thanks for posting this. It looks like a very useful curriculum.
Naive question as an alignment newbie:
If the point of 'AI alignment' is 'alignment with human values', why does the alignment field pay so little attention to the many decades of scientific research on the origins, nature, and diversity of human values, and focus almost entirely on the last few decades of research on machine learning?
It feels like many alignment courses are focusing only on the AI side of the equation, and acting as if the human side of alignment is trivial, obvious, and/or under-researched.
Genuine question; it's something that's been puzzling me for several months.
Richard - thanks very much for your quick and helpful reply. I'll have a look at the links you included, and ruminate about this further...