Aman Patel and I have been thinking a lot about the AI Safety pipeline, and in the course of doing so we haven't come across any visualizations of it. Here's a second draft of a visualized pipeline of which organizations/programs are relevant to moving through it (mostly in terms of theoretical approaches). It is incomplete: some orgs are missing because we don't know enough about them (e.g., ERIs). The existing orgs are probably not exactly where they should be, and there's tons of variation within the participants of each program, so the distributions should really have long tails. The pipeline moves left to right, and we have placed organizations roughly where they seem to fit. The height of each program represents its capacity, i.e., its number of participants.
This is a super rough draft, but it might be useful for some people to see the visualization and recognize where their beliefs/knowledge differ from ours.
We'd support somebody else putting more than an hour into this project and making a nicer/more accurate visual.
Current version as of 4/11:

Thanks for this!
Does "Learning the Basics" specifically mean learning AI Safety basics, or does this also include foundational AI/ML (in general, not just safety) learning? I'm wondering because I'm curious if you mean that the things under "Learning the Basics" could be done with little/no background in ML.
Good question. I think "Learning the Basics" is specific to AI Safety basics and does not require a strong background in AI/ML. My sense is that AI Safety basics and ML are somewhat independent; the ML side of things simply isn't pictured here. For example, the MLAB (Machine Learning for Alignment Bootcamp) program, which ran a few months ago, focused on taking people with good software engineering skills and bringing them up to speed on ML. As far as I can tell, the focus was not on alignment specifically, but it was intended for people likely to work in a...