This team is going to be doing incredibly important work:
- They’ll be the main team running evals, forecasting, and assessment of catastrophic risks.
- They’ll be coordinating AGI preparedness (figuring out what protective measures we need, etc.)
- They’re in charge of developing and maintaining OpenAI’s Risk-informed Development Policy (RDP, our version of an RSP).
I think this will be one of the most important teams at OpenAI for mitigating AGI risk. The team is led by Aleksander Madry, who is great, and the early team members Tejal and Kevin are awesome.
I think it would be enormously impactful if they can continue to hire people who are truly excellent and who genuinely get AGI risk. Please seriously consider applying, and spread the word to friends who you think could be a great fit!