Hey all, wanted to share what some colleagues at OpenAI are up to: the new Preparedness team has been publicly announced, and they’re hiring!
This team is going to be doing incredibly important work:
- They’ll be the main team doing evals, forecasting, and risk assessment for catastrophic risk.
- They’ll be coordinating AGI preparedness (figuring out what protective measures we need, etc.)
- They’re in charge of developing and maintaining OpenAI’s RDP (Risk-informed Development Policy), our version of an RSP (Responsible Scaling Policy).
I think this will be one of the most important teams at OpenAI for mitigating AGI risk. The team is led by Aleksander Madry, who is great, and the early team members Tejal and Kevin are awesome.
I think it would be enormously impactful if they can continue to hire people who are really excellent + really get AGI risk. Please seriously consider applying, and spread the word to friends who you think could be a great fit!
Sharmake -- in most contexts, your point would be valid, and inappropriate binarization would be a bad thing.
But when it comes to AI X-risk, I don't see any functional difference between dismissing AI X-risks, thinking that AI progress will help solve (other?) X-risks, and thinking that increasing AI progress will somehow reduce AI X-risks. Those 'third options' just seem to fall into the overall category of 'not taking AI X-risk seriously at all'.
For example, if people think AI progress will somehow reduce AI X-risk, that boils down to thinking that 'the closer we get to the precipice, the better we'll be able to avoid the precipice'.
If people think AI progress will somehow reduce other X-risks, I'd want a realistic analysis of what those other alleged X-risks really are, and how exactly AI progress would help. In practice, in almost every blog post and comment I've seen, this boils down to the vague claim that 'AI could help us solve climate change'. But very few serious climate scientists think that climate change is a literal X-risk that could kill every living human.