Today, the AI Extinction Statement was released by the Center for AI Safety, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.
Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs (Sam Altman, Demis Hassabis, and Dario Amodei), as well as executives from Microsoft and Google (though notably not Meta).
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.
I'm really heartened by this, especially by some of the names here whom I independently admired but who haven't been very vocal about the issue yet, like David Chalmers, Bill McKibben, and Audrey Tang. I also like certain aspects of this letter better than the FLI one. Because it focuses specifically on relevant public figures, rapid verification is easier and people aren't overwhelmed by sheer numbers. And because it centers on an extremely simple but extremely important statement, it's easier to get a broad coalition on board and for discourse about it to stay on topic. I liked the FLI one overall as well; I signed it myself and think it genuinely helped the discourse. But if nothing else, this seems like a valuable supplement.