Today, the Center for AI Safety released the AI Extinction Statement, a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.
Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).
The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.
In the vein of "another good point" made in public reactions to the statement, here is an excerpt from an article I read in The Telegraph:
"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."
In hindsight this seems like an obvious factor at play, but I hadn't considered it before reading it here. It doesn't address Daniel / Haydn's point, though, of course.
https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/