Today, the AI Extinction Statement was released by the Center for AI Safety: a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion around AI’s most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.

Comments



Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

I'm really heartened by this, especially some of the names on here I independently admired who haven't been super vocal about the issue yet, like David Chalmers, Bill McKibben, and Audrey Tang. I also like certain aspects of this letter better than the FLI one. Since it focuses specifically on relevant public figures, rapid verification is easier and people are less overwhelmed by sheer numbers. Since it focuses on an extremely simple but extremely important statement, it's easier to get a broad coalition on board and for discourse about it to stay on topic. I liked the FLI one overall as well (I signed it myself and think it genuinely helped the discourse), but if nothing else this seems like a valuable supplement.

Very cool!

I am surprised you did not mention climate, since this is the one major risk where we are doing a good job (i.e. if we are only paying as much attention to AI as to future pandemics and nuclear risk, that isn't very reassuring, as these seem to be major risks that are not well addressed and massively underresourced relative to their importance).

I, for one, think it is good that climate change was not mentioned. Not necessarily because there are no analogies and lessons to be drawn, but rather because it can more easily be misinterpreted. I think the kinds of actions and risks involved are much more similar to bio and nuclear, in that there are far fewer actors and, at least for now, the technology is much less integrated into day-to-day life. Moreover, in many scenarios the risk itself is of a more abrupt and binary nature (though of course not completely so), rather than a very long and gradual process. I'd be worried that comparing AI safety to climate change would be easily misinterpreted or dismissed on irrelevant grounds.

Linch

At least in the US, I'd worry that comparisons to climate change will get you attacked by ideologues from both of the main political sides (vitriol from the left because they'll see it as evidence that you don't care enough about climate change, vitriol from the right because they'll see it as evidence that AI risk is as fake/political as climate change).

IMO it was tactically correct not to mention climate. The point of the letter is to get wide support, and I think many people would not be willing to put AI x-risk on par with climate.

Yeah, I can see that, though it is a strange world where we treat nuclear and pandemics as second-order risks.

climate since this is the one major risk where we are doing a good job

Perhaps (at least in the United States) we haven't been doing a very good job on the communication front for climate change: there are many social circles where climate change denial has been normalized, and the issue has become very politically polarized, with many politicians turning climate change from an empirical scientific problem into a political "us vs. them" problem.

since this is the one major risk where we are doing a good job

What about ozone layer depletion?

It's not a current major risk, but it also turned out to be trivially easy to solve with minimal societal resources (a technological substitute was already available when it was regulated, and only a couple of hundred factories in select countries needed regulating), so it does not feel like it belongs in the class of major risks.

I disagree; I think major risks should be defined in terms of their potential impact sans intervention, rather than taking tractability into account (negatively).

Incidentally there was some earlier speculation of what counterfactually might happen if we had invented CFCs a century earlier, which you might find interesting.

I think we're talking past each other.

While I also disagree that we should ignore tractability for the purpose you indicate, the main point here is more that choosing the ozone layer as an analogy suggests the problem is trivially easy, which doesn't really help with solving it, and it already seems extremely likely that AI risk is much trickier than ozone layer depletion.

This is exciting!

Do you have any thoughts on how the community should be following up on this?

Made the front page of Hacker News. Here are the comments.

The most common pushback (including the first two comments, as of now) is from people who think this is an attempt at regulatory capture by the AI labs, though there's a good deal of pushback against that view and (I thought) some surprisingly high-quality discussion.

It seems relevant that most of the signatories are academics, where this criticism wouldn't make sense. @HaydnBelfield created a nice graphic here demonstrating this point.

I've also been making this point to people claiming financial interests. On the other hand, the tweet Haydn replied to makes another good point that does apply to professors: diverting attention from societal risks that they're contributing to but could help solve, toward x-risk where they can mostly sign such statements and then go "🤷🏼‍♂️", shields them from having to change anything in practice.

In the vein of "another good point" made in public reactions to the statement, an article I read in The Telegraph:

"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."

This seems obvious with hindsight as one factor at play, but I hadn't considered it before reading it here. This doesn't address Daniel / Haydn's point though, of course.

https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/

The most common pushback (and the first two comments, as of now) are from people who think this is an attempt at regulatory capture by the AI labs


This is also the case in the comments on this FT article (paywalled, I think), which I guess indicates how less techy people may be tending to see it.

Note that this was covered in the New York Times (paywalled) by Kevin Roose. I found it interesting to skim the comments. (Thanks for working on this, and sharing!) 

This is so awesome, thank you so much, I'm really glad this exists. The recent shift of experts publicly worrying about AI x-risks has been a significant update for me in terms of hoping humanity avoids losing control to AI.

(but notably not Meta)

Wondering how much I should update from Meta and other big tech firms not being represented on the list. Did you reach out to the signing individuals via your networks, and maybe the network didn't reach some orgs as much? Maybe there are company policies in place that prevent employees of some firms from signing the statement? And is there something specific about Meta that I can read up on (besides Yann LeCun's intransigence on Twitter :P)?

I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I do not understand how Meta works. How influential is he there? Does he set general policy around things like AI risk?

I feel there is an unhealthy dynamic where he acts as the leader of some kind of "anti-doomerism", and I'm under the impression that he and his Twitter crowd do not engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to be far behind. If he drives Meta's AI safety policy, I'm honestly worried about that. Meta is hardly an insignificant player.

Huge appreciation to the CAIS team for the work put in here.

Great work guys, thanks for organising this!

I'm mildly surprised that Elon Musk hasn't signed, given that he did sign the FLI 6-month pause open letter and has been vocal about being worried about AI x-risk for years.

Probably the simplest explanation for this is that the organizers of this statement haven't been able to reach him, or he just hasn't had time yet (although he should have heard about it by now?). 

I reckon there's a pretty good chance he didn't sign because he wasn't asked, given that he's a controversial figure.

Yeah, that could be the case, although I assume having Elon Musk sign could have generated 2x the publicity. Most news outlets seem to jump on everything he does.

Not sure what the tradeoff between attention and controversy is for such a statement. 

Most news outlets seem to jump on everything he does.

That's where my thoughts went: maybe he and/or CAIS thought that the statement would have a higher impact if reporting focused on other signatories. That Musk thinks AI is an x-risk seems to be fairly public knowledge anyway, so there's no big gain here.

Truly brilliant coalition-building by CAIS and collaborators. It is likely that the world has become a much safer place as a result. Congratulations!
