The first of those has a weird resolution criterion of 30% year-on-year world GDP growth ("transformative" more likely means there are no humans left, after <1 year, to observe GDP imo; I give the 30+% growth-over-a-whole-year scenario little credence for this reason). For the second one, I think you need to include "AI Dystopia" as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), so 27%. (And again re LLMs, x-risk isn't from LLMs alone. "System 2" architecture and embodiment, two other essential ingredients of AGI, are well on track too.)
1/ Unaligned ASI existing at all is equivalent to "doom-causing levels of CO2 over a doom-causing length of time". The difference is that unaligned ASI constitutes the doom condition the moment it exists, whereas CO2 only becomes doom-causing after sustained accumulation. So we need an immediate pause on AGI development to prevent unaligned ASI, but we don't need an immediate pause on all industry to prevent doom-causing levels of CO2 over a doom-causing length of time.
2/ It's really not 99% of worlds; that is far too conservative. Metaculus puts a 25% chance on weak AGI happening within 1 year, and a 25% chance on strong AGI happening within 3 years.
1% (again, conservative[1]) is not a Pascal's Mugging. A 1%(+) risk of catastrophe (not extinction) is plausible for climate change, and a lot is being done there (arguably enough that we are on track to avert catastrophe, if action[2] keeps scaling).
"flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happens"
It's anything but flippant[3]. And x-risk isn't from LLMs alone: "System 2" architecture and embodiment, two other essential ingredients, are well on track too. I'm happy to bear any reputation costs in the event that we live through this. It's unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well only be no extinction because of our actions![4]
(Sorry I missed this before.) There is strong public support for a Pause already. Arguably all that's needed is galvanising a critical mass of the public into taking action.
1% is very conservative (and based on broad surveys of AI researchers, most of whom are building the very technology causing the risk, so they are obviously biased against it being high). The point I'm making is that even a 1% chance of death by collateral damage would be totally unacceptable coming from any other industry. Supporting a Pause should therefore be a no-brainer. (Or, to be consistent, we should be dismantling ~all regulation of ~all industry.)
Thanks. Yeah, I see a lot of disagreement votes. I was being too hyperbolic for the EA Forum. But I do put ~80% on it (which I guess translates to "pretty much"?), with the remaining ~20% being longer timelines, or dumb luck of one kind or another that we can't actually influence.