Greg_Colbourn

5476 karma · Joined

Interests: Slowing down AI

Bio

Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)

Comments (1035)

Thanks. Yeah, I see a lot of disagreement votes. I was being too hyperbolic for the EA Forum. But I do put ~80% on it (which I guess translates to "pretty much"?), with the remaining ~20% being longer timelines, or dumb luck of one kind or another that we can't actually influence.

The first of those has a weird resolution criterion of 30% year-on-year world GDP growth ("transformative" more likely means no humans left, after <1 year, to observe GDP, imo; I give the 30%+ growth-over-a-whole-year scenario little credence because of this). For the second one, I think you need to include "AI Dystopia" as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), so 27%. (And again re LLMs: x-risk isn't from LLMs alone. "System 2" architecture and embodiment, two other essential ingredients of AGI, are well on track too.)

1/ Unaligned ASI existing at all is equivalent to "doom-causing levels of CO₂ over a doom-causing length of time". We need an immediate pause on AGI development to prevent unaligned ASI. We don't need an immediate pause on all industry to prevent doom-causing levels of CO₂ over a doom-causing length of time.

2/ It's really not 99% of worlds; that is way too conservative. Metaculus puts a 25% chance on weak AGI happening within 1 year, and 25% on strong AGI happening within 3 years.

1% (again, conservative[1]) is not a Pascal's Mugging. 1%(+) catastrophic (not extinction) risk is plausible for climate change, and a lot is being done there (arguably, enough that we are on track to avert catastrophe if action[2] keeps scaling).

"flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happen"

It's anything but flippant[3]. And x-risk isn't from LLMs alone: "System 2" architecture and embodiment, two other essential ingredients, are well on track too. I'm happy to bear any reputation costs in the event that we live through this. It's unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well only be no extinction because of our actions![4]

  1. ^ I actually think it's more like 50%, and can argue this case if you think it's a crux.

  2. ^ Including removing CO₂ from the atmosphere and/or deflecting solar radiation.

  3. ^ Please read the PauseAI website.

  4. ^ Or maybe we will just luck out [footnote 10 on linked post].

(Sorry I missed this before.) There is strong public support for a Pause already. Arguably all that's needed is galvanising a critical mass of the public into taking action.

Donated $180k to PauseAI (US and Global). Calling on more people to donate significant amounts. Pretty much the only way we're going to survive the next 5-10 years is by such efforts being successful. [X post]

I think a bottleneck for this is finding experienced/plugged-in people who are willing to go all out on a Pause.

1% is very conservative (and based on broad surveys of AI researchers, most of whom are building the very technology causing the risk, so are obviously biased against it being high). The point I'm making is that even a 1% chance of death by collateral damage would be totally unacceptable coming from any other industry. Supporting a Pause should therefore be a no-brainer. (Or, to be consistent, we should be dismantling ~all regulation of ~all industry.)

The fact that you can't say more is part of the problem. There needs to be an open global discussion of an AGI Moratorium at the highest levels of policymaking, government, society and industry.

Yes, I was thinking of James Hansen's testimony to the US Senate in 1988 as analogous to some of the Senate hearings on AI last year.
