Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
In the AGI case there need to be similar conditions - there have to be enough insecure computers on the planet for the AGI to occupy, and enough insecure financial assets or robotics for the AGI to manipulate the world
All of these seem true, with the exception that robots aren't needed - there are already plenty of humans (the majority?) who can be manipulated with GPT-4-level generated text.
or intelligence - which itself needs massive amounts of compute - needs to be so useful at high levels that the AGI can substitute for some inputs.
The AI can gain access to the massive amounts of compute via the insecure computers and insecure financial resources.
You need evidence for this.
There are already plenty of sound theoretical arguments and some evidence for things like specification gaming, goal misgeneralisation and deception in AI models. How do you propose we get sufficient empirical evidence for AI takeover short of an actual AI takeover or global catastrophe?
actual empirical evidence that a nuke if used would destroy the planet
How would you get this short of destroying the planet? The Trinity test went ahead based on theoretical calculations showing that it (igniting the atmosphere) couldn't happen - but arguably nowhere near enough of them, given the stakes!
But with AGI, half of the top scientists think there's a 10% chance it will destroy the world! I don't think the Trinity test would've gone ahead in similar circumstances.
-----------------------------------
Ben I'm sorry but your argument is not defensible. Your examples are a joke. Many of them shouldn't even be in the list as they provide zero support for the argument.
Downvoted your comment for its hostility and tone. This isn't X (Twitter).
Great post! Some highlights [my emphasis in bold]:
Funnily enough, even though animal advocates do radical stunts, you do not hear this fear expressed much in animal advocacy. If anything, in my experience, the existence of radical vegans can make it easier for “the reasonable ones” to gain access to institutions. Even just within EAA, Good Food Institute celebrates that meat-producer Tyson Foods invests in a clean meat startup at the same time the Humane League targets Tyson in social media campaigns. When the community was much smaller and the idea of AI risk more fringe, it may have been truer that what one member did would be held against the entire group. But today x-risk is becoming a larger and larger topic of conversation that more people have their own opinions on, and the risk of the idea of AI risk getting contaminated by what some people do in its name grows smaller.
This, with the additional point that AI Pause should be a much easier sell than animal advocacy, as it's each and every person's life on the line, including the lives of the people building AI. No standing up for marginalised groups, altruism or do-gooding of any kind is required to campaign for a Pause.
Much of the public is baffled by the debate about AI Safety, and out of that confusion, AI companies can position themselves as the experts and seize control of the conversation. AI Safety is playing catch-up, and alignment is a difficult topic to teach the masses. Pause is a simple and clear message that the public can understand and get behind that bypasses complex technical jargon and gets right to the heart of the debate – if AI is so risky to build, why are we building it?
Yes! I think a lot of AI Governance work - the complicated regulation, and the appeasing of powerful pro-AI-industry actors and those who think the risk-reward balance is in favour of reward - loses sight of this.
advocacy activities could be a big morale boost, if we’d let them. Do you remember the atmosphere of burnout and resignation after the “Death with Dignity” post? The feeling of defeat on technical alignment? Well, there’s a new intervention to explore! And it flexes different muscles! And it could even be a good time!
It's definitely been refreshing to me to just come out and say the sensible thing. Bite the bullet of "if it's so dangerous, let's just not build it". And this post itself is a morale boost :)
Nuclear chain reactions leading to massive explosions are dangerous. We don't have separate prohibition treaties for each specific model of nuke.
Likewise, impenetrable multi-trillion-parameter neural networks are dangerous. I think it does make sense for AI developers to have to prove that AI (as per the current foundation model neural network paradigm) can be safe in the abstract.
Isn't targeting policymakers still outside game? (If inside game means working within the big AI companies.)
If we slow down licensing too much, we almost guarantee that the first super-intelligence is not going to be developed by anyone going through the proper process.
The licensing would have to come with sufficient enforcement of compute limits to make this impossible (and any sensible licensing regime would involve this. How many huge, environment-altering infrastructure projects are built without proper licences? Sure, they may be rubber-stamped by corrupt officials, but that's another matter.)
Agree. I find Empty Individualism pretty depressing to think about though. And Open Individualism seems more natural, from (my) subjective experience.
On the related theme I see running through many of these suggestions (slowing down AI / a moratorium on AGI): Joep Meindertsma of PauseAI.
Re human genetic engineering, I don't think it's data on errors that is preventing it from happening; it's moral disgust at eugenics. We could similarly have a taboo against AGI if enough people are sufficiently scared of, and disgusted by, the idea of a digital alien species that doesn't share our values taking over and destroying all biological life.
I don't know specifics on who has applied to LTFF, but I think you should be funding orgs and people like these:
All of these are new (post-GPT-4): Centre for AI Policy, Artificial Intelligence Policy Institute, PauseAI, Stop AGI, Campaign for AI Safety, Holly Elmore, Safer AI, Stake Out AI.
Also, pre-existing: Centre for AI Safety, Future of Life Institute.
(Maybe there is a bottleneck on applications too.)
We are already in crunch time, doubly so post-GPT-4. What predictors are you using that aren't yet being triggered?
I also agree with David Manheim that the path matters, and that therefore incremental steps such as a US moratorium are likely net positive, especially considering that it is crunch time now. International treaties can be built from such a precedent, and the US is currently probably at least 1-2 years ahead of the rest of the world.