[Edit: I've updated this post on October 24 in response to some feedback]
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.
Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.
This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”; in practice it is given infrequently and often with onerous conditions.[1]
The AI pause folks could learn from their success. Instead of campaigning for a total halt to AI development, they could push for strict regulations that aim to ensure new AI systems won’t harm people (or birds and stuff).
This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.
I happen to think NIMBYs are wrong about the cost-benefit calculation of strict regulation. But AI safety people are right: advanced AI systems pose grave threats, and we don’t know how to mitigate them.
So maybe AI safety advocates should ask governments for an equivalent system for new AI models: require companies to prove to planners that their models are safe. Ask for:
- Independent safety audits
- Ethics reviews
- Economic analyses
- Public reports on risk analysis and mitigation measures
- Compensation mechanisms for people whose livelihoods are disrupted by automation
- And a bunch of other measures that plausibly limit the AI risks
In practice, these requirements might be hard to meet. But, considering the potential harms and meaningful chance something goes wrong, they should be. If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, they shouldn’t release it.
This is not about pausing AI.
I don’t know anybody who thinks AI systems have zero upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.
But they’d like companies to prove their systems are safe before they release them into the world, or even train them at all. To prove that those systems won’t, for example, hurt people, disrupt democratic institutions, or wrest control of important sociopolitical decisions from human hands.
Who can argue with that?
[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the 80K podcast. So I've been scooped - but at least I'm in good company!]
- ^
“Joshua Carson, head of policy at the consultancy Blackstock, said: ‘The notion of developers “sitting on planning permissions” has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.’” (Kollewe 2021)
It seems that successful opposition to previous technologies was indeed explicitly against those technologies, so I'm not sure the softening of the message you suggest is actually a good idea. @charlieh943's recent case study of GM crops highlights some of this (https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai - he suggests emphasising the injustice of the technology might be good); anti-SRM activists have been explicitly against SRM (https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project); anti-nuclear activists are explicitly against nuclear energy; and there are many more examples. Essentially, I'm just unconvinced that 'it's bad politics' is supported by the case studies most relevant to AI.
Nonetheless, I think there are useful points here about what concrete demands could look like, who useful allies could be, and what more diversified tactics could look like. Certainly, a call for a moratorium is not the only thing that could be useful in pushing towards a pause. You also make a good point that a 'pause' might not be the best message for people to rally behind, although I reject the opposition to it. In a similar way to @charlieh943, I think emphasising injustice may be one good message to rally around. A more general message that 'this technology is dangerous, and the companies making it are dangerous' may also be a useful rallying point, which I have argued for in the past: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different
Gideon - nice comment. I agree that it's quite tricky to identify specific phrases, messages, narratives, or policies that most people would rally around.
A big challenge is that in our hyper-partisan, polarized social media world, even apparently neutral-sounding concepts such as 'injustice' or 'freedom' get coded as left or right, respectively.
So, the more generic message 'this technology is dangerous', or 'this tech could hurt our kids', might have broader appeal. (Although, even a mention of kids might get coded as leaning conservative, given the family values thing.)