Less than a year ago, a community-wide conversation started about slowing down AI.
Some commented that outside communities won't act effectively to restrict AI, since they're not "aligned" with our goal of preventing extinction. That's where I stepped in:
Communities are already taking action – to restrict harmful scaling of AI.
I'm in touch with creatives, data workers, journalists, veterans, product safety experts, AI ethics researchers, and climate change researchers organising against harms.
Today, I drafted a plan to assist creatives. It's for a funder, so I omitted details.
Would love your thoughts, before the AI Pause Debate Week closes:
Plan
Rather than hope new laws will pass in 1-2 years, we can enforce established laws now. It is in AI Safety's interest to support creatives to enforce laws against data laundering.
To train “everything for everyone” models (otherwise called General-Purpose AI), companies scrape the web. AI companies have scraped so much data, including personal data, that they are breaking laws – laws that protect copyright holders against text and data mining, children against the sharing of CSAM, and citizens against the processing of personal data.
Books, art, and photos were scraped to train AI without consent, credit, or compensation. Creatives began lobbying and filed six class-action lawsuits in the US. A prediction market now puts a 24% chance on generative AI trained on crawled art being illegal in the US in 2027 on copyright grounds.
In the EU, no lawsuit has been filed. Yet the case is stronger in the EU.
In the EU, this commercial text and data mining is illegal. The 2019 Digital Single Market directive upholds a 2001 provision: “Such [TDM] exceptions and limitations may not be applied in a way which prejudices the legitimate interests of the rightholder or which conflicts with the normal exploitation of his work or other subject-matter.”
[project details]
This proposal is about restricting data laundering. If legal action here is indeed tractable, it is worth considering funding other legal actions too.
Long-term vision
We want this project to become a template for future legal actions.
Supporting communities’ legal actions to prevent harms can robustly restrict the scaled integration of AI in areas of economic production.
Besides restricting data, legal actions can restrict AI being scaled through the harmful exploitation of workers, harmful uses, and pollutive compute:
- Employment and whistleblowing laws can protect underpaid or misled workers.
- Tort, false advertising, and product safety laws can protect against misuses.
- Environmental regulations can protect against pollutive compute.
AI governance folk have focussed most on establishing regulations and norms to evaluate and prevent risks of catastrophe or extinction.
Risk-based regulation has many gaps, as described in this law paper:
❝ risk regulation typically assumes a technology will be adopted despite its harms… Even immense individual harms may get dismissed through the lens of risk analysis, in the face of significant collective benefits.
❝ The costs of organizing to participate in the politics of risk are often high… It also removes the feedback loop of tort liability: without civil recourse, risk regulation risks being static. Attempts to make risk regulation “adaptive” or iterative in turn risk capture by regulated entities.
❝ risk regulation as most scholars conceive of it entails mitigating harms while avoiding unnecessarily stringent laws, while the precautionary principle emphasizes avoiding insufficiently stringent laws… [M]any of the most robust examples of U.S. risk regulation are precautionary in nature: the Food and Drug Administration’s regulation of medicine...and the Nuclear Regulatory Commission’s certification scheme for nuclear reactors. Both of these regulatory schemes start from the default of banning a technology from general use until it has been demonstrated to be safe, or safe enough.
Evaluative risk-based regulation tends to leave AI companies overwhelmingly involved in conceiving of and evaluating the risks. Some cases:
- OpenAI lobbying against categorizing GPT as “high risk”.
- Anthropic's Responsible Scaling Policy – in effect allowing staff to keep scaling, as long as they or the board evaluate the risk that their “AI model directly causes large scale devastation” as low enough.
- Subtle regulatory capture of the UK's AI Safety initiatives.
Efforts to pass risk-based laws will be co-opted by Big Tech lobbyists aiming to dilute restrictions on AI commerce. Not so with lawsuits – the most AI companies can do is try not to lose the case.
Lawsuits put pressure on Big Tech in a “business as usual” way. Of course, companies should not be allowed to break laws to scale AI. Of course, AI companies should be held accountable. Lawsuits focus on the question of whether specific damages were caused, rather than on broad ideological disagreements, which makes them less politicky.
Contrast the climate debates in the US Congress with how the Sierra Club sued coal plant after coal plant, on whatever violations it could find, preventing the scale-up of coal plants under the Trump Administration.
A legal approach reduces conflicts between communities concerned about AI.
The EU Commission's announcement that “mitigating the risk of extinction should be a global priority” drew bifurcated reactions – excitement from the AI Safety side, critique from the AI Ethics side. Putting aside whether a vague commitment to mitigate extinction risks can be enforced, the polarisation around it curbs a collective response.
Lately, there have been heated discussions between AI Ethics and AI Safety. Concerns need to be recognised (e.g. should AI Safety folk have given labs funds, talent, and ideological support? Should AI Ethics folk worry about more than current stochastic parrots?).
But it distracts from what needs to be done: restrict Big Tech from scaling unsafe AI.
AI Ethics researchers have been supporting creatives, but lack funds.
AI Safety has watched on, but could step in to alleviate the bottleneck.
Empowering creatives is a first step to de-escalating the conflict.
Funding lawsuits helps rectify a growing power imbalance: AI companies are held liable for the damage they cause to individual citizens, rather than being free to extract profit and reinvest in artificial infrastructure.
Communities are noticing how Big Tech consolidates power with AI.
Communities are noticing the growing harms and risks of corporate-funded automated technology growth.
People feel helpless. Community leaders are overwhelmed by the immensity of the situation, recognising that their efforts alone will not be enough.
Like some Greek tragedy, Big Tech divides and conquers democracy.
Do we watch our potential allies wither one by one – first creatives, then gig workers, then Black and conservative communities, then environmentalists?
Can we support them to restrict AI on the frontiers? Can we converge on a shared understanding of what situations we all want to resolve?
Thank you for the thoughts!
Yes for employed tech workers. But OpenAI and Meta also rely on gig work and outsourcing to a much larger number of data workers, who are underpaid.
That's fair in terms of AI companies being able to switch to employing those researchers instead.
Particularly at OpenAI though, it seems half or more of the ML researchers are now concerned about AI x-risk, and were somewhat enticed by leaders and HR to work on a beneficial AGI vision (one that, by my controllability research, cannot pan out). Google and Meta have promoted their share of idealistic visions that similarly seem misaligned with what those corporations are working toward.
A question is how much whistleblowing – an ML researcher releasing internal documents – could turn public opinion against an AI company and/or tighten the regulatory response.
Makes sense if you put credence on that scenario.
IMO it does not make sense, given that the model's functioning must integrate with and navigate the greater physical complexity of model components interacting with larger outside contexts.
Agreed here. Given the energy-intensiveness of computing ML models (vs. estimates of the "flops" of human brains), if we allow corporations to gradually run more autonomously, it makes sense for those corporations to scale up nuclear power.
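For intuition, here is a rough back-of-envelope sketch of that energy gap (my own illustration, not from the original discussion; the brain FLOP/s figure is a contested estimate and the GPU figures are approximate):

```python
# Rough back-of-envelope comparison of energy efficiency (FLOP per joule).
# All figures are approximate; the brain FLOP/s number in particular is a
# contested estimate. This only illustrates the rough scale of the gap.

brain_watts = 20          # typical estimate of human brain power draw (W)
brain_flops = 1e15        # one common (disputed) estimate of brain FLOP/s

gpu_watts = 400           # NVIDIA A100 SXM TDP (W)
gpu_flops = 312e12        # A100 dense FP16/BF16 throughput (FLOP/s)

brain_flop_per_joule = brain_flops / brain_watts   # ~5e13 FLOP/J
gpu_flop_per_joule = gpu_flops / gpu_watts         # ~8e11 FLOP/J

print(f"brain: {brain_flop_per_joule:.1e} FLOP/J")
print(f"GPU:   {gpu_flop_per_joule:.1e} FLOP/J")
print(f"brain is ~{brain_flop_per_joule / gpu_flop_per_joule:.0f}x more efficient")
```

Under these assumptions the brain comes out tens of times more energy-efficient per operation, which is why scaled, always-on ML deployment implies a large additional electricity demand.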
Besides the direct CO2 emissions of computation, other aspects would concern environmentalists.
I used compute as a shorthand, but would include all chemical pollution and local environmental destruction across the operation and production lifecycles of the hardware infrastructure.
Essentially, the artificial infrastructure is itself toxic. At the current scale, we are not noticing the toxicity much given that it is contained within facilities and/or diffuse in its flow-through effects.
I wrote this for a lay audience:
But crypto-currencies go bust, since they produce little value of their own.
AI models, on the other hand, are used to automate economic production.
AI company leaders are anticipating, reasonably, that they are going to get regulated.
I would not compare against the reference of how much model scaling is unrestricted now, but against the counterfactual of how much model scaling would otherwise be restricted in the future.
If AI companies manage to shift policy focus toward legit-seeming risk regulations that fail at restricting continued reckless scaling of training and deployment, I would count that as a loss.
Strong point. Agreed.
Here is another example I mentioned in the project details: