
Less than a year ago, a community-wide conversation started about slowing down AI.

Some commented that outside communities won't act effectively to restrict AI, since they're not "aligned" with our goal of preventing extinction.  That's where I stepped in:


Communities are already taking action – to restrict harmful scaling of AI. 
I'm in touch with creatives, data workers, journalists, veterans, product safety experts, AI ethics researchers, and climate change researchers organising against harms.


Today, I drafted a plan to assist creatives.  It's for a funder, so I omitted details.  
Would love your thoughts, before the AI Pause Debate Week closes:

Plan

Rather than hope new laws will pass in 1-2 years, we can enforce established laws now. It is in AI Safety's interest to support creatives to enforce laws against data laundering.

To train “everything for everyone” models (otherwise called General-Purpose AI), companies scrape the web. AI companies have scraped so much personal data that they are breaking laws. These laws protect copyright holders against text and data mining, children against the sharing of CSAM, and citizens against the processing of personal data.

Books, art and photos got scraped to train AI without consent, credit or compensation. Creatives began lobbying and filed six class-action lawsuits in the US. A prediction market now puts a 24% chance on generative AI trained on crawled art being illegal in the US in 2027 because of copyright.

In the EU, no lawsuit has been filed. Yet the case is stronger in the EU.

In the EU, this commercial text and data mining is illegal. The Digital Single Market 2019 directive upholds a 2001 provision: “Such [TDM] exceptions and limitations may not be applied in a way which prejudices the legitimate interests of the rightholder or which conflicts with the normal exploitation of his work or other subject-matter.”

 [project details]

This proposal is about restricting data laundering. If legal action here is indeed tractable, it is worth considering funding other legal actions too.
 

Long-term vision

We want this project to become a template for future legal actions. 

Supporting communities’ legal actions to prevent harms can robustly restrict the scaled integration of AI in areas of economic production.

Besides restricting data, legal actions can restrict AI being scaled on the harmful exploitation of workers, uses, and compute:
- Employment and whistleblowing laws can protect underpaid or misled workers.
- Tort, false advertising, and product safety laws can protect against misuses.
- Environmental regulations can protect against pollutive compute.

AI governance folk have focused most on establishing regulations and norms to evaluate and prevent risks of catastrophe or extinction.

Risk-based regulation has many gaps, as described in this law paper:
❝ risk regulation typically assumes a technology will be adopted despite its harms… Even immense individual harms may get dismissed through the lens of risk analysis, in the face of significant collective benefits. 

❝ The costs of organizing to participate in the politics of risk are often high… It also removes the feedback loop of tort liability: without civil recourse, risk regulation risks being static. Attempts to make risk regulation “adaptive” or iterative in turn risk capture by regulated entities.

❝ risk regulation as most scholars conceive of it entails mitigating harms while avoiding unnecessarily stringent laws, while the precautionary principle emphasizes avoiding insufficiently stringent laws… [M]any of the most robust examples of U.S. risk regulation are precautionary in nature: the Food and Drug Administration’s regulation of medicine...and the Nuclear Regulatory Commission’s certification scheme for nuclear reactors. Both of these regulatory schemes start from the default of banning a technology from general use until it has been demonstrated to be safe, or safe enough. 


Evaluative risk-based regulation tends to lead to AI companies being overwhelmingly involved in conceiving of and evaluating the risks. Some cases:
- OpenAI lobbying against categorizing GPT as “high risk”.
- Anthropic's Responsible Scaling Policy – in effect allowing staff to scale on, as long as they/the board evaluate the risk that their “AI model directly causes large scale devastation” as low enough.
- Subtle regulatory capture of the UK's AI Safety initiatives.

Efforts to pass risk-based laws will be co-opted by Big Tech lobbyists aiming to dilute restrictions on AI commerce. The same is not so with lawsuits – the most AI companies can do is try not to lose the case.

Lawsuits put pressure on Big Tech, in a “business as usual” way. Of course, companies should not be allowed to break laws to scale AI. Of course, AI companies should be held accountable. Lawsuits focus on the question of whether specific damages were caused, rather than on broad ideological disagreements, which makes them less politicky.

Contrast the climate debates in US Congress with how the Sierra Club sued coal plant after coal plant, on whatever violations they could find, preventing the scale-up of coal plants under the Trump Administration.

A legal approach reduces conflicts between communities concerned about AI. 
The EU Commission announcement that “mitigating the risk of extinction should be a global priority” drew bifurcated reactions – excitement from the AI Safety side, critique from the AI Ethics side. Putting aside whether a vague commitment to mitigate extinction risks can be enforced, the polarization around it curbs a collective response.

Lately, there have been heated discussions between AI Ethics and AI Safety. Concerns need to be recognised (eg. should AI Safety folk have given labs funds, talent, and ideological support? should AI Ethics folk worry about more than current stochastic parrots?)
But it distracts from what needs to be done:  restrict Big Tech from scaling unsafe AI. 

AI Ethics researchers have been supporting creatives, but lack funds. 
AI Safety has watched on, but could step in to alleviate the bottleneck.
Empowering creatives is a first step to de-escalating the conflict.

Funding lawsuits rectifies a growing power imbalance. AI companies are held liable for causing damage to individual citizens, rather than just being free to extract profit and reinvest in artificial infrastructure.

Communities are noticing how Big Tech consolidates power with AI. 
Communities are noticing the growing harms and risks of corporate-funded automated technology growth. 

People feel helpless. Community leaders are overwhelmed by the immensity of the situation, recognising that their efforts alone will not be enough.

Like some Greek tragedy, Big Tech divides and conquers democracy.
Do we watch our potential allies wither one by one – first creatives, then gig workers, then Black and conservative communities, then environmentalists?

Can we support them to restrict AI on the frontiers?  Can we converge on a shared understanding of what situations we all want to resolve? 

Comments (30)

Besides restricting data, legal actions can restrict AI being scaled on the harmful exploitation of workers, uses, and compute:
- Employment and whistleblowing laws can protect underpaid or misled workers.
- Tort, false advertising, and product safety laws can protect against misuses.
- Environmental regulations can protect against pollutive compute.

These seem quite poorly targeted to me:

  • OpenAI or Meta generally pays their workers pretty well, and there are enough ML researchers unconcerned with x-risk that there is little need to deceive them about what they're working on.
  • Retrospective laws like these seem like a poor fit for dealing with AI that might abruptly become more dangerous, especially if it did so in dev rather than prod.
  • A geothermal or nuclear powered AI is no less dangerous than a coal powered one, and a coal-powered kWh used to train a model is no more polluting than that same kWh used to power a home.

Efforts to pass risk-based laws will be co-opted by Big Tech lobbyists aiming to dilute restrictions on AI commerce. The same is not so with lawsuits – the most AI companies can do is try not to lose the case.

This seems incorrect to me, for two reasons:

  • There are currently basically zero restrictions on model scaling, so it's hard to see how regulation could make this worse.
  • Lawsuits can set precedent, so losing them is not zero cost. For example, Lina Khan's frivolous lawsuits against tech companies have weakened the FTC's ability to pursue aggressive antitrust policy, because she keeps losing and setting precedents that restrict the FTC in the future.

Thank you for the thoughts! 
 

OpenAI or Meta generally pays their workers pretty well

Yes, for employed tech workers. But OpenAI and Meta also rely on gigs/outsourcing to a much larger number of data workers, who are underpaid.
 
 

there are enough ML researchers unconcerned with x-risk that there is little need to deceive them

That's fair in terms of AI companies being able to switch to employing those researchers instead. 

Particularly at OpenAI though, it seems half or more of the ML researchers are now concerned about AI x-risk, and were kinda enticed by leaders and HR to work on a beneficial AGI vision (one that, by my controllability research, cannot pan out). Google and Meta have promoted their share of idealistic visions that similarly seem misaligned with what those corporations are working toward. 

A question is how much an ML researcher whistleblowing by releasing internal documents could lead to negative public opinion on an AI company and/or lead to a tightened regulatory response.
 
 

for dealing with AI that might abruptly become more dangerous, especially if it did so in dev rather than prod.

Makes sense if you put credence on that scenario. 
IMO it does not make sense, given that the model's functioning must integrate with and navigate the greater physical complexity of model components interacting with larger outside contexts.
 
 

A geothermal or nuclear powered AI is no less dangerous than a coal powered one,

Agreed here, and given the energy-intensiveness of computing ML models (vs. estimates of the “flops” in human brains), if we allow corporations to gradually run more autonomously, it makes sense for those corporations to scale up nuclear power. 

Besides the direct CO2 emissions of computation, other aspects would concern environmentalists. 
I used compute as a shorthand, but would include all chemical pollution and local environmental destruction across the operation and production lifecycles of the hardware infrastructure.

Essentially, the artificial infrastructure is itself toxic. At the current scale, we are not noticing the toxicity much given that it is contained within facilities and/or diffuse in its flow-through effects.

 I wrote this for a lay audience:

  • we miss the most diffuse harm – to our environment. Training a model can gobble up more electricity than 100 US homes use in one year. A data center slurps water too – millions of liters a day, as locals undergo drought. Besides carbon emissions, hundreds of cancerous chemicals are released during mining, production and recycling.
  • Environmentalists see now how a crypto-boom slurped ~0.5% of US energy. 
    But crypto-currencies go bust, since they produce little value of their own. 
    AI models, on the other hand, are used to automate economic production. 
  • Big Tech extracts profit using AI-automated applications, to reinvest in more toxic hardware factories. To install more hardware in data centers slurping more water and energy. To compute more AI code. To extract more profit.
  • This is a vicious cycle. After two centuries of companies scaling resource-intensive tech, we are near societal collapse. Companies now scale AI tech to automate companies scaling tech. AI is the mother of all climate catastrophes.

     

There are currently basically zero restrictions on model scaling, so it's hard to see how regulation could make this worse.

AI company leaders are anticipating, reasonably, that they are going to get regulated.

I would not compare against the reference of how much model scaling is unrestricted now, but against the counterfactual of how much model scaling would otherwise be restricted in the future.
If AI companies manage to shift policy focus toward legit-seeming risk regulations that fail at restricting continued reckless scaling of training and deployment, I would count that as a loss.
 

 

Lawsuits can set precedent, so losing them is not zero cost. For example, Lina Khan's frivolous lawsuits against tech companies have weakened the FTC's ability to pursue aggressive antitrust policy, because she keeps losing and setting precedents that restrict the FTC in the future.

Strong point. Agreed. 

Here is another example I mentioned in the project details:

  • We want to prepare the EU case rigorously, rather than file fast, as happened before in the US. The Stable Diffusion case, which Alex Champandard now advises on, previously made technical mistakes (eg. calling outputs “mosaics”). In US courts, dismissal can set a precedent.
  • While under EU civil law courts do not set legal precedents, in practice a judge will still look back at decisions in previous cases.

 I'm seeing a few comments so far with the sentiment that "lawsuits don't have the ultimate aim of reducing x-risk, so we shouldn't pursue them". I want to push back on this. 

Let's say you're an environmental group trying to stop a new coal power plant from being built. You notice that the proposed site has not gone through proper planning permissions, and the locals think the plant will ruin their nice views. They are incredibly angry about this, and are doing protests and lawsuits on the matter. Do you support them? 

Under the logic above, the answer would be no. Your ultimate aim has nothing to do with planning permissions or nice views; it's stopping carbon emissions. If they moved it to a different location, the locals' objections would be satisfied, but yours wouldn't be. 

But you'd still be insane not to support the locals here. The lawsuits and protests damage the coal project, in terms of PR, money, and delays. New sites are hard to find, and it's quite possible that if the locals win, the project will end up cancelled. Most of the work is being done by people who wouldn't have otherwise helped you in your cause (and might be persuaded to join your cause in solidarity!). And while protecting nice views may not be your number one priority, it's still a good thing to do. 

I hope you see that in this analogy, the AI x-risk person is the environmental group, and the AI ethics person is the locals (or vice versa, depending on which view you believe). Sure, protecting creatives from plagiarism might not be your highest priority, but forcing creative compliance might also have the side effect of slowing down AI development for all companies at once, which you may think helps with x-risk. And it's likely to be easier to implement than a full AI pause, thanks to the greater base of support. 

Very well put. Love the detailed analogy.

I recognise that what is going to appeal to others here concerned about extinction risk are the instrumental reasons. And those instrumental reasons are sufficient to offer some money to cash-strapped communities organising to restrict AI.

(From my perspective, paths to extinction involve a continuation of current harmful AI exploitation, but that’s another story.)

I’m against these tactics. We can and should be putting pressure on the labs to be more safety conscious, but we don’t want to completely burn our relationships with them.

Maintaining those relationships allows us to combine both inside game and outside game, which is important, as we need both pressure to take action and the ability to direct it in a productive way.

It’s okay to protest them and push for the government to impose a moratorium, but nuisance lawsuits are a great way to get sympathetic insiders off-side and to convince them we aren’t acting in good faith.

If these lawsuits would save us, then it could be worth the downsides, but my modal view is that they end up only being a minor nuisance.

AI Safety’s old approach of building relationships with AI labs has enabled labs to further scale the training and commercialisation of AI models.

as we need both pressure to take action and the ability to direct it in a productive way.

As far as I can see, little actual pressure has been put on these labs by folks in this community. I don’t think saying stuff or gently protesting counts as pressure.

That’s not what animal welfare organisations do when applying pressure. They seriously point out the deficiencies of the companies involved. They draft binding commitments for companies to wean off using harmful systems. And they ratchet up public pressure alongside more stringent demands in the internal conversations, so that it makes sense for company leaders to make more changes.

My sense is that AI Safety people often are not comfortable with confronting companies, and/or hold somewhat naive notions of what it takes to push for reforms on the margin.

If AI Safety funders could not even stomach the notion of supporting another community (creatives) to ensure existing laws are not broken, then they cannot rely on themselves acting to ensure future laws are not broken by the AI companies.

A common reaction in this community to any proposed campaign that pushes for actually restricting the companies is that the leaders might no longer see us as being nice to them and no longer want to work with us. Which is implying that we perceive the company leaders as having the power in this relationship, and we don’t want to cross them lest they drop us.

Companies whose start up we supported have been actively eroding the chance of safe future AI for years now. And we’re going to let them continue, because we want to “maintain” this relationship with them.

From a negotiation stance, this will not work out. We are not building the leverage for company leaders to actually consider stopping scaling. They will pay lip service to “extinction risks” and then bulldoze over our wishes that they slow down.

The default is that the AI companies are going to scale on, and successfully reach the deployment of very harmful and long-term dangerous systems integrated into our economy.

What do you want to do? Further follow the old approach of trying to make AI labs more safety conscious (with some pause advocacy thrown in)?

I agree that the old approach didn’t work. It was too focused on the inside game. We need to combine the two.

(Update: Just wanted to clarify that it made more sense to focus just on inside game when EA was smaller and it appeared as though it would be extremely challenging to convince the public that they should worry/pay attention. Circumstances have changed since then to increase the importance of outside game).

Great, we agree there then. 

Questions this raises:

  1. How much can supporting other communities to restrict data laundering, worker exploitation, unsafe uses, pollutive compute, etc, slow or restrict AI development? (eg. restrict data laundering by supporting lawsuits against unlawful TDM in the EU and state attorney actions against copyright violations in the US)
  2. How much should we work to support other communities to restrict AI development in different areas ("outside game") vs. working with AI companies to slow down development or "differentially" develop ("inside game")? 
  3. How much are we supporting those other communities now?

Hi Remmelt, thanks for sharing these thoughts! I actually generally agree that mitigating and avoiding harms from AI should involve broad democratic participation rather than narrow technical focus - it reminded me a lot of Gideon's post "We are fighting a shared battle". So view the questions below as more nitpicks, as I mostly agreed with the 'vibe' of your post.

AI companies have scraped so much personal data that they are breaking laws.

Quick question for my understanding, do the major labs actually do their own scraping, or do other companies do the scraping which the major AI labs pay for? I'm thinking of the use of Common Crawl to train LLMs here for instance. It potentially might affect the legal angle, though that's not my area of expertise.

- Subtle regulatory capture of the UK's AI Safety initiatives.

Again, for my clarification, what do you think about this article? I have my own thoughts but would like to hear your take.

AI Ethics researchers have been supporting creatives, but lack funds. 
AI Safety has watched on, but could step in to alleviate the bottleneck.
Empowering creatives is a first step to de-escalating the conflict.

Thanks for the article; after a quick skim I'm definitely going to sit down and read it properly. My honest question is: do you think this is actually a step to de-escalate the Ethics/Safety feud? Sadly, my own thoughts have become a lot more pessimistic over the last ~year or so, and I think asking the 'Safety' side to make a unilateral de-escalatory step is unlikely to actually lead to much progress.

(if the above questions aren't pertinent to your post here, happy to pick them up in DMs or some other medium)

Hey, thank you too for the nitty-gritty thoughts!

do the major labs actually do their own scraping, or do other companies do the scraping which the major AI labs pay for?

For major labs I know of (OpenAI, DeepMind, Anthropic) in terms of those that have released the most "generally functional" models, they mostly seem to do their own scraping at this point.

In the early days, OpenAI made use of the BookCorpus and CommonCrawl datasets, but if those are still included, they would be a small portion of total datasets. Maybe OpenAI used an earlier version of the books3 dataset for training GPT-3? 

Of course, they are still delegating work by finding websites to scrape from (pirated book websites are a thing). But I think they relied comparatively most on academic and open-source datasets some years ago.

And then there are "less major" AI labs that have relied on "open-source" datasets, like Meta using Books3 to train the previous LLaMA model (for the current model, Meta did not disclose datasets) and StabilityAI using LAION (while funding and offering compute to the LAION group, which under German law means that the LAION dataset can no longer be treated as "research-only").



 

Again, for my clarification, what do you think about this article?

I think the background research is excellent, and the facts mentioned mostly seem correct to me (except this quote: "Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs."). I am biased though since I was interviewed for that article.

What are your thoughts?  Curious.


 

My honest question is: do you think this is actually a step to de-escalate the Ethics/Safety feud?

People in AI Ethics are very sharp at noticing power imbalances.  
One of the frustrations voiced by AI Ethics researchers is how much money is sloshing through/around the AI Safety community, yet AI Safety folks don't seem to give a shit about preventing the harms that are already happening now.

I expect no-strings-attached funding for creatives will help de-escalate the conflicts somewhat.
The two communities are never going to like each other. But an AI Safety funder can take the intense edge off a bit, through some no-strings-attached funding support. And that will help the two communities not totally hamper each other's efforts.


 

I think asking the 'Safety' side to make a unilateral de-escalatory step is unlikely to actually lead to much progress.

I mean, is it a sacrifice for the AI Safety community to help creatives restrict data laundering?
If it is not a sacrifice, but actually also helps AI Safety's cause, why not do it?

At the very least, it's an attempt at reconciling differences in a constructive way (without demanding that AI Ethics folks engage in "reasonable" conversations, which I've seen people do a few times on Twitter). AI Ethics researchers can choose how to respond to that, but at least we have done the obvious things to make amends from our side.

Making the first move, and being willing to do things that help the cause of both communities, would reflect well on the AI Safety community (including from the outside public's perspective, who are forming increasingly negative opinions on where longtermist initiatives have gotten us). 

We need to swallow any pride, and do what we can to make conversations a bit more constructive. We are facing increasing harms here, on a path to mass extinction. 

my own thoughts have become a lot more pessimistic over the last ~year or so

Just read through your thoughts, and responded.

I appreciate your honesty here, and the way you stay willing to be open to new opinions, even when things are looking this pessimistic.

I'm curious what you think of this, and if it impedes what you're describing being effective or not: https://arxiv.org/abs/2309.05463 

Two thoughts:

  • I lack the ML expertise to judge this paper, but my sense is it means you can create a pretty good working chatbot on a bunch of licensed textbooks.

  • Having said that, I don’t see how a neural network could generate the variety of seemingly fitting responses that ChatGPT does for various contexts (eg. news, social situations) without neural weights being adjusted to represent patterns found in those contexts.

What are your thoughts?

A failure mode of the lawsuit / anti-job-loss campaigning route is that a potential solution that most might be happy with - i.e. paying creatives fairly via royalties, or giving out a UBI + share of profits equal to or greater than people's lost income - is not a solution to x-risk. The only solution that covers both is an indefinite global pause on frontier AI development. This is what we need to be coordinating on.

That said, I think the current lawsuits are great, and hope that they manage to slow the big AI companies down. But on the margin -- especially considering short timelines and the length of time legal proceedings take -- I think resources need to be directed toward a global indefinite moratorium on AGI development.

A UBI for creatives is absolutely unrealistic, in my opinion, given how these companies compete for (and lobby and escape taxes for) profit. See also the trend of rising income inequality between workers and the owners/managers of the businesses they work for.

Even then, creatives don’t just want to be paid money. They want to be asked for consent before a company can train a model on their works.

In practice, what creatives I talk with want — no large generative models regurgitating their works — is locally aligned with what AI Safety people want.

~ ~ ~

The concern I have with the global pause framing is that it requires somehow first coordinating globally with all relevant AI companies, government offices, etc. to start restricting AI model development.

That turns this into an almost intractable global coordination problem.

It’s adding in a lot of dependencies (versus supporting communities to restrict the exploitation of data, workers, uses and compute of AI now).

Instead of hacking away at the problem of restricting AI development in increments, you are putting all your chips on all these (self-interested) policy and corporate folks getting together, eg. in a Paris Agreement-style conference, and actually agreeing on and enforcing strict restrictions.

It is surprising to me that you are concentrating your and others’ efforts on such a lengthy global governance process, given that you predict short timelines to “AGI”. It feels like a Hail Mary to me.

I would suggest focussing on supporting parallelised actions now across the board. Including but not limited to filing lawsuits.

I’m saying you can make meaningful progress by supporting legal actions now. Climate groups have filed hundreds of lawsuits over the years and made meaningful progress with them. I’m sure you are also familiar with Rupert Read’s work on supporting local groups to chip away at climate problems through the Climate Majority Project.

The UBI would probably be government mandated, down to political action from people losing their jobs (e.g. via a politician like Andrew Yang gaining power).

The concern I have with the global pause framing is that it requires somehow first coordinating globally with all relevant AI companies, government offices, etc. to start restricting AI model development.

That turns this into an almost intractable global coordination problem.

I don't actually think it's less tractable than legal cases successfully taking down OpenAI (and it could well be quicker). It's just on a different scale. We don't need to coordinate the companies. That route has already failed imo (but there are plenty of people in AIS and EA still trying). We just need to get governments to enact laws regulating general purpose AI (especially: enforced compute and data limits to training). Quickly. Then get them to agree international non-proliferation treaties.

I’m saying you can make meaningful progress by supporting legal actions now. Climate groups have filed hundreds of lawsuits over the years and made meaningful progress with them.

The problem with legal action is how slow it is. We have to do this 10x quicker than with climate change. Is there any prospect of any of the legal cases concluding in the next 6-12 months[1]? If so, I'd chip in.

  1. ^

    Presumably there will be all manner of appeals and counter-appeals, to the point where OpenAI are already quite confident that they can kick the can down the road to beyond the singularity before they are actually forced to take any action.

The UBI would probably be government mandated, down to political action from people losing their jobs (e.g. via a politician like Andrew Yang gaining power).

This really is not realistic. It feels like we are discussing moves on a board game here, rather than what has happened so far economically and politically.

- Even if you have a lot of people worried about losing their jobs, they would also be much more entangled in and divided by the very technology put out by Big Tech companies. That makes it increasingly hard for them to coordinate around a shared narrative like 'we all want a UBI'.

- Politicians too would be lobbied by and sponsored by Big Tech groups. And given how the US voting system tends to converge on two party blocs, independents like Andrew Yang would not stand a chance.

Both are already happening.

Furthermore, I think much of the use of AI would be for the extraction of capital from the rest of society, and quite a lot of the collective culture and natural ecosystems that our current economic transactions depend on would get gradually destroyed in the process.
Some of that would be offset economically by AI automation, but overall the living conditions that people experience would be lower (and even a UBI would not be able to offset that – you can't buy yourself out of a degraded culture and a toxified ecosystem).

So I doubt that, even if politicians increasingly captured by Big Tech magically had the political will to just give everyone a universal basic income, the AI corporations would have enough free capital to be taxed for it.


 

We don't need to coordinate the companies. That route has already failed imo

Agreed.


 

 We just need to get governments to enact laws regulating general purpose AI (especially: enforced compute and data limits to training). Quickly. Then get them to agree international non-proliferation treaties.

"Quickly" does not match my understanding of how global governance can work. Even if you have a small percentage of the population somewhat worried about abstract AI risks, it's still going to take many years.

Look at how many years it took to get to a nuclear nonproliferation treaty (from 1946 to 1968). And there, citizen groups could actually see photos and videos (and read/listen to corroborated stories) of giant bombs going off.
 
 

The problem with legal action is how slow it is. We have to do this 10x quicker than with climate change.

Yeah, and parallelised lawsuits would each involve a tiny number of stakeholders and established processes to come to decisions. 

Again, why would you not expect efforts at global governance to take much longer?
I get how a US president could in theory write a decree but, in practice, the amount of consensus between stakeholders, and of diligent offsetting against corporate lobbying, you have to reach is staggering.

 

especially: enforced compute and data limits to training

Agreed. 
And limits on output bandwidth/intensity. 
And bans on non-human-in-the-loop systems. 

I think that you and I are actually pretty much in agreement on what we are working toward. I think we disagree on the means to getting there, and something around empowering people to make choices from within their contexts.

Remmelt, you don't actually mention Pause! Which is a solution to all the problems mentioned.

Even if you think the priority is to shut the current models down, an indefinite pause on further development is a great first step toward that.

Hmm, could you clarify how this would be coordinated in practice?

Currently what I see is a bunch of signatures, and a bunch of people writing pretty good tweets, but no clear mechanism between that and any AI company leaders actually deciding to pause the scaling of model training and deployment.

The Center for AI and Digital Policy did mention the FLI pause letter in their complaint against OpenAI to the FTC, so maybe that kinda advocacy helps move the needle.

I’m all for an indefinite pause, ie. a moratorium on all the precursors needed for further AI development, of course!

Everyone joining together in mass protests and letter writing / phone campaigns calling on their representatives to push for a pause. We probably just need ~1-10% of the 63% who don't want AGI/ASI to take such action.

We don't need to get the AI company leaders to agree, we need to get governments to force them.

Good step to aim for. I like it.

How about from there? 
How would US, UK and other national politicians who got somewhat convinced (that this will help their political career) use existing regulations to monitor and enforce restrictions on AI development?

David Manheim talks a bit about using existing regulations here. Stuart Russell also mentioned the idea of recalls for models that cause harm at his senate hearing.

Thanks. David Manheim's descriptions are broad and move around, so I find them hard to pin down in terms of the governance mechanisms he is referring to. 

My impression is that David's descriptions of the use of existing laws were based on harms like consumer protection, misuse, copyright violation, and illegal discrimination.

This is also what I am arguing for (instead of trying to go through national governments to establish decrees worldwide to pause AI). 

A difference between the focus of David's and my post, is that he was describing how government departments could clarify and sharpen the implementation of existing regulations. The focus of my post is complementary – on supporting communities to ensure enforcement happens through the court system.

Other things David mentioned around eg. monitoring or banning AI systems larger than GPT-4 seem to require establishing new rules/laws somehow or another.

I don't see how establishing those new rules/laws is not going to be a lengthier process than enforcing already established laws in court. And even when the new rules/laws are written, signed and approved/passed, new enforcement mechanisms need to be built around that. 

I mean, if any country can pass this that would be amazing:
"laws today that will trigger a full ban on deploying or training AI systems larger than GPT-4"

I just don't see the political will yet? 
I can imagine a country that does not have companies developing some of the largest models deciding to pass this bill (maybe just against the use of such models). Would still be a win in terms of setting an example for other countries.

Maybe because it's about future models, current politicians would be more okay with setting a limit a few "versions" higher than GPT-4, since in their eyes it won't hamstring economic "progress" now but rather hamstring future politicians.

Though adding in this exception is another recipe for future regulatory capture:
"...which have not been reviewed by an international regulatory body with authority to reject applications"
