If you enjoy this, please consider subscribing to my Substack.
My latest reporting went up in The Nation yesterday:
It’s about the tech industry’s meltdown in response to SB 1047, a California bill that would be the country’s first significant attempt to mandate safety measures from developers of AI models more powerful and expensive than any yet known.
Rather than summarize that story, I’ve added context from some past reporting as well as new reporting on two big updates from yesterday: a congressional letter asking Newsom to veto the bill and a slate of amendments.
The real AI divide
After spending months on my January cover story in Jacobin on the AI existential risk debates, one of my strongest conclusions was that the AI ethics crowd (focused on the tech’s immediate harms) and the x-risk crowd (focused on speculative, extreme risks) should recognize their shared interests in the face of a much more powerful enemy — the tech industry:
According to one estimate, the amount of money moving into AI safety start-ups and nonprofits in 2022 quadrupled since 2020, reaching $144 million. It’s difficult to find an equivalent figure for the AI ethics community. However, civil society from either camp is dwarfed by industry spending. In just the first quarter of 2023, OpenSecrets reported roughly $94 million was spent on AI lobbying in the United States. LobbyControl estimated tech firms spent €113 million this year lobbying the EU, and we’ll recall that hundreds of billions of dollars are being invested in the AI industry as we speak.
And here’s how I ended that story:
The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.
But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.
In short, it’s capitalism versus humanity.
This was true at the time I published it, but honestly, it felt like momentum was on the side of the AI safety crowd, despite its huge structural disadvantages (industry has way more money and armies of seasoned lobbyists).
Since then, it’s become increasingly clear that meaningful federal AI safety regulations aren’t happening any time soon. Republican House Majority Leader Steve Scalise promised as much in June. But it turns out Democrats would also likely have blocked any national, binding AI safety legislation.
The congressional letter
Yesterday, eight Democratic California Members of Congress published a letter to Gavin Newsom, asking him to veto SB 1047 if it passes the state Assembly. There are serious problems with basically every part of this letter, which I picked apart here. (Spoiler: it's full of industry talking points repackaged under congressional letterhead).
Many of the signers have taken lots of money from tech, so it shouldn’t come as too much of a surprise. I’m most disappointed to see that Silicon Valley Representative Ro Khanna is one of the signatories. Khanna had stood out to me positively in the past (like when he Skyped into The Intercept’s five-year anniversary party).
The top signatory is Zoe Lofgren, who I wrote about in The Nation story:
SB 1047 has also acquired powerful enemies on Capitol Hill. The most dangerous might be Zoe Lofgren, the ranking Democrat in the House Committee on Science, Space, and Technology. Lofgren, whose district covers much of Silicon Valley, has taken hundreds of thousands of dollars from Big Tech and venture capital, and her daughter works on Google’s legal team. She has also stood in the way of previous regulatory efforts.
Lofgren recently took the unusual step of writing a [different] letter against state-level legislation, arguing that SB 1047 was premature because “the science surrounding AI safety is still in its infancy.” Similarly, an industry lobbyist told me that “this is a rapidly evolving industry,” and that by comparison, “the airline industry has established best practices.”
Later from the same story:
Wiener says he would prefer “one strong federal law,” but isn’t holding his breath. He notes that, aside from the TikTok ban, Congress hasn’t meaningfully regulated technology in decades. In the face of this inaction, California has passed its own laws on data privacy and net neutrality (Wiener authored the latter).
When he was shepherding the net neutrality bill, Wiener told me that he experienced similar industry resistance: “the telecoms and cable companies kept shouting, ‘no, no, no. This should be handled at the federal level.’” But, he pointed out that these “are the same corporate actors that are making it impossible for Congress to” do so.
Overall, this congressional letter makes me very skeptical of the prospect of national AI safety regulations any time soon. Whenever a lobbyist says they prefer a federal law, keep in mind that the Representatives in their pocket are standing in the way of that.
The turning tides
In May, Politico reported:
In DC, a new wave of AI lobbyists gains the upper hand: An alliance of tech giants, startups and venture capitalists are spending millions to convince Washington that fears of an AI apocalypse are overblown. So far, it’s working.
You don’t have to be a Marxist to see that this was always the way things would shake out. The AI industry is barely subject to regulation right now, as I wrote in The Nation:
In the West, self-regulation is the status quo. The only significant Western mandatory rules on general AI are included in the sweeping EU AI Act, but these don’t take effect until June 2025.
They’ve been able to write their own rules with no punishment for breaking them. They want to continue to do so because they think it will make them more money.
Some from the AI ethics crowd have argued that the industry is pushing the x-risk narrative to defer and control regulations. From my Jacobin story:
Yet others see a Big Tech conspiracy looming behind these concerns. Some people focused on immediate harms from AI argue that the industry is actively promoting the idea that their products might end the world, like Myers West of the AI Now Institute, who says she “see[s] the narratives around so-called existential risk as really a play to take all the air out of the room, in order to ensure that there’s not meaningful movement in the present moment.” Strangely enough, Yann LeCun and Baidu AI chief scientist Andrew Ng purport to agree.
But SB 1047 has strong support from the AI safety community (and it was crafted with their input — the Center for AI Safety’s lobbying arm is a co-author). The bill has also faced fierce and nearly uniform opposition from the AI industry.
I think there are a few reasons for this:
- The bill targets catastrophic risk — kicking in at $500M in damage or a mass casualty event. This is a much lower bar than extinction!
- The bill uses liability as an enforcement mechanism. A licensing regime could maybe advantage established players, lending credence to regulatory capture arguments, but no firm wants to be exposed to more liability.
- The leaders of AI companies genuinely think that their products could be really powerful/dangerous!
I also don’t really see how SB 1047 hurts efforts to regulate AI’s immediate dangers. The best case I can think of is that it spends political capital that could be used on other efforts. But many of the bill’s opponents have favorably highlighted other pieces of legislation that target immediate harms from AI.
Sneha Revanur, founder of Encode Justice and a bill co-author, put it well in our conversation:
I think the best way for us to move forward is for us to reject this false choice between focusing on short term & long term risks & instead recognize that we're all up against these power-seeking AI companies.
I reported something similar in Jacobin:
But many of the x-risk believers highlighted that the positions “AI causes harm now” and “AI could end the world” are not mutually exclusive.
Capitalism vs. democracy
SB 1047 has flown through the state legislature nearly unopposed. The bill could pass the state Assembly with overwhelming support, but still die with a veto from Newsom.
This is clearly what industry is counting on.
But couldn’t the legislature override the veto? Technically, yes. But that hasn’t happened since 1979, and supporters don’t expect that to change here.
So industry is training its guns on the Governor, using Members of Congress from his state and party to rehash their talking points, in the hopes that Newsom vetoes a bill that has had extremely strong support in the legislature, as well as strong state-wide support in three public opinion polls.
Yesterday’s amendments to SB 1047
Fearing this veto, supporters have watered down the bill in the hopes of softening industry opposition. Yesterday, the SB 1047 team published summaries of the latest round of amendments.
Overall, the changes further narrow the scope of the bill to address a number of specific concerns. The “reasonable assurance” standard was changed to the weaker “reasonable care.” The proposed new regulatory agency, the Frontier Model Division, is gone. One change expanded the circumstances in which the California Attorney General (AG) can seek injunctive relief (i.e., a court order to halt an activity).
Many eyes are now on Anthropic, which is the only major AI company that may actually support the bill. From my reporting in The Nation:
The nearest thing to industry support has come from Anthropic, the most safety-oriented top AI company. Anthropic published a “support if amended” letter requesting extensive changes to the bill, the most significant of which is a move from what the company calls “broad pre-harm enforcement” to a requirement that developers create safety plans as they see fit. If a covered model causes a catastrophe and its creator’s safety plan “falls short of best practices or relevant standards, in a way that materially contributed to the catastrophe, then the developer should also share liability.” Anthropic calls this a “deterrence model” that would allow developers to flexibly set safety practices as standards evolve.
No other major AI company took a “support if amended” position.
Anthropic also appears to have gotten most of the changes they requested.
Their biggest request — to remove pre-harm enforcement — seems to have been mostly met. The state AG can now only seek civil penalties if a model has caused catastrophic harm or poses an imminent risk to public safety. Previously, the AG could seek penalties if a covered developer didn’t comply with the safety measures required by the bill, even if their model didn’t harm anyone.
I am not a lawyer, but I think this change sounds like it weakens the bill more than it actually does. I think this analysis from AI safety researcher Michael Cohen is right:
probably, in the "real world", fancy lawyers will scare most companies into complying.
Cohen also thinks it’s “jaw-dropping” that Anthropic’s letter included this line:
And finally, it should appeal to honest skeptics of catastrophic risk, who can choose not to mitigate against risks they don’t believe in (though they do so at their own peril).
According to Cohen, the main things they asked for and didn’t get were:
- reduced whistleblower protections
- the elimination of know-your-customer requirements on cloud computing providers
- making auditors optional
A spokesperson for Anthropic wrote to me that, "We are reviewing the new bill language as it becomes available."
I don’t expect any other major AI company to support the bill, even in its amended form.
The amendments have changed at least some minds.
Samuel Hammond, a fellow at a centrist think tank, once opposed the bill. But in response to the amendments, he wrote: “All these changes are great. This has shaken out into a very reasonable bill.”
Ethereum creator Vitalik Buterin also praised the amendments and wrote that they further addressed his original top two concerns.
In his thread analyzing the amendments, Cohen wrote:
If Anthropic still opposes the bill at this point, I will be devastated. I hope they don't forget that even if they do everything safely, that solves absolutely nothing unless other people do things safely.
This gets to the heart of the problem with self-regulation. If it’s faster/cheaper to build powerful AI systems unsafely, that’s what racing actors will do. The incentives only get stronger as AI systems get more powerful (and profitable).
Self-regulation is already showing itself to be insufficient, as I wrote in The Nation:
All the major AI companies have made voluntary commitments. But overall, compliance has been less than perfect.
According to AI’s leading industrialists, the stakes couldn’t be higher:
This is a real mask-off moment for the AI industry. If we listen to the top companies, human-level AI could arrive within five years, and full-blown extinction is on the table. The leaders of these companies have talked about the need for regulation and repeatedly stated that advanced AI systems could lead to, as OpenAI CEO Sam Altman memorably put it, “lights out for all of us.”
But now they and their industry groups are saying it’s too soon to regulate. Or they want regulation, of course, but just not this regulation.
I’ll leave you with my favorite quote I got for the story:
Wiener represents San Francisco and, as a result, has borne a significant personal and political cost by shepherding SB 1047, says someone working on the bill: “You don’t have to love [Wiener] on everything to realize that he is just a stubborn motherfucker.… The amount of political pain he is taking on this is just unbelievable.… He has just lost a lot of relationships and political partners and people who are just incredibly furious at him over this. And I just think he actually thinks the risks are real and thinks that he has to do something about it.”
Forecasts:
Metaculus: 40% that it passes (n=69)
Thank you for the comprehensive research! California state policy as a lever for AI regulation hasn't been much on my radar yet, and as a European concerned about AI risk, I found this very insightful. Curious if you (or anyone here) have thoughts on the following:
1) Is there anything we can and should do right now? Any thoughts on Holly's "tell your reps to vote yes on SB 1047" post from last week? Anything else we can do?
2) How do you see the potential for California state regulation in the next few years? Should we invest more resources in this, relative to US AI policy?
Not really an answer to your questions, but I think this guide to SB 1047 gives a good overview of some related aspects.
First: I completely agree that several modifications are egregious and without any logical explanation, primarily the removal of whistleblower protections. However, I think it is also important that we recognize that SB 1047 isn't perfect, and we should all welcome constructive feedback, both for and against. Some level of reasonable compromise when pushing forward unprecedented policy such as this is always going to happen, for better or worse.
IMHO, the biggest problems with the bill as originally written were the ability to litigate against a company before any damages had actually occurred, and moreover, the glaring loopholes created by fixed-FLOP thresholds for oversight. Anybody with an understanding of machine learning training pipelines could point out any number of loopholes and easy circumventions (e.g., more iterative, segmented training checkpoints/versioning, essentially dividing large training runs into multiple smaller runs, or segmentation and modularization of the models themselves).
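The threshold loophole described above can be sketched with some toy arithmetic. This is purely illustrative, not the bill's actual language: assume a naive rule that only covers individual training runs above 10^26 FLOPs (SB 1047's headline compute figure), and note how splitting one large run into checkpointed segments would slip each piece under the bar.

```python
# Toy sketch of the "segmented training run" loophole, assuming a
# (hypothetical) rule that only looks at individual runs.
THRESHOLD_FLOPS = 1e26  # SB 1047's headline compute threshold

def covered_naive(run_flops):
    """Naive rule: only an individual run over the threshold is covered."""
    return run_flops > THRESHOLD_FLOPS

total_compute = 3e26                # one large training effort
segments = [total_compute / 4] * 4  # split into four checkpointed runs

print(any(covered_naive(f) for f in segments))  # False: no single segment is covered
print(covered_naive(sum(segments)))             # True: the aggregate would be
```

As I understand it, the bill's covered-model definition is based on total training compute (with a separate, lower threshold for fine-tuning), which narrows, though may not fully close, this kind of workaround.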
We also need to be humble and open-minded about unintended consequences (e.g., it's possible this bill pushes some organizations toward open-source or open-weight model distribution, or encourages big tech to relocate AI work to states with less regulation). If we treat all of industry as 'The Enemy,' we risk losing key allies in the AI research space (individuals as well as organizations).
Can you explain why you find this problematic? It's not self-evident to me, because we do this too for other things, e.g. drunk driving, pharmaceuticals needing to pass safety testing
I'm not sure I follow your examples and logic; perhaps you could explain, because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it should be criminal to merely develop an AI model, regardless of whether it's commercialized or released?
Regarding pharmaceuticals: yes, they certainly do need to pass several phases of clinical research and development to prove sufficient levels of safety and efficacy, because by definition the FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no similar regulation on software. Developing a drug discovery platform, or even the drug itself, is not a crime (as long as it's not released).
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights. (Technically anything can be litigated with enough money thrown at it, but you can't sue for damages unless damages actually occurred.)
Drunk driving is illegal because it risks doing serious harm. It's still illegal when the harm has not occurred (yet). Things can be crimes without harm having occurred.
Executive summary: The tech industry is actively opposing meaningful AI safety regulations like California's SB 1047 bill, despite public claims about AI risks, revealing a conflict between corporate interests and public safety.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.