"The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America’s AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology.
... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.
... Last year, Rep. Ted Lieu (D-Calif.) declared himself “freaked out” by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he’s unconvinced by claims that Congress must crack down on advanced AI.
“If you just say, ‘We’re scared of frontier models’ — okay, maybe we should be scared,” Lieu told POLITICO. “But I would need something beyond that to do legislation. I would need to know what is the threat or the harm that we’re trying to stop.”
... After months of conversations with IBM and its allies, Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, says more lawmakers are now openly questioning whether advanced AI models are really that dangerous.
In an April interview, Obernolte called it “the wrong path” for Washington to require licenses for frontier AI. And he said skepticism of that approach seems to be spreading.
“I think the people I serve with are much more realistic now about the fact that AI — I mean, it has very consequential negative impacts, potentially, but those do not include an army of evil robots rising up to take over the world,” said Obernolte."
Something that crystallized for me after listening to the A16Z podcast is that there are at least three distinct factions in the AI debate: the open-source faction, the closed-source faction, and the Pause faction.
The open-source faction accuses the closed-source faction of seeking regulatory capture.
The Pause and closed-source factions accuse the open-source faction of enabling bioterrorism.
The Pause faction accuses the closed-source faction of hypocrisy.
The open-source faction accuses the Pause faction of being inspired by science fiction.
The closed-source faction accuses the Pause faction of being too theoretical, and insufficiently empirical, in their approach to AI alignment.
If you're part of the open-source faction or the Pause faction, the multi-faction nature of the debate might not be as obvious. From your perspective, everyone you disagree with looks either too cautious or too reckless. But the big AI companies like OpenAI, DeepMind, and Anthropic actually find themselves in the middle of the debate, pushing in two separate directions.
Up until now, the Pause faction has been more allied with the closed-source faction. But with so many safety people quitting OpenAI, that alliance is looking less tenable.
I wonder if it's worth spending a few minutes brainstorming a steelman for why Pause should ally with the open-source faction, or at least try to play the other two factions against each other.
Some interesting points from the podcast (starting around the 48-minute mark):
Marc thinks the closed-source faction fears erosion of profits due to commoditization of models.
Dislike of big tech is one of the few bipartisan areas of agreement in Washington.
Meta's strategy in releasing their models for free is similar to Google's strategy in releasing Android for free: Prevent a rival company (OpenAI for LLMs, Apple for smartphones) from monopolizing an important technology.
That suggests Pause may actually have a few objectives in common with Meta. If Meta is mostly motivated by not letting other companies get too far ahead, slapping a heavy tax on the frontier could satisfy both Pause and Meta. And the more LLMs get commoditized, the less profitable they become to operate, and the less willing investors will be to fund large training runs.
It seems like most Pause people are far more concerned about general AI than narrow AI, and I agree with them. Conceivably if you discipline Big AI, that satisfies Washington's urge to punish big tech and pursue antitrust, while simultaneously pushing the industry towards a lot of smaller companies pursuing narrower applications. (edit: this comment I wrote advocates taxing basic AI research to encourage applications research)
This analysis is quite likely wrong. For example, Marc supports open-source in part because he thinks it will cause AI innovation to flourish, and that sounds bad for Pause. But it feels like someone ought to be considering it anyway. If nothing else, having a BATNA could give Pause leverage with their closed-source allies.
Thank you! You might like the 3-minute YouTube version as well.
Fwiw, I think the website played well with at least some people in the open-source faction (in OP's categorization). E.g., see here on the LocalLlama subreddit.