"The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America’s AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology.
... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.
... Last year, Rep. Ted Lieu (D-Calif.) declared himself “freaked out” by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he’s unconvinced by claims that Congress must crack down on advanced AI.
“If you just say, ‘We’re scared of frontier models’ — okay, maybe we should be scared,” Lieu told POLITICO. “But I would need something beyond that to do legislation. I would need to know what is the threat or the harm that we’re trying to stop.”
... After months of conversations with IBM and its allies, Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, says more lawmakers are now openly questioning whether advanced AI models are really that dangerous.
In an April interview, Obernolte called it “the wrong path” for Washington to require licenses for frontier AI. And he said skepticism of that approach seems to be spreading.
“I think the people I serve with are much more realistic now about the fact that AI — I mean, it has very consequential negative impacts, potentially, but those do not include an army of evil robots rising up to take over the world,” said Obernolte."
It is always appalling to see tech lobbying power shut down the careful work done by safety advocates.
Yet the article highlights a fair point: safety advocates have not succeeded in being clear and convincing enough about the existential risks posed by AI. Yes, it's hard; yes, much of it rests on speculation. But that is exactly where the impact lies: building a consistent, pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.
The state of the EA community is a good example of this. I often hear that yes, the risks are high, but which risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).
It seems like the pivot towards AI Pause advocacy has happened relatively recently and hastily. I wonder if now would be a good time to step back and reflect on strategy.
Since Eliezer's Bankless podcast, it seems like Pause folks have fallen into a strategy of advocating to the general public. This quote may reveal a pitfall of that strategy:
I hypothesize...