"The new influence web is pushing the argument that AI is less an existential danger than a crucial business opportunity, and arguing that strict safety rules would hand America’s AI edge to China. It has already caused key lawmakers to back off some of their more worried rhetoric about the technology.

... The effort, a loosely coordinated campaign led by tech giants IBM and Meta, includes wealthy new players in the AI lobbying space such as top chipmaker Nvidia, as well as smaller AI startups, the influential venture capital firm Andreessen Horowitz and libertarian billionaire Charles Koch.

... Last year, Rep. Ted Lieu (D-Calif.) declared himself “freaked out” by cutting-edge AI systems, also known as frontier models, and called for regulation to ward off several scary scenarios. Today, Lieu co-chairs the House AI Task Force and says he’s unconvinced by claims that Congress must crack down on advanced AI.

“If you just say, ‘We’re scared of frontier models’ — okay, maybe we should be scared,” Lieu told POLITICO. “But I would need something beyond that to do legislation. I would need to know what is the threat or the harm that we’re trying to stop.”

... After months of conversations with IBM and its allies, Rep. Jay Obernolte (R-Calif.), chair of the House AI Task Force, says more lawmakers are now openly questioning whether advanced AI models are really that dangerous.

In an April interview, Obernolte called it “the wrong path” for Washington to require licenses for frontier AI. And he said skepticism of that approach seems to be spreading.

“I think the people I serve with are much more realistic now about the fact that AI — I mean, it has very consequential negative impacts, potentially, but those do not include an army of evil robots rising up to take over the world,” said Obernolte."


This is from the same guy who wrote about "the shadowy influence" of EA on AI safety regulation. Obviously he's covering money in politics, but it's interesting how he frames industry lobbying as business as usual but philanthropy as weird and possibly nefarious.

It is always appalling to see tech lobbying power shut down all the careful work done by safety people.

Yet the article highlights a very fair point: that safety people have not succeeded at being clear and convincing enough about the existential risks posed by AI. Yes, it's hard, and yes, much of it is speculative. But that's exactly where the impact lies: building a consistent, pragmatic discourse about AI risks that is neither uselessly alarmist nor needlessly vague.

The state of the EA community is a good example of that. I often hear that yes, risks are high, but what risks exactly, and how can they be quantified? Impact measurement is awfully vague when it comes to AI safety (and, to a lesser extent, AI governance).

It seems like the pivot towards AI Pause advocacy has happened relatively recently and hastily. I wonder if now would be a good time to step back and reflect on strategy.

Since Eliezer's Bankless podcast, it seems like Pause folks have fallen into a strategy of advocating to the general public. This quote may reveal a pitfall of that strategy:

“I think the more people learn about some of these [AI] models, the more comfortable they are that the steps our government has already taken are by-and-large appropriate steps,” Young told POLITICO.

I hypothesize a "midwit curve" for AI risk concern:

  • At a low level of AI knowledge, members of the general public are apt to anthropomorphize AI models and fear them.

  • As a person acquires AI expertise, they anthropomorphize AI models less, and become less afraid.

  • Past that point, some folks become persuaded by specific technical arguments for AI risk.

It puzzles me that Pause folks aren't more eager to engage with informed skeptics like Nora Belrose, Rohin Shah, Alex Turner, Katja Grace, Matthew Barnett, etc. Seems like an ideal way to workshop arguments that are more robust, and won't fall apart when the listener becomes more informed about the topic -- or simply identify the intersection of what many experts find credible. Why not more adversarial collaborations? Why relatively little data on the arguments and framings which persuade domain experts? Was the decision to target the general public a deliberate and considered one, or just something we fell into?

My sense is that some Pause arguments hold up well to scrutiny, some don't, and you might risk undermining your credibility by making the ones which don't hold up. I get the sense that people are amplifying messaging which hasn't been very thoroughly workshopped. Even though I'm quite concerned about AI risk, I often find myself turned off by Pause advocacy. That makes me wonder if there's room for improvement.

Something that crystallized for me after listening to a bit of the A16Z podcast is that there are at least 3 distinct factions in the AI debate: the open-source faction, the closed-source faction, and the Pause faction.

  • The open-source faction accuses the closed-source faction of seeking regulatory capture.

  • The Pause and closed-source factions accuse the open-source faction of enabling bioterrorism.

  • The Pause faction accuses the closed-source faction of hypocrisy.

  • The open-source faction accuses the Pause faction of being inspired by science fiction.

  • The closed-source faction accuses the Pause faction of being too theoretical, and insufficiently empirical, in their approach to AI alignment.

If you're part of the open-source faction or the Pause faction, the multi-faction nature of the debate might not be as obvious. From your perspective, everyone you disagree with looks either too cautious or too reckless. But the big AI companies like OpenAI, DeepMind, and Anthropic actually find themselves in the middle of the debate, pushing in two separate directions.

Up until now, the Pause faction has been more allied with the closed-source faction. But with so many safety people quitting OpenAI, that alliance is looking less tenable.

I wonder if it is worth spending a few minutes brainstorming a steelman for why Pause should ally with the open-source faction, or at least try to play the other two factions against each other.

Some interesting points from the podcast (starting around the 48-minute mark):

  • Marc thinks the closed-source faction fears erosion of profits due to commoditization of models.

  • Dislike of big tech is one of the few bipartisan areas of agreement in Washington.

  • Meta's strategy in releasing their models for free is similar to Google's strategy in releasing Android for free: Prevent a rival company (OpenAI for LLMs, Apple for smartphones) from monopolizing an important technology.

That suggests Pause may actually have a few objectives in common with Meta. If Meta is mostly motivated by not letting other companies get too far ahead, slapping a heavy tax on the frontier could satisfy both Pause and Meta. And the more LLMs get commoditized, the less profitable they become to operate, and the less investors will be willing to fund large training runs.

It seems like most Pause people are far more concerned about general AI than narrow AI, and I agree with them. Conceivably, disciplining Big AI would satisfy Washington's urge to punish big tech and pursue antitrust, while simultaneously pushing the industry towards a lot of smaller companies pursuing narrower applications. (edit: this comment I wrote advocates taxing basic AI research to encourage applications research)

This analysis is quite likely wrong. For example, Marc supports open-source in part because he thinks it will cause AI innovation to flourish, and that sounds bad for Pause. But it feels like someone ought to be considering it anyway. If nothing else, having a BATNA could give Pause leverage with their closed-source allies.

Thanks for the link to Open Asteroid Impact. That's some really funny satire. :-D

Thank you! You might like the 3-minute YouTube version as well.

Fwiw, I think the website played well with at least some people in the open-source faction (in OP's categorization). E.g., see here on the LocalLlama subreddit.

[brainstorming]

It may be useful to consider the percentage of [worldwide net private wealth] that would be lost if the US government committed to certain extremely strict AI regulation. We can call that percentage the "wealth impact factor of potential AI regulation" (WIFPAIR). We can expect that, other things being equal, in worlds where WIFPAIR is higher, more resources are being used for anti-AI-regulation lobbying efforts (and thus EA-aligned people probably have less influence over what the US government does w.r.t. AI regulation).
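
Roughly, one way to write that down (a minimal formalization for this brainstorm; the symbols $W_{\text{baseline}}$ and $W_{\text{reg}}$ are just labels I'm introducing here):

$$\mathrm{WIFPAIR} = \frac{W_{\text{baseline}} - W_{\text{reg}}}{W_{\text{baseline}}} \times 100\%$$

where $W_{\text{baseline}}$ is worldwide net private wealth absent the regulation and $W_{\text{reg}}$ is that same wealth if the US government commits to the extremely strict AI regulation in question.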

The WIFPAIR can become much higher in the future, and therefore convincing the US government to establish effective AI regulation can become much harder (if it's not already virtually impossible today).

If at some future point WIFPAIR gets sufficiently high, the anti-AI-regulation efforts may become at least as intense as the anti-communist efforts in the US during the 1950s.
