By Robert Wiblin
When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its defenders are hypocritical, naive, and anti-democratic. Rob Wiblin takes each of these three charges seriously, and then dismantles them. Each invokes an abstract principle that sounds reasonable, but is in fact a mediocre argument dressed up as a hard truth.
We shouldn't allow ourselves to be tricked, because the stakes are significant. Rather than end the contract, Secretary of Defense Pete Hegseth branded Anthropic a “supply chain risk” — a label that bars the company from federal contracts and cuts it off from other companies that do business with the government. If it sticks, the designation could effectively murder Anthropic and set a dangerous precedent allowing the government to dictate how private companies operate.
Learn more & full transcript: https://80k.info/dow
This episode was recorded March 25, 2026.
Host: Rob Wiblin
Video editing: Dominic Armstrong
Transcript, visuals & web: Nick Stockton, Elizabeth Cox, and Katy Moore
I’ve spent years calling for more government oversight of frontier AI development. So am I a hypocrite for opposing the Pentagon’s attempt to commit “corporate murder” against Anthropic? That’s what I’ve heard. As venture capitalist Marc Andreessen put it on Twitter: “Every single person who was in favor of government control of AI, is now opposed to government control of AI.”
It’s a natural way to think. But it’s also completely wrong — and I want to explain why it’s wrong, because you see the same underlying confusion all over the place.
It’s not just hypocrisy, though. I count at least three distinct charges critics have levelled at Anthropic and the people who support them in their dispute with the Pentagon.
I mentioned the hypocrisy charge, but there’s a separate accusation of naivety: that when you’re building something as powerful as AI, the state will inevitably crush you if you try to set conditions on its use, and so Anthropic was in the wrong to pick this fight.
And third, there’s an accusation of being undemocratic: that a private company has no business telling the elected government what it can and can’t do with military technology.
I’m going to take each of these charges seriously and explain where I think they go wrong and why. Let’s dive in.
Charge 1: Hypocrisy
Let’s start with hypocrisy, because it’s the charge I hear most often and the most straightforward of the three.
Just to quickly jog your memory about the dispute we’re talking about: Anthropic had a contract with the Pentagon that included two restrictions on the use of its AI: no mass domestic surveillance, and no decisions to kill people made by artificial intelligence alone. The Trump administration demanded those restrictions be removed, and Anthropic refused. Rather than merely ending the contract, which people mostly agree would have been fine, Secretary of Defense Pete Hegseth declared Anthropic a “supply chain risk” — a designation previously used only for foreign adversaries — threatening the company’s ability to do much business in the United States at all.
The case to overturn that designation has attracted support from expected allies and unexpected ones alike — including Anthropic’s competitors OpenAI and Microsoft, as well as conservative technology experts who for the most part otherwise agree with the Trump administration’s approach to AI.
For years, people in the AI safety world — absolutely including me — have argued that frontier AI is too important and too dangerous to leave entirely in the hands of private companies. We’ve called for government oversight, safety standards, a wide range of different things.
So the hypocrisy logic runs: you wanted government control of AI. Well, now you’ve got government control of AI. So why complain?
But to state the obvious: supporting public oversight of frontier AI training doesn’t require you to support the government strong-arming a company into allowing its product to be used for domestic mass surveillance. Those views just aren’t in tension, even on their face.
Of course, people don’t only care who has control over AI — they also care what those in control actually decide to do. I might support legislation that allows the leader of a city’s fire service to close streets for public safety, but if that fire chief starts closing arbitrary streets and demanding bribes to pass through them, I wouldn’t be a hypocrite for opposing that as well.
The reason people get confused here is that they’re mentally compressing something extremely multidimensional down onto a single axis of variation — in this case, ‘more government control of AI’ versus ‘less government control of AI.’
But that’s an absurdly crude way to think about things. Everyone has always known full well that it doesn’t just matter whether the government gets involved; it matters how it gets involved, on what terms, and what it’s trying to accomplish when it does.
You won’t be shocked to hear that supporters of AI regulation in general bicker endlessly among themselves about exactly what details here would be helpful versus harmful. Nothing this complex and delicate can be boiled down to ‘more government good, less government bad.’
I think the debate about the Anthropic/Pentagon dispute in particular got so abstract so fast in part because, for AI commentators, “Who should control AI?” is a much more interesting and important and generative topic than a narrow military contract dispute — and in part because for people who want to defend the Pentagon, it’s a hell of a lot easier to defend the general principle that the government should have some influence over AI than to defend what the Pentagon was actually doing in this instance.
A phenomenon at play here is that there’s often a genuine tradeoff between the decision-making process that seems best in the abstract and the outcome that seems best in a particular case.
Say I think zoning decisions should be made at the city level — that that’s the right process. But the current mayor happens to be blocking all new housing construction, while I think far more should be built. I now have a real tension: the process I think is best as a general rule would produce an outcome I think is genuinely harmful, at least in the immediate term.
Reasonable people can disagree about which consideration ought to win out in any given instance. But there’s no rule that says you’re only allowed to evaluate the process, never the outcome that the process would actually produce in real life. Noticing the tension between these two things, and weighing them against one another, isn’t hypocritical — it’s clear thinking.
Charge 2: Naivety
The second line of criticism is that Anthropic was naive or foolish to resist the government’s demands in this case. The most influential version of this line of argument comes from Ben Thompson at the blog Stratechery.
Thompson’s reasoning runs roughly like this: Dario Amodei, the CEO of Anthropic, has very publicly compared advances in AI to nuclear weapons. Well, if AI really is that powerful, then any company building it is constructing a power base that could rival the US military. Realistically, no government is going to tolerate that.
Thompson also observes that international law is ultimately a function of power — that “might still makes right.” He then applies that same logic domestically, reaching a stark conclusion: Anthropic either had to accept a fully subservient position relative to the US government, or the US government would inevitably try to destroy it.
I’ll evaluate the two underlying arguments in turn:
- First, that the government’s actions here are so natural it’s absurd and counterproductive to object to them.
- Second, that the government was motivated by fears of a private company threatening its sovereignty.
First up: Thompson’s piece opens with a long passage arguing that international law is essentially fake, that power dynamics determine behaviour at the international level. He then applies that same realist framework to Anthropic: the company is “fundamentally misaligned with reality” for resisting demands from the executive branch — which is, after all, vastly more powerful than any private company.
As a prediction of how powerful actors tend to behave, that may be correct, at least to a large extent. But the post then slips from attempting to describe reality into a prescription of what we should accept and how we ought to react to it — literally arguing that possessing overwhelming might can make your actions right.
It’s actually pretty easy to accidentally equivocate between something being predictable on the one hand and it being acceptable on the other. And I think Thompson here was being very sincere and saw himself as pointing out some hard truths. And he did have useful things to say in that post.
But it really is very dangerous to start seeing harmful or unlawful actions as unobjectionable, just because they’re not surprising or because they’re being done by very powerful actors.
As to whether it’s naive and pointless to object in this case, that remains to be seen.
- Anthropic’s resistance has galvanised something like 90% of the tech industry to oppose the use of the supply chain risk designation for this purpose.
- Companies that have previously stayed quiet have been alarmed enough by the potential precedent that they’re joining the effort to push back.
- Past guest of the show Dean Ball, who wrote AI policy for the Trump administration, called the “attempted corporate murder” of Anthropic perhaps “the most [damaging] policy move” he’d ever seen the US government attempt — a high bar, surely.
- And using AI for surveillance or fully autonomous kill chains is even more controversial in Silicon Valley now than it was before — and it was already pretty controversial.
The US is still a nation of laws for the most part, and legal analysts give Anthropic a good chance of prevailing in court, likely even securing a preliminary injunction to block the order before this video goes out. If that happens, the “naive” company will have established a legal precedent that helps protect the entire AI industry from economic coercion — a risky path to choose, perhaps, but not necessarily a stupid one.
Second, in the rest of the post, Thompson gives interesting arguments for what views might sensibly motivate the kind of extreme action the Pentagon has taken here. But when you look at the specifics of this actual case, those motivations just don’t seem to be what’s driving things.
If the US government had genuinely become convinced that AI companies were building something as dangerous as private nuclear weapons, what might we expect them to do? Well, they’d presumably focus on the whole AI industry, not single out the one company that was most proactive about working with them and alerting them to exactly this risk. The fact that OpenAI and xAI offer the government looser contractual terms would hardly keep the country safe if the concern were really that a private company could very quickly accumulate military power to rival the entire US government.
They’d presumably be thinking about rules to prevent something this explosive being built by incompetent idiots, or built in such a way that the designs get leaked to China.
They would likely propose some legislation to Congress that would enable them to handle the many other issues that this is going to create down the road.
And they would presumably try to keep AI researchers roughly on their side, not alienate them on an unprecedented scale over a relatively minor issue.
But none of that is happening. Indeed, it’s generally antithetical to current US government policy. What is happening is that one company is getting punished for rejecting the government’s proposed terms in a contract dispute. Thompson’s high-level, abstract arguments about AI’s transformative power justifying government intervention probably make sense, and I kind of agree with them. But it’s hard to find evidence that this reasoning is what’s motivating the government’s particular actions in this dispute with Anthropic.
Charge 3: Undemocratic
Let’s turn to the third charge: that Anthropic’s position is an undemocratic one — that by setting conditions on how the military uses its technology, Anthropic is in effect usurping a role that properly belongs to elected leaders.
The strongest version of that argument came from Palmer Luckey, the founder of military contractor Anduril. Luckey thinks the two core questions are: “Do we believe in democracy?” and “Should our military be regulated by our elected leaders, or corporate executives?”
He goes on to argue that even seemingly innocuous terms and conditions — like “you cannot target innocent civilians” — involve difficult judgement calls about what counts as a civilian, what counts as targeting, and so on. And under Anthropic’s proposed framework, Anthropic would get some say on those questions — questions which really feel like government policy decisions. Luckey sees that setup as fundamentally at odds with democratic self-government.
Luckey is pointing to a legitimate issue here. From the government’s perspective, it’s most straightforward for your military operations to be unconstrained by the opinions and moral qualms of your suppliers. Even to me, that practical worry is a potentially reasonable argument for the government to end its contract with Anthropic and look for other AI providers who are more straightforward to work with.
The story is a little more complicated than sometimes portrayed, though. Under the current contract, Anthropic couldn’t suddenly cut the military off from Claude, even if it strongly objected to how Claude was being used. And the government has now accepted the same conditions for contract termination from OpenAI that were supposedly completely intolerable from Anthropic just a month ago.
Plus, a far-sighted secretary of defence might welcome contractual barriers against certain uses of AI, not because they think they’d abuse the technology — presumably they’d trust themselves — but because they see value in guardrails that could limit a less scrupulous secretary of defence down the road.
And if we’re making appeals to democratic self-government, it’s worth noting how the public feels about this issue. A YouGov/Economist poll found that Americans are nearly twice as likely to support AI companies restricting military use of their tools as to say the military should be able to use them however it wants. The public actually wants these restrictions placed on their own government. Is it really so democratic to deny the people what they want?
But the fundamental issue here is a different one. Luckey is using the vague expression “believe in democracy” to equivocate between two entirely different things.
- Yes, democracy requires that the public gets to choose its leaders and that those leaders get to make decisions about national security.
- But no, it does not require that any private individual must supply their labour and their products for any purpose the government demands, on terms entirely set by the government, on pain of destruction.
As one Twitter joker put it: “remember—you should do whatever the government wants, even things you think are immoral, because otherwise you’re deciding what you do instead of the government, which is undemocratic”
Of course, the reverse is closer to the truth: the freedom to refuse to personally work on projects you oppose, without facing crushing government retaliation, is clearly required for democracy to exist. And it’s undemocratic countries, like China and North Korea, where the state demands a right to make you offers you can’t refuse on any topic of its choice.
You don’t have to debate on their terms
There’s a common thread across all three of these charges. In each case, an abstract principle that sounds kind of reasonable gets invoked in a way that diverts attention from both common sense, and what’s actually going on:
- “You wanted government involvement in AI” becomes a reason you can’t object to any government action on AI.
- “Powerful states will inevitably assert control over powerful technology” becomes a reason we just have to lie down and accept whatever form that assertion takes.
- “Democratic leaders should make decisions about national security” becomes a reason no person or company can ever set terms when they sell things to the government.
These questions are going to come up more and more, because AI really is becoming more powerful and governments really will need to be involved in governing it in some form or other. And as the debate goes mainstream, it could easily become a dumbed-down culture war where you’re either for government control or against it.
But caring about precisely what the government is doing and whether it’s justified isn’t hypocritical, naive, or undemocratic. People should be proud to say that they care about the specifics and are actively pushing for the ones they think would be best. And they certainly shouldn’t allow themselves to be bullied into silence by the kind of mediocre arguments we’ve seen above.
Learn more
- A timeline of the Anthropic-Pentagon dispute by Justin Hendrix
- Anthropic and alignment by Ben Thompson
- Palmer Luckey says Silicon Valley has the Pentagon all wrong: ‘Stick to a position that this is in the hands of the people’ by Jake Angelo
- Many Americans think AI companies should be able to limit how the U.S. military uses their tools by David Montgomery
- Rob’s interview with Dean Ball on how AI is a huge deal — but we shouldn’t regulate it yet
