The U.S. military used Anthropic's Claude AI model during the operation to capture Venezuela's Nicolás Maduro, two sources with knowledge of the situation told Axios.

  • Now, the blowback may threaten the company's business with the Pentagon.

The latest: After reports on the use of Claude in the raid, a senior administration official told Axios that the Pentagon would be reevaluating its partnership with Anthropic.

  • "Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War indicating that they might not approve if it was," the official said.
  • "Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward."
  • An Anthropic spokesperson denied that: "Anthropic didn't make any such call to the Department of War."

Why it matters: The episode highlights the tensions major AI labs face as they do business with the military while trying to maintain some limits on how their tools are used.

Breaking it down: AI models can process data quickly in real time, a capability the Pentagon prizes given the chaotic environments in which military operations take place.

  • Axios could not confirm the precise role that Claude played in the operation to capture Maduro. The military has used Claude in the past to analyze satellite imagery or intelligence. The sources said Claude was used during the active operation, not just in preparations for it.
  • No Americans were killed in the raid. Cuba and Venezuela both said dozens of their soldiers and security personnel were killed.

Friction point: The Pentagon wants the AI giants to allow it to use their models in any scenario, so long as those uses comply with the law.

  • Anthropic, which has positioned itself as the safety-first AI leader, is currently negotiating with the Pentagon over its terms of use. The company wants to ensure, in particular, that its technology is not used for mass surveillance of Americans or to operate fully autonomous weapons.
  • The company is confident the military has complied in all cases with its existing usage policy, which has additional restrictions, a source familiar with those discussions told Axios.

What they're saying: "We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," the Anthropic spokesperson said.

  • "Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
  • Defense Secretary Pete Hegseth has leaned into AI and said he wants to quickly integrate it into all aspects of the military's work, in part to stay ahead of China.
  • Senior Pentagon officials have expressed frustration with Anthropic's posture on ensuring safeguards, a source familiar with those discussions said.

The big picture: Anthropic is one of several major model-makers that are working with the Pentagon in various capacities.

  • OpenAI, Google and xAI have all reached deals for military users to access their models without many of the safeguards that apply to ordinary users. It's unclear whether any other models were used during the Venezuela operation.
  • But the military's most sensitive work — from weapons testing to comms during active operations — happens on classified systems. For now, only Anthropic's system is available on those classified platforms.
  • Anthropic also has a partnership with Palantir, the AI software firm that has extensive Pentagon contracts, that allows it to use Claude within its security products. It's not clear whether the use of Claude in the operation was tied to the Anthropic-Palantir partnership.

What to watch: Discussions are ongoing between the Pentagon and OpenAI, Google and xAI about allowing the use of their tools in classified systems. Anthropic and the Pentagon are also in discussions about potentially loosening the restrictions on Claude.

Editor's note: The headline and story were updated based on comments from a senior U.S. official.

Comments

I’m sure there’s some good money in it but Anthropic signed this deal around 8 months ago, when they were making substantially less money. I’m just not sure it’s worth the fight when other frontier labs have comparably performant models and substantially fewer moral qualms—why risk the walkouts and resignations?

I haven’t spent a lot of time thinking about this, but I suspect a couple reasons to continue pursuing this contract beyond the present revenue include (1) retaining relationships and a reputation that provides option value for (especially defense-related) future contracts and (2) increasing the likelihood that safer models are used in high-stakes settings, especially ones that could carry some non-negligible AI-related risks. While those are plausible (and plausibly right) lines of reasoning, I’m writing them without taking a stance on specific details that have central importance to their truth (e.g. are Anthropic’s models “safer” than competitors’? Seems quite likely based on reputation, but I’m not well-informed enough to make that claim confidently). If you’re right and the alternative is the use of another lab’s models for the same jobs, and if the article is right that Anthropic’s models are the only ones being used on classified networks, then I don’t think there are good reasons for Anthropic to intentionally cede that space to competitors. 
