OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.

Why it matters: If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work.

  • It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology.

The flipside: Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts.

  • Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

What he's saying: "[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote Thursday evening in a memo obtained by Axios.

  • "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines."

The intrigue: ChatGPT is already available in the military's unclassified systems, and talks to move it into the classified space have accelerated amid the Pentagon-Anthropic fight, sources tell Axios.

  • But the Pentagon has insisted OpenAI and Google would have to agree the military can use their models for "all lawful purposes" — the same standard Anthropic rejected because it left no room for the company's own guardrails.
  • Elon Musk's xAI recently agreed to those terms, but Grok is not seen as a wholesale alternative to Claude.

In his memo, Altman wrote that the military will need AI, and he hopes to "help de-escalate things."

  • "We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons," Altman said.
  • The Wall Street Journal first reported on the memo.

Between the lines: OpenAI's ideas for enforcing its red lines include preserving the company's ability to continuously strengthen its security and monitoring systems as it learns from real-world deployments, a source familiar with the matter told Axios.

  • The company also wants researchers with security clearances who can track how the technology is being used and advise the government on risks.
  • Finally, the source said, OpenAI wants certain technical safeguards — including confining models to the cloud rather than edge environments like autonomous weapons.

What to watch: Based on how Pentagon officials have described their position to Axios, those proposals could face the same resistance Anthropic encountered: too much private company influence over critical government work.

State of play: After Anthropic CEO Dario Amodei stood firm by his company's red lines, employees from OpenAI and Google signed onto a letter in solidarity on Thursday, pushing executives at their respective companies to resist "pressure" from the Pentagon.

  • While Anthropic said it intended to continue negotiations, a rupture appeared close. Emil Michael, the Pentagon official handling negotiations with Anthropic and the other major AI firms, denounced Amodei as a "liar" with a "God complex" who was "putting our nation's safety at risk."
  • Many others in D.C. and Silicon Valley praised Anthropic for taking a principled stand at the risk of a major financial hit.
  • Altman and Amodei are former colleagues at OpenAI who have become fierce rivals since the latter left to start Anthropic.

The other side: Defense officials contend they have no intention of conducting mass surveillance or swiftly deploying autonomous weapons.

  • Their primary objection is having a private company dictate how the U.S. government can deploy AI for national security purposes, particularly during a technological race with China.
  • Defense officials told Axios their interactions with Anthropic left them concerned the company might raise questions about the deployment of their technology at critical junctures. Anthropic denies that.
  • It's possible the negotiations with OpenAI will be less adversarial.

What to watch: "We have had some meetings to discuss this over the past couple of days, and will have more tomorrow with our safety teams before we decide what to do. We will also set up an all hands and office hours as soon as we can," Altman said, referring to those negotiations.

  • "This is a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous. But I realize it may not 'look good' for us in the short term, and that there is a lot of nuance and context."
