How Anthropic’s Talks With the U.S. Defense Department Fell Apart

Written by: Mane Sachin

The final minutes before the deadline were tense.

For weeks, Pentagon officials had been working to finalize a $200 million artificial intelligence contract with Anthropic. The agreement would have brought the company’s AI systems into classified Defense Department networks. By Friday afternoon, only a narrow disagreement over surveillance language remained.

Emil Michael, the Pentagon’s chief technology officer and a former Uber executive, had led the negotiations. According to people familiar with the discussions, both sides were close to an agreement. The sticking point was how the company’s technology could be used in matters involving the surveillance of Americans.

The Defense Department wanted full authority to use the systems for any lawful purpose. Anthropic insisted on limits, arguing that its AI should not be deployed for mass domestic surveillance or in autonomous weapons systems without meaningful human control.

As the 5:01 p.m. deadline approached, Michael asked to speak directly with Anthropic’s CEO, Dario Amodei, to settle the final wording. He was told Amodei was tied up in another meeting and needed more time.

That time was not granted.

Shortly after the deadline passed, Defense Secretary Pete Hegseth announced that Anthropic would be designated a “supply chain risk,” effectively cutting the company off from U.S. government work. In a social media post, he said the military would not allow itself to be constrained by what he described as ideological positions from technology firms.

Behind the scenes, an alternative had already taken shape. While negotiations with Anthropic were stalling, the Pentagon had been in discussions with OpenAI. A framework agreement had been drafted. Later that night, OpenAI CEO Sam Altman confirmed that his company had secured a deal to provide AI tools for classified defense systems.

People familiar with the talks say the breakdown reflected more than just a contract dispute. It exposed deeper disagreements about the role of artificial intelligence in national security. Amodei had publicly argued that certain uses of AI — particularly large-scale analysis of commercial data tied to Americans — could undermine democratic values. He also warned that some military applications go beyond what today’s systems can safely handle.

Pentagon officials countered that lawful decisions about how technology is used rest with the government, not private companies. They maintained that contractors cannot dictate national security policy.

Personal tensions added another layer to the conflict. Michael, Amodei and Altman have long histories in Silicon Valley, including past professional ties and rivalries. As negotiations dragged on, their disagreements spilled into public view through pointed remarks on social media.

Anthropic has since said it plans to challenge the Pentagon’s designation in court. The “supply chain risk” label has typically been applied to foreign companies viewed as national security threats, making its use against a U.S. AI firm unusual.

Meanwhile, some officials within intelligence agencies that already rely on Anthropic’s systems have reportedly encouraged a compromise, suggesting that discussions may not be entirely over.

The episode comes as the Defense Department pushes to expand artificial intelligence across military operations. Earlier guidance from Pentagon leadership called for broader AI integration and fewer contractual restrictions, prompting companies to revisit existing agreements.

In the end, what appeared to be a nearly finalized deal unraveled over a few unresolved words and a deeper divide over AI safeguards.

OpenAI now moves ahead with the Pentagon contract. Anthropic prepares for a legal battle. And the debate over how artificial intelligence should be used in matters of war and surveillance is likely to continue.
