Anthropic-Pentagon AI Deal Collapses: Trump Orders Federal Ban As Safety Clash Escalates

By Perplexity News Staff | February 28, 2026

WASHINGTON — Negotiations between AI powerhouse Anthropic and the U.S. Department of Defense dramatically unraveled this week, culminating in President Donald Trump’s directive to federal agencies to phase out the company’s technology amid a fierce dispute over AI safety guardrails.[1][2]

The breakdown, which unfolded publicly just days before a critical Friday deadline, pits Anthropic’s commitment to ethical AI use against the Pentagon’s insistence on unrestricted access to its advanced model, Claude. Anthropic CEO Dario Amodei drew a firm line, stating the company “cannot in good conscience accede” to demands that would permit uses like mass surveillance of Americans or fully autonomous weapons.[1][2][3]

From Private Talks to Public Feud

What began as months of private discussions exploded into a high-stakes public showdown under the Trump administration. Pentagon Chief Technology Officer Emil Michael accused Amodei of having a “God-complex” and prioritizing personal control over national security, escalating tensions in a series of blistering social media posts and interviews.[1][2]

Michael claimed the military had offered “very good concessions,” but Anthropic dismissed the latest contract language as inadequate, noting it was “paired with legalese that would allow those safeguards to be disregarded at will.”[1][3] Pentagon spokesman Sean Parnell reinforced the hardline stance, declaring on social media that no company would “dictate the terms regarding how we make operational decisions,” setting a 5:01 p.m. ET Friday deadline.[3]

By late Friday, the impasse led to decisive action. Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk,” effectively barring it from working with defense contractors like Boeing and Lockheed Martin. President Trump, in a Friday announcement, ordered all federal agencies to cease using Anthropic’s technology within six months.[1][4]

“He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote on X about Amodei.[1]

High Stakes for AI in National Security

The feud centers on a $200 million military contract, highlighting broader tensions in the race to integrate AI into defense operations. Anthropic sought “narrow assurances” prohibiting Claude’s use in mass surveillance—deemed illegal by the Pentagon—or fully autonomous weapons without human oversight.[3][4]

Pentagon officials countered that such restrictions are unnecessary, with Parnell stating the military has “no interest” in those applications. Yet, sources indicate discussions explored invoking the Defense Production Act, a Cold War-era law, to compel Anthropic’s compliance—a move fraught with legal uncertainties.[2][3]

The dispute has ripple effects across the AI sector. OpenAI CEO Sam Altman expressed qualified support for Anthropic, telling CNBC, “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.” He noted OpenAI’s ongoing support for “warfighters” but said he was uncertain how the standoff would end.[3] Unlike Anthropic and xAI, OpenAI has not yet been cleared for classified Pentagon work.[4]

Political and Industry Backlash

Bipartisan lawmakers and former Defense Department AI leaders voiced concerns over the Pentagon’s aggressive tactics. “This approach risks alienating innovative companies at a time when AI superiority is critical to national security,” one anonymous former official told reporters.[3]

Anthropic, known for its safety-first ethos, faces severe business repercussions. The supply chain designation means it cannot collaborate with major defense firms, potentially costing millions and stalling its growth in government contracts. Amodei, in a blog post, called the Pentagon’s threats “contradictory,” noting they label Claude both a security risk and “essential to national security.”[2]

The Pentagon has already directed contractors to assess their exposure to Anthropic, laying groundwork for broader blacklisting.[2] Analysts warn this could accelerate consolidation in the AI-defense space, pushing reliance toward competitors like xAI or OpenAI.

Implications for AI Ethics and U.S. Military Tech

This clash underscores a pivotal moment for AI governance. Proponents of Anthropic’s stance argue it protects against dystopian misuse, while critics see it as naive idealism hindering military readiness amid global competition from China and others.

“Anthropic’s refusal sets a dangerous precedent,” Michael told CBS News, emphasizing the need for flexible AI tools in dynamic battlefields.[1] Amodei countered that ethical boundaries are non-negotiable, even at the cost of federal business.

As the six-month phase-out begins, eyes turn to how the Pentagon will pivot. Will OpenAI fill the void? Or does this signal a broader push for domestic AI dominance without safety strings attached? The fallout could reshape U.S. AI policy for years.

In a related development, Trump’s comments on Iran negotiations hinted at leveraging AI advancements, though he drew no direct link to the Anthropic dispute.[4]

What’s Next?

While Anthropic indicated openness to further talks pre-deadline, Trump’s order appears to slam the door. Legal challenges to the supply chain label or Defense Production Act invocation loom, but for now, the AI safety debate rages on—with real-world consequences for tech, defense, and policy.
