
Anthropic Vs US Government: All You Need To Know About The High-Octane Feud

Anthropic has filed a lawsuit against the U.S. government, contending the "supply-chain risk" label is baseless.


On March 9, Anthropic filed a lawsuit against the U.S. government, describing its blanket ban as retaliation for the company's refusal to strip safety guardrails from its Claude AI model. Anthropic emphasised its readiness to partner with the U.S. military but only on acceptable terms. It simultaneously launched an appeal in the D.C. Circuit Court challenging another legal basis the government had cited.

The Anthropic Side Of The Story

According to Anthropic, it invested years in making Claude the frontier AI system for the government. Claude was deployed on classified military networks, and the company created a customised “Claude Gov” edition and relaxed many restrictions in a bid to support national security. Anthropic said it even cut off Claude use by China-linked companies, forgoing “several hundred million dollars in revenue.”

However, contract negotiations for the Pentagon's GenAI.mil platform in the fall of 2025 marked the beginning of the standoff. The Department of Defense reportedly wanted Claude for “all lawful uses,” which would in effect allow “safeguards to be disregarded at will,” according to an Anthropic spokeswoman.

Anthropic refused on two points: “mass domestic surveillance” and “fully autonomous weapons,” with CEO Dario Amodei later saying, “such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.” The company said Claude has never been evaluated for those purposes and cannot handle them safely.

How The US Government Responded

Pentagon officials presented a different story, with one official claiming tensions spiked after the U.S. raid in Venezuela. An Anthropic executive allegedly contacted a Palantir colleague to ask whether Claude had played any role in the Maduro raid — a question that, the official said, carried an undertone of disapproval of the AI's use.

The situation quickly deteriorated. On Feb. 24, Secretary of Defense Pete Hegseth told Amodei to comply within four days or face forced cooperation under the Defense Production Act or removal from the supply chain as a “national security risk.” Amodei refused — publicly — on Feb. 26.

The next day, President Donald Trump ordered government agencies to stop using Anthropic immediately, branding it a “radical left, woke company.” Hegseth followed by announcing on X that Anthropic constituted a “Supply-Chain Risk to National Security,” prohibiting military contractors and suppliers from doing business with it.

Yet, Anthropic's filing claims the Pentagon executed a major airstrike on Iran using its tools shortly after the ban took effect.

Enter OpenAI

Around the same time, rival OpenAI announced a deal with the government. On Feb. 28, CEO Sam Altman posted on X that OpenAI had agreed to deploy its models on the Department of War's classified network.

However, while Anthropic's firm stance drew praise, OpenAI faced intense public backlash for buckling under pressure, including a surge in app uninstalls and a sharp drop in downloads.

Anthropic Sues US Government

In a one-of-a-kind lawsuit against the U.S. government, Anthropic contends the “supply-chain risk” label is baseless, citing its FedRAMP authorisation, security clearances, and prior government endorsements. The lawsuit alleges violations of the Administrative Procedure Act, First Amendment, Fifth Amendment, presidential statutory limits, and rules against unauthorised agency sanctions.

Support From AI Majors

On the same day the suit was filed, over 30 employees from OpenAI and Google DeepMind submitted an amicus brief supporting Anthropic. The brief describes the government's risk designation as an “improper and arbitrary use of power” with serious consequences for the AI industry.

Anthropic Says Standoff Could Cost Billions

As of now, “the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars,” CFO Krishna Rao wrote in a court filing. He added: “The mere fact of the purported designation (supply-chain risk) — combined with President Trump's and Secretary Hegseth's public statements — risks substantially undermining market confidence and Anthropic's ability to raise the capital critical to train next-generation models.”
