Anthropic PBC sued the Defense Department for declaring that the artificial intelligence giant posed a risk to the US supply chain, after a dispute with the Pentagon over safeguards on the company's technology.
San Francisco-based Anthropic is challenging a decision by the department and other federal agencies to shift their AI work to other providers, based on a risk designation typically reserved for companies from countries the US views as adversaries.
“These actions are unprecedented and unlawful,” the company said in a complaint filed Monday in San Francisco federal court. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
Last week, the Pentagon formally notified Anthropic of its determination. Chief Executive Officer Dario Amodei then issued a statement saying the government's actions were not “legally sound” and had left the company with “no choice but to challenge it in court.”
The dispute erupted last month after the Pentagon sought to use Claude for any purpose within legal limits, free of usage restrictions imposed by Anthropic. The company had insisted that the chatbot not be used for mass surveillance of Americans or in fully autonomous weapons operations.
In response, Defense Secretary Pete Hegseth on Feb. 27 ordered the Pentagon to bar its contractors and their partners from any commercial activity with Anthropic. In a post on X, Hegseth set a six-month deadline for transitioning the Pentagon's AI services from Anthropic to another provider.
Trump blasted Anthropic the same day on his Truth Social network, saying “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” In his post, the president directed US government agencies to stop using Claude.
The company's lawsuit names as defendants the Department of War — the name the Trump administration uses for the Department of Defense — as well as more than a dozen other federal agencies.
The Department of Defense didn't immediately respond to a request for comment on the lawsuit.
In the days after the department first announced its risk designation, consumers drove “unprecedented demand” for Anthropic's chatbot Claude in a show of support for the company's resistance to the government push for unfettered use of its technology.
Meanwhile, rival OpenAI announced it had struck an agreement to let the Pentagon deploy OpenAI's artificial intelligence models on the Pentagon's classified network. OpenAI chief Sam Altman later said he was working with the Defense Department to add more guardrails around surveillance.
Founded in 2021 by former OpenAI employees, Anthropic quickly cemented itself as a rival to the ChatGPT maker with Claude, which it billed as more safety- and business-focused. Today, the San Francisco-based company has more than 300,000 business customers who use its models to streamline workplace tasks, particularly in computer programming, where it has emerged as a market leader with its AI coding assistant, Claude Code.
The case is Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California (San Francisco).