- US judge temporarily halts the Pentagon from labeling Anthropic a supply chain risk
- Judge Lin called the government’s action unfair and lacking proper justification
- The label could have cut Anthropic off from major government contracts and business
A US federal judge has temporarily stopped the Pentagon from labeling artificial intelligence company Anthropic as a “supply chain risk”, a move that could have severely hurt the firm's business and government contracts.
The decision was issued by District Judge Rita Lin, who said the government's action appeared unfair and lacked proper justification. She also paused an earlier directive linked to former President Donald Trump that had asked federal agencies to stop using the company's AI tools, including its chatbot Claude.
In simple terms, calling a company a “supply chain risk” means the government considers it unreliable or unsafe to work with—something that can effectively cut it off from major contracts. Anthropic argued in court that this label was imposed suddenly and without clear evidence, damaging its reputation and operations.
The dispute reportedly began after disagreements between the company and the US Department of Defense over how its AI technology should be used. Anthropic had raised concerns about its tools being deployed in sensitive areas such as autonomous weapons or mass surveillance. Following these disagreements, the Pentagon moved to restrict its use.
During the court hearing, Judge Lin questioned why such a strong step was taken against a US-based technology company, especially using powers that are usually reserved for dealing with foreign threats.
She indicated that if the government had concerns, it could simply stop using the company's products instead of issuing a broad and damaging designation.
The court's order does not force the Pentagon to continue working with Anthropic. It only prevents the government, for now, from officially branding the company as a risk while the legal case continues.
The judge has also given the government time to challenge the decision in a higher court.
Anthropic welcomed the ruling, saying it would continue focusing on developing safe and responsible AI systems.
The Pentagon has not yet publicly responded to the order.