Judge blocks Pentagon from labeling Anthropic AI a "supply chain risk" and halts Trump's ban on federal use
CBSN
A judge has blocked the Trump administration from labeling Anthropic a supply chain risk and cutting off all federal work with the artificial intelligence firm, an early win for Anthropic in its bitter feud with the government over AI guardrails.
U.S. District Judge Rita Lin on Thursday ruled in favor of Anthropic, which sued the federal government earlier this month for taking actions that it called an "unprecedented and unlawful" attempt to punish the company for First Amendment-protected speech.
Lin's ruling prevents the government from enforcing its supply chain risk designation against Anthropic, a designation intended to stop private government contractors from using the company's powerful Claude AI model. It also halts an order by President Trump directing every federal agency to "IMMEDIATELY CEASE all use of Anthropic's technology."
The dispute revolves around Anthropic's push to bar the military from using Claude for domestic surveillance or to power fully autonomous weapons. The Trump administration has said it needs the ability to use AI for "all lawful purposes."
The judge wrote that her ruling does not bar the Trump administration from taking "lawful actions" it was already permitted to take, meaning the government is not necessarily required to use Anthropic's products.
