
Pentagon reveals why US military has banned Claude, labels Anthropic a supply chain risk
India Today
US Defense Undersecretary Emil Michael has claimed that Anthropic's Claude AI model could pollute the Pentagon's supply chain. His comments come at a time when the AI startup is suing the US Department of Defense over its designation as a supply chain risk. Here is everything you need to know.
US Defense Undersecretary Emil Michael believes that Anthropic’s Claude AI model could pollute the Pentagon’s supply chain. The Dario Amodei-led AI firm has sued the US Department of Defense for labelling it a supply chain risk after a fallout over unrestricted AI use.
Michael told CNBC that Anthropic’s model had a “different policy preference” baked in, as compared to the Pentagon, which could pollute the entire supply chain.
Earlier this month, Anthropic was designated a supply chain risk by the Pentagon for failing to meet its demands for unrestricted AI use. The US military wanted access to the AI model for “all lawful purposes.” However, Anthropic drew red lines over the potential use of its AI for mass domestic surveillance or the development of autonomous weapons.
Now, Emil Michael has stated that this different policy preference was the reason that the Dario Amodei-led startup was given this label. He said, “That’s really where the supply chain risk designation came from.”
But why? Michael argues that such embedded restrictions could render military equipment ineffective in combat. He explained, “We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, and pollutes the supply chain.” He added, “So our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection.”
Do note that the supply chain risk label is usually reserved for foreign adversaries, such as China’s Huawei. Anthropic has filed a lawsuit challenging the designation.