
Anthropic at odds with Pentagon over safety guardrails after Maduro capture
India Today
AI startup Anthropic is at risk of being cut off by the Pentagon. As per reports, the US Department of War is frustrated with the safety restrictions on Anthropic's Claude AI model.
Anthropic is reportedly at odds with the Pentagon over safety guardrails on its AI models, and the US Department of War is said to be reviewing its partnership with the AI startup. This comes just days after the US government claimed that it had used Claude in its mission to capture Venezuelan President Nicolás Maduro.
As per Axios, the Pentagon is considering labelling Anthropic a ‘supply chain risk.’ If the designation is adopted, all defense contractors working with the US government would need to cut ties with Anthropic, potentially reshaping the AI landscape for military suppliers.
This follows months of discussions between the two sides over how the military can use Claude. At the heart of the dispute are Anthropic’s restrictions against using its technology to develop fully autonomous weapons or to facilitate mass surveillance of American citizens, limits that Pentagon officials have argued could hamper military flexibility.
Pentagon spokesperson Sean Parnell told Axios, "The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight." The sticking point centres on the Pentagon’s demand to use AI tools for "all lawful purposes," while Anthropic remains cautious about potential misuse.
Anthropic’s position is shaped by concerns over the unintended consequences of advanced AI, aiming to avoid enabling weaponry that fires without human intervention or widespread domestic surveillance.
A company spokesperson told Axios, "We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right."