
AI and the national security calculus
The Hindu
The developments in the Anthropic case have serious implications for AI development and the national security calculus worldwide
Anthropic, an American Artificial Intelligence (AI) lab, is asking for three Chinese AI labs (DeepSeek, MoonshotAI, and MiniMax) to be treated as national security threats. The AI models of Anthropic and other American labs have also reportedly been used by the U.S. military in the Iran attacks to fast-track the “kill chain” from target identification to legal approval and strike.
The Pentagon, for its part, has labelled Anthropic a “supply chain” risk, a designation usually reserved for foreign adversaries, after the company raised concerns about how its technology is being used in military operations. That decision is now being challenged in court. These developments, unfolding over the course of a few weeks, have serious implications for AI development and the national security calculus worldwide.
The Chinese AI labs have been accused of distilling frontier models from American AI companies. In a nutshell, distillation involves using a stronger AI model’s outputs to train a weaker model. The distillation attempts were sophisticated, using deceptive techniques to mask the identity and intent of the distillers. Anthropic claims that this happened on an industrial scale: “16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions”.
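The core of distillation can be illustrated with a toy example. The sketch below is purely illustrative, assuming a hypothetical “teacher” classifier and a simple logistic-regression “student”; it is not a description of Anthropic’s or DeepSeek’s actual systems. The point it makes is the one in the paragraph above: a student trained only on a teacher’s outputs can recover the teacher’s behaviour without ever seeing its internals.

```python
# Minimal sketch of model distillation: a "teacher" model's outputs
# become training targets for a smaller "student" model.
# The teacher, the toy task, and all parameters are assumptions
# made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": a fixed linear classifier whose weights
# are hidden from the distiller.
TEACHER_W = np.array([2.0, -3.0, 1.0])

def teacher_probs(X):
    """Soft probabilities returned by the teacher when queried."""
    return 1.0 / (1.0 + np.exp(-X @ TEACHER_W))

# Step 1: the distiller queries the teacher on many inputs
# (the "exchanges" collected at scale).
X = rng.normal(size=(5000, 3))
soft_labels = teacher_probs(X)

# Step 2: fit a student to the teacher's soft labels
# (logistic regression trained by gradient descent).
w = np.zeros(3)
for _ in range(2000):
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (preds - soft_labels) / len(X)
    w -= 1.0 * grad

# The student's weights now approximate TEACHER_W, even though
# the distiller only ever observed the teacher's outputs.
print(np.round(w, 2))
```

Scaled up from a three-weight toy to frontier models, this is why access to a model’s outputs, not its weights, is the resource being contested.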
Generative AI is often equated with nuclear technologies, with the aim of containing the proliferation of the technology. However, it is a dual-use general-purpose technology that is more comparable to semiconductors than nuclear weapons. Unlike nuclear technologies, where governments drive research and development efforts, cutting-edge AI research happens in the private sector for civilian applications. It just so happens that the same technology also has military applications.
Nuclear non-proliferation works because fissile material is rare, controlled and traceable. The same is not true for AI models, which are, at bottom, mathematics. That DeepSeek was able to achieve performance comparable to frontier models at a fraction of the cost, after export controls were imposed, shows that such restrictions are not effective. The nuclear narrative asks us to treat querying an AI model as equivalent to weapons proliferation.
Anthropic’s argument that a distilled model will be used less responsibly rests on weak foundations. Models from frontier American AI labs such as Anthropic, OpenAI, Google and xAI could be used by the U.S. military for applications such as surveillance, cyberwarfare and lethal autonomous weapons systems. In fact, when Anthropic recently raised concerns about the kinds of uses its models were put to, it faced the threat of being removed from defence systems and designated a “supply chain risk”. Its rival OpenAI, however, has accepted a permissive contract for military uses, highlighting a race to the bottom given the competitive pressure to serve government clients. When their own models are being put to such uses, the argument that only distilled models will lack guardrails collapses.