
Anthropic accuses DeepSeek and 2 other Chinese AI firms of stealing its data, people on the internet say it is fair
India Today
Anthropic has accused three Chinese AI firms, including DeepSeek, of distilling data from its Claude chatbot to train and improve their own AI models. Surprisingly, instead of finding support, Anthropic found itself on the receiving end of criticism, as people accused the AI firm of having done the same thing earlier.
The tension, a cold war of sorts between Chinese AI companies and their US counterparts, has been building for some time. Anthropic, the US giant behind Claude AI, is at the forefront of it. On Tuesday, the company opened a new front in this war as it accused three leading Chinese AI companies, DeepSeek, MiniMax and Moonshot AI, of stealing data from Claude to train their AI models.
In a blog post, Anthropic said that the Chinese companies used thousands of accounts to carry out conversations with Claude spanning millions of tokens. The information gleaned from these conversations was then used to fine-tune Chinese AI models. Essentially, Anthropic is saying that the Chinese AI models are cheating, a bit like a student who peeks into a friend's answer sheet during an exam and copies the answers. In the AI world this process is called distillation: training a smaller or newer AI model on the output of a superior AI model. Companies like Anthropic and OpenAI do this internally with their own models.
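At a toy level, the distillation loop works something like the sketch below. All names here are illustrative, and the "student" simply memorises answers; real distillation fine-tunes a neural network's weights on a frontier model's responses rather than storing them verbatim:

```python
# Illustrative sketch of distillation: query a stronger "teacher" model,
# collect its outputs, then train a "student" model on those outputs.

def teacher(prompt: str) -> str:
    """Stand-in for a powerful model such as Claude: maps prompts to answers."""
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return answers.get(prompt, "I don't know")

def build_distillation_set(prompts):
    """Collect (prompt, answer) pairs from the teacher -- the training data."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """Stand-in student model that 'trains' by memorising teacher outputs."""
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        for prompt, answer in pairs:
            self.memory[prompt] = answer

    def answer(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know")

# Run many prompts through the teacher, then fit the student on the results.
dataset = build_distillation_set(["capital of France?", "2 + 2?"])
student = Student()
student.train(dataset)
```

After training, the student reproduces the teacher's behaviour on those prompts without ever seeing the teacher's internals, which is why Anthropic frames the alleged mass querying of Claude as capability extraction rather than ordinary usage.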
In a statement released this week, Anthropic alleged that the Chinese companies created around 24,000 fraudulent accounts that together generated more than 16 million exchanges with Claude. The goal, according to Anthropic, was to extract high-quality outputs in areas like coding, reasoning and tool use, strengths the company says define its model. “We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models,” the company wrote.
While Anthropic acknowledges that distillation can be a legitimate technique, it argues that the practice is illicit when used to copy a powerful AI model without permission, and warns that copies may shed the original's safety measures. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely,” Anthropic says.
While Anthropic is levelling serious allegations at the Chinese AI firms, the internet’s reaction has been anything but sympathetic.
Social media quickly filled with memes suggesting that the entire AI industry relies on massive data scraping. Many users pointed to Anthropic’s own legal troubles, referencing the company's recent agreement to pay at least $1.5 billion to settle a lawsuit filed by authors who accused it of using pirated books to train its Claude AI chatbot.
