Anthropic (ANTHRO) on Monday accused several Chinese artificial intelligence laboratories of attacking its models and stealing its data to improve their own AI models.
“We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax,” Anthropic said. “These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.”
Distillation, in which a large AI model is used to train or mimic a smaller, less complex model, is a widely accepted practice in the industry. However, Anthropic accuses the aforementioned companies, including DeepSeek (DEEPSEEK), of bypassing safeguards and placing model capabilities where they could be applied to military, intelligence, or surveillance uses.
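For readers unfamiliar with the mechanics, distillation conventionally trains a smaller "student" model to match the softened output distribution of a larger "teacher" model. The sketch below illustrates the standard soft-target loss only; function names and the example logits are illustrative, and it does not reflect any specific lab's pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature yields a
    # softer distribution that exposes more of the teacher's "knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the quantity the student minimizes during distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss, driving it to imitate.
teacher = [2.0, 1.0, 0.1]
aligned = distillation_loss(teacher, [2.0, 1.0, 0.1])
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

In the accusations above, the teacher's outputs would be responses harvested from Claude via API exchanges rather than raw logits, but the training principle is the same.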
Delving deeper, Anthropic said that DeepSeek was responsible for more than 150,000 exchanges, with the operations targeting reasoning capabilities across different tasks; rubric-based grading tasks that made Anthropic's Claude function as a reward model; and the creation of censorship-safe alternatives for certain sensitive queries.
Moonshot AI, which has received funding from Alibaba (BABA), among others, was accused of generating more than 3.4 million exchanges that targeted agentic reasoning, coding and data analysis, computer-use agent development, and computer vision.
MiniMax, which went public in Hong Kong last month, was accused of generating more than 13 million exchanges that targeted agentic coding tool use and orchestration.
“These attacks are growing in intensity and sophistication,” Anthropic added. “Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community.”
DeepSeek, Moonshot AI, and MiniMax did not immediately respond to a request for comment from Seeking Alpha.