Anthropic Accuses Chinese AI Labs of Unauthorized Data Extraction Amidst U.S. AI Chip Export Debates
In a recent development that underscores the intensifying global competition in artificial intelligence (AI), Anthropic, a leading AI research organization, has accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of creating over 24,000 fraudulent accounts to exploit its AI model, Claude. This alleged activity, involving more than 16 million interactions, was reportedly aimed at enhancing the Chinese firms’ own AI systems through a method known as distillation.
Understanding Distillation in AI Development
Distillation is a prevalent technique in AI training where a smaller model learns to replicate the behavior of a larger, more complex model. While this approach is typically used internally to develop efficient models, it can be misused by competitors to replicate and integrate proprietary features from other organizations’ AI systems. In this case, Anthropic claims that the Chinese labs targeted Claude’s advanced capabilities, including agentic reasoning, tool utilization, and coding proficiency.
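In its classic form (often called knowledge distillation), the student model is trained to match the teacher's softened output distribution rather than hard labels. The minimal sketch below is illustrative only: the function names are our own, and in the API-based extraction Anthropic describes, the "teacher" signal would be Claude's returned text rather than raw logits, which an API does not expose.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a higher
    temperature 'softens' the distribution, exposing more of the
    teacher's relative preferences between classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    distributions. Minimizing this pushes the student to imitate the
    teacher's full output distribution, not just its top prediction."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# The loss is zero when the student matches the teacher exactly,
# and positive otherwise.
teacher = [3.0, 1.0, 0.2]
assert abs(distillation_loss(teacher, teacher)) < 1e-9
assert distillation_loss(teacher, [0.1, 2.5, 0.3]) > 0.0
```

In practice this loss would be averaged over a large corpus of teacher outputs and minimized by gradient descent on the student's parameters, which is why distillation at the scale Anthropic alleges requires both massive query volume and substantial training compute.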
Detailed Allegations Against Chinese AI Labs
– DeepSeek: Anthropic identified over 150,000 interactions from DeepSeek, focusing on enhancing foundational logic and alignment, particularly in developing censorship-resistant alternatives for sensitive policy-related queries.
– Moonshot AI: This firm allegedly engaged in more than 3.4 million exchanges aimed at improving agentic reasoning, tool usage, coding, data analysis, computer-use agents, and computer vision.
– MiniMax: With approximately 13 million interactions, MiniMax reportedly concentrated on advancing agentic coding, tool utilization, and orchestration. Notably, Anthropic observed that MiniMax redirected nearly half of its traffic to extract capabilities from the latest Claude model upon its release.
Context of U.S. AI Chip Export Policies
These allegations emerge amidst ongoing debates in the United States regarding the enforcement of export controls on advanced AI chips—a policy designed to limit China’s progress in AI development. In January 2026, the U.S. government permitted companies like Nvidia to export advanced AI chips, such as the H200, to China. Critics argue that easing these export controls could bolster China’s AI computing capabilities at a pivotal moment in the global race for AI supremacy.
Implications of Distillation Attacks
Anthropic emphasizes that data extraction at the scale alleged for DeepSeek, MiniMax, and Moonshot AI requires access to advanced computing resources. The organization argues that such distillation attacks reinforce the case for stringent export controls, since restricting access to advanced chips limits both direct model training and the scale of unauthorized distillation efforts.
Industry and National Security Concerns
The AI industry has expressed growing concern over the misuse of distillation techniques. Earlier this month, OpenAI accused DeepSeek of employing distillation to replicate its products. DeepSeek gained attention a year ago with the release of its open-source R1 reasoning model, which nearly matched the performance of models from leading American AI labs at a fraction of the cost. The company is expected to soon unveil DeepSeek V4, a model reportedly capable of outperforming Anthropic’s Claude and OpenAI’s ChatGPT in coding tasks.
Anthropic warns that distillation not only threatens to undermine American AI leadership but also poses significant national security risks. The organization highlights that AI systems developed through unauthorized distillation may lack essential safeguards, potentially enabling malicious actors to use AI for developing bioweapons, conducting cyberattacks, or engaging in mass surveillance. This risk is particularly concerning if such models are open-sourced, allowing for widespread access without adequate safety measures.
Call for Coordinated Response
In response to these challenges, Anthropic is investing in defenses to make distillation attacks harder to execute and easier to detect. However, the organization stresses that effectively mitigating unauthorized data extraction, and the proliferation of AI systems lacking proper safeguards, will require a coordinated response across the AI industry, cloud service providers, and policymakers.