White House Accuses China of 'Industrial-Scale' AI Model Distillation, Commits to Sharing Intelligence
Key Takeaways
- The White House has escalated its response to AI model distillation, publicly accusing China of conducting 'industrial-scale' theft of US AI models—a threat that OpenAI and Anthropic have been reporting since February
- Chinese AI labs used fraudulent accounts, jailbreaking techniques, and commercial proxy services to extract millions of interactions from US models, using the knowledge to train cheaper rival models
- The US government is committing to share threat intelligence with American AI companies and explore legislative and accountability measures, including the proposed Deterring American AI Model Theft Act
Summary
The White House Office of Science and Technology Policy released a policy memo on Wednesday accusing China of conducting 'industrial-scale' theft of American artificial intelligence models through distillation—a technique in which a frontier AI model is queried thousands of times and its responses are used as training data for cheaper rival models. Michael Kratsios, the OSTP director, said the US 'has evidence that foreign entities, primarily in China, are running industrial-scale distillation campaigns to steal American AI' and committed to taking action to protect American innovation.
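The distillation workflow described above can be sketched in a few lines. This is a purely illustrative outline, not any lab's actual pipeline: `teacher_model` is a hypothetical stand-in for a frontier model's API, and the "training dataset" step is simplified to collecting prompt/response pairs that a rival student model would later be fine-tuned on.

```python
# Illustrative sketch of the distillation pipeline described in the article.
# All names are hypothetical; no real model API is called here.

def teacher_model(prompt: str) -> str:
    """Stand-in for a frontier model endpoint (hypothetical).

    In a real distillation campaign this would be an API call to the
    target model, possibly routed through proxies or fraudulent accounts.
    """
    return f"model answer to: {prompt}"

def collect_distillation_data(prompts: list[str]) -> list[dict[str, str]]:
    """Query the 'teacher' at scale and record prompt/response pairs.

    The resulting dataset is what a cheaper 'student' model would be
    fine-tuned on, transferring the teacher's capabilities.
    """
    return [{"prompt": p, "response": teacher_model(p)} for p in prompts]

# Each query yields one supervised training example for the student.
dataset = collect_distillation_data(["What is 2+2?", "Summarize this article."])
print(len(dataset))  # → 2
```

The economics follow directly from this loop: the cost to the distiller is only the per-query price of the teacher API, while the capability transferred reflects the teacher's full training investment.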
The memo builds on months of allegations from US AI companies. In February, OpenAI sent a formal memo to Congress accusing DeepSeek of distilling its models using obfuscated proxies and jailbreaking techniques to circumvent access restrictions. The same month, Anthropic published more detailed evidence identifying three Chinese laboratories—DeepSeek, MiniMax, and Moonshot AI—that together created approximately 24,000 fraudulent accounts generating over 16 million exchanges with Claude, targeting foundational logic, alignment techniques, agentic reasoning, and tool use.
The government committed to sharing threat intelligence with US AI companies about foreign distillation campaigns and exploring measures to hold perpetrators accountable. By early April, OpenAI, Anthropic, and Google had begun sharing distillation threat intelligence through the Frontier Model Forum, a framework modeled on established cyber-threat-sharing practices. The policy memo arrives three weeks before a planned Trump-Xi summit in Beijing on May 14, positioning AI technology protection as both a national security imperative and a potential negotiating point.