Conceptual diagram of AI Photo: VCG
A Bloomberg report on Monday claimed that OpenAI, Anthropic PBC and Alphabet Inc's Google have begun working together to "clamp down" on Chinese competitors accused of "extracting results from cutting-edge US AI models to gain an edge in the global AI race."
The move has drawn rebuttals from Chinese experts and industry observers, who said the action reflects anxiety over China's rapid progress in open-source AI and its impact on US tech hegemony.
Bloomberg cited people familiar with the matter as saying that these firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service.
According to Bloomberg, the rare collaboration among the three competitors underscores US firms' concerns that some users - especially those from China - may be developing lower-cost imitations of commercial products and, in the process, creating potential national security risks.
OpenAI confirmed it is part of the information-sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to the US Congress on the practice, in which it accused Chinese firm DeepSeek of trying to "free-ride on the capabilities developed by OpenAI and other US frontier labs," Bloomberg reported.
Feng Haoqin, a research fellow at Beijing-based Think Tank Fourth Wave Technology, told the Global Times that distillation is a technique that uses an earlier "teacher" AI model to train a newer "student" model. A trained student model can reproduce the functions of the earlier system and is usually much cheaper than building an original model from scratch.
Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies, Feng said. However, the legal boundaries of model distillation remain unclear, and mutual distillation among US companies is widespread, Feng added.
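The teacher-student setup Feng describes can be sketched in a few lines. The toy "teacher" below (a fixed logistic classifier with made-up weights) is purely illustrative, and the "student" is fitted only to the teacher's soft outputs, never to ground-truth labels or the teacher's internals:

```python
import math
import random

# Toy distillation sketch (illustrative weights, not from any real model):
# a fixed "teacher" labels data with soft probabilities, and a "student"
# logistic model is trained on those outputs alone.

def teacher(x):
    # Teacher: sigmoid(3*x - 1); the weights (3, -1) are assumed for the demo
    return 1.0 / (1.0 + math.exp(-(3.0 * x - 1.0)))

random.seed(0)
xs = [random.uniform(-2.0, 2.0) for _ in range(200)]
soft_labels = [teacher(x) for x in xs]  # the teacher's transferred "knowledge"

# Student: logistic model sigmoid(w*x + b), trained by gradient descent
# on cross-entropy against the teacher's soft labels.
w, b = 0.0, 0.0
lr = 1.0
for _ in range(5000):
    gw = gb = 0.0
    for x, t in zip(xs, soft_labels):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - t) * x
        gb += (p - t)
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

# The student should recover parameters close to the teacher's (3, -1)
# without ever seeing the teacher's weights directly.
print(w, b)
```

This is why a distilled student is far cheaper than training from scratch: it inherits the teacher's decision behavior from its outputs alone.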
"US AI giants accused Chinese companies of illegal model distillation but failed to provide evidence," Feng said, adding that the targeting of Chinese firms is driven mainly by market competition concerns rather than so-called national security concerns, as the rapid development of China's large AI models is putting technical and market pressure on their US counterparts.
This challenge, however, stems from Chinese firms' long-standing, relentless investment in R&D and sustained efforts to tackle hard technical problems, and what is decisive is Chinese companies' independent innovation, Feng said.
Regarding the training data of its DeepSeek-V3-Base model, the Chinese startup said in a paper and public statements published in January 2025 that the data came from ordinary web pages and e-books and did not contain any synthetic data, according to a Yicai report in September.
"For the training data of DeepSeek-V3-Base, we exclusively use plain web pages and e-books, without incorporating any synthetic data... We did not intentionally include synthetic data generated by OpenAI during the pre-training cooldown phase; all data used in this phase were naturally occurring and collected through web crawling," DeepSeek said, Yicai reported, citing the paper.
This is not the first time that American AI firms have challenged their Chinese peers. On February 23, Anthropic accused China's DeepSeek, Moonshot, and MiniMax of carrying out "industrial-scale distillation attacks" on its Claude model, elevating the matter to the level of "national security" and claiming the distilled models could be used for malicious cyber activities, disinformation campaigns and mass surveillance.
In a press note, Anthropic acknowledged that distillation is a widely used and legitimate training method; frontier AI labs, for example, routinely distill their own models to create smaller, cheaper versions for their customers. But it also claimed that distillation can be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently.
Ma Jihua, a veteran industry analyst, told the Global Times on Tuesday that the reported move is essentially a collective action by US companies to defend their commercial interests in the face of challenges to their technological dominance. The targeting of Chinese AI firms is itself "evidence of the progress and strength" of China's AI sector, he said.
US companies' accusations are a direct expression of technological-hegemony anxiety. In fact, so-called distillation techniques are an inevitable part of AI's development, Ma said. US companies also benefit from the fruits of China's open-source achievements and technological innovations.
The concept of distillation was first formally introduced in a 2015 paper by Geoffrey Hinton, often hailed by media outlets as the "godfather of AI." Distillation means extracting knowledge from a larger language model to train a smaller one, Tian Feng, former dean of SenseTime's Intelligence Industry Research Institute, told the Global Times on Tuesday. "The essence of distillation is knowledge transfer, not wholesale copying of the architecture," Tian said.
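Hinton's 2015 paper proposed softening the teacher's output distribution with a "temperature" before the student learns to match it. A minimal sketch of that temperature-scaled softmax, using made-up logits rather than outputs from any real model:

```python
import math

def softmax_with_temperature(logits, T=1.0):
    # Hinton-style distillation: dividing logits by a temperature T > 1
    # flattens the distribution, exposing the teacher's information about
    # how the non-top classes relate to one another.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [6.0, 2.0, -1.0]  # hypothetical teacher outputs

hard = softmax_with_temperature(teacher_logits, T=1.0)
soft = softmax_with_temperature(teacher_logits, T=4.0)

# At T=1 the top class dominates the distribution; at T=4 the student
# sees far more of the relative structure among the other classes.
print([round(p, 3) for p in hard])
print([round(p, 3) for p in soft])
```

The student is then trained to match these softened targets, which is the "knowledge transfer" Tian describes: behavior is imitated, while the teacher's architecture and weights are never copied.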
In machine learning, distillation is a process where outputs from a large, pre-trained model are used to train another, usually smaller model to exhibit similar capabilities, Tian noted, adding that successful Chinese open‑source AI models such as DeepSeek‑R1, MiniMax M2.7 and Qwen 3.6 Plus demonstrate that combinations of high‑quality data and efficient algorithms can sometimes substitute for large compute budgets, challenging the business model of the expensive closed systems of their US counterparts.