Rivals OpenAI, Anthropic PBC, and Alphabet Inc.’s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US AI models to gain an edge in the global AI race.
The firms are sharing information through the Frontier Model Forum, an industry non-profit that the three tech firms founded with Microsoft Corp. in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter.
The rare collaboration underscores the severity of a concern raised by US AI firms that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings who described them on the condition of anonymity.
OpenAI confirmed it’s part of the information sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, where it accused Chinese firm DeepSeek of trying to “free-ride on the capabilities developed by OpenAI and other US frontier labs”.
Google, Anthropic, and the Frontier Model Forum declined to comment.
Distillation is a technique in which an older "teacher" AI model is used to train a newer "student" model that replicates the capabilities of the earlier system, often at a much lower cost than building an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technology.
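The teacher–student mechanism described above can be sketched in a few lines of code. What follows is a toy illustration only, not any lab's actual pipeline: the "teacher" is a hypothetical fixed linear scorer, and the student is trained to match the teacher's temperature-softened output distribution, the standard soft-target recipe for distillation.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical "teacher": a fixed linear scorer over 3 classes (stand-in for a large model).
TEACHER_W = [[2.0, -1.0], [0.5, 1.5], [-1.0, 0.5]]

def teacher_logits(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in TEACHER_W]

def cross_entropy(p_teacher, p_student):
    """Distillation loss: cross-entropy between teacher and student soft labels."""
    return -sum(t * math.log(s + 1e-12) for t, s in zip(p_teacher, p_student))

def distill(steps=200, lr=0.5, temperature=2.0, seed=0):
    """Train a small linear student on the teacher's soft targets; return final avg loss."""
    rng = random.Random(seed)
    data = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(50)]
    student_w = [[0.0, 0.0] for _ in range(3)]
    for _ in range(steps):
        for x in data:
            soft = softmax(teacher_logits(x), temperature)
            s_logits = [sum(w * xi for w, xi in zip(row, x)) for row in student_w]
            p = softmax(s_logits, temperature)
            # Gradient of cross-entropy w.r.t. the student's logits is (p - soft).
            for k in range(3):
                g = p[k] - soft[k]
                for j in range(2):
                    student_w[k][j] -= lr * g * x[j]
    # Average distillation loss over the data after training.
    total = 0.0
    for x in data:
        s_logits = [sum(w * xi for w, xi in zip(row, x)) for row in student_w]
        total += cross_entropy(softmax(teacher_logits(x), temperature),
                               softmax(s_logits, temperature))
    return total / len(data)
```

The point of the sketch is that the student never sees the teacher's weights or training data, only its outputs, which is why labs can detect adversarial distillation chiefly through unusually large volumes of queries against their APIs.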
Yet distillation has proved controversial when used by third parties, particularly those in adversarial nations such as China or Russia, to replicate proprietary work without authorization. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen.
Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use. That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they’ve spent on data centres and other infrastructure.
Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek’s surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese startup had improperly exfiltrated large amounts of data from the US firm’s models to create R1, Bloomberg previously reported.
In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot.
Information-sharing by US AI companies about adversarial distillation echoes a standard practice in the cybersecurity industry, where firms regularly swap data on attacks and adversaries’ tactics as a way to strengthen network defenses. By working together, the AI firms are similarly seeking to more effectively detect the practice, identify who’s responsible and try to prevent unauthorized users from succeeding.
Trump administration officials have signaled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by President Donald Trump last year called for the creation of an information sharing and analysis center, in part for this purpose.
For now, information sharing on distillation remains limited due to AI companies’ uncertainty about what can be shared under existing antitrust guidance to counter the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said.
Distillation has ranked as a top concern among American AI developers since DeepSeek rattled global markets in early 2025 with its R1 release. Highly capable open-source models continue to proliferate in China, and many in the industry are watching closely for a major upgrade to DeepSeek’s model.
Last year, Anthropic blocked Chinese-controlled companies from using its Claude chatbot model, and in February it identified three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) as illicitly extracting the model's capabilities via distillation. This year, Anthropic said the threat "extends beyond any single company or region" and poses a national security risk, since distilled models often lack the safety guardrails designed to prevent bad actors from using AI tools for malicious activities.
Google has published a blog post saying it identified an increase in model extraction attempts. The three US AI labs have yet to provide evidence showing how much of China's model innovation relies on distillation, but they note that the prevalence of attacks can be measured by the volume of large-scale data requests.
