Anthropic's Distillation Claims: IP Theft and Cybersecurity Risks for UK AI Firms

AI Models & Capabilities · 25 February 2026 · 6 min read

When 16.4 million deceptive queries are used to extract the 'reasoning DNA' of an AI like Claude, the traditional definition of IP theft fails. For UK law firms, this industrial-scale distillation by firms like DeepSeek represents a new category of cybersecurity risk: the commoditisation of proprietary logic.

What Anthropic Has Alleged

Anthropic published detailed findings this month identifying three industrial-scale campaigns to extract capabilities from Claude. The named parties are DeepSeek, Moonshot AI, and MiniMax, all Chinese AI laboratories. Their method involved creating approximately 24,000 deceptive accounts and generating more than 16 million exchanges with Claude. Anthropic attributed the campaigns using IP address clusters, infrastructure metadata, and behavioural patterns in query sequences.

The technique is called model distillation. A weaker model is trained on the outputs of a stronger one, absorbing its reasoning style, chain-of-thought structure, coding logic, and tool-use patterns. Done openly between consenting parties, distillation is legitimate and well-documented in AI research. Done covertly, at this scale, using fake accounts in breach of terms of service and regional access restrictions, it is something else. What exactly, under UK law, is the more interesting question.

The targets were specific: agentic reasoning, structured tool use, coding capability, and chain-of-thought data. These are precisely the features that make professional-grade AI useful for legal work, financial analysis, and technical advisory services. The campaign was not opportunistic. It was systematic extraction of competitive advantage.
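In mechanical terms, the technique described above can be sketched in a few lines. What follows is a minimal illustrative toy of the standard published distillation objective (a student model trained against a teacher's softened output distribution); the function names, logits, and training loop are all hypothetical, not a description of any named lab's pipeline.

```python
# Toy sketch of model distillation: a "student" is nudged toward the
# output distribution of a "teacher". All names and numbers here are
# illustrative; this is the textbook formulation, not any lab's pipeline.
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(-np.sum(p_teacher * np.log(p_student + 1e-12)))

# Toy training loop: the gradient of the loss w.r.t. the student logits is
# (p_student - p_teacher) / temperature, so plain gradient descent pulls
# the student's distribution onto the teacher's.
teacher_logits = np.array([2.0, 0.5, -1.0])
student_logits = np.zeros(3)
for _ in range(2000):
    grad = (softmax(student_logits, 2.0) - softmax(teacher_logits, 2.0)) / 2.0
    student_logits -= 0.5 * grad
```

In practice a commercial API exposes sampled text rather than raw logits, so covert distillation of the kind alleged trains the student to imitate collected responses; the objective above is the idealised version of the same idea.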

Computer Misuse Act 1990

The Computer Misuse Act 1990 makes it an offence under section 1 to access a computer system without authorisation or to exceed authorised access. Using approximately 24,000 fraudulently created accounts to access Anthropic's API infrastructure might, on the face of it, satisfy the actus reus of unauthorised access.

The Act has extraterritorial reach under section 4: an offence can be prosecuted wherever there is a significant link with domestic jurisdiction, such as the accused, or the targeted computer, being in the United Kingdom. That matters here. UK-based entities using similar techniques against AI providers would be fully exposed. Overseas actors are harder to prosecute directly, but UK affiliates, resellers, or infrastructure providers could face secondary liability if they knowingly facilitated access.

The National Cyber Security Centre has consistently described API abuse at scale as a cybersecurity incident warranting investigation, not merely a terms of service dispute. The framing matters. Regulators and prosecutors treat these differently, and the 1990 Act gives the Crown a statutory hook that civil proceedings alone do not.

Trade Secrets and Intellectual Property

The Trade Secrets (Enforcement, etc.) Regulations 2018, which transposed EU Directive 2016/943 into UK law before Brexit, define a trade secret as information that is secret, has commercial value because of its secrecy, and has been subject to reasonable steps to keep it secret. Anthropic's model weights, training pipelines, and reasoning architectures almost certainly qualify.

The complication is that what was extracted here were outputs, not source code. The competitors queried Claude and collected its responses. They did not access Anthropic's systems directly in the traditional sense of exfiltrating files. Whether systematically harvesting outputs at 16 million exchanges constitutes misappropriation of a trade secret under the 2018 Regulations requires analysis of whether the outputs themselves are protectable, or whether the protectable asset is the underlying model that generated them.

English common law on breach of confidence offers a complementary route. The three-part test in Coco v A N Clark (Engineers) Ltd (information with the necessary quality of confidence, imparted in circumstances importing an obligation of confidence, and unauthorised use to the detriment of the party communicating it) has been applied broadly by the courts. Accepting terms of service before accessing an API arguably imports an obligation of confidence over the proprietary logic being queried. That argument has not yet been tested on these facts, but it is not entirely fanciful.

National Security and Export Control Implications

The UK government has been tightening AI export controls in alignment with US policy, including restrictions on advanced semiconductor exports under the Export Control Order 2008 as amended. The distillation campaigns described by Anthropic are a direct attempt to circumvent those controls. If a lab cannot access the chips needed to train a frontier model, training on the outputs of one achieves a comparable result.

UK firms advising on AI deployment, building products on top of foundation models, or operating in regulated sectors need to understand this context. The risk is not only legal exposure for themselves. It is the proliferation of models whose capabilities were obtained without the safety testing and alignment work that went into the originals. A distilled model inheriting Claude's reasoning patterns without inheriting Anthropic's safety infrastructure is a different product. That distinction matters under the AI Safety Institute's evaluation frameworks and will matter increasingly as the EU AI Act's influence reaches UK procurement decisions.

What This Means

If your firm builds on a foundation model API, or advises clients who do, three immediate questions are worth asking.

First, do your commercial contracts with AI providers address what happens if the model you are using was trained, in part, on improperly extracted data? Indemnity provisions in most standard enterprise AI agreements do not currently contemplate this scenario with any precision.

Second, if you are developing proprietary AI capabilities in-house, are those capabilities protected as trade secrets under the 2018 Regulations? That requires documented steps: access controls, confidentiality policies, restricted distribution of training data and model architecture details. Many firms have the underlying assets; fewer have the paperwork to establish legal protection.

Third, and more practically, are you monitoring your own API usage for anomalous patterns? Anthropic detected these campaigns through behavioural analysis of query sequences. If you operate an AI product with external API access, the same threat applies to your proprietary fine-tuned models. Security teams that have not reviewed their AI infrastructure in the last six months should do so now.
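To make the monitoring point concrete, here is a minimal, hypothetical sketch of one behavioural signal: accounts with high query volume and machine-regular spacing between requests. The function name, thresholds, and data shapes are illustrative assumptions; production detection (including whatever Anthropic actually used) is far richer than this.

```python
# Hypothetical sketch of behavioural API monitoring: flag accounts whose
# query volume is high and whose inter-request timing is machine-regular.
# Thresholds, names, and data shapes are illustrative assumptions only.
from statistics import mean, pstdev

def flag_suspicious(accounts, min_requests=10_000, max_gap_cv=0.1):
    """accounts maps account id -> sorted request timestamps (seconds).

    Returns the ids with at least min_requests calls whose inter-request
    gaps have a coefficient of variation below max_gap_cv, i.e. near-
    constant pacing, a common automation signature.
    """
    flagged = []
    for acct_id, timestamps in accounts.items():
        if len(timestamps) < max(min_requests, 2):
            continue
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        avg = mean(gaps)
        cv = pstdev(gaps) / avg if avg > 0 else 0.0
        if cv < max_gap_cv:
            flagged.append(acct_id)
    return flagged
```

A real pipeline would combine timing signals with content-level ones, such as systematic probing of chain-of-thought behaviour across capability areas, which is closer to the query-sequence analysis Anthropic describes.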

The Bigger Picture

Anthropic's disclosure is significant not because distillation is new, but because the scale and attribution are now documented and public. Three named laboratories, 16 million exchanges, industrial organisation. This is not grey-hat experimentation. It is deliberate extraction of competitive advantage from a commercial product by entities that had been banned from accessing it.

For UK AI firms, the lesson is not to panic. It is to take seriously that proprietary AI capability is now an asset with the same characteristics as any high-value trade secret, and to protect it accordingly. The legal frameworks exist. They are imperfect, and cross-border enforcement is genuinely difficult. But firms that have done nothing to document, protect, and monitor their AI assets will find themselves without legal recourse when something goes wrong.

Anthropic has called for an industry-wide response. That response starts with individual firms getting their own houses in order.

If you want to assess how your AI infrastructure measures up against these risks, get in touch.




Chris Jeyes

Barrister & Leading Junior

Founder of Lextrapolate. 20+ years at the Bar. Legal 500 Leading Junior. Helping lawyers and legal businesses use AI effectively, safely and compliantly.

AI security · model distillation · intellectual property · Claude · Anthropic · UK law · trade secrets · cybersecurity · AI strategy