China's 'Transfer Station' Economy: How a Grey Market Undermines Anthropic's Geoblocking
Key Takeaways
- A sophisticated grey market ecosystem operates openly on GitHub, Taobao, Twitter, and Telegram, offering Claude tokens at 10% of official prices to evade Chinese access restrictions.
- Access control measures (geoblocking, KYC, biometric verification) have inadvertently created a parallel evasion infrastructure involving SMS farms and biometric harvesting that extends beyond AI governance into criminal markets.
- The proxy economy reveals blind spots in frontier AI safety frameworks designed primarily around geopolitical containment rather than around misuse, provider traceability, and harm prevention across supply chains.
Summary
A thriving grey market for Claude API access has emerged in China, operating through intermediary services known as "transfer stations" (中转站) that allow developers to access Anthropic's models at approximately 10% of official pricing. The market extends far beyond frontier AI laboratories to include university researchers, students, developers, and hobbyists—a much broader user base than official government warnings about "industrial-scale distillation campaigns" might suggest. Every access control measure Anthropic has implemented—geoblocking, phone verification, credit card requirements, and live biometric KYC checks—has spawned a corresponding evasion infrastructure, including SMS farms and biometric harvesting operations that exploit the supply chain. The transfer station economy has become so normalized that Claude access is sold openly on Chinese e-commerce platforms like Taobao, and Singapore's "surprising" lead in per-capita Claude token consumption is widely understood to reflect its role as a routing point for Chinese users circumventing access restrictions.
Editorial Opinion
This report exposes a fundamental tension in US AI governance strategy: access-based controls assume that restricting where models can legally be used will prevent misuse, but economic incentives and technological sophistication have created a thriving underground market that may actually concentrate risk. Anthropic's rigorous geoblocking—the strongest of any frontier lab—has been thoroughly circumvented, raising questions about whether similar barriers (compute export controls, chip sales restrictions) will fare any better. Most concerning, the evasion infrastructure itself (biometric harvesting, fraudulent account networks) now poses independent harms that extend beyond AI safety into exploitation and privacy risks. The policy community would be wise to acknowledge that frontier AI governance requires mechanisms beyond geography-based access control.


