Chinese tech giants are strategically moving AI model training offshore to skirt U.S. chip bans, a key tactic in navigating global supercomputer politics. Data centers in Singapore, packed with leased Nvidia H100 GPUs, now power AI models like Alibaba’s Qwen and ByteDance’s Doubao, according to a Semafor report. This approach keeps high-performance chips outside mainland China while still providing the computing power needed to remain competitive. The strategy gained traction after the U.S. restricted exports of Nvidia’s China-market H20 chip in April 2025. By relocating large language model (LLM) training to neutral locations like Singapore, these companies formally comply with U.S. export controls while still accessing top-tier GPUs, creating a legal gray area that is reshaping the global AI landscape.
The Strategic Importance of Offshore AI Training
This offshore strategy is critical because it allows Chinese firms to access powerful, restricted Nvidia GPUs by leasing them in neutral countries. This bypasses U.S. export controls, overcomes domestic chip shortages, and navigates data laws, ensuring they remain competitive in the global AI race.
Offshore data centers provide a solution to three major challenges: chip scarcity, demanding export licenses, and China’s own data localization laws. Because the hardware remains under foreign ownership, Chinese engineers can simply rent processing time. This arrangement is crucial, as analysts estimate that Nvidia GPUs still power approximately 75% of China’s AI training workloads. Companies like Alibaba and ByteDance have secured thousands of H100 and A100 GPUs abroad for intensive training, reserving domestic chips for less demanding inference tasks. This has enabled major advancements, such as Qwen becoming a top-tier open-source LLM. As reported by the Times of India, chip shipments to these offshore centers surged immediately following the April 2024 U.S. restrictions, highlighting a direct response to the policy shift.
How the Offshore Training Model Works
- Lease Capacity: Secure multi-year contracts with data center operators outside of China.
- Access High-Performance GPUs: Utilize top-tier Nvidia GPUs that are prohibited from being directly exported to China.
- Manage Data Flows: Use anonymized public data for large-scale pre-training offshore, while conducting sensitive fine-tuning with user data on domestic servers to comply with privacy laws.
- Repatriate AI Models: Transfer the completed model weights back to China through encrypted channels for deployment on local or hybrid cloud infrastructure.
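The routing logic implied by the steps above, anonymized public data for offshore pre-training, sensitive user data kept on domestic servers, can be sketched as a toy scheduler. Everything here (the class names, the region labels, the sensitivity flag) is hypothetical and purely illustrative; real pipelines rest on leasing contracts and compliance reviews, not a function call.

```python
from dataclasses import dataclass
from enum import Enum

class Region(Enum):
    OFFSHORE = "singapore"   # leased non-Chinese data center, H100-class GPUs
    DOMESTIC = "mainland"    # domestic servers, reserved for sensitive workloads

@dataclass
class TrainingJob:
    name: str
    stage: str                # "pretrain" or "finetune"
    contains_user_data: bool  # triggers China's data-localization rules

def route(job: TrainingJob) -> Region:
    """Hypothetical routing rule mirroring the pipeline described above:
    large-scale pre-training on anonymized public data runs offshore,
    while any job touching domestic user data must stay in-country."""
    if job.contains_user_data or job.stage == "finetune":
        return Region.DOMESTIC
    return Region.OFFSHORE

jobs = [
    TrainingJob("base-llm-pretrain", "pretrain", contains_user_data=False),
    TrainingJob("chat-finetune", "finetune", contains_user_data=True),
]
for job in jobs:
    print(f"{job.name} -> {route(job).value}")
```

The design choice worth noting is that sensitivity, not cost or capacity, is the primary routing key: the moment user data enters a job, the legal constraint dominates and the job is pinned onshore regardless of where the better hardware sits.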
This operational model protects hardware suppliers from U.S. sanctions, as the restricted chips are never physically imported into China. Simultaneously, it grants Chinese developers full access to Nvidia’s CUDA software ecosystem. This strategy also reduces U.S. regulatory leverage, as companies can shift operations to other regions, like the Middle East, if restrictions are tightened further.
Emerging Regulatory and Technical Challenges
This cross-border strategy is attracting regulatory scrutiny worldwide over concerns about “regulatory arbitrage.” For example, the EU’s AI Act requires clear audit trails, and proposed California legislation (SB 1047) includes steep fines for models without documented origins. Concurrently, Beijing is pushing companies to adopt domestic chips like Huawei’s Ascend, which creates a split-hardware environment that is difficult to optimize. Some firms are hedging their bets: DeepSeek, which stockpiled Nvidia A100s before the ban, now trains domestically while collaborating with Huawei on future Chinese accelerators, preparing for a scenario where offshore options are no longer viable.
The Future of Global AI Compute
In response to shifting U.S. policies, Chinese tech firms are constantly adjusting their procurement strategies, balancing purchases of compliant chips, leased offshore GPU capacity, and investment in domestic hardware. The result is a dynamic, globalized map of AI computation, driven by the relentless need to power next-generation models that keep growing larger.
How are Chinese tech giants bypassing US chip export controls?
Chinese companies including Alibaba and ByteDance now train their flagship models (Qwen, Doubao) in Singapore and Malaysia, leasing capacity from non-Chinese data-center owners.
– The rigs run the same top-tier Nvidia GPUs that Washington bars from export to China proper, but because the chips stay under foreign title they sit outside the direct reach of U.S. curbs.
– Activity accelerated after the October 2023 expansion of U.S. export controls and again following the April 2025 restriction on the H20 chip, which threatened $5.5 billion in stranded Nvidia inventory.
Which regions are hosting these offshore training clusters?
Southeast Asia is the hub of choice: Singapore for its dense fiber, contract clarity and neutrality; Malaysia for cheaper power and land.
– Alibaba and ByteDance are also adding Middle-East footprints, turning the workaround into a global cloud-expansion play rather than a purely defensive move.
– Together these sites already house well over a million export-compliant GPUs – five times the volume of Huawei’s home-grown Ascend series that Beijing promotes as a substitute.
What legal and compliance risks remain?
Even when hardware ownership is foreign, Chinese data-sovereignty rules still forbid moving certain domestic data offshore.
– Models can be pre-trained abroad, but client-specific fine-tuning or sensitive data handling must physically stay inside China, creating a two-stage pipeline that raises audit and IP-leak questions.
– Washington is studying “diffusion-rule” updates that could treat any model originating from controlled chips as a controlled export, so the regulatory goal-posts are still moving.
How has Nvidia responded to the shifting rules?
Nvidia booked a $5.5 billion charge when the April 2025 controls first appeared, then saw its stock jump after a July 2025 partial reversal allowed renewed H20 shipments under license.
– CEO Jensen Huang called it a “turning point” and pledged to clear $4.5 billion of backed-up inventory while accelerating two new U.S. “super-fab” projects, part of a plan to build up to $500 billion of AI infrastructure domestically and keep the most advanced Blackwell generation at home.
– The yo-yo policy keeps Nvidia’s second-largest market – China worth $17 billion in 2024 revenue – hanging on every White House statement.
Could domestic Chinese chips end the need for workarounds?
Beijing wants AI-grade semiconductors made locally by 2027, but today roughly 75 percent of training compute in China still relies on Nvidia’s CUDA ecosystem.
– Firms such as DeepSeek hoarded pre-ban chips and now co-design with Huawei, yet even optimistic roadmaps show a two-to-three generation gap versus Nvidia’s top cards.
– Until that gap closes, offshore training looks set to persist as a structural – not stop-gap – feature of Chinese AI strategy.