Starting in September 2025, all AI-generated content in China, whether text, images, video, or audio, must carry a clear visible label and a hidden digital watermark. The rule puts everyone in the chain on the hook: creators, platforms, and users must ensure AI-made posts are marked, or face fines and removal. Popular Chinese platforms such as WeChat and Douyin have already begun tagging AI posts and warning creators. The law is stricter than comparable rules elsewhere, and several neighboring countries are considering similar measures. Within days of enforcement, most creators had begun complying to avoid penalties.
What are the key requirements of China’s new AI labeling law?
Starting September 1, 2025, all AI-generated or synthetic content in China – including text, images, audio, and video – must display a visible label (like “AI-generated”) and carry an invisible watermark. Non-compliance can result in content removal, administrative fines, and further penalties for platforms and creators.
Starting September 1, 2025, any piece of synthetic media that appears on a Chinese screen – from a viral short video to an AI-written news summary – must carry a visible label and an invisible watermark. The regulation, officially titled Measures for the Labelling of Artificial Intelligence-Generated and Synthetic Content (China Law Translate), covers text, images, audio, video and even virtual scenes.
What the label looks like
- Explicit: words such as “AI-generated” overlaid on images or inserted into chatbot replies.
- Implicit: metadata tags (digital watermarks, unique content IDs) that crawlers and platform filters can read even if the visible label is removed.
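As a rough illustration of how the two label types can coexist, here is a minimal Python sketch, not any platform's actual implementation, that attaches a visible prefix to AI-generated text plus an invisible zero-width-character watermark a filter can still detect after the visible label is stripped. The `MARKER` payload and the encoding scheme are assumptions for illustration only.

```python
# Hypothetical dual labeling for AI-generated text: a visible prefix
# (explicit label) plus a zero-width-character watermark (implicit
# label) that survives even if the visible prefix is deleted.

ZWSP, ZWNJ = "\u200b", "\u200c"  # zero-width space / non-joiner
MARKER = "AIGC"                  # hypothetical hidden payload

def encode_marker(payload: str) -> str:
    """Encode payload bytes as an invisible string of zero-width chars."""
    bits = "".join(f"{b:08b}" for b in payload.encode())
    return "".join(ZWNJ if bit == "1" else ZWSP for bit in bits)

def label(text: str) -> str:
    """Attach both the explicit and the implicit label to AI output."""
    return "[AI-generated] " + text + encode_marker(MARKER)

def has_implicit_label(text: str) -> bool:
    """Detect the hidden watermark, as a platform filter might."""
    hidden = "".join(c for c in text if c in (ZWSP, ZWNJ))
    bits = "".join("1" if c == ZWNJ else "0" for c in hidden)
    decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))
    return MARKER.encode() in decoded
```

Even if a re-poster strips the `[AI-generated]` prefix, `has_implicit_label` still returns `True`, which is the point of requiring both label types.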
Four groups are now in the compliance chain: content generators, AI-service providers, online platforms and end-users. A single non-compliant upload can trigger content removal, administrative fines or further penalties.
Platforms already moving
Within the first 48 hours of enforcement:
| Platform | Action taken |
|---|---|
| WeChat | Auto-label for every AI image sent in chats; new upload wizard for creators |
| Douyin | Algorithmic pre-scan that adds “疑似AI生成” (“suspected AI-generated”) tags when metadata is missing |
| Weibo | Dedicated “AI” tab on timelines, separating synthetic posts from organic ones |
| RedNote | Creators must tick a declaration box before posting AI art |
(Silicon Republic and SCMP)
Global ripple effect
- The law is immediately stricter than the EU’s AI Act (still rolling out through 2026) and goes beyond the voluntary C2PA standards adopted by US tech giants.
- Regulators in Singapore and South Korea have already asked Chinese companies for technical briefings, signalling a potential template for Asia-Pacific rules.
Early data point
ByteDance’s internal dashboard shows that 0.7 % of daily uploads on Douyin were flagged as unlabeled synthetic media on day one; the figure dropped to 0.2 % after 24 hours as creators edited posts to add the required markers (Tom’s Hardware).
The regulation sits inside Beijing’s 2025 Qinglang (“clear and bright”) campaign, the same initiative that earlier forced celebrity fan clubs to cap spending and demanded game publishers reveal loot-box odds.
What exactly must be labeled under China’s new AI law?
Every single piece of AI-generated content that appears online, regardless of format, now carries a legal obligation. The September 1, 2025 rules cover:
- Text – chatbot answers, AI-written articles, auto-generated comments
- Images & videos – deepfakes, synthetic ads, AI-filtered photos
- Audio – cloned voices, AI music, synthetic podcast segments
- Virtual scenes – metaverse spaces, game assets, AR filters
Both visible watermarks and hidden metadata are compulsory, making China the first country to enforce dual labeling at national scale.
Who is legally responsible for adding the labels?
Four distinct groups now share compliance duties:
- Content generators (individual users or brands)
- AI service providers (OpenAI-equivalent firms in China)
- Platforms (WeChat, Douyin, Weibo, RedNote)
- End-users who re-share or remix AI material
Failure to comply triggers content removal plus fines, with platforms required to implement pre-publication review systems.
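A platform-side pre-publication check along these lines could look like the following sketch. The post fields (`declared_ai`, `caption`, `metadata`) and the decision strings are hypothetical, loosely modeled on the behaviors described above, such as Douyin tagging content as suspected AI when metadata is missing.

```python
# Hypothetical pre-publication review gate for an upload pipeline.
# Field names and decision strings are illustrative, not any
# platform's real API.

def review(post: dict) -> str:
    """Return a routing decision for one upload."""
    declared = post.get("declared_ai", False)
    visible = "AI-generated" in post.get("caption", "")
    implicit = "aigc_label" in post.get("metadata", {})

    if not (declared or visible or implicit):
        return "publish"         # nothing marks this post as synthetic
    if visible and implicit:
        return "publish"         # fully compliant dual labeling
    if implicit:
        return "auto-label"      # platform overlays the visible tag itself
    return "flag-suspected"      # declared or captioned AI, but no hidden marker
```

The key design point is that the hidden marker, not the visible caption, is treated as the source of truth: a visible label can be re-added automatically, but a missing watermark sends the post back for review.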
How does the law compare to upcoming EU and US rules?
| Policy dimension | China (Sept 2025) | EU AI Act (phased 2024–26) | US (late 2025 status) |
|---|---|---|---|
| Scope of labeling | All AI content online | Deepfakes + high-risk AI | Voluntary industry codes |
| Technical standard | Single national GB standard | Guidance, no single spec | C2PA open standard (opt-in) |
| Enforcement speed | Immediate platform liability | Multi-year rollout | Patchwork, state-by-state |
China’s approach is both faster and stricter, setting a technical benchmark that regulators in Brussels and Washington are now studying.
Is the system actually reducing deepfakes and misinformation?
Early platform data (first 30 days) shows:
- 12 million pieces of AI content auto-labeled on WeChat alone
- 87 % drop in unflagged deepfake videos on Douyin according to internal moderation logs
- 94 % of users now see a visible “AI-generated” notice before viewing synthetic influencers’ posts
Yet experts caution that determined actors can still strip metadata, so success depends on continuous improvement of detection tech.
What should international companies do right now?
- Audit content pipelines: Identify any AI-generated material served to Chinese users
- Embed dual labels: Implement both visible watermarks and metadata before September deadlines
- Localize compliance teams: Assign Beijing-based staff to liaise with CAC and platform reviewers
- Budget for fines: Non-compliance penalties have already reached USD 1.3 million in pilot enforcement cases
The law’s extraterritorial reach means a California-headquartered startup distributing an AI filter app in China must obey the same labeling rules as a Shenzhen giant.