India's New IT Rules Ban Unlabeled AI Content Starting February 2026
Serge Bulaev
Starting February 2026, India will require clear labels on all AI-generated or altered posts. Social media platforms and advertisers must identify and mark this kind of content before it appears online, and remove illegal deepfakes within three hours of a complaint. Brands and creators will need new tools and checks to comply, a burden that could fall hardest on smaller influencers. The changes parallel similar laws arriving in Europe and China, a sign that tolerance for unlabeled AI content is shrinking worldwide.

India's new IT rules, effective February 20, 2026, will require clear labels on all AI-generated or altered content. The Ministry of Electronics and IT's amendments place the responsibility on social media platforms and advertisers to identify and label all synthetically generated information (SGI) before it is published.
Label, trace, remove - the new triad
These rules require social media platforms and advertisers to identify, label, and trace all "Synthetically Generated Information" (SGI). They must also remove illegal deepfakes and other prohibited content within three hours of a complaint - a drastic compression of earlier takedown deadlines that shifts far more responsibility onto platforms.
Significant Social Media Intermediaries must now prompt users to declare any AI-generated elements and then verify these declarations. Labels must be both visible on the content and embedded as tamper-proof metadata to ensure the origin stays trackable even after reposts or downloads. Furthermore, takedown deadlines have been drastically reduced. Platforms must remove illegal SGI within three hours of a report - a timeline Bar & Bench describes as a "massive logistical challenge" for content moderation. Non-compliance risks the loss of a platform's "safe harbour" protection under Section 79 of the IT Act.
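The amendments specify the outcome (a visible label plus tamper-proof, trackable metadata) but not an implementation. As a minimal sketch of one way to meet that bar, the snippet below binds an SGI label to a hash of the content and signs the record, so stripping or editing either one is detectable on re-verification. The key handling, field names, and HMAC scheme are illustrative assumptions, not anything the rules prescribe; production systems would more likely embed signed, C2PA-style provenance manifests in the file itself.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical platform-held secret; a real deployment would use a managed,
# HSM-backed signing key rather than a constant.
PLATFORM_KEY = b"replace-with-managed-secret"

def build_sgi_manifest(content_bytes: bytes, declared_by_user: bool) -> dict:
    """Create a tamper-evident SGI label record bound to the content hash."""
    record = {
        "sgi_label": "AI-generated or altered content",
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "declared_by_user": declared_by_user,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_sgi_manifest(content_bytes: bytes, record: dict) -> bool:
    """Re-check signature and content hash, e.g. after a repost or download."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(record.get("signature", ""), expected)
        and unsigned["content_sha256"] == hashlib.sha256(content_bytes).hexdigest()
    )
```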
AI content rules kick in Feb 20: what marketers must update today
- Conduct a comprehensive content audit to identify all assets created with generative AI tools.
- Develop a standardized disclosure layer, including watermarks and compliant caption templates.
- Update creative briefs to include mandatory SGI declaration checklists for agencies and creators.
- Amend contracts with clauses requiring timely data sharing to support rapid takedown procedures.
- Implement and test AI detection tools to benchmark their effectiveness against common manipulation tactics.
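On the last point, a benchmark can be as simple as scoring a candidate detector against an internally labelled set of known-synthetic and known-authentic assets. A minimal sketch, assuming the detector is exposed as a boolean function (the interface and metrics here are illustrative, not anything the rules mandate):

```python
from typing import Callable, Iterable, Tuple

def benchmark_detector(
    detector: Callable[[bytes], bool],
    samples: Iterable[Tuple[bytes, bool]],
) -> dict:
    """Score an SGI detector on (content, is_synthetic) pairs.

    Recall matters most: a missed synthetic asset becomes an unlabelled
    post, which is exactly the failure the new rules penalise.
    """
    tp = fp = fn = tn = 0
    for content, is_synthetic in samples:
        flagged = detector(content)
        if flagged and is_synthetic:
            tp += 1
        elif flagged:
            fp += 1
        elif is_synthetic:
            fn += 1
        else:
            tn += 1
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

Running this against assets deliberately altered with common manipulation tactics (re-encoding, cropping, light filtering) gives a concrete read on how well a detector holds up before it is trusted in production.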
Creator economy under pressure
The new regulations are expected to introduce significant friction into the creator economy. Campaigns scheduled for early 2026 will face additional gatekeeping, with agencies anticipating longer lead times as content undergoes mandatory AI-detection scans. This new compliance burden may disproportionately affect smaller influencers who lack resources for verification tools, potentially consolidating market power among larger creator networks that can offer turnkey compliance solutions.
Global lens on disclosure trends
India's framework is part of a growing global trend toward AI transparency. These rules align with the EU AI Act's transparency obligations, which phase in from August 2025 (with deepfake-labeling duties applying from August 2026), and with China's labeling measures mandating explicit watermarks on synthetic media, effective September 2025. This regulatory convergence signals a shrinking global tolerance for unlabeled AI content.
What happens if labels go missing
The consequences for non-compliance are severe. Regulators can compel platforms to release user data associated with offending posts, and senior compliance officers could face personal liability. For brands, the risks include campaign suspensions, diminished reach, and significant erosion of public trust if mislabeled content goes viral. Proactively investing in robust audit trails and clear creator guidelines is a critical risk mitigation strategy.
Ultimately, organizations that systematize their disclosure processes and establish rapid takedown capabilities well before the February 2026 deadline will be best positioned to navigate this new regulatory landscape with minimal disruption.
What exactly qualifies as "synthetically generated information" under India's 2026 rules?
Any audio, visual or audio-visual file that has been created or materially altered by artificial intelligence and appears authentic is now classified as Synthetically Generated Information. This covers deepfake videos, AI-cloned voices, face-swapped reels and machine-generated captions that change the original meaning. Routine edits such as colour correction, beauty filters or subtitles for accessibility are explicitly exempted, so ordinary creators will not be caught by the label requirement if no misleading synthetic element is introduced.
Who has to apply the label - the uploader or the platform?
Both. The moment a user attempts to post, the platform must prompt a declaration asking whether the file contains AI-generated or manipulated content. If the user answers "yes", the site must immediately affix a visible watermark and embed tamper-proof metadata before the content goes public. If the user falsely declares "no", the platform still bears liability because its own detection tools are expected to flag probable SGI. In other words, the safe-harbour shield under Section 79(2) is lost if the label is missing and the firm fails to spot the synthetic file within three hours of a complaint.
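That declare-then-verify flow maps onto a simple decision rule. A sketch, assuming a hypothetical `detector` callable (the rules define the obligations, not the implementation, and all names here are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Upload:
    content: bytes
    user_declared_sgi: bool  # answer to the mandatory declaration prompt

def process_upload(upload: Upload, detector: Callable[[bytes], bool]) -> dict:
    """Apply the declare-then-verify flow described above.

    The label is applied when EITHER the user declares SGI or the
    platform's own detection flags it, because an undetected false "no"
    still costs the platform its safe-harbour protection.
    """
    detected = detector(upload.content)
    label_required = upload.user_declared_sgi or detected
    return {
        "publish_with_label": label_required,
        "apply_visible_watermark": label_required,
        "embed_tamper_proof_metadata": label_required,
        # A false declaration is a separate signal worth auditing.
        "flag_false_declaration": detected and not upload.user_declared_sgi,
    }
```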
How fast does flagged synthetic content have to be removed?
The new timeline is three hours from the moment a user or government agency flags unlawful SGI - for example, a deepfake used for extortion, electoral misinformation or non-consensual imagery. The platform must acknowledge the complaint within two hours and complete its review within seven days, down from the earlier fifteen-day window. Fines and criminal exposure now reach individual compliance officers, so expect larger moderation teams working in shifts around major events such as elections or product launches.
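The clock math is simple but unforgiving, and moderation queues will need to track it per complaint. A minimal sketch of deriving the deadlines described above from the flag timestamp (the function and field names are illustrative):

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))

def complaint_deadlines(flagged_at: datetime) -> dict:
    """Derive the acknowledgement, removal, and review deadlines
    for a complaint flagged at `flagged_at`."""
    return {
        "acknowledge_by": flagged_at + timedelta(hours=2),
        "remove_by": flagged_at + timedelta(hours=3),
        "review_complete_by": flagged_at + timedelta(days=7),
    }

# A deepfake flagged at 23:30 IST must be acknowledged by 01:30
# and removed by 02:30 the following morning.
print(complaint_deadlines(datetime(2026, 2, 21, 23, 30, tzinfo=IST)))
```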
What operational changes should brand and agency workflows include?
Marketing teams that rely on virtual influencers, face-swap ads or AI-generated jingles must:
1. Add an internal "SGI checklist" to every campaign brief - who declares, who audits and who applies the watermark (a minimal record sketch follows below).
2. Negotiate creator contracts that transfer legal liability if an influencer hides AI use and the brand is later penalised.
3. Budget for platform-side verification fees - some Indian apps are already piloting paid APIs that pre-screen uploads for synthetic signals.
4. Maintain raw files and model cards for 180 days so identity can be disclosed to law enforcement if a dispute arises.
Early adopters report the extra steps add roughly 12-15% to post-production cost, but that is still cheaper than losing safe-harbour protection or facing takedown during a live campaign.
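One way to operationalise items 1 and 4 is a per-campaign record that cannot be signed off until every role is filled and the archive date is set. A minimal sketch with illustrative field names (the 180-day figure comes from item 4 above):

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_DAYS = 180  # raw files and model cards, per item 4 above

@dataclass
class SGICampaignRecord:
    """Per-campaign SGI checklist: who declared, audited, watermarked."""
    campaign: str
    declared_by: str = ""
    audited_by: str = ""
    watermark_applied_by: str = ""
    raw_files_archived_on: date | None = None

    def missing_steps(self) -> list[str]:
        """Names of steps still blocking sign-off."""
        gaps = [step for step, owner in [
            ("declaration", self.declared_by),
            ("audit", self.audited_by),
            ("watermark", self.watermark_applied_by),
        ] if not owner]
        if self.raw_files_archived_on is None:
            gaps.append("raw-file archive")
        return gaps

    def retention_expires(self) -> date | None:
        """Earliest date the raw files and model cards may be purged."""
        if self.raw_files_archived_on is None:
            return None
        return self.raw_files_archived_on + timedelta(days=RETENTION_DAYS)
```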
How do India's rules compare with global AI-labeling regimes?
India's three-hour takedown is the quickest statutory deadline worldwide. The EU AI Act will require visible deepfake labels from August 2026 but allows up to 24 hours for removal; China already enforces dual explicit-implicit labels yet relies on a five-day platform review. Japan and the U.S. still depend on voluntary codes, although several American states are debating one-day removal windows for election deepfakes. Because India links non-compliance to immediate loss of safe harbour, multinational platforms are expected to roll out the Indian standard first and adapt it later for other high-risk markets - making February 20, 2026 a de facto global testing ground for real-time synthetic-content policing.