EU, UK, India open probes as X floods with AI deepfakes

Serge Bulaev

Social media platform X was overwhelmed by a wave of explicit fake images made with the AI tool Grok, creating a global uproar. In just one day, researchers found over 160,000 deepfakes, many targeting famous people and even political leaders. The EU, UK, and India quickly launched investigations, demanding answers and threatening substantial penalties. Even after putting stricter rules and a paywall in place, X struggled to stop offensive images from spreading. Experts warn that unless stronger safeguards and checks are added, these dangerous AI images could multiply again soon.

A flood of explicit AI deepfakes from its Grok tool has prompted the EU, UK, and India to open probes into social media platform X. The incident triggered a global uproar after researchers documented over 160,000 fake images of public figures in a single day. Despite X implementing new rules and placing the feature behind a paywall, the platform has failed to stop the content's spread, raising alarms about the urgent need for stronger safeguards.

A 24-Hour Crisis That Went Global

The crisis began when researchers documented over 160,000 explicit AI-generated images flooding X in a 24-hour period. The deepfakes, created with the platform's Grok tool, targeted public figures and minors, triggering swift regulatory action from the EU, UK, and India over platform safety failures.

The crisis erupted on December 30, 2025, when university lab investigators documented over 160,000 deepfakes before mass removals began. The European Commission quickly ordered X and its parent, xAI, to preserve records for a potential Digital Services Act enforcement action. Regulators in London initiated their own inquiry, while India's IT Ministry gave X 72 hours to explain the non-consensual images still visible on the platform. The backlash intensified when US senators called on Apple and Google to remove the X app from their stores. In response, Elon Musk restricted Grok's image tool to X Premium subscribers, a move widely criticized as monetizing the creation of deepfakes.

Policy Gaps and New Terms

The platform's updated Terms of Service, effective January 15, 2026, address AI-generated content by including prompts and outputs under its user content definition and banning "jailbreaking" to bypass safety filters. However, critics point out the new terms fail to explicitly prohibit explicit deepfakes. Instead, the policy grants X a perpetual license to use AI content for model training while advising users to only share what they are "comfortable making public," effectively shifting liability. The update also introduces a $100 cap on platform liability while threatening steep penalties for data scraping.

Are Paywalls a Real Fix?

Restricting Grok's image generator to paying subscribers did little to resolve the core issue of consent. Reports confirm that subscribers can still create adult content, while free users can modify existing images using an "Edit image" tool. This has led experts to argue that a paywall is an insufficient fix and that effective mitigation requires a multi-layered approach combining technology and human oversight:

• Watermark every image at the pixel level and detect edits before posting.
• Deploy robust classifiers that flag nudity involving minors with near-zero false negatives.
• Require signed consent tokens for any likeness requests involving real people.
• Staff a 24-hour trust and safety queue with escalation for law enforcement.

Platforms experimenting with watermarking concede that the marks remain easy to strip, and large-scale AI moderation still misses nuanced violations; a minimal sketch of what pixel-level watermarking involves follows below.
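As a rough illustration of what the first bullet asks for, the sketch below embeds and re-reads a payload in image pixels using a naive least-significant-bit (LSB) scheme in Python. It assumes NumPy and Pillow are available; the function names and payload are invented for this example, and a mark this simple survives lossless copies but not JPEG re-encoding or cropping, which is exactly the weakness platforms acknowledge.

```python
# Illustrative only: a toy pixel-level (LSB) watermark.
# Helper names and the payload format are hypothetical.
import numpy as np
from PIL import Image

def embed_watermark(img: Image.Image, payload: bytes) -> Image.Image:
    """Hide `payload` in the least significant bit of the red channel."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    red = pixels[..., 0].flatten()
    if bits.size > red.size:
        raise ValueError("payload too large for this image")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def extract_watermark(img: Image.Image, length: int) -> bytes:
    """Read back `length` bytes from the red-channel LSBs."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = pixels[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Round trip on a blank test image: the mark survives lossless saves,
# but a single JPEG re-encode or crop typically destroys it.
marked = embed_watermark(Image.new("RGB", (64, 64), "white"), b"grok-img-0001")
assert extract_watermark(marked, 13) == b"grok-img-0001"
```

Production systems rely on learned, spread-spectrum, or frequency-domain marks precisely because trivial schemes like this one are destroyed by routine transformations.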

Regulatory Traction Builds

As regulatory pressure mounts, India has warned X that it could lose its safe-harbor protections under the IT Act for hosting obscene deepfakes. Meanwhile, France's Paris Prosecutor's Office has launched a criminal investigation that could lead to two-year prison sentences and €60,000 fines. In the UK, officials are using the Online Safety Act 2023 to compel X to provide detailed risk assessments. Analysts believe the EU will use this incident to test the Digital Services Act's systemic risk provisions, with potential fines reaching six percent of X's global turnover if the platform's response is deemed inadequate.

What Comes Next for Generative Platforms

The incident serves as a critical warning for the entire tech sector, with trust and safety teams industry-wide monitoring the fallout. The Grok crisis demonstrates the speed at which generative AI tools can be weaponized and exposes the inadequacy of existing platform policies. Without industry-wide adoption of robust consent protocols, advanced watermarking technology, and transparent third-party audits, experts warn that another, similar crisis is inevitable.


What triggered the multi-country probes into X's Grok image tool?

A 24-hour sampling window showed ~6,700 sexually explicit AI images being generated every hour through Grok, roughly 160,000 over the full day, targeting actresses, journalists, minors, and heads of state. The flood was first spotted by external researchers and quickly drew complaints from users who saw the content in their feeds.

Which regulators acted and what did they demand?

  • European Union - ordered X/xAI to preserve all internal documents related to the incident through 2026, a step that normally precedes a formal investigation
  • United Kingdom - Ofcom and the privacy regulator opened an inquiry and signaled that "all options", including fines, are on the table
  • India - MeitY gave X 72 hours to file a compliance report, warning that failure would strip the platform of intermediary safe-harbour protection
  • France - the Paris prosecutor launched a criminal probe; the offence carries a two-year prison term and a €60,000 fine
  • Malaysia - the communications regulator is examining breaches of the Communications and Multimedia Act 1998 (CMA) and the new Online Safety Act

Did X really try to fix the problem by locking the feature behind a paywall?

Yes, but critics say the move is largely symbolic. Free accounts can still reach the "Edit image" button on posts and share already-created deepfakes, while paying users can generate new ones. The UK government called the paywall "insulting" because it monetizes the same abusive capability it claims to restrict.

How effective are paywalls, watermarks, and AI classifiers at stopping non-consensual deepfakes?

Evidence from 2025-26 shows limited real-world impact:

  • Watermarks can be "readily removed" with free editing tools
  • Platform AI classifiers miss nuanced, non-consensual imagery when humans are out of the loop
  • Paywalls do not stop content that is scraped, re-uploaded, or created off-platform and then shared virally

OpenAI's Sora 2 and Pinterest introduced similar guardrails, yet an estimated 57% of all online visuals are now AI-generated, with disclosure easily bypassed.

What concrete steps can platforms and policymakers take next?

  1. Pre-publication moderation - require human review for any image flagged by nudity or likeness classifiers
  2. Consent-based upload - demand proof of consent from people depicted before the file can go live
  3. Visible provenance data - embed cryptographically signed metadata that survives re-encoding so regulators and users can trace origin (see the signing sketch after this list)
  4. Strict liability fines - peg penalties to company revenue, not image count, to make mass abuse economically unattractive
  5. Open audit APIs - let vetted researchers measure removal times and false-positive rates, turning public pressure into an enforcement lever
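To make step 3 more concrete, the sketch below shows one way cryptographically signed provenance metadata could be produced and checked in Python, assuming the third-party cryptography package (Ed25519 keys). The record fields and the sign_provenance/verify_provenance helpers are hypothetical and the layout is simplified; production systems typically follow the C2PA Content Credentials standard rather than an ad-hoc format, but the core idea is the same: anyone holding the platform's public key can confirm which generator produced a file and detect tampering.

```python
# Illustrative sketch of signed provenance metadata.
# Assumes the third-party `cryptography` package; field names are hypothetical.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_provenance(key: Ed25519PrivateKey, image_bytes: bytes, generator: str) -> dict:
    """Bind a content hash and generator name to an Ed25519 signature."""
    claims = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = key.sign(payload).hex()
    return claims

def verify_provenance(public_key, image_bytes: bytes, record: dict) -> bool:
    """Return True only if the signature is valid and the hash matches the file."""
    claims = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
    except InvalidSignature:
        return False
    return claims["sha256"] == hashlib.sha256(image_bytes).hexdigest()

# Round trip: a regulator or researcher holding only the public key
# can confirm which generator produced a file, or that a record was forged.
key = Ed25519PrivateKey.generate()
record = sign_provenance(key, b"example image bytes", "grok-image-v1")
assert verify_provenance(key.public_key(), b"example image bytes", record)
assert not verify_provenance(key.public_key(), b"tampered bytes", record)
```

Because the signature here covers a hash of the exact bytes, any re-encoding breaks verification; making provenance survive re-encoding, as step 3 demands, is the hard part in practice and is why signed metadata is usually paired with perceptual hashes or embedded watermarks.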