YouTube expands AI deepfake detection to all adult users

Serge Bulaev

YouTube is making its AI deepfake detection tool available to all adult users, letting anyone aged 18 or over check whether their face is being misused in AI-generated videos. Users submit a selfie, which YouTube compares against new uploads; if a likely match appears, the user receives an alert and can request removal if the use is unauthorized. The number of takedown requests has reportedly stayed very small, which could mean either that awareness of the tool is still low or that it is deterring misuse. The change signals a shift toward letting everyday users, not just public figures, help detect impersonations. Challenges remain, including false positives and the need for proactive detection to catch deepfakes early.


In a significant move to combat digital impersonation, YouTube is expanding its AI deepfake detection tool to all adult users, empowering anyone over 18 to identify unauthorized uses of their likeness in synthetic videos. A report from The Verge confirms that the feature, previously in a limited pilot, is now more widely available. The expansion marks a strategic shift toward user-empowered moderation, allowing everyday individuals, not just public figures, to police the use of synthetic media.

How the AI likeness tool works

The tool works by having users submit a facial scan for reference. YouTube's AI system then continuously scans new video uploads for matching faces. If a potential match is found, the user receives a private alert, allowing them to review the video and request its removal.

The process involves three main steps for enrolled users:
- Scan Submission: Users provide a selfie-style facial scan to create a reference template.
- Automated Matching: YouTube's AI compares this template against newly uploaded videos to detect potential likenesses.
- User Review & Action: Upon finding a match, the system alerts the user, who can then review the video for context (like parody) and submit a removal request for human moderation if the use is unauthorized.
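The automated-matching step above can be sketched as an embedding comparison. This is an illustrative sketch, not YouTube's actual pipeline: it assumes each detected face is reduced to a fixed-length embedding vector and that a cosine-similarity threshold (the `MATCH_THRESHOLD` value here is invented) decides whether to alert the enrolled user.

```python
import numpy as np

MATCH_THRESHOLD = 0.85  # hypothetical similarity cutoff, for illustration only

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_upload(reference: np.ndarray, upload_faces: list) -> list:
    """Return indices of faces in a new upload that likely match the
    enrolled reference template (step 2 of the flow above)."""
    return [
        i for i, face in enumerate(upload_faces)
        if cosine_similarity(reference, face) >= MATCH_THRESHOLD
    ]

# Toy demo with 4-dim "embeddings" (real systems use hundreds of dimensions).
reference = np.array([1.0, 0.2, 0.0, 0.5])
faces = [
    np.array([0.9, 0.25, 0.05, 0.45]),  # near-duplicate -> above threshold
    np.array([0.0, 1.0, 1.0, 0.0]),     # unrelated face -> below threshold
]
print(check_upload(reference, faces))  # -> [0]
```

In a real system, the matches returned here would only trigger a private alert; removal still requires the user's request and human review, as described above.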

According to Engadget, YouTube has seen a "very small" number of takedown requests so far. This could suggest the tool is working effectively to deter misuse or that user awareness of the feature is still growing.

Why YouTube is expanding deepfake detection now

The expansion comes amid two critical trends: the rising sophistication of synthetic media and increasing regulatory scrutiny. In December 2025, Fortune estimated that deepfake realism had reached the point where synthetic videos could consistently deceive the average viewer. Simultaneously, platforms like YouTube are under pressure from regulations such as the EU Digital Services Act, which mandates swift action on harmful content.

Empowering users to report misuse directly addresses the scale of the problem. While this adds to the moderation workload, it's a necessary step. Industry reports indicate that a significant portion of deepfake-related financial losses are linked to social media, underscoring the urgency for early detection to prevent widespread harm and erosion of trust.

Early lessons for platforms and users

This rollout offers several key takeaways for the digital ecosystem:

  1. Democratized Protection: By opening the tool to all adults, YouTube is closing the protection gap that previously existed between public figures and private citizens.
  2. Increased Moderation Load: A user-driven system inevitably increases the volume of content for review, including potential false positives, shifting the operational burden.
  3. Nuanced Policy Enforcement: Removal is not automatic. YouTube's policies require moderators to consider context, such as fair use for parody or commentary, before acting on a request.
  4. A Multi-Layered Approach is Essential: As Fortune noted, user reporting is just one part of a complete solution. Effective, scalable defense also requires proactive detection at the time of upload and robust content provenance standards.

Ultimately, YouTube's strategy represents an evolving model where automated detection, community reporting, and policy enforcement work together to manage the challenges of synthetic media.


How do I enroll in YouTube's deepfake detection tool?

Any signed-in YouTube user who is 18 or older can enroll via their account's Privacy & Safety settings. The process requires a brief selfie-style scan to create a private biometric key. This key is used only for matching and can be deleted by the user at any time.

What exactly is flagged?

The system flags facial likeness only; it does not analyze voice, body type, or apparel. When a match is detected, you will receive an alert. The tool will flag all appearances, including potential parody or commentary, leaving the decision to request a takedown up to you.

How effective is the match quality?

Initial pilot data indicates high accuracy for clear, front-facing images in good lighting. Match effectiveness decreases with low-resolution video (below 240p), heavy filters, or partially obscured faces. The AI model is updated automatically.

What happens after I request a takedown?

After clicking "Request removal" in the alert and selecting a reason, the video is sent to a priority queue for human review. YouTube aims for a decision within 24 hours in the US and EU. If a video is removed, the original uploader can appeal, but the content remains offline during the appeal process.
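The review flow described above behaves like a priority queue with a service-level deadline. The sketch below is a guess at semantics, not YouTube's implementation: the queue, the 24-hour `REVIEW_SLA`, and the `offline` flag (content stays down during appeal) are all modeled from the prose above.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=24)  # stated review target for US/EU requests

@dataclass(order=True)
class TakedownRequest:
    deadline: datetime                 # soonest deadline is reviewed first
    video_id: str = field(compare=False)
    reason: str = field(compare=False)
    offline: bool = field(default=True, compare=False)  # down during appeal

def enqueue(queue: list, video_id: str, reason: str, now: datetime) -> None:
    """Add a removal request; human review is due within the SLA."""
    heapq.heappush(queue, TakedownRequest(now + REVIEW_SLA, video_id, reason))

def next_for_review(queue: list) -> TakedownRequest:
    """Pop the request whose review deadline comes first."""
    return heapq.heappop(queue)

queue = []
t0 = datetime(2025, 12, 1, 12, 0)
enqueue(queue, "vid_b", "unauthorized likeness", t0 + timedelta(hours=2))
enqueue(queue, "vid_a", "unauthorized likeness", t0)
print(next_for_review(queue).video_id)  # -> vid_a (earlier deadline)
```

The `order=True` dataclass compares only on `deadline`, so the heap always surfaces the request closest to breaching the 24-hour target, regardless of submission order.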

Does the tool work for minors, brands, or deceased individuals?

Currently, the tool is available exclusively to living individuals aged 18 and over. It cannot be used by or for minors, brands, or on behalf of deceased persons. However, YouTube informed The Verge that it has plans to add support for businesses and estates in the future.