Swift, McConaughey use trademarks to combat AI deepfakes

Serge Bulaev

Matthew McConaughey and Taylor Swift are turning to trademark law to protect their voices, images, and catchphrases from AI deepfakes. Their filings could give them grounds to stop companies from using synthetic versions of their voices or likenesses in advertising without permission. Experts say the tactic may work for celebrities with well-established brands, but it remains uncertain whether courts will agree that a deepfake always inflicts trademark harm. The approach also may not reach deepfakes that are not used to sell products, and questions remain about how the government will handle these new kinds of trademarks.

To combat the growing threat of AI deepfakes, Taylor Swift and Matthew McConaughey are using trademark law to protect their identities. This pioneering legal strategy involves registering their voices, images, and catchphrases as commercial assets, creating a powerful tool to fight unauthorized digital replicas in advertising and media. This article examines the specifics of their filings, the legal reasoning behind the strategy, and the potential limitations of this new approach.

Celebrities Move to Trademark Voices and Likenesses to Combat AI Deepfakes

Celebrities are registering specific assets like voice clips, photos, and catchphrases as official trademarks. This strategy provides a federal legal basis to challenge unauthorized commercial deepfakes that create consumer confusion or falsely imply an endorsement of a product or service, strengthening control over their personal brands.

According to a Clark Hill alert, Taylor Swift's company, TAS Rights Management, has applied to register audio clips of her greetings ("HEY, IT'S TAYLOR") and a concert photo. Similarly, Matthew McConaughey's J.K. Livin Brands has filed to trademark a headshot, adding to its existing registration for his famous catchphrase, "ALRIGHT ALRIGHT ALRIGHT." These filings focus on entertainment services, signaling a clear intent to police commercial misuse.

Key assets covered by these filings include:
- Short audio clips featuring the artist's distinctive voice
- Still images from recognizable performances
- Established catchphrases tied to their public brand

Rationale and Early Challenges

This strategy provides a "federal legal hammer" to challenge AI clones that imply an endorsement without permission. Instead of relying on copyright, the approach uses the Lanham Act to argue that such deepfakes cause false endorsement or consumer confusion. However, a Variety analysis cautions that this is untested legal ground; courts have not yet affirmed that deepfake use automatically constitutes a trademark injury.

Furthermore, experts note that this is a celebrity-specific solution. The strategy is unavailable to ordinary citizens who lack the established branding and commercial use required for a trademark. While it may deter marketers from using celebrity clones in ads, it is less likely to stop non-commercial or purely expressive deepfakes.

Gaps in Federal Law

This trademark approach targets a specific legal gap. While the TAKE IT DOWN Act mandates the removal of non-consensual sexual deepfakes, it does not address commercial brand impersonation. Trademark law thus offers a focused, business-oriented remedy that complements, but does not replace, other legal protections against digital impersonation.

Practical Effect for Brand Control

The practical impact is direct. If an AI generates a song with a synthetic "Hey, it's Taylor," her registered sensory mark enables her team to allege counterfeit use. Likewise, if an ad uses a fake "Alright Alright Alright," McConaughey can pursue a false endorsement claim. Clark Hill notes these registrations also clarify ownership of key assets, strengthening their position in licensing negotiations. The U.S. Patent and Trademark Office now faces the novel challenge of evaluating these voice and likeness marks, with future legal battles poised to redefine the boundaries of trademark law in the AI era.


What specific trademarks are Taylor Swift and Matthew McConaughey filing to combat AI deepfakes?

According to recent filings detailed by Clark Hill, both stars are securing federal trademark registrations for distinct elements of their identity that generative AI systems commonly replicate. Taylor Swift's company, TAS Rights Management, LLC, has applied to register sound recordings of her saying "HEY, IT'S TAYLOR" and "HEY, IT'S TAYLOR SWIFT," alongside performance photographs from her concerts. Matthew McConaughey's company, J.K. Livin Brands Inc., filed to register his photograph and had previously secured rights to his iconic phrase "ALRIGHT ALRIGHT ALRIGHT" as an audio trademark. These registrations cover various entertainment services, establishing a federal legal foundation to challenge unauthorized digital replicas that mimic their voices and likenesses in commercial contexts.

Why are celebrities turning to trademark law instead of traditional right-of-publicity protections?

While right-of-publicity laws guard against unauthorized commercial use of a person's identity, these protections vary significantly by state and lack a unified federal statute. Trademark law provides nationwide protection under the Lanham Act, centering on preventing consumer confusion regarding whether a celebrity endorses or sponsors AI-generated content. This strategy proves particularly valuable because generative AI can now create entirely new content that mimics an artist's voice without technically copying an existing recording. By treating distinctive voice clips, catchphrases, and performance images as source identifiers comparable to brand logos, celebrities gain stronger grounds to pursue claims for false endorsement or counterfeit commercial uses that mislead audiences about authentic partnerships.

What limitations does this trademark strategy face against AI-generated content?

Despite its innovative potential, trademark doctrine cannot address every form of deepfake harm. The law primarily protects against uses that create consumer confusion about commercial sponsorship or origin, meaning it may not cover non-commercial political commentary, purely expressive works, or personal reputational attacks lacking a commercial nexus. Furthermore, as noted by Techstrong AI, for regular citizens whose voices might be cloned for phone scams or non-commercial deepfakes, the trademark route remains unavailable without established commercial use and distinctiveness in the marketplace. The approach also requires celebrities to proactively register specific marks rather than enjoying automatic broad protection against all AI impersonations.

How does this strategy align with pending federal legislation like the NO FAKES Act?

These trademark filings emerge as Congress considers the NO FAKES Act of 2025, which would create specific federal protections for voice and visual likeness against unauthorized digital replicas. While the TAKE IT DOWN Act (enacted in May 2025) addresses non-consensual intimate deepfakes through platform takedown requirements, it does not cover broader commercial misuse of celebrity identities. Current trademark applications serve as an immediate defensive mechanism while comprehensive legislative solutions remain pending, allowing public figures to leverage existing federal intellectual property frameworks. If enacted, the NO FAKES Act would provide additional statutory rights specifically for digital replicas, potentially working alongside trademark registrations to create layered protection against synthetic media exploitation.

What does this trend indicate for identity protection in the AI era?

The Swift and McConaughey approach signals a fundamental shift toward treating personal identity elements as commercial brand assets requiring active federal registration and enforcement. Swift has filed more than 300 trademark applications in the United States alone, demonstrating that high-profile individuals increasingly view comprehensive trademark portfolios as essential infrastructure for digital-age identity management. However, legal experts emphasize that no single legal doctrine provides complete protection against AI misuse, suggesting that effective defense strategies will require combining trademark registrations with right-of-publicity claims, contract enforcement, and platform-specific monitoring. This evolution may ultimately influence how courts interpret identity rights as generative AI technologies become more sophisticated and accessible.