Matthew McConaughey has moved to lock down his identity in the age of synthetic media. The Oscar winner has secured approval for eight trademark applications with the US Patent and Trademark Office, a strategy aimed at stopping unauthorized AI clones of his face and voice, according to reporting by The Wall Street Journal. It’s a playbook more celebrities are likely to follow as deepfake scams proliferate and the legal lines around digital likenesses remain blurry.
Actor Seeks Legal Shield Against AI Clones
McConaughey’s filings reportedly cover a spectrum of uses, from images and short clips in specific scenes—think a holiday setting under a Christmas tree or an intimate front-porch moment—to signature quotations from performances. The goal is to establish a clear perimeter: if his likeness, voice, or other recognizable cues appear in media or advertising, they should do so only with his consent and attribution. That clarity matters as AI tools make it trivial to synthesize a convincing McConaughey pitch for a product he never endorsed.

The approach mirrors a broader shift by public figures, who are supplementing traditional publicity and privacy rights with trademark law to deter misuse before it spreads. It also provides a practical pathway to faster takedowns on platforms that triage trademark complaints differently from general impersonation reports.
Why Trademarks Add Teeth Beyond Publicity Rights
Right-of-publicity laws in most US states already prohibit commercial use of a person’s name or likeness without permission. But trademarks add federal, nationwide protection and a likelihood-of-confusion standard—crucial when AI content is designed to look and sound real. A registered mark tied to a person’s name, voice, or iconic expressions can support claims of false endorsement and make enforcement more straightforward across jurisdictions.
Trademarks also unlock practical tools. Platforms and marketplaces often prioritize clear trademark infringement in their reporting systems; domain-name disputes can be escalated through established processes; and repeat infringers face greater risk when they exploit protected marks. While trademarks don’t cover every instance of noncommercial or newsworthy use, they strengthen the hand of celebrities whose personas are being spoofed at scale.
Deepfake Scams Are Surging Across Ads and Social Media
AI-generated endorsements have already ensnared consumers with fake ads featuring familiar faces. In recent high-profile incidents, fabricated videos and voices of Tom Hanks and Taylor Swift circulated online promoting bogus products. The stakes are broader than celebrity reputations. The Federal Trade Commission reports Americans lost more than $10 billion to fraud in 2023, with impostor scams leading at $2.7 billion—an environment where AI voice and video tools are potent force multipliers.
Security researchers and industry groups describe a steep rise in synthetic media used for social engineering, from voice-cloned “CEO” calls to AI-fabricated relatives seeking emergency funds. Firms including Pindrop and McAfee have documented sharp growth in these attacks, reflecting how easy-to-use cloning tools have lowered the barrier for criminals. When a voice sounds like McConaughey or any trusted figure, victims are more likely to click, share, or buy.

Hollywood and Lawmakers Race to Set Guardrails
After the 2023 SAG-AFTRA strike, actors secured contractual rights around consent and compensation for digital replicas, but unions and legal scholars argue the protections need to go further. In the Boston College Law Review, Professor Victoria Haneman has called for stronger post-mortem controls and a “right to be dead” to curb unauthorized digital resurrection of performers.
States are experimenting, too. Tennessee’s ELVIS Act expanded protections in 2024 to cover a person’s voice—directly targeting AI cloning in music and advertising. New York modernized its publicity laws in 2020 to include post-mortem rights and deepfake provisions, while California’s statutes continue to serve as a model for living artists. These rules complement, but don’t replace, federal remedies like trademark and copyright—hence the layered approach McConaughey is taking.
What It Means for Platforms, Creators, and Fans
For tech platforms, the message is clear: provenance and consent signals must be built into content systems. Standards efforts like the Content Authenticity Initiative and the C2PA specification, which attach tamper-evident metadata about how images and videos were made, are gaining momentum. Watermarking and disclosure policies are also becoming table stakes as policymakers scrutinize deceptive AI media.
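That provenance machinery is concrete enough to inspect. The sketch below is a heuristic only, based on the public C2PA specification rather than any vendor tool: it scans a JPEG for the APP11 marker segments in which C2PA manifests are embedded as JUMBF boxes labeled “c2pa.” It reports presence, not validity; checking a credential’s signatures and content hashes requires a full validator such as the open-source c2patool.

```python
# Heuristic presence check for C2PA content credentials in a JPEG.
# Assumption: per the public C2PA spec, manifests are embedded as JUMBF
# boxes (labeled "c2pa") inside APP11 (0xFFEB) marker segments. This
# detects that such a segment exists; it does NOT verify signatures or
# hashes -- use a full validator like the open-source c2patool for that.
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # every JPEG starts with an SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):     # EOI or start-of-scan: headers are done
            break
        # Segment length is big-endian and includes the two length bytes.
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and b"c2pa" in payload:
            return True                # APP11 segment carrying a C2PA JUMBF box
        i += 2 + seg_len
    return False

if __name__ == "__main__":
    target = sys.argv[1]
    verdict = "found" if has_c2pa_segment(target) else "not found"
    print(f"C2PA manifest segment {verdict} in {target}")
```

Run it as `python check_c2pa.py photo.jpg`. An image exported without credentials, or with them stripped in transit, simply reports nothing, which is itself a signal platforms can surface to users.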
For audiences, the practical advice is simple. Treat celebrity endorsements with skepticism unless they appear on verified channels, look for visible content credentials when available, and be wary of urgent pitches tied to health cures, giveaways, or investment schemes. If an ad seems “alright, alright, alright” but the source is murky, it’s safer to assume it isn’t real.
McConaughey’s move won’t eliminate deepfakes, but it sets a template: combine state publicity rights, union protections, and federal trademarks to create overlapping defenses. As AI tools get better at mimicry, that layered legal perimeter—paired with better platform governance—may be the most effective way to keep the real from being overwhelmed by the synthetic.
