A fresh moderation lapse is haunting YouTube: users have discovered that explicit, uncensored porn and a range of other harmful content is surfacing in channel profile photos and even video playlists, all of it accessible to anyone visiting the site, whether or not they're logged into an account. The finding highlights a blind spot in the site's safety features at a time when regulators and parents are demanding stronger protections for minors.
Searches for niche keywords have turned up channels whose avatars feature pornographic imagery, along with playlists of short NSFW clips mixed in with otherwise innocuous titles. Crucially, those images are visible to signed-out users: they can be viewed in any web browser, and they sidestep many account-based parental controls.
- How users are finding explicit content via avatars and playlists
- A blind spot in YouTube's moderation stack
- Signs of coordinated spam networks exploiting YouTube avatars
- Why the risks are especially acute for teens on YouTube
- What YouTube can do now to close the avatar loophole
- What users and parents can do now to reduce exposure

How users are finding explicit content via avatars and playlists
Users say explicit results also surface for searches tied to adult animation and "try-on" trends, with phrases like "mmd r18" and "see through try on haul." Often the channels don't host full-length porn videos at all; they're essentially curated playlists of short NSFW clips, each drawing millions of views and cross-linking to similar accounts. The most active channels reportedly have more than 200,000 subscribers but post only a handful of original videos.
The maneuver appears designed to capture recommendation traffic without tripping the same alarms that uploads and thumbnails do.
Because avatars appear across search results, comments, and channel pages, a single offending profile image can spread far and quickly with little friction.
A blind spot in YouTube's moderation stack
Under its Community Guidelines, YouTube prohibits sexually explicit material or pornography in thumbnails and channel art. The company often points to machine learning and human review in enforcing those rules, and its Transparency Report has also chronicled millions of video takedowns and tens of millions of comments removed each quarter.
But avatars are a different kind of asset. Trust-and-safety specialists point out that platforms tend to invest more in classifiers for videos, thumbnails, and comments than for profile images, which often get lighter pre-screening and rely more heavily on user reports. If avatars don't pass through serious nudity detection, or if enforcement only kicks in after complaints, bad actors can cycle through new accounts faster than moderators can respond.
YouTube recently began testing automatic blurring of thumbnails with adult themes, but that experiment doesn't extend to avatars, leaving exactly the loophole that opportunistic channels are exploiting.

Signs of coordinated spam networks exploiting YouTube avatars
The patterns (interconnecting channels, overlapping playlists, and rapid subscriber growth on accounts with little or no original content) suggest automated or semi-automated operations. Researchers at the Stanford Internet Observatory and the Tech Transparency Project have previously documented similar growth-hacking tactics on major social platforms: bot-assisted discovery, keyword clustering, and channel "About" links that funnel audiences off-platform.
By clustering content in playlists rather than hosting it directly, operators reap the benefits of YouTube's recommendation features while spreading risk across a large number of throwaway accounts. If dozens of channels share the same playlists or mirror each other's avatars, deleting one won't take down the network.
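To make that structure concrete, here is a minimal sketch of the kind of graph analysis researchers describe, assuming you have already collected which playlists each channel features: channels and playlists become nodes, a channel is linked to every playlist it lists, and each connected cluster is a candidate network. The channel and playlist identifiers below are hypothetical.

```python
# Minimal sketch: cluster channels that share playlists.
# Assumes a pre-collected list of (channel_id, playlist_id) pairs;
# the sample data is entirely hypothetical.
import networkx as nx

channel_playlists = [
    ("channelA", "playlist1"),
    ("channelB", "playlist1"),
    ("channelB", "playlist2"),
    ("channelC", "playlist2"),
    ("channelD", "playlist3"),  # unrelated channel, ends up in its own cluster
]

# Build a bipartite graph: channels on one side, playlists on the other.
G = nx.Graph()
for channel, playlist in channel_playlists:
    G.add_node(channel, kind="channel")
    G.add_node(playlist, kind="playlist")
    G.add_edge(channel, playlist)

# Each connected component is a candidate network: channels tied together,
# directly or transitively, by the playlists they share.
for component in nx.connected_components(G):
    channels = {n for n in component if G.nodes[n]["kind"] == "channel"}
    if len(channels) > 1:
        print("possible coordinated cluster:", sorted(channels))
```

Even this toy version shows why single-channel takedowns fall short: channelA, channelB, and channelC land in one cluster through shared playlists, so removing any one of them leaves the rest of the network intact unless the whole component is reviewed.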
Why the risks are especially acute for teens on YouTube
Among teens, YouTube dominates, and no other platform comes close. According to Pew Research Center, 95 percent of American teenagers have access to a smartphone, 45 percent say they are online "almost constantly," and 95 percent report using YouTube. Because these explicit avatars are visible to users who aren't signed in, they skirt account-based protections like Restricted Mode and supervised experiences.
The exposure also has regulatory implications. Under the EU's Digital Services Act (DSA), very large online platforms must address systemic risks, including harm to minors, and violations can draw fines of up to 6 percent of worldwide revenue. In the United States, heightened scrutiny of youth safety from lawmakers and the Federal Trade Commission raises the stakes for lapses like this one, even when they're exploited by spammers rather than engineered by the platform.
What YouTube can do now to close the avatar loophole
A number of steps could narrow the gap fast.
- First, run all avatars through the same (or stricter) nudity and sexual-content classifiers that thumbnails go through, with perceptual-hash matching or periodic rechecks to catch images reused across multiple accounts (see the sketch after this list).
- Second, temporarily throttle or freeze playlist creation and public listing for recently created channels until their avatars have passed automated review and human spot checks.
- Third, use graph analysis to surface interlinked playlist clusters and coordinated subscriber spikes (classic bot-farm signals) and quarantine those networks pending investigation.
- Fourth, extend the thumbnail-blur experiment to avatars, especially for signed-out visitors, with an opt-in "show avatars" toggle for adults.
- Finally, streamline reporting so users can flag avatars directly from search results and comments, and publish avatar-specific enforcement stats in the Transparency Report.
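As a rough illustration of the first recommendation, the sketch below flags avatars that are reused, or nearly reused, across accounts by comparing perceptual hashes. It assumes the Python Pillow and imagehash packages; the file paths and the distance threshold are illustrative, not anything YouTube has disclosed about its own systems.

```python
# Sketch: detect avatar reuse across accounts with a perceptual hash.
# File paths and the similarity threshold are illustrative assumptions.
from itertools import combinations

import imagehash
from PIL import Image

avatars = {
    "channelA": "avatars/channelA.png",
    "channelB": "avatars/channelB.png",
    "channelC": "avatars/channelC.png",
}

# Perceptual hashes stay close under small crops, rescaling, and re-encoding,
# so a recycled avatar is caught even when its bytes differ.
hashes = {cid: imagehash.phash(Image.open(path)) for cid, path in avatars.items()}

MAX_DISTANCE = 6  # Hamming distance between hashes; lower means more similar

for (a, ha), (b, hb) in combinations(hashes.items(), 2):
    if ha - hb <= MAX_DISTANCE:
        print(f"{a} and {b} appear to share an avatar (distance {ha - hb})")
```

Hash matching like this complements, rather than replaces, a nudity classifier: the classifier judges a new image on its own, while the hash check catches known-bad images cycling back onto fresh accounts.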
What users and parents can do now to reduce exposure
- Turn on Restricted Mode and supervised experiences where available, and consider DNS-level filtering that blocks known adult domains linked from channel profiles.
- Flag questionable avatars and suspicious playlist networks with YouTube's "Report user" function.
- Avoid clicking through or otherwise engaging with NSFW material, since engagement can feed YouTube's recommendations.
The latest findings don't mean YouTube is awash in pornography. They do show how a single overlooked risk surface, profile photos, can undermine a broader safety strategy. Closing that loophole is technically feasible and long overdue on a platform that nearly every teen in the country uses.
