Paying customers say YouTube Music’s recommendations are being hijacked by AI-generated tracks, and that the “Not interested” and thumbs-down buttons do little to suppress the sludge. The result is a feed that feels less personal and more disconnected from what subscribers actually want to hear.
Reports from long-time users describe autoplay queues and mixes perpetually featuring “synthetic artists” with mammoth, generic catalogs — recordings that satisfy upload guidelines but lack the qualities listeners associate with authentic performers. Some exasperated subscribers are openly discussing canceling, because discovery is at the heart of what they are paying for, and it is failing.

What users are seeing in YouTube Music recommendations
Complaints collected on Reddit describe a consistent pattern: mystery channels disgorging dozens or hundreds of near-identical songs under bland titles, slotted into “Mixed for you” rows and seeding autoplay after every authentic track. Even after manually hiding songs or tapping “Not interested,” users report that similar content from the same pseudo-artist resurfaces across multiple mixes.
Several posters argue that this “AI slop” operates as spam rather than music. Album art looks templated, metadata is stuffed with mood tags and the output is seemingly endless, which helps recommendation systems group these tracks alongside legitimate music they superficially resemble. Tech blogs following the issue echo user complaints that the feedback mechanisms aren’t taking hold.
For premium subscribers, the optics are tough: you pay for ad-free, quality discovery, yet the system keeps pushing content you have explicitly told it you do not want. It suggests the platform’s signals of taste and quality are being drowned out by an unprecedented flood of cheap uploads.
Why the recommendation algorithm is vulnerable to spam
Recommendation engines depend on scale, but scale now cuts both ways. Generative tools let anyone pump out entire catalogs at close to zero marginal cost, swamping collaborative filters and “similar track” links. When tens of thousands of songs share near-identical characteristics, the statistical shortcuts the model relies on turn from strength to liability.
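The mechanism is easy to demonstrate. Below is a minimal, illustrative sketch — not YouTube’s actual system — of a naive item-to-item recommender: when a spam farm uploads thousands of near-clones of a popular track, those clones occupy every “similar tracks” slot, crowding out genuinely distinct music. All names and the feature-vector setup here are invented for illustration.

```python
# Illustrative sketch: why near-duplicate uploads swamp a naive
# item-to-item similarity recommender. Not any platform's real code.
import math
import random

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

random.seed(0)

# A "seed" track the listener just played, as an 8-dim feature vector.
seed = [random.random() for _ in range(8)]

# A handful of genuinely distinct tracks in the catalog...
catalog = {f"real_{i}": [random.random() for _ in range(8)] for i in range(10)}

# ...plus thousands of near-clones of the seed, as a spam farm might upload.
for i in range(5000):
    catalog[f"spam_{i}"] = [x + random.gauss(0, 0.01) for x in seed]

# Naive "similar tracks": top 5 by cosine similarity to the seed.
top5 = sorted(catalog, key=lambda name: cosine(seed, catalog[name]),
              reverse=True)[:5]
print(top5)  # every slot goes to a near-clone; real tracks never surface
```

The point of the sketch is the failure mode, not the fix: because similarity is purely statistical, a flood of near-identical vectors wins every recommendation slot unless the system adds deduplication, quality signals, or uploader-level trust scores on top.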
The influx is measurable. Luminate’s 2023 Year-End Music Report estimated there were around 158 million tracks in the worldwide catalog, with about 120,000 new ISRCs added every day. It also found that roughly 45 million tracks got no plays at all that year, while most of the rest reached only tiny audiences. That long tail is fertile territory for AI-generated filler to hide in and, on occasion, game.
Platforms have long struggled with uploads that are not illegal but fall into a gray area. Spotify briefly took down tens of thousands of songs from AI startup Boomy over apparent stream manipulation, before restoring many after a review. The problem is thornier for YouTube Music, whose open upload system mixes official releases with user-generated videos and now machine-made tracks — expansive by design, but a quality-control minefield if guardrails are slow to go up.
How rival streaming platforms are responding to AI spam
Deezer has introduced tagging and deprioritization of what it terms “non-artist noise,” and, in collaboration with Universal Music Group, is trialing an artist-centric payout model to curb the proliferation of spammy catalogs.

Spotify has been cracking down aggressively on fake streams, and has weighed labeling AI-generated content while reiterating that it relies on human editorial curation for discovery.
Apple Music leans heavily on human-led programming and tightly edited playlists, which some users say has kept AI filler at bay. None of these approaches is perfect, but each signals that detection, labeling and payout incentives matter.
YouTube, for its part, has published AI principles and added labels to videos featuring synthetic media. But YouTube Music doesn’t currently offer listeners an easy switch to remove AI-generated music from their recommendations. In the absence of an obvious opt-out, users are left fighting an algorithm that seems to reward sheer upload volume and engagement over listener intent.
What paying listeners can do now to improve recommendations
Short term, you can make defensive moves: aggressively add real artists to your library, and save albums and build playlists to give the system stronger positive signals. Pause watch history during music sessions so irrelevant video views don’t cloud your taste profile, and keep tapping “Not interested” on offending channels to teach the model, even if it feels futile.
Turning off autoplay and relying on your own playlists or uploaded libraries is how some users say they have wrested back control. Others are trialing competing services to compare discovery quality, keeping YouTube Music for old favorites and uploads. Until there is a clear-cut “Hide AI content” or “Verified artists only” filter, discovery may take more manual sifting than subscribers were hoping for.
The stakes for YouTube Music and subscriber trust
The company said that YouTube Music and Premium had more than 100 million subscribers worldwide. That growth relies on trust that recommendations turn up great music, not generic filler. As discovery deteriorates, churn risk climbs — especially as competitors boast better curation and anti-spam tactics.
Streaming now accounts for the bulk of recorded music revenues worldwide, according to IFPI, making recommendation quality a strategic battleground. Fixing the AI slop problem will likely require a mix of detection, labeling, payout reform and a user-facing control to block synthetic catalogs. Paying subscribers aren’t demanding perfection — they’re asking to be listened to.