Spotify is piloting a new Artist Profile Protection feature designed to keep low-quality uploads and AI-generated tracks that impersonate artists off the wrong artist pages. The beta gives artists a pre-release approval queue, letting them review and either approve or decline any incoming release tagged to their name before it goes live, influences their stats, or appears in recommendations.
The move targets a growing headache across streaming: mislabeled uploads, name collisions, and deliberate attempts to piggyback on well-known acts using AI-generated vocals or styles. With hundreds of millions of listeners and a catalog of more than 100 million tracks, even small metadata errors can snowball into skewed recommendation feeds, corrupted discographies, and frustrated fans.
How The New Verification Step Works For Artists
Eligible artists in the beta will see an Artist Profile Protection toggle inside Spotify for Artists on desktop and mobile web. When a distributor delivers a release carrying that artist’s name, Spotify sends a notification. The release posts to the profile only after the artist or their team clicks approve; declined items are kept off the page and excluded from algorithmic surfaces like Release Radar and Daily Mix.
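Spotify has not published the internals of this workflow, but conceptually it behaves like a per-artist approval queue with three release states. A minimal sketch in Python, with every class, method, and state name assumed for illustration rather than taken from any Spotify API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    PENDING = auto()    # delivered by a distributor, awaiting artist review
    APPROVED = auto()   # posts to the profile and algorithmic surfaces
    DECLINED = auto()   # kept off the page entirely

@dataclass
class Release:
    title: str
    claimed_artist: str
    status: Status = Status.PENDING

class ApprovalQueue:
    """Hypothetical per-artist gate: a delivery tagged with the artist's
    name is held until the artist (or their team) makes a decision."""

    def __init__(self, artist: str):
        self.artist = artist
        self.queue: list[Release] = []

    def deliver(self, release: Release) -> None:
        # Only deliveries claiming this artist's name enter the queue;
        # in the real feature, this is where the notification would fire.
        if release.claimed_artist == self.artist:
            self.queue.append(release)

    def decide(self, release: Release, approve: bool) -> None:
        release.status = Status.APPROVED if approve else Status.DECLINED

    def profile_tracks(self) -> list[str]:
        # Pending and declined releases never surface on the page,
        # so they cannot influence stats or recommendations.
        return [r.title for r in self.queue if r.status is Status.APPROVED]
```

The key design point mirrored here is the default: a delivery starts as pending rather than live, so a misattributed track is invisible until a human says otherwise.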
The company frames the feature as optional but particularly useful for artists who share common names, have suffered repeated misattributions, or want tighter control over their public catalog. It complements existing back-end checks by adding a human-in-the-loop step at the moment that matters most—before a misattributed track can affect discovery, charting, or royalty reporting.
A Timely Response To AI Impersonation Risks
Generative tools have supercharged a long-standing problem. The viral “fake Drake” incident underscored how convincingly cloned vocals can mislead listeners and algorithms. Sony Music recently said it requested takedowns of more than 135,000 AI-generated tracks that impersonated its artists across streaming platforms, illustrating both the scale and speed of the challenge.
Streaming fraud and identity misuse are not theoretical edge cases. Deezer reported that an estimated 7% of streams in one market were fraudulent or suspicious, and Spotify previously removed tens of thousands of AI-assisted tracks from Boomy amid stream manipulation concerns. Trade bodies like IFPI and the RIAA have warned that mislabeled content and fake activity distort revenues, dilute artist brands, and degrade listener trust.
Artist Profile Protection tackles a specific slice of this ecosystem: keeping unwanted tracks—whether created by mistake or malice—off an artist’s page. That, in turn, protects release strategies, keeps recommendation systems calibrated, and reduces the clean-up burden on support teams and distributors.
Metadata Is The Battlefield For Streaming Accuracy
Most misattributions start with metadata. Similar stage names, incomplete credits, or incorrect identifiers can route a track to the wrong place. Industry standards like ISRC for recordings and ISNI for creator identities help, but adoption is uneven and data quality varies widely among independent uploads. Profile-level approval effectively adds a final gate that catches mistakes even when upstream data is noisy.
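To make the failure mode concrete, here is a hedged sketch of the kind of upstream check a distributor might run before delivery. The ISRC structural check follows the published code shape (two-letter country code, three-character registrant, two-digit year, five-digit designation); the name-collision heuristic and both function names are assumptions for illustration, not any platform's actual pipeline:

```python
import re
import unicodedata

# ISRC without hyphens: CC (country) + XXX (registrant) + YY (year) + NNNNN
ISRC_RE = re.compile(r"^[A-Z]{2}[A-Z0-9]{3}\d{2}\d{5}$")

def valid_isrc(code: str) -> bool:
    """Structural check only: a well-formed code proves nothing about
    whether the recording belongs to the named artist."""
    return bool(ISRC_RE.match(code.replace("-", "").upper()))

def normalize_name(name: str) -> str:
    """Fold case, accents, and spacing so variants like 'Beyoncé' and
    'beyonce ' collide on purpose -- collisions are exactly what we
    want to surface for human review."""
    folded = unicodedata.normalize("NFKD", name)
    ascii_only = folded.encode("ascii", "ignore").decode()
    return " ".join(ascii_only.lower().split())

def needs_review(claimed_artist: str, known_artists: set[str]) -> bool:
    # Flag any delivery whose normalized name matches an existing artist:
    # it may be legitimate, a name collision, or an impersonation attempt,
    # and only a downstream approval step can tell them apart.
    known = {normalize_name(a) for a in known_artists}
    return normalize_name(claimed_artist) in known
```

Checks like these catch malformed identifiers and obvious collisions, but they cannot resolve intent, which is why a final profile-level approval gate still adds value even when upstream data is clean.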
The approach pairs well with other initiatives: distributor-side name verification, stronger DDEX-compliant deliveries, and emerging provenance tech. Researchers have proposed audio watermarking systems such as AudioSeal from Meta, while standards efforts like C2PA aim to carry content provenance through the creative pipeline. None of these are silver bullets, but combined with artist approvals they make impersonation less rewarding and easier to detect.
What Artists And Teams Should Watch During The Beta
For managers and labels, the operational questions are practical. How fast can teams review high volumes during busy release cycles? Will major distributors integrate clearer flags to reduce false positives? How will the workflow handle featured appearances, compilations, or remixes where multiple artists share ownership? The success of the beta will hinge on minimizing delays while maximizing precision.
There is also a discovery angle. Keeping unauthorized tracks off profiles prevents them from training recommendation models in the wrong direction. That protects audience segmentation and can preserve conversion metrics—vital for pre-saves, first-day velocity, and playlisting odds. For fans, it simply means that pressing play on an artist’s page is more likely to deliver what they expect.
A Step Toward Accountable AI In Music Platforms
No single guardrail will stop every bad upload or AI clone, but moving decision rights to the artist page is a meaningful shift. It acknowledges that identity is core IP, just as important as the sound recording itself. Combined with stricter distributor onboarding, anomaly detection for bots, and cross-platform industry cooperation, this kind of pre-release control can raise the cost of abuse and reduce cleanup after the fact.
If the beta proves effective, expect broader rollout and deeper ties to catalog verification, label tools, and possibly provenance signals baked into delivery. In a streaming era defined by abundance, keeping artist pages accurate is not just housekeeping—it is infrastructure. Spotify’s test suggests that giving creators a direct veto may be one of the simplest, most scalable defenses against AI slop crowding the stage.