Most listeners cannot tell when a song has been created by a machine. A new survey, conducted by the music streaming service Deezer with the research firm Ipsos, found that 97 percent of respondents could not correctly identify a fully AI-generated track as non-human. The finding arrives as streaming platforms grapple with how to surface synthetic music without sowing distrust among listeners or undercutting artists.
Inside the results of the Deezer and Ipsos AI music survey
The study also suggests a discomfort gap: 52 percent of respondents reported feeling uncomfortable after learning they could not distinguish human from AI. At the same time, interest remains high. Nearly two-thirds (66 percent) said they would give AI-generated music a listen at least once, and 46 percent think AI could help them find new sounds. That mix of openness and apprehension is shaping how platforms and labels approach the technology.

One reason detection is so hard: modern pop production already leans on quantization, sample libraries, pitch correction, and preset-heavy mixing. When a large segment of popular music converges on similar structures and textures, an AI system trained on those patterns will generate outputs that can slip right into a playlist, especially at lower bitrates or through phone speakers, where nuance blurs.
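For readers unfamiliar with the term, here is a minimal sketch of what quantization does, in plain Python; the tempo, grid size, and note timings are all invented for illustration:

```python
# Minimal illustration of rhythmic quantization: snapping note onsets
# to the nearest grid position. All values are illustrative only.

def quantize(onsets_sec, bpm=120, grid=1/16):
    """Snap each onset (in seconds) to the nearest `grid` note at `bpm`."""
    beat = 60.0 / bpm       # length of one quarter note, in seconds
    step = beat * 4 * grid  # grid spacing (a 1/16 note by default)
    return [round(t / step) * step for t in onsets_sec]

# A slightly "human" drum pattern, each hit a few milliseconds off the grid:
loose = [0.012, 0.498, 1.003, 1.511]
print(quantize(loose))  # -> [0.0, 0.5, 1.0, 1.5]
```

The point is the loss of micro-timing: once many productions snap to the same grid, machine-perfect timing stops being a tell.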
Demand for clear labels and transparency on AI music
The survey highlights a clear consumer demand for transparency. Four in five listeners (80 percent) want AI-generated tracks identified as such on platforms, and 72 percent want to know when recommendations involve entirely synthetic music. Nearly half (45 percent) would block AI music outright if they could, and 40 percent say they would skip AI-generated tracks when they encounter them. In other words, consent matters: people are open to experimentation, but they want to know what they are hearing.
That would mirror moves elsewhere in media. Major social networks have made strides in labeling manipulated or synthetic content, and YouTube has announced disclosure requirements for “realistic” AI content. In music, Deezer has experimented with AI-detection projects for spotting deepfake vocals and spammy “noise” uploads, and Spotify has previously removed batches of low-quality, bot-like releases. Metadata and tagging, meanwhile, are becoming table stakes in the eyes of labels and distributors.
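As a rough illustration of what such tagging could look like at the catalog level, here is a hypothetical metadata record with an AI-disclosure block; every field name is invented for this sketch and not taken from any real delivery standard:

```python
# Hypothetical track metadata with an AI-disclosure block.
# Field names are invented for illustration; real schemas from
# labels and distributors will differ.
track = {
    "title": "Example Track",
    "artist": "Example Artist",
    "isrc": "XX-XXX-00-00000",       # placeholder identifier
    "ai_disclosure": {
        "fully_ai_generated": True,   # whole recording is machine-made
        "ai_assisted_elements": [],   # e.g. ["vocals", "mastering"]
        "model_or_tool": None,        # disclosed generator, if any
    },
}

def needs_label(t):
    """Decide whether a track should carry an AI badge in the UI."""
    d = t.get("ai_disclosure", {})
    return bool(d.get("fully_ai_generated")) or bool(d.get("ai_assisted_elements"))

print(needs_label(track))  # True -> surface an "AI-generated" badge
```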
Artist rights and industry fault lines over AI training
Worries peak when the discussion shifts to training data and income. The survey found that 65 percent of respondents say AI systems should not train on copyrighted music, and 70 percent think AI is a threat to musicians’ earnings. That sentiment aligns with a growing industry trend: the Recording Industry Association of America has backed lawsuits against AI music startups over what the labels describe as unlicensed training, while major labels have called for consent and compensation frameworks.
The volume challenge is real. Deezer estimates that roughly 50,000 fully AI-generated songs are uploaded to streaming services every day, or just below a third of all new releases by its math. At that scale, systems for provenance, watermarking, and content authentication become critical. Google DeepMind’s SynthID and similar audio watermarking research, along with projects like Content Credentials for media provenance, aim to help platforms trace a track’s origins without degrading the audio.
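SynthID’s internals are not public, but the broad idea behind inaudible audio watermarking can be sketched: mix a very low-amplitude pseudorandom pattern, keyed to a secret seed, into the waveform, then detect it later by correlating against the same pattern. A toy NumPy version, with every parameter invented for illustration:

```python
import numpy as np

# Toy spread-spectrum audio watermark: add a faint keyed noise pattern,
# then detect it by correlating against the same keyed pattern.
# All parameters are illustrative; real systems are far more
# sophisticated and robust to compression and editing.

def keyed_pattern(n, seed):
    return np.random.default_rng(seed).choice([-1.0, 1.0], size=n)

def embed(audio, seed, strength=0.005):
    return audio + strength * keyed_pattern(len(audio), seed)

def detect(audio, seed, threshold=0.002):
    # Normalized correlation with the keyed pattern; high -> watermarked.
    score = np.dot(audio, keyed_pattern(len(audio), seed)) / len(audio)
    return score > threshold, score

rng = np.random.default_rng(0)
clip = 0.1 * rng.standard_normal(48_000)  # one second of stand-in "audio"
marked = embed(clip, seed=42)
print(detect(marked, seed=42))  # (True, score near 0.005)
print(detect(clip, seed=42))    # (False, score near 0.0)
```

Real systems must also survive transcoding, trimming, and speed changes, which this toy correlation does not attempt.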

Why people have a hard time recognizing AI music
Psychologically, listeners rely on context. If a track sits in the comfort zone of an established genre, with solid mixing and mastering, our brains default to “human.” AI models capitalize on those priors by replicating genre tropes: the four-chord loop, predictable risers, quantized drums, and glossy vocal timbres. Academic work, including recent ICASSP and Interspeech conference papers, shows that detectors can flag synthetic speech and even singing, but their accuracy drops against high-fidelity models or models specialized to a particular style or genre.
Complicating matters, commercial detectors can produce false positives on genuinely human tracks that are heavily processed. Producers have used Auto-Tune, Melodyne, drum replacement, and AI-assisted mastering for years; “AI or not” is a binary label that represents this continuum poorly. Listeners seem to grasp the distinction: the backlash has targeted fully generated songs and misleading presentation, not creative tools employed by humans.
A practical path forward for labeling and consent
Clear labels, choice, and consent are the immediate fixes. The survey indicates that platforms should (a minimal sketch of these steps follows the list):
- Annotate synthetic tracks
- Flag AI in recommendations
- Provide filters to exclude AI music
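Here is a minimal sketch of what those three steps might look like inside a recommendation pipeline, reusing the hypothetical disclosure flag from earlier; none of the names reflect any platform’s real API:

```python
# Hypothetical recommendation filter honoring a user's AI preference.
# Track metadata and preference names are invented for this sketch.

def label(track):
    """Step 1: annotate synthetic tracks with a visible badge."""
    if track.get("fully_ai_generated"):
        return f'{track["title"]} [AI-generated]'
    return track["title"]

def recommend(tracks, user_prefs):
    """Steps 2 and 3: flag AI in recommendations, or filter it out."""
    for t in tracks:
        if t.get("fully_ai_generated") and user_prefs.get("exclude_ai"):
            continue          # user opted out of AI music entirely
        yield label(t)        # the disclosure travels with the pick

catalog = [
    {"title": "Human Song", "fully_ai_generated": False},
    {"title": "Synthetic Song", "fully_ai_generated": True},
]
print(list(recommend(catalog, {"exclude_ai": False})))
# ['Human Song', 'Synthetic Song [AI-generated]']
print(list(recommend(catalog, {"exclude_ai": True})))
# ['Human Song']
```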
On the rights side, consent-based training licenses and reliable watermarking would give creators, AI developers, and services common ground. EU and U.S. policymakers are already considering transparency rules for generative models, so standards are on the way.
What the public wants, in other words, is nuanced: people cannot reliably identify AI today, they are curious to hear it, and they do not want to be deceived. As AI-assisted composition improves and detection tools develop alongside it, the collision between innovation and attribution will define the next phase of streaming. The platforms that succeed will make discovery easy while keeping provenance unmistakable.