An AI chatbot and image generator sold predominantly as a tool for creating “spicy” roleplay images has been all but abandoned by its developers, leaving the personal content and details of potentially millions of registered users publicly accessible on the open internet. As first reported by 404 Media, Secret Desires left its cloud storage containers unsecured, exposing nearly two million images and videos alongside names, workplaces and universities.
The incident is a sobering illustration of how rapidly proliferating, adult-oriented AI services can concentrate highly sensitive content behind lax security, creating outsized risk not just of nonconsensual deepfakes but of child sexual abuse material.

Within an hour of journalists contacting the company, the exposed files were no longer accessible; Secret Desires did not respond to a request for comment.
What Researchers Found Exposed in Secret Desires' Storage
Among the explicit, AI-generated images and videos researchers discovered in the open buckets was material created by a now-defunct face swap feature. The trove reportedly included images scraped from social networks as well as private screenshots, with files linked to influencers, public figures and everyday internet users. Disturbingly, some filenames contained terms suggesting they depicted underage victims, underscoring how readily AI tools can be weaponized to produce illegal content.
More concerning still, the storage may have contained personally identifiable information beyond the imagery itself. Pairing explicit media with a person's name, school and workplace supercharges the harm from harassment, extortion and doxxing, ills that experts at organizations like the Electronic Frontier Foundation and the Internet Watch Foundation have long warned will worsen as generative AI tools become more widely available.
A Cloud Security Failure Waiting to Happen
The underlying issue here, poorly configured cloud storage, is both mundane and widespread. Security teams have long warned that public buckets and loose access controls are among the most common paths to large-scale leaks, and threat assessments from leading security vendors consistently find that misconfiguration remains a top cause of cloud data exposure, especially at fast-growing startups without mature governance.
Any service handling intimate content should, at a minimum, use private-by-default buckets, strict identity and access management, short-lived presigned links, encryption at rest and in transit, and continuous posture monitoring. Personally identifying details should never be embedded in media filenames, and metadata should be minimized or tokenized so that files cannot easily be matched back to individuals.
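As a rough illustration only (not Secret Desires' actual stack), the sketch below assumes an AWS S3-compatible store accessed with Python and boto3; the bucket name and object key are hypothetical. It shows the basic pattern: block public access, enforce default encryption, and hand out short-lived presigned links instead of permanent public URLs.

```python
# Minimal sketch (assumes AWS S3 via boto3); bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
BUCKET = "user-media-private"  # hypothetical bucket name

# 1. Keep the bucket private by default: block all public ACLs and policies.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# 2. Enforce encryption at rest with default server-side encryption.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# 3. Serve media through short-lived presigned links rather than public URLs.
#    The object key is an opaque token, not a user's name or other PII.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "media/7f3a9c2e.jpg"},
    ExpiresIn=300,  # link expires after five minutes
)
print(url)
```

The same idea carries over to any object store: deny public reads by default and scope every link to a short window and a single object.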

The Risk of Deepfakes Isn’t Theoretical
AI face swap and image-to-image tools can quickly generate explicit deepfakes from images mined from social media, school portraits or dating sites.
Advocates and academics have documented that the vast majority of deepfake targets are women. Pairing these tools with AI chatbots that promise “limitless intimacy” further normalizes production at scale while lowering the technical barrier for people with no prior experience.
Law enforcement and child safety groups, including the National Center for Missing and Exploited Children, have called on platforms to proactively monitor for synthetic sexual imagery of minors. Measures like hashing and PhotoDNA-style matching, proactive filtering of known abusive content, and age estimation can mitigate the risk, though imperfectly. Industry-wide watermarking and provenance standards such as C2PA can aid traceability, provided they are adopted at scale.
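To make the hash-matching idea concrete, here is a minimal sketch assuming the open-source Pillow and ImageHash libraries, with hypothetical hash values and file paths. Production systems such as PhotoDNA use proprietary, far more robust hashes and vetted hash lists supplied by organizations like NCMEC, but the principle is the same: compare a perceptual fingerprint of each upload against known prohibited material.

```python
# Minimal sketch of perceptual-hash matching (assumes the Pillow and ImageHash
# packages); the hash list and file path below are hypothetical placeholders.
from PIL import Image
import imagehash

# Hashes of known prohibited images, supplied in practice by a vetted source.
KNOWN_HASHES = [imagehash.hex_to_hash("a1b2c3d4e5f60718")]  # hypothetical value
MAX_DISTANCE = 5  # small Hamming distance tolerates resizing/re-encoding

def is_known_abusive(path: str) -> bool:
    """Return True if the image is perceptually close to a known-bad hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if is_known_abusive("upload.jpg"):
    print("Block the upload and escalate for human review and reporting.")
```

Matching of this kind only catches material that is already known, which is why the watermarking and provenance efforts mentioned above matter for newly generated content.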
Compliance and Liability Are Trailing Behind in AI Safety
Beyond reputational harm, AI platforms face growing legal exposure. Regulators have indicated that weak security and misleading safety claims may amount to unfair or deceptive business practices. In jurisdictions with data protection laws, mixing personally identifiable information with intimate content can trigger breach notification obligations and substantial fines. With the EU's AI Act and state-level deepfake legislation coming into force, the direction of travel is clear: more scrutiny of high-risk use cases and stronger accountability for abuse prevention.
For sexual-content AI tools, a defensible compliance posture increasingly means supporting age assurance measures such as positive age screening, explicit consent flows for training data, granular reporting tools for users and fast takedown pipelines built with trusted flaggers like the IWF and NCMEC. Above all, security cannot be bolted on after growth; it has to be built in.
If You Believe You Were Affected by This Data Exposure
- If you suspect your images were involved, preserve evidence, report the images to the platforms hosting them and consider filing a report with NCMEC if minors are depicted.
- Victims can also seek removal through established industry protocols for handling intimate-image abuse and deepfake takedowns.
- Where data protection laws apply, access and deletion requests can compel platforms to disclose what was stored and begin erasing it.
The Secret Desires leak is a warning shot for the entire “NSFW AI” sector. When intimacy is the product, there is no margin for error. Open buckets and face swap gimmicks may juice growth, but they also invite the kind of harm, and the kind of scrutiny, that can kill a platform overnight.
