
Secret Desires AI Leak Reveals Millions of Photos

By Gregory Zuckerman
Last updated: November 19, 2025, 11:04 pm
Technology
7 Min Read

An AI chatbot and image generator sold predominantly as a tool for creating “spicy” roleplay images has been all but abandoned by its developers, leaving potentially millions of logged-in users’ personal content and details publicly available on the open internet. As first reported by 404 Media, Secret Desires left its cloud storage containers unsecured, exposing nearly two million images and videos alongside names, workplaces and universities.

The incident is a sobering illustration of how rapidly proliferating, adult-oriented AI services can combine sensitive content with lax security, creating a disproportionate risk not just of nonconsensual deepfakes but also of potential child sexual abuse material.

Table of Contents
  • What Investigators Found Left Exposed in Secret Desires
  • A Cloud Security Failure Waiting to Happen
  • The Risk of Deepfakes Isn’t Theoretical
  • Compliance and Liability Are Trailing Behind in AI Safety
  • If You Believe You Were Affected by This Data Exposure

Within an hour of journalists contacting the company, the exposed files were no longer accessible; Secret Desires did not respond to a request for comment.

What Investigators Found Left Exposed in Secret Desires

Among the explicit, AI-generated images and videos researchers discovered in the open buckets was material created by a now-defunct face-swap feature. The trove reportedly included images scraped from social networks as well as private screenshots, with files linked to influencers, public figures and everyday internet users. Disturbingly, some filenames contained words suggesting the images depicted underage victims, underscoring how readily AI tools can be weaponized to produce illegal content.

More concerning still, the storage structure appears to have contained personally identifiable information, not just imagery. Combining explicit media files with a person’s name, school and workplace supercharges harassment, extortion and doxing, harms that experts at organizations like the Electronic Frontier Foundation and Internet Watch Foundation have already forecast will worsen as generative AI tools become more widely available.

A Cloud Security Failure Waiting to Happen

The underlying issue here, poorly configured cloud storage, is both mundane and widespread. Security teams have long warned that public buckets and loose access controls are among the most common paths to megaleaks. Threat assessments from leading security companies consistently echo this, finding that misconfiguration remains one of the top causes of cloud data exposure, especially for quickly growing startups without mature governance.
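
None of Secret Desires’ infrastructure details have been published, but as a minimal sketch of the kind of guardrail that prevents this class of leak, the snippet below (Python with boto3, using a hypothetical bucket name) checks whether an AWS S3 bucket is publicly readable and then enforces “block public access” settings; equivalent controls exist on other cloud providers.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "user-media-example"  # hypothetical bucket name

    # Report whether an attached bucket policy currently makes the bucket public.
    try:
        status = s3.get_bucket_policy_status(Bucket=BUCKET)
        print("Publicly accessible:", status["PolicyStatus"]["IsPublic"])
    except ClientError:
        print("No bucket policy attached; relying on ACLs and account defaults.")

    # Enforce "block public access" so a misconfigured ACL or policy cannot expose objects.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )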

Any service handling intimate content today should, at a minimum, use private-by-default buckets, strict identity and access management, short-lived presigned links, encryption at rest and in transit, and continuous posture monitoring. Sensitive personal data should never be stored alongside media filenames, and metadata should be minimized or tokenized so records cannot be trivially matched to individuals.
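
To make the “short-lived links, tokenized metadata” point concrete, here is a rough sketch, again assuming S3 and boto3 with hypothetical names rather than anything Secret Desires actually ran: uploads are stored under a random token instead of the user’s name, and downloads go through presigned URLs that expire within minutes.

    import secrets
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "user-media-example"  # hypothetical private, encrypted bucket

    def store_upload(user_id: str, data: bytes) -> str:
        # Store media under a random token; the user_id-to-token mapping lives
        # in a separate, access-controlled database, never in the object key.
        token = secrets.token_urlsafe(16)
        s3.put_object(Bucket=BUCKET, Key=f"media/{token}", Body=data,
                      ServerSideEncryption="AES256")
        return token

    def temporary_link(token: str, ttl_seconds: int = 300) -> str:
        # Issue a presigned URL that expires quickly instead of a permanent public URL.
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": f"media/{token}"},
            ExpiresIn=ttl_seconds,
        )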


The Risk of Deepfakes Isn’t Theoretical

AI face-swap and image-to-image tools can quickly generate explicit deepfakes from photos mined from social media, school portraits or dating sites.

Advocates and academics have documented that the vast majority of deepfake targets are women. Pairing these tools with AI chatbots that promise “limitless intimacy” further normalizes production at scale while lowering the technical barrier for users with no prior experience.

Law enforcement and child safety groups, including the National Center for Missing and Exploited Children, have called on platforms to actively monitor for synthetic sexual imagery of minors. Measures like hashing and PhotoDNA-style matching, proactive filtering of known abusive content, and age estimation can mitigate the risk, though imperfectly. Systemwide watermarking and provenance standards such as C2PA can assist with traceability, provided they are adopted at scale.
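
PhotoDNA itself is proprietary, but the general idea of hash-based matching against known abusive content can be illustrated with the open-source Pillow and imagehash packages; the file paths and distance threshold below are assumptions for demonstration, not a production detection system.

    from PIL import Image
    import imagehash

    def matches_known_image(candidate_path, known_hashes, max_distance=5):
        # Compare the candidate's perceptual hash against hashes of previously
        # flagged images; small Hamming distances indicate near-duplicates.
        candidate = imagehash.phash(Image.open(candidate_path))
        return any(candidate - known <= max_distance for known in known_hashes)

    # Example: build a small reference set and screen a new upload against it.
    known = [imagehash.phash(Image.open(p)) for p in ["flagged_a.png", "flagged_b.png"]]
    print(matches_known_image("upload.png", known))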

Compliance and Liability Are Trailing Behind in AI Safety

In addition to reputational harm, AI platforms face growing legal exposure. Regulators have indicated that weak security and misleading safety claims may amount to unfair or deceptive business practices. In jurisdictions with data protection laws, mixing personally identifiable information with intimate content can trigger breach notification obligations and substantial fines. With the EU’s AI Act and state-level deepfake legislation moving forward, there is an increasingly clear mandate for closer scrutiny of high-risk use cases and stronger accountability for abuse prevention.

For sexual-content AI tools, a defensible compliance posture today increasingly means robust age assurance, explicit consent flows for training-data acquisition, granular reporting tools for users and rapid takedown pipelines run in collaboration with trusted flaggers like the IWF and NCMEC. As security practitioners routinely point out, security cannot be bolted on after growth; it has to be built in.
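
The article does not describe any platform’s internal tooling, so purely as a hypothetical illustration of what a “granular reporting and quick takedown” pipeline might start from, the sketch below defines a minimal report record and routes suspected child sexual abuse material to an escalation queue; every name and field here is an assumption.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AbuseReport:
        reporter_id: str
        content_key: str   # tokenized storage key, never the victim's name
        category: str      # e.g. "nonconsensual_imagery" or "suspected_csam"
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        escalated: bool = False

    def triage(report: AbuseReport) -> str:
        # Hypothetical routing: suspected CSAM goes straight to an escalation
        # queue (for example, NCMEC reporting); everything else to takedown review.
        if report.category == "suspected_csam":
            report.escalated = True
            return "escalation_queue"
        return "takedown_review"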

If You Believe You Were Affected by This Data Exposure

  • Those who suspect their images were included should preserve evidence, report the images to the platforms where they appear and consider filing a report with NCMEC if minors are depicted.
  • Victims can also seek removal through industry-established protocols that handle intimate-image abuse and deepfake takedowns.
  • Where they exist, data protection requests can force platforms to reveal what was stored and begin the deletion process.

The Secret Desires leak is a shot across the bow for the entire “NSFW AI” sector. When intimacy is the product, there is zero margin for error. Open buckets and face-swap gimmicks can juice growth, but they also attract the kind of harm, and scrutiny, that can kill a platform overnight.

Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.