A new investigation has found that Google Play and the Apple App Store were distributing dozens of AI-powered “nudify” and face-swap apps, despite long-standing policies against sexual content. Some of the apps were removed after the findings were shared with the companies, but others remain available — and many generated meaningful revenue through in-app purchases the platforms tax by up to 30%.
Report finds dozens of AI “nudify” and face-swap apps
The Tech Transparency Project identified 55 such apps on Google Play and 47 on the Apple App Store, with 38 appearing in both stores. Investigators simply searched for terms like “nudify” and “undress,” surfacing a mix of AI image and video generators as well as face-swap tools capable of producing sexualized images of women without consent. After the list was shared, Google removed 31 apps and Apple removed 25, according to the group’s findings, though several titles are still live.
- Report finds dozens of AI “nudify” and face-swap apps
- Revenue and the 30% question for app store AI “nudify” apps
- Policy gaps and platform liability in AI sexualized image apps
- How these apps evade review and bypass app store safeguards
- What platforms could do next to curb AI nudify app abuse
- The bottom line on app store AI “nudify” and face-swap abuse
The discovery lands amid a broader surge in consumer-grade AI tools that can fabricate intimate imagery in minutes. Unlike fringe websites, app store distribution confers a veneer of legitimacy — and access to mainstream payment systems — even when the resulting outputs violate platform rules and local laws.
Revenue and the 30% question for app store AI “nudify” apps
The business model behind these apps is straightforward: freemium features, followed by subscriptions or pay-per-use credits. AppMagic data cited in the report shows the generator DreamFace has pulled in about $1 million, a sum from which store operators typically take up to a 30% cut. DreamFace was removed from Apple’s storefront but remains available on Google Play.
Another app, Collart, accepted prompts to undress women and even to depict pornographic scenarios, the researchers found. It was removed from Apple’s store but is still available on Google Play. Face-swap tools, such as RemakeFace, present an especially acute risk because they allow users to superimpose the face of a person they know onto explicit bodies. That app is still available on both platforms.
Policy gaps and platform liability in AI sexualized image apps
Both companies prohibit sexual content in apps and developer policies explicitly bar pornographic material. Yet dynamic AI outputs make those rules harder to enforce. A generative app can pass initial review with benign examples while its back-end model readily produces explicit content after publication, shifting risk onto users and victims while platforms continue to process payments.
Past research underscores the harm. Sensity’s landmark analysis found in 2019 that 96% of deepfakes online were pornographic and that nearly all targets were women. By making such tools a tap away on official stores — often with polished branding and high user ratings — the threshold for abuse drops further, while detection and remedy remain slow and reactive.
How these apps evade review and bypass app store safeguards
The mechanics of evasion are familiar to trust-and-safety teams. Developers avoid obvious keywords in listings, keep marketing copy vague, and toggle content filters server-side after approval. Some host models off-device, letting them swap in less-restricted checkpoints without a new app update. Keyword screening is easy to bypass with synonyms, and age ratings or disclaimers provide little protection against non-consensual abuse.
Even when takedowns occur, copycat apps often appear under new developer accounts. The result is a whack-a-mole cycle that favors bad actors comfortable churning out reskinned clones.
What platforms could do next to curb AI nudify app abuse
Experts in online safety point to a more proactive posture.
That includes:
- Pre- and post-launch audits that stress-test prompts for sexualized outputs
- Mandatory on-device and server-side safety filters with auditable logs
- Rapid-response pipelines that remove offending apps and terminate associated developer accounts within hours, not days
Stores could also require:
- Stronger developer identity verification
- Restrictions on external model switching without re-review
- Targeted detection for “nudify” and explicit face-swap capabilities
Crucially, they should provide:
- Swift reporting channels for victims
- Evidence preservation for law enforcement
- User notifications for those who may have funded abusive outputs via subscriptions
The bottom line on app store AI “nudify” and face-swap abuse
App store policies are clear on paper, but enforcement has not kept pace with generative AI. The TTP findings show that harmful “nudify” and face-swap apps can slip through review, earn money, and persist even after scrutiny. Until Apple and Google move from reactive takedowns to systemic prevention, their marketplaces will continue to facilitate — and profit from — a growing category of non-consensual AI abuse.