FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

Google And Apple Hosted Dozens Of AI Nudify Apps

Gregory Zuckerman
Last updated: January 28, 2026 10:04 am
By Gregory Zuckerman
Technology
6 Min Read
SHARE

A new investigation has found that Google Play and the Apple App Store were distributing dozens of AI-powered “nudify” and face-swap apps, despite long-standing policies against sexual content. Some of the apps were removed after the findings were shared with the companies, but others remain available — and many generated meaningful revenue through in-app purchases the platforms tax by up to 30%.

Report finds dozens of AI “nudify” and face-swap apps

The Tech Transparency Project identified 55 such apps on Google Play and 47 on the Apple App Store, with 38 appearing in both stores. Investigators simply searched for terms like “nudify” and “undress,” surfacing a mix of AI image and video generators as well as face-swap tools capable of producing sexualized images of women without consent. After the list was shared, Google removed 31 apps and Apple removed 25, according to the group’s findings, though several titles are still live.

Table of Contents
  • Report finds dozens of AI “nudify” and face-swap apps
  • Revenue and the 30% question for app store AI “nudify” apps
  • Policy gaps and platform liability in AI sexualized image apps
  • How these apps evade review and bypass app store safeguards
  • What platforms could do next to curb AI nudify app abuse
  • The bottom line on app store AI “nudify” and face-swap abuse
The Google Play Store logo, a colorful triangle made of green, blue, red, and yellow segments, centered on a light blue gradient background with subtle hexagonal patterns.

The discovery lands amid a broader surge in consumer-grade AI tools that can fabricate intimate imagery in minutes. Unlike fringe websites, app store distribution confers a veneer of legitimacy — and access to mainstream payment systems — even when the resulting outputs violate platform rules and local laws.

Revenue and the 30% question for app store AI “nudify” apps

The business model behind these apps is straightforward: freemium features, followed by subscriptions or pay-per-use credits. AppMagic data cited in the report shows the generator DreamFace has pulled in about $1 million, a sum from which store operators typically take up to a 30% cut. DreamFace was removed from Apple’s storefront but remains available on Google Play.

Another app, Collart, accepted prompts to undress women and even to depict pornographic scenarios, the researchers found. It was removed from Apple’s store but is still available on Google Play. Face-swap tools, such as RemakeFace, present an especially acute risk because they allow users to superimpose the face of a person they know onto explicit bodies. That app is still available on both platforms.

Policy gaps and platform liability in AI sexualized image apps

Both companies prohibit sexual content in apps and developer policies explicitly bar pornographic material. Yet dynamic AI outputs make those rules harder to enforce. A generative app can pass initial review with benign examples while its back-end model readily produces explicit content after publication, shifting risk onto users and victims while platforms continue to process payments.

Past research underscores the harm. Sensity’s landmark analysis found in 2019 that 96% of deepfakes online were pornographic and that nearly all targets were women. By making such tools a tap away on official stores — often with polished branding and high user ratings — the threshold for abuse drops further, while detection and remedy remain slow and reactive.

How these apps evade review and bypass app store safeguards

The mechanics of evasion are familiar to trust-and-safety teams. Developers avoid obvious keywords in listings, keep marketing copy vague, and toggle content filters server-side after approval. Some host models off-device, letting them swap in less-restricted checkpoints without a new app update. Keyword screening is easy to bypass with synonyms, and age ratings or disclaimers provide little protection against non-consensual abuse.

The Google Play logo, featuring a colorful triangular play button icon and the text Google Play in gray, set against a professional flat design background with a soft blue and green gradient.

Even when takedowns occur, copycat apps often appear under new developer accounts. The result is a whack-a-mole cycle that favors bad actors comfortable churning out reskinned clones.

What platforms could do next to curb AI nudify app abuse

Experts in online safety point to a more proactive posture.

That includes:

  • Pre- and post-launch audits that stress-test prompts for sexualized outputs
  • Mandatory on-device and server-side safety filters with auditable logs
  • Rapid-response pipelines that remove offending apps and terminate associated developer accounts within hours, not days

Stores could also require:

  • Stronger developer identity verification
  • Restrictions on external model switching without re-review
  • Targeted detection for “nudify” and explicit face-swap capabilities

Crucially, they should provide:

  • Swift reporting channels for victims
  • Evidence preservation for law enforcement
  • User notifications for those who may have funded abusive outputs via subscriptions

The bottom line on app store AI “nudify” and face-swap abuse

App store policies are clear on paper, but enforcement has not kept pace with generative AI. The TTP findings show that harmful “nudify” and face-swap apps can slip through review, earn money, and persist even after scrutiny. Until Apple and Google move from reactive takedowns to systemic prevention, their marketplaces will continue to facilitate — and profit from — a growing category of non-consensual AI abuse.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
5 Best Budget FDM 3D Printers Under $500 in 2026
The Future of Vehicles: How the 2 Barrel Carburetor Is Evolving
Which Android Apps Are Best for Photo Editing
Sexologist vs. Sex Therapist: What’s the Difference and Why It Matters for Your Sexual Health
Best Wear OS Watch Faces Revealed for Pixel and Galaxy
Nothing Phone 4a Pro Leak Points To IP65 Upgrade
Doomsday Clock Ticks to 85 Seconds to Midnight
Why Flexible LED Strip Lights Wholesale Is the Smart Choice for Modern Lighting Systems
Android 17 Tests Expanded Blur Across The Interface
YouTube TV Rolls Out Smarter DVR Chapters
Google Accidentally Reveals Aluminium OS Interface
Why Do Montessori Busy Books Recommend Babies?
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.