A new watchdog investigation says Apple’s App Store and Google Play quietly hosted dozens of “nudify” apps that use AI to strip clothing from photos, despite policies that ban sexually explicit and non-consensual imagery. Both companies have begun removals and suspensions, but the scale highlighted by researchers suggests enforcement has not kept pace with rapidly evolving deepfake tools.
What the Investigation Found Across Both App Stores
The Tech Transparency Project, a research initiative of the nonprofit Campaign for Accountability, identified 55 nudify apps on Google Play and 47 on Apple’s App Store. The group says many of these apps are marketed with euphemisms like “photo enhancer” or “body filter,” but their core feature is the same: generating sexualized deepfakes from user-uploaded images without consent.

Using data from AppMagic, the report estimates the flagged apps have amassed more than 705 million downloads globally and generated $117 million in revenue. Because both platforms take a commission, typically 15% to 30%, on in-app purchases and subscriptions, the findings imply Apple and Google have directly profited from the category, even as their own rules prohibit it.
How These Apps Slipped Through App Store Review Systems
App store policies are clear on paper. Apple’s guidelines prohibit pornographic content and apps that facilitate exploitation or harassment. Google’s Developer Program Policies bar sexually explicit content, including non-consensual sexualization and “deepfake” imagery that targets individuals. Yet investigators found developers circumvented screening by avoiding explicit keywords, gating nudify features behind paywalls, or moving AI generation to cloud servers that switch the feature on only after installation, tactics that make static, pre-publication app review less effective.
Branding and age-gates also play a role. Some apps lean on “for entertainment only” disclaimers, generic icons, and innocuous screenshots to pass initial checks. Others rely on frequent rebranding and new publisher accounts, a pattern long seen in moderation cat-and-mouse games around spyware, adware, and gambling apps. The result is a review system optimized for scale that can miss determined abuse at the margins.
Apple and Google Respond to Watchdog Deepfake Report
After the report’s publication, Apple told CNBC it removed 28 of the identified apps, while Google said it suspended several and is continuing its investigation. The companies did not detail their review methods or whether they would increase proactive detection for AI image-generation features specifically designed to produce sexualized content.
Critics argue that reactive removals are not enough given the category’s growth. Recommendation systems and ad placements can amplify borderline apps before a takedown arrives, and revenue models built on subscriptions or paid credits can create strong incentives for developers to quickly re-upload under new names. Without deeper pre-publication scrutiny for high-risk AI imaging tools, the cycle likely continues.

The Human Harm Behind the Downloads and Deepfakes
Non-consensual deepfakes can devastate targets, damaging reputations, careers, and mental health. Research firms such as Sensity have repeatedly found that the vast majority of online deepfake content is sexual imagery, often exceeding 90% of sampled datasets. Advocacy groups including the Cyber Civil Rights Initiative say women are disproportionately targeted, with harassment often escalating after a single image spreads through social networks and messaging apps.
The risk is not abstract. Schools and workplaces have documented incidents in which altered images were used for extortion or public shaming. Because many nudify apps promise one-click results from ordinary photos, bystanders and acquaintances can weaponize casual snapshots—turning a routine image into an abusive, synthetic nude in seconds.
Where Enforcement Could Improve Across App Marketplaces
Experts say app stores can tighten review by flagging submissions that combine face detection with body-transformation models, requiring demonstrable safeguards for any AI imaging features, and running apps on test devices to compare marketing claims with actual behavior. Clearer disclosure and labeling for any generative AI use, combined with stricter bans on sexualization features, would narrow the loopholes.
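To make the first of those ideas concrete, here is a minimal, hypothetical sketch of the kind of pre-publication heuristic experts describe. The submission schema, field names, and keyword lists are assumptions for illustration only, not any store’s actual review tooling; the point is simply that the risky combination of face detection plus body-transformation capability is mechanically detectable before an app goes live.

```python
# Illustrative sketch only: a hypothetical screening heuristic for app
# submissions. The schema and signal lists below are assumptions, not any
# app store's real review pipeline.

FACE_SIGNALS = {"face_detection", "face_landmarks", "face_parsing"}
BODY_TRANSFORM_SIGNALS = {"inpainting", "body_segmentation", "image_to_image"}

def risk_flags(submission: dict) -> list[str]:
    """Return review flags for an app submission (hypothetical schema)."""
    declared = {d.lower() for d in submission.get("declared_ml_features", [])}
    listing_text = " ".join(
        [submission.get("description", "")] + submission.get("keywords", [])
    ).lower()

    flags = []
    # High-risk combination: face detection plus body/image transformation.
    if declared & FACE_SIGNALS and declared & BODY_TRANSFORM_SIGNALS:
        flags.append("face detection combined with body-transformation model")
    # Euphemistic marketing paired with server-side generation, which a static
    # scan of the shipped binary alone would not reveal.
    euphemisms = ("enhancer", "body filter", "undress")
    if submission.get("uses_remote_inference") and any(
        term in listing_text for term in euphemisms
    ):
        flags.append("euphemistic marketing with remote AI generation")
    return flags

# A submission that this heuristic would route to human review.
example = {
    "declared_ml_features": ["face_detection", "inpainting"],
    "description": "AI photo enhancer with body filter effects",
    "keywords": ["photo", "enhancer"],
    "uses_remote_inference": True,
}
print(risk_flags(example))
```

A heuristic like this would only be a first filter; the report’s findings suggest the harder problem is pairing such signals with human review and follow-up checks after an app ships.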
Transparency also matters. Regular public reports detailing removals by category, estimated impacted users, and revenue clawbacks would create accountability. For repeat offenders, store-wide developer bans and charge reversals could change incentives. Given the cross-platform nature of these apps, coordinated enforcement between Apple and Google would reduce whack-a-mole re-uploads.
Policy and Legal Pressure Is Rising on AI Deepfake Apps
Lawmakers and regulators are circling. Several U.S. states have enacted laws targeting non-consensual deepfakes, and the EU’s emerging AI rules push for transparency and safeguards around synthetic media. The FTC has warned it will scrutinize deceptive or harmful AI uses, including impersonation and intimate-image abuse. Those signals increase risk for platforms that fail to act decisively against nudify tools.
Bottom Line: App Store Policies Lag Behind Deepfake Abuse
The revelation that Apple and Google stores hosted dozens of nudify apps underscores a widening gap between policy and practice. Swift removals are a start, but the numbers in the report—hundreds of millions of downloads and nine-figure revenues—show how quickly harmful AI features can scale. To protect users, the app stores will need to move from reactive takedowns to systematic, AI-aware gatekeeping that stops abuse before it reaches the top of the charts.
