A New Jersey lawsuit over an AI “nudification” app is laying bare a brutal truth about the internet’s latest abuse crisis: even when the images are plainly illegal, victims face an uphill battle to stop the spread and hold creators to account.
The case targets ClothOff, a service accused of stripping clothes from photos and generating explicit deepfakes. Though removed from major app stores and banned on mainstream platforms, it remains accessible via the open web and messaging bots, underscoring how quickly bad actors can reappear after takedowns.

A Case That Exposes the Enforcement Gap in New Jersey
Filed by a Yale Law School legal clinic on behalf of an anonymous New Jersey student, the complaint seeks to shutter ClothOff and force deletion of all images. The plaintiff’s classmates allegedly used the app to alter her Instagram photos, some taken when she was 14. Under U.S. law, AI-manipulated sexual imagery of minors can qualify as child sexual abuse material — a category of content that is illegal to create, distribute, or possess.
Yet the path to relief is anything but straightforward. Investigators face familiar hurdles: devices that are hard to access, ephemeral sharing in private chats, and offshore operators. The company is reportedly incorporated in a secrecy-friendly jurisdiction, with suspected operators overseas — a structure that complicates service of process, evidence gathering, and ultimately, enforcement of any court order.
Victims caught in this gap often bounce between school administrators, local police, and platforms — each constrained by jurisdiction, resources, or policies. Even when authorities agree the content is unlawful, they may struggle to identify disseminators or secure usable forensic evidence before it disappears.
Why Platforms Are Hard to Hold Liable for Deepfakes
Individuals who create or share sexually explicit deepfake images of minors can be prosecuted under federal criminal laws, including provisions of the PROTECT Act that cover morphed or computer-generated depictions. But building a civil or criminal case against a platform is tougher. Courts often want clear evidence that a service was designed or operated with intent to facilitate illegal content, or that it knowingly ignored obvious harms.
That distinction is particularly important in the AI era. A “purpose-built” nudification tool markets a specific, abusive use case. A general-purpose AI system, by contrast, performs many functions; plaintiffs must show knowledge, recklessness, or design choices that make abuse foreseeable and unaddressed. Free-speech protections also shape the analysis, even though CSAM itself is not protected expression.
Meanwhile, intermediary liability doctrines tilt the playing field. Section 230 does not shield platforms from federal criminal law, but they frequently invoke its immunity against state-law civil claims. Without targeted statutes or clear evidence of willful blindness, lawsuits against the services that enable deepfake porn can stall, leaving victims to pursue individual wrongdoers who are hard to identify and harder to sue.
The Scale of the Abuse Keeps Growing Online
The enforcement gap is widening as the problem scales. Sensity’s landmark studies found that the vast majority of deepfake videos — 96% in early analyses — depicted non-consensual pornography, with women as the primary targets. In 2020, the firm documented a Telegram ecosystem that auto-generated sexualized images of an estimated 100,000+ women from ordinary photos.

Child safety organizations warn the risks are accelerating. The National Center for Missing and Exploited Children reported tens of millions of annual CyberTipline reports in recent years, and the FBI has issued public alerts about malicious actors using AI to fabricate sexual content featuring minors. Even when images are fake, their legal status can be the same as real abuse material if they meet statutory definitions — and their psychological and reputational harms are indisputable.
A Patchwork of Laws and Global Jurisdictions
More than a dozen U.S. states have passed laws targeting deepfake sexual imagery, building on earlier “image-based abuse” and “revenge porn” statutes. Abroad, countries including South Korea and the United Kingdom have enacted or updated regulations compelling platforms to act against illegal content, with the UK’s Online Safety Act creating new duties and penalties.
But fragmented rules meet borderless services. Operators register companies in lax jurisdictions, move infrastructure frequently, and distribute tools via encrypted apps. Even when victims win in court, collecting on a judgment against an entity with no U.S. assets is often impossible, and takedowns turn into a game of whack-a-mole as services resurface under new names.
What Would Actually Help Victims of Deepfakes
Experts point to a mix of technical and legal fixes.
On the technical side:
- mandatory provenance metadata for AI imagery
- robust hash-matching for known abusive files (a simplified sketch follows this list)
- default-on nudification and CSAM filters in commercial models
- faster triage pipelines that escalate minor-related content to trained teams and to NCMEC
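To make the hash-matching idea concrete, here is a minimal sketch of how an upload pipeline might screen files against a blocklist of known hashes. The hash values, file paths, and function names are illustrative assumptions, and production systems rely on perceptual hashes (such as PhotoDNA or the hashes shared through StopNCII) that tolerate resizing and re-encoding, rather than the exact SHA-256 matching shown here.

```python
# Illustrative sketch only: screening uploads against a blocklist of known abusive files.
# Hash values and paths are placeholders; real deployments use perceptual hashing and
# hash lists supplied by trusted clearinghouses such as NCMEC.
import hashlib
from pathlib import Path

# Placeholder blocklist; in practice this would be a vetted, regularly updated hash set.
KNOWN_ABUSE_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: Path) -> bool:
    """True if the file's digest matches an entry in the blocklist."""
    return sha256_of(path) in KNOWN_ABUSE_HASHES

if __name__ == "__main__":
    upload = Path("incoming/upload.jpg")  # hypothetical upload location
    if upload.exists() and should_block(upload):
        print("Match: quarantine the file and escalate to a trained review team.")
    else:
        print("No match: continue normal processing.")
```

In such a pipeline, a match would be quarantined and routed into the escalation and NCMEC reporting paths described above rather than silently deleted, preserving the forensic evidence that investigators so often lose.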
Legally, targeted reforms could make a difference:
- streamlined service of process on foreign entities that do business in the U.S.
- emergency data preservation orders
- ex parte asset freezes for obviously unlawful services
- a clear private right of action for victims of synthetic sexual imagery with statutory damages
Existing tools like NCMEC’s Take It Down and the industry-led StopNCII hashing initiative should be broadened to cover AI-manipulated content and integrated across hosting, search, and messaging layers.
The New Jersey case captures the dilemma in stark terms. The law is unambiguous about the illegality of sexualized images of minors. But until courts, lawmakers, and AI providers close the distance between clear prohibitions and real-world enforcement, victims will keep paying the price for technology that makes abuse effortless — and accountability elusive.