Google is extending its fight against image-based abuse by teaming up with StopNCII.org, a U.K.-based nonprofit, to identify and remove nonconsensual intimate imagery (NCII) from Search. Specifically, Google will use privacy-preserving “hashes” from the nonprofit to identify known intimate photos and videos that victims have flagged so they can no longer surface in results.
How the partnership works to curb NCII in Search
StopNCII.org allows adults to generate unique “hash” values for the images or clips they want kept off the internet, in a way that keeps matching resilient even if the files are altered. Importantly, the content never leaves the device; only the hash is sent to a secure bank that participating companies can check against. If Google’s systems detect matching content, Search can demote or remove those results from indexed pages under its longstanding personal content policies.
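For illustration, here is a minimal sketch of that on-device flow in Python. The endpoint URL, field names, and the use of a SHA-256 digest are assumptions made for the sake of a compact example; StopNCII’s real tool relies on perceptual hashing (sketched after the next paragraph), and nothing about its actual client or API is implied here. The point is simply that the media file stays local and only a one-way digest would ever be transmitted.

```python
# Sketch only: hypothetical endpoint and payload shape, not StopNCII's real API.
import hashlib
import json

HASH_BANK_URL = "https://example.org/api/submit-hash"  # hypothetical, for illustration

def hash_file_locally(path: str) -> str:
    """Compute a one-way digest of the file on the user's own device."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_submission(path: str) -> bytes:
    """Only this payload (the hash, never the media) would be POSTed to the hash bank."""
    return json.dumps({"hash": hash_file_locally(path), "media_type": "image"}).encode()

if __name__ == "__main__":
    # Assumes a local file exists at this path; the image itself is never uploaded.
    print(build_submission("private_photo.jpg"))
```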
Hashing is a technique used across the tech industry to detect harmful media. Robust hashing can match content even when adversaries crop, resize, or make minor modifications to it, in contrast to a plain file checksum. That makes it well suited to the cat-and-mouse game that plays out around NCII, where abusers frequently repost altered versions to bypass filters.
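To make the contrast with a plain checksum concrete, here is a toy “average hash” with a Hamming-distance comparison. This is not the algorithm StopNCII or Google uses; it is only a sketch of why a perceptual hash tolerates resizing and light edits that would completely change a cryptographic digest.

```python
# Toy perceptual hash for illustration; production systems use stronger algorithms.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to grayscale, then set one bit per pixel brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_match(a: int, b: int, threshold: int = 10) -> bool:
    """Treat hashes within a small bit distance as the same underlying image."""
    return hamming_distance(a, b) <= threshold

# A resized or lightly cropped repost typically stays within the threshold,
# while an unrelated image does not; the threshold value here is arbitrary.
```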
It is important to understand the difference between delisting and removal. Google cannot delete files from the web, but it can make abusive content much harder to find. The material still needs to be taken down at the source by the host, which is why StopNCII’s cross-platform approach matters: a single survivor-generated hash can be honored across multiple services.
Why this matters for tackling nonconsensual imagery
NCII (commonly referred to as “revenge porn,” though experts prefer the term “image-based sexual abuse”) is widespread and particularly damaging. The U.K.’s Revenge Porn Helpline reports a removal success rate of over 90% across hundreds of thousands of takedown requests, indicating both the breadth of the issue and the power of coordinated action. In the United States, studies from the Cyber Civil Rights Initiative and Data & Society estimate that 1 in 25 adults has been threatened with or subjected to nonconsensual exposure of intimate imagery.
Survivors say the consequences stretch far beyond embarrassment: recurring reuploads, extortion attempts, damage to careers and housing prospects, and mental health crises. Search engines are often where victims first grasp the extent of the harm, so proactive delisting is not simply a hygiene feature; it is a safety measure.
What this means for users and survivors on Google Search
Google already allows people to request removal of explicit or exploitative imagery from Search, including AI-generated fake pornography depicting real people. By using StopNCII hashes, Google takes part of that burden off victims, who historically have had to hunt down and report each offending link. Now, when a survivor generates hashes with the nonprofit’s tool, Search can match those hashes at scale as new pages are indexed.
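As a rough illustration of what indexing-time matching could look like, the sketch below checks hashes of newly indexed images against a bank of flagged hashes, reusing the toy average-hash idea above. The function names, threshold, and linear scan are hypothetical simplifications, not Google’s actual pipeline; a production system would use an indexed nearest-neighbor structure and its own review workflow.

```python
# Hypothetical sketch of indexing-time matching; names and flow are illustrative only.
from typing import Iterable

FLAGGED_HASHES: set[int] = set()  # would be populated from the shared hash bank

def hamming(a: int, b: int) -> int:
    """Count the differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

def queue_for_delisting_review(url: str) -> None:
    """Placeholder: hand the URL to policy review before demotion or removal."""
    print(f"flagged for review: {url}")

def check_new_page(image_hashes: Iterable[int], url: str, threshold: int = 10) -> bool:
    """Return True if any image on a newly indexed page is near a flagged hash."""
    for h in image_hashes:
        # Linear scan for clarity; large hash banks would need a proper near-neighbor index.
        if any(hamming(h, flagged) <= threshold for flagged in FLAGGED_HASHES):
            queue_for_delisting_review(url)
            return True
    return False
```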
This approach complements other tools. Google has expanded removals aimed at reducing the visibility of sexually explicit material that targets individuals, and it has given the public self-service dashboards for handling personal information removals. Minors, meanwhile, have their own reporting pathways through organizations such as the National Center for Missing & Exploited Children and, in the U.K., the Internet Watch Foundation.
Safeguards, limits and open questions for NCII hashing
StopNCII’s system is designed so that only the person depicted can hash their own content, which cuts down on the potential for abuse. Because the files never leave the device and the hashes are not reversible, the design preserves privacy while allowing broad collaboration across platforms.
And yet, no hash-based system is perfect. Heavy edits, composites, and other forms of generative manipulation can still evade matching, and delisting does not reach content hosted on services that do not participate. That is where transparency will be important: if companies report aggregate match rates, time-to-action, and regional coverage, researchers and advocates can assess impact and spot gaps.
Law and policy are playing catch-up as well. The U.K.’s Online Safety Act created specific offenses for sharing intimate images without consent, including deepfakes, giving law enforcement firmer grounds for action. Successful mitigation will need the three-legged stool of product changes, cross-platform cooperation, and enforceable legal penalties.
The wider industry picture and platform adoption
StopNCII is supported by a long list of platforms spanning major categories, including:
- TikTok
- Bumble
- Snapchat
- OnlyFans
- X
- Microsoft
Microsoft has integrated the hashes into Bing, and other services have joined through similar trust and safety arrangements. The more platforms that share and honor these privacy-preserving hashes, the fewer places abusers can migrate to and reoffend.
For Google, folding nonprofit-sourced signals into core search quality systems is a significant evolution: it moves beyond responding to user-submitted takedowns and ranking adjustments toward a proactive, survivor-driven model. The real test will be durability: sustaining low latency, minimizing false negatives, and publishing credible outcomes that survivors and advocates can verify.
If all goes well, the partnership will do more than clean up search results. It can shorten the lifecycle of abuse, deprive it of oxygen, and establish a baseline expectation: once flagged by the person depicted, intimate images should become all but unfindable across the mainstream web.