Google announced on Thursday that it would take the rare proactive step of blocking search results to help prevent material known as non-consensual intimate imagery from being discovered, for example in the wake of a breakup. The company is partnering with StopNCII, a cross-industry effort that shares privacy-preserving unique codes (hashes) of flagged images so participating platforms can automatically detect and screen them at scale.
The move fills a long-standing gap: until now, survivors generally had to find offending links themselves and submit individual takedown requests. By plugging into a common hash bank, Google says it will seek out and demote or even delist matches without waiting for an incident report, a step advocates have pressed the company to take for years.
How the proactive system operates to curb abuse
StopNCII, developed by the online safety charity SWGfL with support from leading tech companies, lets people create a unique fingerprint (hash) of an intimate image on their own device. Only the hash leaves the device, never the image itself. StopNCII then shares those hashes with participating platforms for detection.
Because StopNCII uses perceptual hashing designed to tolerate real-world tampering, participating services can usually recognize a matching image even if it has been cropped, resized, or otherwise lightly altered. Google will take these hashes and proactively remove matching content from search results, reducing the chance that people see it and that further harm is done. Survivors also receive a case number from StopNCII so they can track their case across the ecosystem.
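To make the mechanics concrete, here is a minimal sketch of how device-side perceptual hashing and hash-only sharing could work. It uses the open-source imagehash library's pHash as a stand-in; StopNCII's production pipeline is not public, so the library choice, function names, distance threshold, and file names below are illustrative assumptions rather than the service's actual implementation.

```python
# Illustrative sketch only; not StopNCII's real pipeline.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

MATCH_THRESHOLD = 10  # assumed Hamming-distance cutoff; real systems tune this empirically

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash on the user's own device.

    Only this short hash would ever be shared; the image stays local.
    """
    return imagehash.phash(Image.open(path))

def is_match(submitted_hash: imagehash.ImageHash,
             candidate_hash: imagehash.ImageHash) -> bool:
    """A platform compares hashes, never the underlying images.

    Perceptual hashes of a cropped or resized copy stay close in
    Hamming distance, so small edits still register as matches.
    """
    return (submitted_hash - candidate_hash) <= MATCH_THRESHOLD

if __name__ == "__main__":
    original = fingerprint("intimate_photo.jpg")       # hypothetical local file
    reuploaded = fingerprint("reupload_resized.jpg")   # hypothetical scraped copy
    print("Hamming distance:", original - reuploaded)
    print("Flag for review:", is_match(original, reuploaded))
```

The key property this illustrates is that subtracting two perceptual hashes yields a Hamming distance, so a lightly edited copy lands close to the original while unrelated images land far away.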
It’s also worth noting that removal from search doesn’t erase the image from the site where it was originally posted. But lowering discoverability in search, a major traffic source, can blunt the impact while victims pursue removal from hosting providers or through the authorities.
Why this matters now for fighting image-based abuse
Image-based abuse is pervasive and underreported. A study by the Data & Society Research Institute and the Cyber Civil Rights Initiative found that millions of Americans have experienced nonconsensual sharing of intimate images, with disproportionate effects on women and L.G.B.T.Q.+ people. In the UK, the Revenge Porn Helpline continues to receive thousands of cases annually, and Australia’s eSafety Commissioner has observed an upward trend in reports of image-based abuse.
And the ecosystem response is starting to mature. StopNCII’s roster already included Meta’s Facebook and Instagram, Microsoft’s Bing, Tinder, and Bumble. Meta has said it took down tens of thousands of Instagram accounts associated with extortion scams last year, underscoring the scope of the problem and the importance of platform collaboration. Industry reporting has noted that Google’s involvement follows that of its peers, but its participation strengthens the coalition nonetheless, because search in particular can magnify abusive material, and tech companies have long been criticized for not doing enough to act on takedowns.
Regulatory tailwinds are a factor. The EU’s Digital Services Act and the UK’s Online Safety Act pressure large platforms to reduce harm, and over a dozen U.S. states have introduced legislation about non-consensual pornography. In that context, such proactive measures are starting to look like table stakes for major platforms.
Limits, risks, and the growing deepfake problem
Hash-matching is powerful, but it isn’t a panacea. It works best when survivors can submit an original or near-original image to generate a hash. Deepfakes complicate this, because there isn’t necessarily an original “source” file to hash. Google, for its part, already treats unauthorized AI-generated explicit depictions of real people as removable under its personal explicit imagery policies, which dovetails with StopNCII’s approach.
False positives are infrequent thanks to the specificity of perceptual hashes, but platforms still need review processes and appeal options. Another practical challenge is speed: proactive removal is only as effective as the frequency of hash-list updates and the cooperation of participating platforms. Transparency reports listing how many matches are detected, average response times, and error rates would help outsiders judge progress.
Finally, adversaries continuously adapt. Cropping and light edits are generally caught, but heavy manipulation or re-encoding can escape detection, which calls for continued improvement in hashing and classifier models. Cross-company collaboration remains the best defense, and Google’s participation strengthens the network effect.
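To illustrate that trade-off, here is a small sketch, continuing the imagehash assumption from the earlier snippet, of how a platform might screen crawled images against a shared hash bank: a tight distance threshold keeps false positives rare but lets heavily edited copies slip through, while borderline hits go to human review rather than automatic action. The threshold values and the screen function are hypothetical, not any platform's documented behavior.

```python
# Hypothetical screening pass over a platform's crawled image hashes.
# Threshold values are illustrative; production systems tune them empirically.
from typing import Iterable
import imagehash

AUTO_MATCH = 6    # very close: treat as a confident match
REVIEW_BAND = 14  # borderline: route to human review instead of auto-acting

def screen(candidate: imagehash.ImageHash,
           hash_bank: Iterable[imagehash.ImageHash]) -> str:
    """Classify a crawled image's hash against survivor-submitted hashes."""
    best = min(candidate - h for h in hash_bank)
    if best <= AUTO_MATCH:
        return "demote_or_delist"  # confident match: act proactively
    if best <= REVIEW_BAND:
        return "human_review"      # borderline: guard against false positives
    return "no_action"             # heavily manipulated copies may land here and evade detection
```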
What survivors can do right now to protect themselves
Even as Google rolls out proactive blocking, survivors can still use StopNCII to obtain a hash and case number for cross-platform action. Google’s existing tool for removing personal explicit images remains available for nonconsensual uploads, including explicit deepfakes. Parallel steps include contacting the hosting site, preserving evidence for potential law enforcement action in sextortion cases, and seeking assistance from organizations such as the Cyber Civil Rights Initiative, the Revenge Porn Helpline (in the UK), or Australia’s eSafety Commissioner.
The larger upshot: proactive search suppression won’t solve the scourge of image-based abuse, but it can lessen harm at one particularly crucial choke point. With search now inside the StopNCII tent, alongside major social platforms and dating apps, survivors should encounter fewer obstacles, and fewer search results, when they need relief most.