Google is launching a new feature in Gemini that can examine a video and tell you whether any part of it was made or edited with Google’s own AI. The feature scans both the audio and the visuals for SynthID, an invisible watermark developed by Google DeepMind to mark AI-generated media. Some technologists call it a major step in the effort to combat fake footage at a time when synthetic video is growing explosively and pushing the boundaries of what looks real.
How Gemini Flags AI Video Using SynthID Watermarks
In the Gemini app, you can upload a video and ask whether any of its footage is AI-generated. Gemini scans both the frames and the soundtrack for SynthID, then returns a structured answer reporting where the watermark was found; it may, for example, detect SynthID in the audio of a given time interval while finding no visual markers there.
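Google hasn’t published a formal schema for these reports, but a segment-level result might look something like the hypothetical sketch below. Every field name here is illustrative, not part of any real Gemini API.

```python
from dataclasses import dataclass

# Hypothetical shape of a segment-level SynthID report. Google has not
# published a schema; all field names here are illustrative only.
@dataclass
class SynthIDSegment:
    start_s: float          # segment start, in seconds
    end_s: float            # segment end, in seconds
    audio_detected: bool    # watermark found in the soundtrack
    visual_detected: bool   # watermark found in the frames

# Example: SynthID heard in the audio of 0-12s, but not seen in the frames.
segments = [
    SynthIDSegment(0.0, 12.0, audio_detected=True, visual_detected=False),
    SynthIDSegment(12.0, 35.0, audio_detected=False, visual_detected=False),
]

for seg in segments:
    hits = [name for name, found in
            [("audio", seg.audio_detected), ("visual", seg.visual_detected)]
            if found]
    label = "SynthID in " + " and ".join(hits) if hits else "no SynthID found"
    print(f"{seg.start_s:.0f}-{seg.end_s:.0f}s: {label}")
```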

The detection works across all languages supported by the app and is currently limited to videos up to 100MB in size and roughly 90 seconds in duration. Behind the scenes, SynthID embeds imperceptible signals designed to survive typical edits, such as compression, cropping, or re-encoding, the kinds of actions that frequently strip traditional metadata. This lets Gemini make segment-level calls rather than a single yes-or-no verdict for the whole clip.
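As a practical matter, you can pre-check a clip against those limits before uploading. A minimal sketch, assuming OpenCV for reading the duration and taking the reported 100MB and 90-second caps at face value:

```python
import os
import cv2  # pip install opencv-python

MAX_BYTES = 100 * 1024 * 1024  # 100MB cap as reported; exact accounting may differ
MAX_SECONDS = 90               # approximate duration cap

def within_upload_limits(path: str) -> bool:
    """Rough pre-flight check against the reported Gemini upload limits."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.release()
    duration = frames / fps if fps else float("inf")
    return duration <= MAX_SECONDS

print(within_upload_limits("clip.mp4"))  # e.g. True for a short phone clip
```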
What SynthID Does and Doesn’t Cover in AI Detection
One big caveat: Gemini can detect only SynthID. That means it can authenticate material created or edited with Google’s AI tools, and with tools from certain partners that support SynthID, but it cannot verify media from systems that do not embed this watermark. In other words, a video generated by another model may still come back “clean” simply because it carries no compatible watermark.
Google has pitched SynthID to other industry players, and companies like NVIDIA and Hugging Face have experimented with similar integrations. But the larger ecosystem remains fractured: some labs embed their own watermarks, some rely on metadata-level approaches, and many outputs circulate with no provenance signals at all. As a result, “no detection” should be read as “no SynthID found,” not as evidence of authenticity.
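That asymmetry is easy to state precisely. A tiny illustrative helper (not any real API) that captures the only safe readings of a result:

```python
def interpret(synthid_found: bool) -> str:
    """Map a detection result to the only conclusions it actually supports."""
    if synthid_found:
        return "SynthID present: made or edited with Google AI or a partner tool."
    # Absence of the watermark is NOT evidence of authenticity.
    return "No SynthID found: inconclusive; could still be AI from another system."

print(interpret(False))
```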
Why This Matters for Misinformation and Public Trust
The capacity to label AI-edited segments adds much-needed nuance to verification workflows for journalists, platforms, and civil society organizations. It’s not just about catching fully synthetic deepfakes; it could help uncover partial manipulations, such as swapped audio laid over real footage, which are often more convincing than outright fabrications.
Public anxiety is real: according to the Reuters Institute’s Digital News Report, a majority of people worry about their ability to tell real from fake online. High-profile incidents, from voice-cloning robocalls to manipulated war videos, have shown how quickly synthetic media can spread and how slowly traditional verification methods respond without better digital tools.

How It Fits In With Emerging Standards and Policies
Watermarking is only one piece of the provenance puzzle. The Coalition for Content Provenance and Authenticity is pushing signed metadata called Content Credentials, which attach a tamper-evident record of edits to a file. The approach has been championed by Adobe and a group of large publishers, and Google is a member of the coalition. At the policy level, NIST and regulators in many regions are already calling for greater transparency around synthetic media.
Each method has trade-offs. Invisible watermarks such as SynthID can survive common edits but may falter on heavily transformed or reshot content. Cryptographic provenance is strong end to end when present, but the metadata can be stripped by platforms or broken by incompatible workflows. The most robust approach is layered: provenance by default, watermarking as a backup, and platform-level disclosure on top.
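A verification pipeline built on that idea might fall through the layers in order, reporting the strongest signal it finds. The sketch below is purely illustrative; the three check functions are hypothetical placeholders, not real library calls:

```python
from typing import Optional

def check_content_credentials(path: str) -> Optional[str]:
    """Placeholder: inspect C2PA Content Credentials, if any survive."""
    return None  # e.g. "signed edit trail intact", or None when stripped

def check_synthid(path: str) -> Optional[str]:
    """Placeholder: query a SynthID-style watermark detector."""
    return None

def check_platform_disclosure(url: str) -> Optional[str]:
    """Placeholder: read the platform's own synthetic-media label, if any."""
    return None

def layered_verdict(path: str, url: str) -> str:
    # Prefer the strongest available signal, falling back layer by layer.
    for layer, result in [
        ("provenance", check_content_credentials(path)),
        ("watermark", check_synthid(path)),
        ("disclosure", check_platform_disclosure(url)),
    ]:
        if result is not None:
            return f"{layer}: {result}"
    return "no signal on any layer: treat as unverified, not as authentic"

print(layered_verdict("clip.mp4", "https://example.com/watch?v=clip"))
```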
What Users Should Know Before Using Gemini Detection
Treat Gemini’s detection as a signal, not a verdict. If SynthID is detected, that is strong evidence that Google AI was involved. If nothing is found, treat the result as inconclusive and corroborate with other methods:
- Run key frames through a reverse image search (see the frame-extraction sketch after this list)
- Check that shadows and reflections are consistent
- Listen for audio that doesn’t quite sound right
- Cross-check the story with trusted outlets
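For the first item, pulling frames to feed a reverse image search is straightforward. A minimal sketch using OpenCV, sampling one frame every few seconds:

```python
import cv2  # pip install opencv-python

def extract_key_frames(path: str, every_s: float = 5.0) -> list:
    """Save one frame every `every_s` seconds for reverse image search."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreadable
    step = max(1, int(fps * every_s))
    saved, index = [], 0
    ok, frame = cap.read()
    while ok:
        if index % step == 0:
            name = f"frame_{index:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
        ok, frame = cap.read()
    cap.release()
    return saved

print(extract_key_frames("clip.mp4"))  # e.g. ['frame_000000.jpg', ...]
```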
Also consider privacy and context. Only upload clips you actually have permission to analyze, and keep the file-size and length limits in mind. With platforms like YouTube rolling out synthetic-media disclosures and newsrooms baking provenance checks into their workflows, detectors like Gemini’s can slot into a larger verification process rather than serve as standalone solutions.
Bottom line: Gemini’s updated video detection is a valuable step toward clearer media provenance. It won’t catch everything, but by flagging where AI was used, and doing so at the level of segments, it gives users, creators, and fact-checkers a useful tool for separating signal from noise in an increasingly synthetic feed.
