It’s getting harder by the day to tell which video content is real. Google’s Gemini now includes a way to determine whether a clip was created using AI, but there’s an important catch: it can only confirm videos made with Google’s own generative tools.
How the detection works with Google’s SynthID watermarks
Post a short video to Gemini and pose the question: “Was this made by AI?” The model scans the file for SynthID, Google’s invisible watermark baked into content produced by its AI systems. If SynthID is there, Gemini gives a straight-up yes or no and can indicate where in the video (or audio) the signature sits.
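The article describes the consumer Gemini app, but the same question can be posed programmatically. Below is a minimal sketch, assuming the google-generativeai Python SDK, an example model name, and a hypothetical clip.mp4; whether the API surfaces SynthID detection exactly the way the app does is an assumption, not something the rollout confirms.

```python
# Minimal sketch: upload a short clip and ask Gemini whether it was AI-generated.
# Assumes the google-generativeai SDK; whether the API path reports SynthID
# detection the same way as the consumer app is an assumption.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Upload the clip via the Files API (hypothetical local file name).
video = genai.upload_file(path="clip.mp4")

# Video files are processed asynchronously; wait until the upload is ready.
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-pro")  # example model name
response = model.generate_content([video, "Was this made by AI?"])
print(response.text)
```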
SynthID, developed by Google DeepMind, is designed to survive common transformations like recompression, format shifts, and standard social platform processing. That robustness matters, because most viral clips are re-encoded multiple times before they reach a viewer’s feed.
There are practical limits. At launch, Gemini’s video checks will support files up to 90 seconds in length and 100MB in size. The feature mirrors a similar detection process for images, expanding Google’s watermark-based provenance model from stills and audio to short-form video.
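Those limits are easy to enforce before uploading. Here is a rough pre-flight check, assuming the stated 90-second and 100MB caps and that ffprobe is installed locally; the file name is hypothetical.

```python
# Pre-flight check against the reported limits (90 seconds, 100MB) before
# sending a clip to Gemini. Uses ffprobe to read the duration; the limits
# come from the article and the file name is for illustration only.
import os
import subprocess

MAX_SECONDS = 90
MAX_BYTES = 100 * 1024 * 1024  # 100MB

def within_limits(path: str) -> bool:
    size_ok = os.path.getsize(path) <= MAX_BYTES
    # ffprobe prints the container duration in seconds as plain text.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    duration_ok = float(out.stdout.strip()) <= MAX_SECONDS
    return size_ok and duration_ok

print(within_limits("clip.mp4"))
```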
The key limitation: only videos made with Google tools
In Gemini’s case, of course, the answer is only as good as SynthID’s reach. If a video was created by a non-Google system, such as Runway, Pika, Luma, or early demos of Sora, Gemini can’t confirm it, even if the video is clearly synthetic. In other words, this is a provenance checker for Google’s ecosystem, not a universal deepfake detector.
The method also won’t flag real footage that has been heavily manipulated without any watermark involved, or reveal how a clip was generated or re-edited. It’s a narrowly focused tool for one signal, not a forensic model that can diagnose AI artifacts in every case.
Why it matters now for provenance and public trust
Public anxiety about synthetic media is growing. The Reuters Institute has found that more than half of the people it surveys say they find it hard to work out what’s real online, and people are increasingly concerned about AI-generated video and audio. Platforms and lawmakers are calling for AI content to be clearly marked, and major AI companies, including Google, have pledged to watermark AI-generated content as part of industry- and government-led safety efforts.
Watermarking is part of a broader ecosystem around provenance. The Coalition for Content Provenance and Authenticity (C2PA) has been pushing “Content Credentials,” a cryptographically secured standard for attaching creation and edit history to media. Camera manufacturers like Sony and Nikon have teased in-camera versions, while news organizations are testing credentials to maintain chain-of-custody from capture to publishing.
What that means for newsrooms and creators
When used correctly, the check is a fast triage step: if a clip carries a SynthID watermark, you know it came from Google’s models. That can inform labeling decisions, editorial notes, and audience transparency. If the check finds no watermark, that doesn’t prove a video is authentic; it simply rules out Google’s generators and points investigators to other methods.
Verification teams can combine this with established workflows, including extracting key frames for reverse searches, examining any available metadata, and using forensic tools such as the InVID-WeVerify plugin or searching for C2PA Content Credentials (see the sketch below).
These methods, along with source tracing and contextual reporting, are still necessary when watermarks don’t exist or have been stripped away.
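As a concrete example of that triage, the rough sketch below extracts key frames for reverse image searches and looks for C2PA Content Credentials. It assumes ffmpeg and the open-source c2patool CLI are installed, and the file names are hypothetical.

```python
# Rough verification triage sketch: pull key frames for reverse image
# searches and look for C2PA Content Credentials. Assumes ffmpeg and the
# open-source c2patool CLI are installed; file names are illustrative.
import subprocess

def extract_key_frames(video_path: str, out_pattern: str = "frame_%03d.png") -> None:
    # Keep only I-frames (key frames), writing one image per frame.
    subprocess.run(
        ["ffmpeg", "-i", video_path,
         "-vf", "select='eq(pict_type,I)'", "-vsync", "vfr", out_pattern],
        check=True,
    )

def has_content_credentials(video_path: str) -> bool:
    # c2patool prints a manifest report when credentials are present and is
    # assumed here to exit non-zero when none are found.
    result = subprocess.run(["c2patool", video_path], capture_output=True, text=True)
    return result.returncode == 0 and bool(result.stdout.strip())

extract_key_frames("suspect_clip.mp4")
print("Content Credentials found:", has_content_credentials("suspect_clip.mp4"))
```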
Researchers also warn that no watermark is invulnerable. Adversaries might attempt to strip or degrade signals through heavy editing, model-to-model regeneration, or deliberate post-processing. That is why independent detection benchmarks led by efforts like NIST’s and the U.S. AI Safety Institute’s, along with cross-industry standards work, matter: they test real-world resilience, not lab-only claims.
The path to universal detection across AI generators
Gemini’s feature is progress, but it highlights the gulf between closed, model-specific checks and the universal detection users actually want. The way forward probably involves a combination of signals (invisible watermarks, cryptographic signatures, and platform-level labels) tied together by interoperable standards so that any tool can check the provenance of any asset.
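What that aggregation might look like in practice: a purely hypothetical sketch in which each signal, a watermark check, a Content Credentials read, and a platform label, feeds one provenance verdict. None of the names below correspond to a real API.

```python
# Hypothetical sketch of combining provenance signals into one verdict.
# None of these fields maps to a real API; they stand in for a watermark
# detector, a C2PA Content Credentials reader, and a platform-supplied label.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceSignals:
    synthid_detected: Optional[bool]      # None = check unavailable
    content_credentials: Optional[dict]   # parsed C2PA manifest, if any
    platform_label: Optional[str]         # e.g. an "AI-generated" platform tag

def provenance_verdict(s: ProvenanceSignals) -> str:
    if s.synthid_detected:
        return "Confirmed: generated by a Google model (SynthID present)"
    if s.content_credentials:
        return "Provenance available: see Content Credentials history"
    if s.platform_label:
        return f"Platform label only: {s.platform_label}"
    return "No provenance signal; fall back to manual forensic review"

print(provenance_verdict(ProvenanceSignals(None, None, "AI-generated")))
```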
Until that ecosystem arrives, treat Gemini’s verdicts as authoritative only for media produced with Google’s own tools. They’re useful, reasonably reliable, and quick, but they are just one piece of a much larger verification puzzle in an age when a faked video can spread faster than the truth.