
Gemini can verify AI-generated videos, but only on Google

By Gregory Zuckerman
Last updated: December 19, 2025 10:06 am
Technology · 6 Min Read

It’s getting harder by the day to tell whether a video is real. Google’s Gemini now includes a way to determine if a clip was created with AI, but there’s an important catch: it can only confirm videos made with Google’s own generative tools.

How the detection works with Google’s SynthID watermarks

Upload a short video to Gemini and ask: “Was this made by AI?” The model scans the file for SynthID, the invisible watermark that Google bakes into content produced by its AI systems. If SynthID is present, Gemini gives a straight yes-or-no answer and can indicate where in the video (or its audio track) the signature sits.
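For teams that want to script this check, a minimal sketch of the workflow using Google’s google-genai Python SDK might look like the following. The model name and the exact prompt are assumptions on our part; the article describes the conversational interaction, not an official detection endpoint.

```python
# Minimal sketch: upload a clip to Gemini and ask about AI provenance.
# Assumes the google-genai SDK (pip install google-genai) and an API key
# in the environment; the model name below is an assumption, not something
# the article confirms.
import time

from google import genai

client = genai.Client()  # reads the API key from the environment

clip = client.files.upload(file="clip.mp4")

# Uploaded videos are processed asynchronously; poll until the file is ready.
while clip.state and clip.state.name == "PROCESSING":
    time.sleep(5)
    clip = client.files.get(name=clip.name)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[clip, "Was this made by AI?"],
)
print(response.text)  # per the article: a yes/no answer referencing SynthID
```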


SynthID, developed by Google DeepMind, is designed to survive routine transformations like recompression, format conversion, and standard social-platform processing. That robustness matters because most viral clips are re-encoded several times before they reach a viewer’s feed.

There are practical limits. At launch, Gemini’s video checks support files up to 90 seconds long and 100MB in size. The feature mirrors an existing detection flow for images, extending Google’s watermark-based provenance model from stills and audio to short-form video.
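Those caps are easy to pre-check before uploading. A small sketch, assuming the reported 90-second/100MB limits hold and that ffprobe (from FFmpeg) is installed:

```python
# Pre-flight check against the reported limits for Gemini's video feature.
import os
import subprocess

MAX_BYTES = 100 * 1024 * 1024   # reported 100MB cap
MAX_SECONDS = 90                # reported 90-second cap

def within_reported_limits(path: str) -> bool:
    """Return True if the clip fits the size and duration caps."""
    if os.path.getsize(path) > MAX_BYTES:
        return False
    # ffprobe prints the container duration in seconds
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return float(out) <= MAX_SECONDS
```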

The key limitation: only videos made with Google tools

Gemini’s answer, of course, is only as good as SynthID’s reach. If a video was created by a non-Google system, such as Runway, Pika, Luma, or early demos of Sora, Gemini can’t confirm it, even if the clip is clearly synthetic. In other words, this is a provenance checker for Google’s ecosystem, not a universal deepfake detector.

The method also won’t flag real footage that has been heavily manipulated without a watermark ever being involved, nor reveal how a clip was edited or re-edited. It is a narrowly focused tool for one signal, not a forensic model that can diagnose AI artifacts in every case.
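That asymmetry is worth making explicit in any workflow: a positive SynthID hit is informative, while a miss proves nothing. A hypothetical triage helper, purely illustrative:

```python
from enum import Enum

class Provenance(Enum):
    GOOGLE_CONFIRMED = "SynthID found: made or edited with a Google model"
    INCONCLUSIVE = "No SynthID: may be real, or made with a non-Google tool"

def interpret_check(synthid_found: bool) -> Provenance:
    # Note the asymmetry: absence of the watermark is not evidence of
    # authenticity, only that Google's generators are ruled out.
    return Provenance.GOOGLE_CONFIRMED if synthid_found else Provenance.INCONCLUSIVE
```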

Why it matters now for provenance and public trust

Public anxiety about synthetic media is growing. The Reuters Institute has found that more than half of the people it surveys say it is hard to work out what’s real online, and concern about AI-generated video and audio is rising. Platforms and lawmakers are calling for AI content to be clearly labeled, and major AI companies, including Google, have pledged to watermark their output as part of industry- and government-led safety efforts.

Watermarking is part of a broader ecosystem around provenance. The Coalition for Content Provenance and Authenticity (C2PA) has been pushing “Content Credentials,” a cryptographically secured standard for attaching creation and edit history to media. Camera manufacturers like Sony and Nikon have teased in-camera versions, while news organizations are testing credentials to maintain chain-of-custody from capture to publishing.


What that means for newsrooms and creators

Used correctly, the check is a fast triage step: if a clip carries a SynthID watermark, you know it came from Google’s models. That can inform labeling decisions, editorial notes, and audience transparency. If the check comes back clean, that doesn’t prove a video is authentic; it just rules out Google’s generators and points investigators to other methods.

Verification teams can combine this with established workflows, including extracting key frames for reverse searches, examining any available metadata, and using forensic tools such as the InVID-WeVerify plugin or searching for C2PA Content Credentials.

These methods, along with source tracing and contextual reporting, remain necessary when watermarks were never applied or have been stripped away.
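As one concrete example of that fallback work, key frames can be sampled from a clip and fed into reverse image search. A minimal sketch using OpenCV, which is one frame sampler among many:

```python
# Sample a frame every few seconds for reverse-image-search triage.
import cv2  # pip install opencv-python

def extract_keyframes(video_path: str, every_n_seconds: float = 2.0) -> list[str]:
    """Save periodic frames from a video and return the file names."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
    step = max(1, int(fps * every_n_seconds))
    saved, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            name = f"frame_{idx:06d}.jpg"
            cv2.imwrite(name, frame)
            saved.append(name)
        idx += 1
    cap.release()
    return saved
```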

Researchers also warn that no watermark is invulnerable. Adversaries may try to strip or bury signals through heavy editing, model-to-model regeneration, or deliberate post-processing. That is why independent detection benchmarks, such as those led by NIST and the U.S. AI Safety Institute, along with cross-industry standards efforts, matter: they test real-world resilience, not lab-only claims.

The path to universal detection across AI generators

Gemini’s feature is progress, but it highlights the gulf between closed, model-specific checks and the universal detection users actually want. The way forward probably involves a combination of signals (invisible watermarks, cryptographic signatures, and platform-level labels) tied together by interoperable standards, so that any tool can check the provenance of any asset.

Until that ecosystem arrives, treat Gemini’s verdicts as authoritative only for media produced with Google’s own tools. They are useful, reliable within their scope, and fast, but they are just one piece of a much larger verification puzzle in an age when a faked video can spread faster than the truth.
