FindArticles © 2025. All Rights Reserved.

YouTubers Sue Snap Over AI Copyright Infringement

By Gregory Zuckerman
Last updated: January 26, 2026 11:03 pm
Technology

A group of YouTube creators has filed a proposed class action accusing Snap of using their videos without permission to train commercial AI features, including the app’s Imagine Lens. The lawsuit, lodged in the U.S. District Court for the Central District of California, alleges Snap tapped research-only video datasets sourced largely from YouTube and circumvented the platform’s technical and contractual safeguards to build revenue-generating AI tools.

The case is led by the team behind the h3h3 channel, alongside MrShortGame Golf and Golfholics, which together have roughly 6.2 million subscribers. The same creators previously brought similar suits against Nvidia, Meta, and ByteDance, reflecting a broader campaign by online publishers and artists to challenge unlicensed data use in AI training.

Table of Contents
  • Allegations Center on Research-Only Video Datasets
  • What Snap’s AI Features Do and How They Work
  • The Wider Fight Over Training Data in AI Models
  • What the Creators Seek in Their Lawsuit Against Snap
  • Why It Matters for Platforms and Creators

Allegations Center on Research-Only Video Datasets

At the core of the complaint is HD-VILA-100M, a large-scale video-language dataset designed for academic research. Plaintiffs claim Snap, to power features like text-prompted edits, used HD-VILA-100M and comparable corpora for commercial purposes despite license language and common academic norms restricting such use.

The suit further asserts Snap sidestepped YouTube’s terms of service, which prohibit scraping and commercial reuse without authorization, as well as technological measures that control access to videos and metadata. If proven, those assertions could intersect with anti-circumvention provisions under the Digital Millennium Copyright Act—an increasingly common add-on in AI training disputes.

Why these datasets matter: multimodal systems learn by aligning video frames with text, enabling models to understand scenes, actions, and context. HD-VILA-100M and similar sets are attractive because they provide rich, paired examples at scale. The legal question is whether moving such data from a research context into a commercial pipeline crosses the line into infringement.
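The frame-text alignment described above can be illustrated with a minimal sketch of a CLIP-style contrastive objective, the kind commonly used to train video-language models on paired datasets like HD-VILA-100M. This is a generic illustration, not Snap's actual pipeline; the random vectors stand in for real encoder outputs.

```python
import numpy as np

# Random stand-ins for encoder outputs: in a real system these would
# come from a video encoder and a text encoder, respectively.
rng = np.random.default_rng(0)

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# A batch of 4 paired examples: frame embeddings and caption embeddings.
frame_emb = l2_normalize(rng.normal(size=(4, 8)))
text_emb = l2_normalize(rng.normal(size=(4, 8)))

# Cosine similarity between every frame and every caption in the batch.
logits = frame_emb @ text_emb.T  # shape (4, 4)

# InfoNCE-style loss: each frame should score highest against its own
# caption (the diagonal), pushing mismatched pairs apart. Training on
# millions of such pairs is what teaches the model scenes and actions.
def contrastive_loss(logits, temperature=0.07):
    scaled = logits / temperature
    log_probs = scaled - np.log(np.exp(scaled).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

loss = contrastive_loss(logits)
```

The value of a paired dataset is exactly this diagonal structure: without captions aligned to frames, the objective has nothing to pull together.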

What Snap’s AI Features Do and How They Work

Snap has leaned into generative and augmented reality features to keep users engaged. Imagine Lens, the product highlighted in the complaint, lets users transform or stylize images using short text prompts—functionality typically backed by models trained on vast collections of image and video data paired with captions or transcripts.

The creators argue their videos helped teach these systems how to recognize content and produce edits, yet they were never asked for permission or paid. In their telling, Snap turned research datasets into commercial fuel without honoring licenses or the platform rules governing YouTube content.

The Wider Fight Over Training Data in AI Models

This case lands amid a wave of lawsuits targeting AI model training practices by authors, artists, news outlets, and user-generated platforms. The Copyright Alliance has tracked more than 70 copyright actions tied to AI training and outputs, underscoring how unsettled the legal landscape remains.


Outcomes have been mixed. In an author suit against Meta, a judge sided with the company on key claims, while authors suing Anthropic reported a settlement. Many cases continue to test whether intermediate copying for training is fair use, how derivative work theories apply to model outputs, and what duties companies have to honor research-only dataset restrictions.

To mitigate risk, some AI developers have struck licensing deals (agreements involving Shutterstock and the Associated Press are prominent examples), signaling a shift toward consent-based sourcing. Creators say platforms hosting their work should follow suit or provide opt-in mechanisms with auditability and robust provenance controls.

What the Creators Seek in Their Lawsuit Against Snap

The plaintiffs are asking for statutory damages and a permanent injunction halting the alleged infringement. For registered works, statutory damages can reach up to $150,000 per work for willful infringement, a figure that can scale quickly for channels with extensive archives.
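A back-of-envelope calculation shows how quickly that per-work ceiling compounds. The archive size below is hypothetical, chosen only to illustrate the scaling; actual exposure depends on how many works are registered and on what the court awards.

```python
# Statutory ceiling for willful infringement of a registered work,
# per 17 U.S.C. § 504(c), as cited in the article.
PER_WORK_WILLFUL_MAX = 150_000

def max_exposure(registered_works: int) -> int:
    """Theoretical maximum statutory damages across an archive."""
    return registered_works * PER_WORK_WILLFUL_MAX

# Hypothetical: a channel with 500 registered videos.
print(max_exposure(500))  # → 75000000
```

Even a mid-sized channel archive puts the theoretical ceiling in the tens of millions of dollars, which is why class treatment across many creators raises the stakes so sharply.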

The court could also face calls—common in similar suits—for orders requiring deletion of infringing data, retraining or disabling affected models, and transparency about training pipelines and vendors. Expect detailed discovery requests around dataset provenance, license terms, and any steps Snap took to filter or vet sources.

Why It Matters for Platforms and Creators

Beyond Snap, the case probes a key fault line for the creator economy: whether platforms and AI makers must license video content explicitly for training, or whether public availability and platform terms can stand in as permission. A ruling could ripple across social media companies that increasingly rely on generative features to compete.

Regardless of the outcome, the pressure is mounting for AI teams to document consent, honor research-only licenses, and provide mechanisms for exclusion and compensation. For creators, the case is both a bid for accountability and a push to shape the rules of how their work trains the next generation of AI.

Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.