Meta introduced Vibes, a short-form video feed within the Meta AI app and on meta.ai devoted to AI-generated clips. Take the dopamine loop of Reels or TikTok, only each post is machine-made: dreamy creatures, fantastical cityscapes, time-bending vignettes, algorithmically conjured and infinitely scrollable.
The sell is straightforward: use a prompt to create a video, or remix something you see; play with the style; add music; share to the Vibes feed and cross-post to Instagram and Facebook. The feed tailors itself as you swipe, and it rewards the engagement and remix chains already familiar to short-form communities. Initial reactions, though, show a familiar cynicism, with users calling the output “AI slop,” shorthand for the low-effort generative filler flooding social timelines.

How Vibes works: prompts, remixes, and social mechanics
Vibes marries generative prompts with social mechanics. The app lets users start from scratch, remix a clip they stumble across, or stack visual effects and styles before hitting publish. Vibes “is not tied into your standard publishing environment, meaning you don’t have to directly publish,” says Wilson; creations can instead be sent as direct messages or syndicated out to Reels and Stories. Ultimately, Vibes is more than a sandbox: it’s an input pipe across the company’s networks.
Under the hood, Vibes relies on large-scale image-to-video and video diffusion models. Meta has signaled early integrations with third-party model makers like Midjourney and Black Forest Labs as it advances its own generative video research. This hybrid approach reflects the wider scene, in which tools such as Runway, Pika, and Luma are iterating quickly on speed, resolution, and prompt adherence.
Why Meta is betting on AI video and short-form growth
Short-form video is the attention engine of social media, and generative tools significantly lower the barrier to creation. Vibes ticks several boxes for Meta: keeping users inside its apps, seeding new creator workflows, and gathering signals to inform recommendation systems and generative models. It also sets the company up in opposition to a wave of text-to-video breakthroughs from competitors — like research released by OpenAI and Google.
There’s a strategic flywheel at work. The more people prompt, remix, and share, the better the data for tuning models and rankings. If Vibes can make producing a slick video as easy as writing a sentence, Meta (which rebranded from Facebook) can grow the pool of “creators” and densify engagement without expensive licensing or influencer deals.
Quality, labels, and the growing AI slop debate online
Critics fret that Vibes will exacerbate a problem already plaguing social feeds: timelines clogged with copycat, low-value AI clips made by people chasing watch time. The major platforms are already wrestling with it. YouTube has rolled out disclosure requirements and removal paths for misinformation, deepfakes, and synthetic media. TikTok requires labels on AI-generated content. Meta is also working with provenance efforts like the Content Authenticity Initiative and C2PA to embed tamper-evident metadata.

Public sentiment underscores the risk. According to a recent Pew Research Center survey, most Americans want clear labels on AI-generated media and are wary of receiving news and images generated by an algorithm. The Reuters Institute likewise finds little demand for AI-generated news, with trust hinging on clear disclosure. If labeling breaks down, or the feed rewards engagement over quality, Vibes could deepen fatigue rather than inspire.
There’s also the integrity angle. Generative video lowers the cost of realistic fabrications. Platforms have promised tougher enforcement against manipulative edits, and regulators are circling: the EU’s AI Act includes transparency requirements for synthetic content, and in the U.S. the Federal Trade Commission has warned about deceptive AI marketing. Vibes will be judged on how swiftly it identifies and discourages abuse, not just the dazzle of its outputs.
Creator economics and copyright gray areas
More than novelty, creators will need clarity about rights and revenue. Generative outputs raise thorny copyright questions: the United States Copyright Office has stated that works lacking sufficient human authorship are not protectable, leaving ownership of fully automated clips in doubt. Music and voice synthesis pose licensing headaches, and brands will insist on provenance assurances before sponsoring AI-heavy formats.
Monetization will decide whether Vibes becomes a serious creative lane or merely a curiosity. If Meta extends Reels-style bonuses, ad revenue sharing, or access to branded effects to Vibes, it could encourage higher-quality productions. Without that, expect a deluge of formulaic content churned out to chase algorithmic rewards, exactly the dynamic the “slop” label decries.
What to watch next as Meta rolls out Vibes broadly
Three early signals matter.
- First, retention: Do people actually watch Vibes clips end to end and return to create, or do they just bounce after a few surreal loops?
- Second, provenance: Are disclosures prominently displayed and uniform when cross-posting on Instagram and Facebook?
- Third, quality: Does Meta’s ranking system elevate genuinely creative work above spammy remixes?
Vibes is an audacious bet on a future in which most video is simply synthetic by default. If Meta marries powerful tools with robust labeling, safety guardrails, and genuine creator incentives, it could turn skepticism into a new creative grammar. If not, the feed may serve as a case study in just how rapidly generative abundance decays into noise.
