OpenAI’s latest video feed is getting attention and clicks by design, and Sam Altman isn’t pretending otherwise https://t.co/eE7lKp2Hkj. After a backlash framing Sora 2’s tireless output of AI-made clips as “slop,” the CEO conceded the trade-off behind it: rather than tackling bigger problems right now, the feed is about getting users in and money out, revenue that has been funding the GPUs and data centers OpenAI needs to build more capable AI much further down the line.
A Pragmatic Solution to Expensive AI Ambitions
Pressed by critics on why OpenAI would ship a novelty feed while hyping breakthroughs including curing diseases, Altman said he understands the skepticism but that compute costs determine product decisions.
In a recent email exchange after his CNBC interview, he positioned Sora 2 both as a proof of concept for rapidly progressing video generation and as an asset for funding the extraordinary infrastructure requirements of AI research aimed at AGI and scientific discovery.
That frank stance is increasingly rare in a market where most consumer-facing AI launches are framed as creativity tools or community platforms. Here, the business model itself, engagement that pays for compute, was the headline.
The Compute Economics That Support Sora 2
Generative video is computationally expensive. Rendering even a few seconds of top-quality footage takes large clusters of high-end GPUs, very high memory bandwidth, and costly cloud networking. Industry estimates place cutting-edge accelerators at tens of thousands of dollars per unit or more, before counting the specialized racks, power, and cooling around them. Even with hyperscale partners (OpenAI depends heavily on Microsoft’s Azure), the bill grows as usage does.
Energy isn’t trivial either. The International Energy Agency and the Uptime Institute have both warned that AI workloads are driving data center power consumption up steeply, prompting operators to rethink grid sourcing and efficiency. With that in mind, a popular, sticky product that monetizes attention could be a bridge to the next generation of research systems.
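To make the scale concrete, here is a back-of-envelope sketch of per-clip serving costs. Every figure in it (accelerator price, amortization window, GPU-seconds per clip, power draw, electricity price, daily volume) is a hypothetical placeholder chosen only for illustration, not a number reported by OpenAI or its partners.

```python
# Back-of-envelope sketch of generative-video serving costs.
# Every number below is a hypothetical placeholder for illustration,
# not a figure reported by OpenAI, Microsoft, or the IEA.

ACCELERATOR_PRICE_USD = 30_000              # "tens of thousands of dollars per unit"
AMORTIZATION_SECONDS = 3 * 365 * 24 * 3600  # amortize the hardware over ~3 years
GPU_SECONDS_PER_CLIP = 8 * 60               # assume ~8 GPU-minutes per short clip
POWER_KW_PER_GPU = 0.7                      # assumed draw per accelerator, in kW
ELECTRICITY_USD_PER_KWH = 0.10              # assumed industrial power price
CLIPS_PER_DAY = 1_000_000                   # assumed daily volume for a popular feed


def cost_per_clip() -> float:
    """Rough hardware-amortization plus energy cost to render one clip, in USD.

    Networking, racks, and cooling are deliberately left out, so this
    understates the real bill.
    """
    hardware = ACCELERATOR_PRICE_USD / AMORTIZATION_SECONDS * GPU_SECONDS_PER_CLIP
    energy = POWER_KW_PER_GPU * (GPU_SECONDS_PER_CLIP / 3600) * ELECTRICITY_USD_PER_KWH
    return hardware + energy


if __name__ == "__main__":
    per_clip = cost_per_clip()
    print(f"~${per_clip:.3f} per clip, ~${per_clip * CLIPS_PER_DAY:,.0f} per day")
```

Under these placeholder assumptions, a million clips a day already runs to a six-figure daily bill before networking and cooling, which is the dynamic the feed’s monetization is meant to offset.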
Free Today, Funnel Tomorrow: How Sora 2 Monetizes Usage
OpenAI is seeding Sora 2 with generous free limits to get people making videos with it, while signaling heavier monetization down the road. The company has linked fuller capability and fewer caps to its premium tier, with power users directed toward the $200-per-month ChatGPT Pro plan. That positioning echoes the playbook that turned ChatGPT subscriptions into a key revenue engine.
The rationale is intuitive: reduce the friction to tinker, then convert power users who want speed, length, and more nuanced outputs. Video makes that funnel more effective, and more costly to operate, than text or images do.
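A similarly hedged sketch shows the conversion arithmetic behind that funnel. The $200-per-month Pro price is the one cited above; the per-clip cost, free-tier usage, and margin figures are hypothetical placeholders, used only to show the structure of the break-even calculation.

```python
# Sketch of the free-to-paid funnel described above. The $200/month Pro
# price comes from the article; every other figure is a hypothetical
# placeholder used only to illustrate the break-even arithmetic.

PRO_PRICE_USD_PER_MONTH = 200.0      # ChatGPT Pro price cited in the article
COST_PER_FREE_CLIP_USD = 0.15        # assumed serving cost per free clip
FREE_CLIPS_PER_USER_PER_MONTH = 40   # assumed free-tier usage per user


def conversion_rate_to_break_even(pro_margin: float = 0.5) -> float:
    """Fraction of users who must convert to Pro so that subscriber margin
    covers the free-tier serving cost of everyone else.

    Solves c * margin = (1 - c) * free_cost for c under this toy model.
    """
    free_cost_per_user = COST_PER_FREE_CLIP_USD * FREE_CLIPS_PER_USER_PER_MONTH
    margin_per_subscriber = PRO_PRICE_USD_PER_MONTH * pro_margin
    return free_cost_per_user / (margin_per_subscriber + free_cost_per_user)


if __name__ == "__main__":
    rate = conversion_rate_to_break_even()
    print(f"Break-even conversion rate: {rate:.1%} of users")
```

Under these assumptions only a few percent of users need to convert for subscriber margin to cover everyone else’s free clips; the real figures are unknown, but the shape of the bet is the same.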
Meta’s Vibes And The Engagement Arms Race
OpenAI isn’t alone. Meta deployed its own AI-generated video feed, Vibes, to capitalize on the company’s colossal reach and ad machinery. Meta has indicated that conversations with its AI assistant could be used to fine-tune recommendations, and ultimately ad targeting, across the company’s properties. That move shows how AI content and advertising are being yoked together in a feedback loop: creation begets engagement, engagement drives better targeting, and targeting funds more creation.
The difference is business DNA. Meta’s model is ad-first. OpenAI’s is subscription- and platform-first, with enterprise deals in the door, a burgeoning API business, and now a growing commerce layer, via recently introduced shopping flows with marquee consumer brands, that adds revenue beyond ChatGPT seats.
Will Slop Pickings Pay For Serious Science?
Critics say the “slop” aesthetic is degrading culture even as it gobbles scarce compute that could be devoted to scientific models. Backers argue that mass-market products are how revolutionary tech gets paid for. Long-term revenue targets for OpenAI reported by Bloomberg News, which envisage a steep ascent before profitability, are one sign of that bet; rapid year-on-year revenue growth powered largely by subscriptions, as described by The Information, is another. That path helps explain why OpenAI keeps adding new consumer surfaces atop its core models.
There’s also a strategic hedge. Video generation reveals and exacerbates weaknesses — motion consistency, physics, human realism — that intersect with more general model limitations. Solving those issues for creators could result in improved multimodal systems for business and research.
Risks That May Force A Reboot Of The Plan
But three forces could change the calculus.
- Supply: Nvidia dominates the accelerator market, and bottlenecks in new chips or advanced packaging ripple through everyone’s capacity planning.
- Policy pressure: Regulators and legislators alike are eyeing AI’s energy footprint, training data provenance, and deepfake risks, issues that are especially visible in generative video.
- User fatigue: If feeds seem derivative, conversion to paid tiers could stall and erode the funding loop that OpenAI is banking on.
Yet Altman’s message comes through with rare clarity for a consumer tech launch. Sora 2 is a billboard for model progress and a cash machine for buying more GPUs. Whether that bargain leads to the scientific AI OpenAI has promised, or just a taller stack of looping clips, will come down to money, research milestones, and how enjoyable people actually find those videos.