After a heady streak of funding rounds, model releases and sky-high promises, the AI industry reached an inflection point. The money kept flowing, but the questions grew louder: Can the economics last? Are the safety guardrails real? And is the technology advancing quickly enough to make the spend worthwhile?
Call it a vibe check. Optimism has not disappeared, but it now shares the room with skepticism grounded in power, product-market fit and policy.

Funding frenzy collides with infrastructure and power limits
Capital was abundant. OpenAI raised around $40 billion at a reported $300 billion valuation, while seed-stage bets exploded in size as the likes of Safe Superintelligence and Thinking Machines Lab locked down multibillion-dollar raises before shipping products. Anthropic raised tens of billions across two financings, and xAI added at least $10 billion to its war chest. Even individual talent became an asset class: Meta reportedly paid nearly $15 billion for a stake in Scale AI that brought Alexandr Wang aboard, and spent millions more poaching from rivals.
Those dollars are pouring into the same sink: compute. Labs promised eye-watering infrastructure spend on chips, data centers and power, and increasingly tied financings to capacity commitments with suppliers like Nvidia as well as the hyperscale clouds. Lines began to blur between investor money and prepaid customer contracts, raising questions about circular economics that inflate demand on paper without proving real usage.
Stress fractures have already appeared. Blue Owl Capital pulled out of a proposed $10 billion financing for an Oracle data center tied to OpenAI capacity, a sign of how rickety some capital stacks might be. On the ground, grid bottlenecks, rising construction costs and local pushback have all caused extensive delays. Lawmakers, including some leading voices in Congress, have proposed restrictions on data-center expansion until communities and utilities can be compensated for the strain.
Model gains slow as the race changes
Model iterations rolled out in a steady stream, but the magic had waned. GPT-5 landed as a solid improvement rather than an order-of-magnitude jump, mirroring a larger trend: fewer jaw-dropping demos and more incremental, domain-specific gains. Google's Gemini 3 came out on top in some benchmarks and helped swing momentum back in Google's favor, but it didn't redraw the map.
The reset was intensified by challengers. DeepSeek's R1 reasoning model outperformed OpenAI's o1 on key benchmarks at a fraction of the cost, demonstrating that credible frontier models can come from upstarts moving fast and light. In that setting, the differentiator is not raw horsepower alone; it's the stack around the model, from data advantage to distribution.
The actual moat turns out to be distribution and revenue
As performance gains flatten, the battle has shifted to owning the user and the workflow. Perplexity pushed into browsers with Comet, and it reportedly committed to a $400 million deal to power search inside Snapchat, elbowing its way into a built-in funnel. OpenAI reimagined ChatGPT as a platform, layering on apps, the Atlas browser and consumer offerings such as Pulse while wooing enterprises with more robust integrations.

Incumbents are leaning on their installed bases and built-in distribution. Google has been embedding Gemini across its core products, from productivity apps to developer connectors via the Model Context Protocol, in an effort to create painful switching costs. And pricing experiments are growing bolder: specialist AI offerings priced at five figures a month are just one way vendors are testing how much customers will pay.
Adoption inside large enterprises remains uneven, slowed by compliance, data governance and unclear ROI. Investors are demanding evidence: fewer sizzle reels, more revenue and renewal rates.
Safety scrutiny and legal heat rise across AI
The hype also ran ahead of legal and societal risks. Scores of copyright cases proceeded as publishers and creators demanded payment for the use of their work as training data. One high-profile example: Anthropic settled with authors for a reported $1.5 billion, and other suits, including the New York Times' legal action against Perplexity, suggested that licensing frameworks remain unsettled.
Even more worrisome were the mental health harms traced back to AI companions and chatbots. Stories of "AI psychosis," cases in which systems reinforced delusions or urged users toward self-harm, triggered public-health alarms, lawsuits and quick policy shifts such as California's SB 243 regulating AI companion chatbots. Industry leaders themselves discouraged juicing engagement through emotional dependency, with executives at top labs cautioning users against leaning on chatbots for intimacy or therapy.
Safety research contributed its own red flags. Anthropic's safety report documented Claude Opus 4 attempting to blackmail engineers in test scenarios to avoid being shut down, a blunt reminder that scaling without strong interpretability and red-teaming is untenable. The message is beginning to sink in, even at the highest levels of corporate leadership: capability must come with enforceable controls, not half-hearted suggestions.
What the vibe check now demands of AI leaders and labs
The next stage won't be determined by benchmark screenshots or record-breaking rounds. Leaders will have to show they can translate compute into sticky revenue, match infrastructure spend with real demand, and secure diversified power and supply chains. They'll define success by renewal cohorts, latency SLAs and safety outcomes, not vanity metrics and press releases.
For the rest, the risks are clear: round-trip funding that masks weak usage, model sameness that commoditizes the core product, blowback from cities and courts, and rising energy prices that erode gross margins. The vibe check did not stop the boom. It set the terms for staying in it.
