OpenAI board chair Bret Taylor isn’t denying the truth: the AI market is overheated. Speaking recently with The Verge, the ex-Salesforce co-CEO and current Sierra founder said his sector is in a bubble, but that he sees this as a feature, not a bug, of how breakthrough technologies develop.
His perspective mirrors that of OpenAI CEO Sam Altman, who warned that “someone is going to lose a phenomenal amount of money in AI.” Both leaders, in effect, are saying that two things can be true at the same time: the near-term excess will burn off painfully, and the long-term value creation could still be historic.

The productiveness of a bubble
Taylor’s argument relies on recent history: the dot-com bubble. The late ’90s were filled with hype and flameouts, but the era also funded the infrastructure, talent and standards that underlie the modern internet. The bust hurt; what it built endured.
AI seems to be playing out the same script. Investment in generative AI rocketed past $25 billion globally in 2023, by one PitchBook measure, and mega-rounds continued to pour in throughout 2024, according to investment tracking from CB Insights. All that capital is underwriting new data centers, funding research and seeding a pipeline of talented practitioners: assets that don’t just vanish when valuations deflate.
The dot-com echoes in today’s AI stack
The clearest rhyme is infrastructure. Hyperscalers have telegraphed tens of billions of dollars a year in capital expenditures to expand AI capacity, as earnings calls from Microsoft, Alphabet, Amazon and Meta indicate. Nvidia’s data center revenue has been growing at triple-digit percentages year over year, reflecting insatiable demand for GPU compute.
History suggests that durable value tends to emerge at the lowest layers of the stack: compute, tooling and the platforms that become default choices for builders. In the AI world, that encompasses model providers, vector databases, orchestration frameworks and the MLOps pipelines that keep enterprise deployments running reliably at scale.
Taylor, who now leads the AI agent startup Sierra, falls squarely into a category many expect to commercialize rapidly: software that turns models into workers performing tasks end-to-end.
Will agents become the equivalent of office mates, or will they remain bots used only for narrow, short-lived tasks? Their success will hinge on the two factors that killed many dot-coms: unit economics and distribution.
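To make that "end-to-end" claim concrete, here is a minimal sketch of the loop this class of software runs, with a stubbed-out model call and one hypothetical tool. Nothing below is a real API; it is just the shape of the pattern: plan, act, observe, repeat until done or out of budget.

```python
# Hedged sketch of an agent loop. call_model and the tool registry are
# stand-ins; a real agent would hit an LLM API and real backend tools.
from typing import Callable

def call_model(task: str, history: list[str]) -> dict:
    """Stand-in for a model call that decides the next action."""
    if not history:
        return {"action": "lookup_order", "arg": task}
    return {"action": "finish", "arg": "refund issued"}

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order found for '{arg}'",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):          # budget cap: every loop costs money
        step = call_model(task, history)
        if step["action"] == "finish":  # model signals the task is complete
            return step["arg"]
        result = TOOLS[step["action"]](step["arg"])
        history.append(result)          # feed the tool output back in
    return "escalate to a human"        # fail closed when the budget is spent

print(run_agent("refund order #1234"))
```

The budget cap and the human-escalation fallback are where the unit economics show up: each iteration is a paid model call, so an agent that wanders is an agent that loses money.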
Where the losses might fall
The frothiest risk is the me-too layer: thin wrappers around foundation models with little defensibility. If your product can be replicated with a prompt, a weekend hack or a model provider’s next feature release, your staying power is limited.
Unit cost is the second tripwire. With traditional software, marginal costs approach zero; with AI, the more your users use, the more you pay for inference. Major labs like OpenAI and Anthropic have repeatedly lowered API prices, but margins can still evaporate for startups that don’t tightly control context lengths, model selection and caching strategies.
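What that discipline can look like in code: a rough sketch assuming illustrative per-token prices and a naive router. The model names, rates and token math below are invented for the example, not anyone’s real pricing.

```python
# Hedged sketch of the three cost levers: shorter contexts, model
# routing and caching. All prices and model names are hypothetical.
from functools import lru_cache

PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.01}  # made up

def pick_model(prompt: str) -> str:
    # Naive router: only long prompts go to the expensive model.
    return "large-model" if len(prompt) > 500 else "small-model"

@lru_cache(maxsize=4096)  # cache: an identical prompt is free the second time
def complete(prompt: str) -> tuple[str, float]:
    prompt = prompt[:2000]                      # cap the context length
    model = pick_model(prompt)
    tokens = len(prompt.split()) + 100          # crude token estimate
    cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
    return f"[{model} answer]", cost

answer, cost = complete("summarize this ticket: printer jammed")
print(answer, f"cost=${cost:.5f}")
```

Even this toy version shows why margins diverge: a request that hits the cache or the small model costs a fraction of one sent to the large model with an unbounded context.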
And a third is what I would call talent inflation. News outlets like the Wall Street Journal have reported seven-figure salaries for AI researchers. That’s sustainable for a few winners, but brutal for companies lacking enterprise-scale revenue.
Lastly, regulatory and data access risks are sizable. The European Union’s AI Act, evolving guidance in the United States and a burgeoning patchwork of rules from other countries will exact governance costs. Meanwhile, companies whose models were trained without durable data rights could face legal and reputational harm.
Metrics that separate the wheat from the chaff
If bubbles magnify both signal and noise, the task for operators and investors is to adjust the dials. Several metrics stand out:
• Cost per successful task: Not tokens per dollar, but the all-in cost of producing a dependable outcome. It should fall significantly as agents mature.
• Gross margin after compute: Track margins after inference, fine-tuning and data labeling costs (a rough calculation sketch follows this list). A sane software business cannot have COGS that grows linearly with usage.
• Retention tied to workflow depth: Tools embedded in mission-critical processes show higher net retention than chat interfaces living at the edge of work.
• Data advantage: Proprietary, permissioned data pipelines, backed by agreements, security controls and provenance, beat scraped or commoditized corpora.
• Governance by design: Teams that align with frameworks like NIST’s AI Risk Management Framework and can pass enterprise due diligence will outlast the compliance theater.
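For the two most quantitative metrics on that list, here is the arithmetic spelled out with hypothetical numbers; the figures are invented to show the calculation, not benchmarks from any company.

```python
# Rough scoring sketch: cost per successful task and gross margin
# after compute. All inputs below are made-up illustrative numbers.

def cost_per_successful_task(total_cost: float, tasks: int,
                             success_rate: float) -> float:
    """All-in spend divided by the tasks that actually succeeded."""
    return total_cost / (tasks * success_rate)

def gross_margin_after_compute(revenue: float, inference: float,
                               fine_tuning: float, labeling: float) -> float:
    """Margin once inference, fine-tuning and labeling COGS are netted out."""
    return (revenue - inference - fine_tuning - labeling) / revenue

# Hypothetical month: $12k all-in spend, 40k tasks, 85% of them succeed.
print(f"{cost_per_successful_task(12_000, 40_000, 0.85):.3f} $/successful task")
# Hypothetical quarter: $500k revenue against $180k of compute-heavy COGS.
print(f"{gross_margin_after_compute(500_000, 120_000, 40_000, 20_000):.0%} gross margin")
```

Note the denominator in the first metric: failed tasks still burn compute, so a falling success rate raises the cost per successful task even when raw token prices drop.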
Why Taylor’s pragmatism resonates
To call it a bubble is not to deride the phenomenon; it’s to acknowledge that innovation and excess often ride side by side. The internet’s bust wiped away frail business models, but it also swept the runway clear for cloud computing, mobile platforms and SaaS — categories that eventually came to define enterprise IT.
Already, AI is delivering real productivity gains in coding, support and content operations. Studies from Microsoft and others suggest that developers who use copilots save meaningful time, while early adopters in customer service organizations have seen double-digit improvements in resolution time and CSAT. Those wins may seem small now, but they add up, and adding up is what eventually justifies the infrastructure binge.
Taylor’s bottom line is simple: anticipate churn, embrace discipline and continue to build. The bubble will pop for some. The platform shift will last for many more.