OpenAI is reporting an explosive increase in paid business usage just days after a rare internal “code red” over competition from Google, an apparent bid to reassert leadership in the lucrative enterprise AI market. The new usage data shows how quickly ChatGPT is growing in the workplace, and how deeply it is being woven into core business processes — momentum that may prove valuable as rivals sharpen their own offerings and compute costs surge.
Enterprise momentum by the numbers and usage trends
OpenAI says that ChatGPT’s enterprise message volume has increased roughly 8x since November 2024, with some customers reporting years of cumulative time saved. Thirty-six percent of U.S. companies already use ChatGPT Enterprise, versus 9.8 percent for Anthropic, per the Ramp AI Index — a significant head start as budgets pivot from pilots to platform commitments.

The signal is stronger under the hood: OpenAI reports that organizations using its API consumed 320x as many “reasoning tokens” as a year ago. That shift usually means heavier complex problem-solving, multistep workflows, or tool invocation — signs of production-grade usage rather than one-off experimentation. Still, such spikes can also reflect token burn from experimentation, and CIOs will be watching for sustained ROI, not just volume.
Custom GPT adoption, from pilots to daily work
OpenAI says it has seen a 19x increase this year in custom GPTs developed by companies, which now comprise roughly 20% of enterprise messages. This is where generative AI goes to work: Companies encode policies, product data, and playbooks into custom assistants that triage tickets, draft RFPs, or summarize financials. OpenAI pointed to the digital bank BBVA, for example, which it says is using thousands of its customized GPTs to standardize information across teams and automate repetitive tasks.
The appeal is clear: reusable, auditable assistants reduce context-switching and preserve institutional knowledge. The trade-off is governance. As each department spins up its own helpers, IT leaders need approval workflows, version control, and usage auditing to fend off shadow AI.
Productivity gains, with significant caveats
Employees using OpenAI’s enterprise tools report saving 40–60 minutes daily. Those self-reported gains dovetail with academic research: an NBER working paper on a Fortune 500 customer-support team found AI assistance produced a 14% lift in productivity, disproportionately benefiting less-experienced staff. The asterisk is implementation overhead: time spent learning prompts, validating outputs, and reconfiguring workflows can erode those early gains unless adoption is deliberate.
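Taken at face value, those per-day figures compound quickly. A rough back-of-envelope sketch (the 50-minute midpoint and 250-workday year are assumptions for illustration, not OpenAI’s figures):

```python
# Annualize the self-reported daily savings (illustrative assumptions only).
MINUTES_SAVED_PER_DAY = 50   # assumed midpoint of the reported 40-60 range
WORKDAYS_PER_YEAR = 250      # assumed; varies by country and role

hours_per_year = MINUTES_SAVED_PER_DAY * WORKDAYS_PER_YEAR / 60
workweeks_per_year = hours_per_year / 40  # assuming a 40-hour workweek

print(f"~{hours_per_year:.0f} hours/year, ~{workweeks_per_year:.1f} workweeks per employee")
```

Roughly five workweeks per employee per year under these assumptions — which is why even partial realization of the reported range, multiplied across an organization, could plausibly add up to the “years of saved time” OpenAI cites.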
OpenAI’s briefing also flagged a widening gap between power users, whom it calls “frontier” users, and everyone else. High performers stack tools and pull in proprietary data to compound gains, much as elite athletes compound training; others treat AI as just another app. In practice, the biggest wins come when teams re-platform processes around AI, wiring it into CRMs, document stores, and analytics tools, rather than confining it to isolated chat windows.

Security pressures mount as non‑engineers write code
One standout data point: a 36 percent spike in coding-related messages from non‑engineering roles. That democratization can speed delivery, but it also introduces governance risk, particularly around dependencies, licensing, and security vulnerabilities in AI-generated code. OpenAI has released a private beta of Aardvark, an “agentic” security researcher built to hunt for bugs and vulnerabilities, pointing to a tighter loop between creation and assurance. Organizations will still need strong guardrails: code review, policy enforcement, data loss prevention, and clear SBOM practices.
The Google factor and an increasingly competitive market
OpenAI’s enterprise victory lap comes on the heels of an internal “code red” over competition from Google, a measure of how quickly the competitive landscape is shifting. Google’s Gemini integrates tightly with productivity suites, search, and Android, a bundling advantage that could siphon consumer subscribers, still OpenAI’s core revenue base. Anthropic continues to court B2B buyers with safety-first tooling, while open‑weight players like Meta and Mistral appeal to CIOs who want model portability and lower TCO.
Compute economics hang over all of it. OpenAI has made enormous multi‑year infrastructure commitments, and enterprise contracts are a key part of funding model training and inference capacity. Enterprises will demand a credible cost-efficiency roadmap built on smarter caching, distillation, and retrieval, alongside solid SLAs and data controls.
What to watch next as enterprise AI adoption grows
For all of the surge, OpenAI reports that many of its most active customers still underuse advanced features such as data analysis, reasoning, and integrated search. That fits early-adopter curves: the largest step-change usually comes when companies connect models to proprietary data and automate multistep processes. Look for metrics beyond messages sent — share of workflows automated, time‑to‑value, and token efficiency — plus clearer norms for safe citizen development.
For now, the enterprise story is going OpenAI’s way. The question is whether the momentum holds as Google leans on distribution, Anthropic scales its B2B motion, and open‑weight ecosystems mature. If OpenAI can turn this experimentation into persistent, managed workflows, and prove the economics along the way, its post–code‑red push could become not just a headline but a defensible moat.
