OpenAI is reshaping its go-to-market strategy for large companies, tapping veteran AI leader Barret Zoph to accelerate its enterprise business as competition intensifies and market share slides. The move signals a renewed push to convert model leadership into steady, scaled revenue from corporate buyers in 2026.
A familiar operator returns to steer enterprise push
Zoph’s appointment, reported by The Information and attributed to an internal memo, brings a technical heavyweight back into the fold. He previously led post-training at OpenAI before departing in 2024 to co-found Thinking Machines Lab with former OpenAI executive Mira Murati. The circumstances of his exit from that startup remain unclear, but his return puts a seasoned builder at the helm of an increasingly commercial mandate.

That background matters. Enterprise AI is no longer just about flashy demos; it’s about reliability, security, integration, and total cost of ownership. A leader who understands both model performance and production constraints can shape offerings that meet procurement checklists and scaling realities.
Chasing enterprise market share amid rising competition
OpenAI was first out of the gate with ChatGPT Enterprise in 2023 and says it now serves more than 5 million business users, including customers like SoftBank, Target, and Lowe’s. Yet momentum has shifted. A Menlo Ventures analysis of enterprise large language model usage pegs Anthropic at 40% share as of December, up from 32% midyear. Google’s Gemini sits at 21%, essentially steady. OpenAI, by contrast, has fallen from roughly 50% in 2023 to 27% at the end of 2025.
The slide has not gone unnoticed inside the company. CEO Sam Altman recently flagged the pace of Gemini adoption in an internal note, according to people familiar with the communication. And CFO Sarah Friar has publicly framed enterprise growth as a 2026 priority, putting revenue durability and multi-year contracts front and center.
Doubling down with platform partners to boost distribution
Distribution will be decisive. OpenAI’s expanded multi-year pact with ServiceNow aims to place its models directly inside IT service management workflows—the kind of embedded use that drives daily active usage, not just pilot projects. For enterprises, pre-vetted integrations reduce procurement friction and accelerate ROI because they piggyback on tools that already have budget, champions, and governance in place.
Expect more channel-first moves. Analysts at Gartner and IDC have consistently noted that buyers favor AI that lands inside existing ecosystems—ITSM, CRM, contact centers, office suites—so that security and administration stay consistent across deployments. For OpenAI, deeper hooks into established platforms can counter rivals that package their models natively with productivity suites or industry-specific stacks.

What enterprises will demand in 2026 from AI providers
Winning back share won’t hinge on raw benchmarks alone. Large customers will press for stronger data isolation options, private networking, and granular audit trails, alongside enterprise-grade SLAs, uptime guarantees, and predictable pricing. Procurement teams increasingly scrutinize evaluation frameworks beyond MMLU-style tests, measuring latency under load, retrieval accuracy on proprietary corpora, and cost per resolved task.
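As a concrete illustration of what "cost per resolved task" evaluation might look like, here is a minimal sketch; the metric, field names, and figures are illustrative assumptions, not a standard benchmark or any vendor's actual pricing:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    resolved: bool      # did the model actually complete the task?
    latency_ms: float   # end-to-end latency measured under load
    cost_usd: float     # metered inference cost for the attempt

def cost_per_resolved_task(results: list[EvalResult]) -> float:
    """Total spend divided by the number of tasks actually resolved.

    Failed attempts still cost money, so they inflate the metric --
    which is exactly the point of measuring outcomes, not tokens.
    """
    resolved = sum(1 for r in results if r.resolved)
    if resolved == 0:
        return float("inf")
    return sum(r.cost_usd for r in results) / resolved

# Hypothetical evaluation run over three tasks.
results = [
    EvalResult(resolved=True, latency_ms=820.0, cost_usd=0.04),
    EvalResult(resolved=False, latency_ms=1430.0, cost_usd=0.06),
    EvalResult(resolved=True, latency_ms=910.0, cost_usd=0.05),
]
print(round(cost_per_resolved_task(results), 3))  # 0.075
```

A metric like this penalizes models that are cheap per token but fail often, which is why procurement teams prefer it to raw benchmark scores.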
Security and compliance remain gating factors. Certifications like SOC 2 and ISO 27001, robust content controls, and customizable guardrails are table stakes for regulated sectors. On the build side, reliable retrieval-augmented generation, safe function calling, and role-based governance for agents will determine whether deployments scale beyond pilots to thousands of seats.
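Role-based governance for agents, mentioned above, often reduces to gating which tools an agent acting on a user's behalf may invoke. A minimal sketch, with entirely hypothetical role and tool names:

```python
# Map each role to the set of agent tools it may invoke.
# Roles and tool names here are invented for illustration.
ROLE_TOOLS: dict[str, set[str]] = {
    "analyst": {"search_docs", "run_sql_readonly"},
    "admin": {"search_docs", "run_sql_readonly", "run_sql_write", "send_email"},
}

def authorize_tool_call(role: str, tool: str) -> bool:
    """Deny by default: unknown roles and unlisted tools are rejected."""
    return tool in ROLE_TOOLS.get(role, set())

print(authorize_tool_call("analyst", "search_docs"))  # True
print(authorize_tool_call("analyst", "send_email"))   # False
```

The deny-by-default shape is what auditors look for: an agent gaining a new tool requires an explicit policy change, which in turn leaves an audit trail.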
Execution questions Zoph must answer for enterprise scale
Two bottlenecks loom: cost and complexity. The economics of inference at scale are still a moving target as organizations weigh frontier models against distilled variants tuned for specific tasks. Enterprises will push vendors for clear guidance on when to use smaller, cheaper models, how to enforce routing policies, and how to monitor quality drift over time.
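A routing policy of the kind enterprises are asking for can be sketched simply: send routine tasks to a cheaper distilled model and reserve the frontier model for harder work. The model names, task types, and threshold below are illustrative assumptions, not real product SKUs:

```python
def route_model(prompt: str, task_type: str) -> str:
    """Pick a model tier per request; a drift monitor would wrap this
    and re-route if the small model's quality degrades over time.

    Model identifiers and the length heuristic are hypothetical.
    """
    FRONTIER, DISTILLED = "frontier-large", "distilled-small"
    # Routine, well-bounded tasks go to the cheap distilled model.
    if task_type in {"classification", "extraction", "routing"}:
        return DISTILLED
    # Very long contexts are assumed to need the stronger model.
    if len(prompt) > 4000:
        return FRONTIER
    return FRONTIER if task_type == "reasoning" else DISTILLED

print(route_model("Categorize this ticket.", "classification"))  # distilled-small
print(route_model("Walk through this proof.", "reasoning"))      # frontier-large
```

In production, the routing decision and the downstream outcome would both be logged, so that quality drift on the cheap path can be detected and the policy tightened.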
The other challenge is operational. Most enterprises want fewer vendors, clearer accountability, and seamless data governance. Packaging matters as much as performance. That means straightforward SKUs, transparent overage policies, and migration paths from pilots to production without re-architecting. If OpenAI can simplify the path from experimentation to line-of-business outcomes, it will earn the renewal and expansion cycles that define category leaders.
The stakes for 2026 as enterprise AI platforms harden
The contours of the enterprise LLM market are still forming, but share shifts tend to harden once platforms entrench. With Anthropic ascendant and Google steady, OpenAI’s bet is clear: match model prowess with enterprise discipline, expand through trusted channels, and make cost, compliance, and control as compelling as capability. Zoph’s remit is to turn that blueprint into bookings—and to prove that early consumer fame can translate into durable enterprise revenue.
