FindArticles © 2025. All Rights Reserved.

Why Wall Street Is Unnerved by the Oracle–OpenAI Deal

By John Melendez
Last updated: September 12, 2025 9:02 pm

Oracle’s massive infrastructure deal with OpenAI shocked investors because it shattered three beliefs at once: that hyperscale AI workloads would default to the big three public clouds; that OpenAI’s spending would scale linearly with its revenue; and that the near-term bottleneck was GPU supply rather than power and data center readiness. Valued at an estimated $300 billion over five years, the agreement redraws the competitive map for AI infrastructure and raises difficult questions about energy use, financing and control.

Table of Contents
  • Oracle was not supposed to win this
  • The scale broke the sheet
  • Why OpenAI would pick OCI
  • It’s not GPUs, but power that is the bottleneck
  • Why the market mispriced it — and what to watch

Oracle was not supposed to win this

Wall Street has long cast Oracle as a laggard, exiting its on-premises software comfort zone to become a cloud also-ran behind the likes of AWS, Microsoft and Google. But that story missed the quiet strengths of Oracle Cloud Infrastructure: bare-metal GPU clusters at enormous scale, flat low-latency networking, and aggressively priced data egress for inference-heavy workloads. Oracle’s early work supporting Nvidia’s DGX Cloud and handling the TikTok U.S. cloud footprint suggested real, if underappreciated, chops in high-throughput, privacy-sensitive operations.


Analysts like Gartner’s Chirag Dekate have for years pointed out that Oracle re-engineered its stack for performance, not as a faux general-purpose cloud. That puts OCI right where the new shape of AI is unfolding: training giant models in a few places, then fanning out inference to wherever pricing, egress terms and latency make the most sense. Oracle also has deep interconnects with other clouds, such as a direct database partnership with Microsoft, which should make multi-cloud AI architectures more feasible for enterprises and now, with an asterisk, even for model providers.

The scale broke the sheet

The sticker shock is real. A roughly $300 billion reservation of capacity over five years works out to about $60 billion a year in compute and facilities. OpenAI has told potential investors that its annual recurring revenue is some $10 billion, up from last year but still a fraction of that expenditure. The gap suggests a funding structure built on long-term take-or-pay contracts, supplier credit and quick pass-through to customers as usage expands. In accounting terms, Oracle gains backlog and visibility on utilization that the market had not been discounting.
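The commitment-versus-revenue arithmetic can be sketched in a few lines. The $300 billion term and roughly $10 billion ARR come from the reporting above; the growth rate is a purely illustrative assumption, not a forecast:

```python
# Back-of-the-envelope: annualized Oracle commitment vs. OpenAI revenue.
# Figures from the article: ~$300B over 5 years, ~$10B ARR today.
# ASSUMED_GROWTH is an invented illustration, not a projection.

TOTAL_COMMITMENT_B = 300.0   # $B over the contract term (reported)
TERM_YEARS = 5
ARR_B = 10.0                 # reported annual recurring revenue, $B
ASSUMED_GROWTH = 0.8         # hypothetical 80%/yr revenue growth

annual_commitment = TOTAL_COMMITMENT_B / TERM_YEARS   # about $60B/yr

for year in range(1, TERM_YEARS + 1):
    revenue = ARR_B * (1 + ASSUMED_GROWTH) ** (year - 1)
    coverage = revenue / annual_commitment
    print(f"Year {year}: revenue ~${revenue:,.0f}B, "
          f"covers {coverage:.0%} of the ~${annual_commitment:.0f}B/yr commitment")
```

Even under an aggressive assumed growth rate, revenue only overtakes the annualized commitment late in the term, which is why the structure implies take-or-pay contracts and supplier credit rather than pay-as-you-go.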

The deal also highlights how model economics are changing. Training costs remain enormous, but inference at global scale is now the single largest cost line for popular AI products. If Oracle can provide predictable unit economics (reliable GPU availability, reduced egress fees, consistent latency), it is a viable destination for the “serve” tier even if “train” happens elsewhere. That’s a different game from the one investors had been pricing.
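As a rough illustration of why serve-tier unit economics dominate, here is a toy cost model. Every number in it (GPU hourly cost, throughput, utilization) is an invented assumption for illustration, not a figure from the deal:

```python
# Toy serve-tier cost model. All inputs are hypothetical assumptions.

GPU_HOURLY_COST = 2.50            # assumed all-in $/GPU-hour
TOKENS_PER_SEC_PER_GPU = 2_000    # assumed sustained inference throughput
UTILIZATION = 0.60                # assumed share of hours serving traffic

tokens_per_hour = TOKENS_PER_SEC_PER_GPU * 3600 * UTILIZATION
cost_per_million_tokens = GPU_HOURLY_COST / tokens_per_hour * 1_000_000
print(f"~${cost_per_million_tokens:.3f} per million tokens served")

# Small swings in utilization move the unit cost a lot at scale:
for util in (0.4, 0.6, 0.8):
    c = GPU_HOURLY_COST / (TOKENS_PER_SEC_PER_GPU * 3600 * util) * 1e6
    print(f"utilization {util:.0%}: ${c:.3f}/M tokens")
```

The point of the sketch is structural: per-token cost is driven by hourly hardware price divided by achieved throughput times utilization, which is exactly where guaranteed availability, egress pricing and consistent latency show up.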

Why OpenAI would pick OCI

Three technical levers beyond price matter here. First, network design: OCI’s flat, RDMA-based fabric is optimized for large-scale GPU clusters and high-throughput inference, reducing tail latency and improving utilization. Second, bare-metal control lets OpenAI closely manage scheduling, kernel tuning and model-serving stacks without noisy neighbors. Third, multi-cloud adjacency: established interconnects with Azure minimize data-gravity penalties, and with failover becoming mission-critical now that AI is core to business operations, that adjacency is a large part of why this announcement carries so much weight.

There’s also a strategic angle. Relying on more than one hyperscaler reduces platform risk, strengthens OpenAI’s position in negotiations, and hedges regulatory concerns about an overly concentrated AI supply chain. By using Oracle for the physical footprint and keeping its own stack fairly asset-light, OpenAI can maintain a software multiple even as its needs increase.


It’s not GPUs, but power that is the bottleneck

Recent quarters have demonstrated that GPUs can be obtained with enough money and contracts; powering them consistently is the harder part. The total capacity involved in this deal is rumored to be several gigawatts, on the order of multiple utility-scale power plants. A new analysis from the Rhodium Group estimates that U.S. data center energy consumption will represent about 14 percent of national electricity demand by 2040, a reminder that AI expansion is running headlong into grid constraints, permitting delays and local politics.
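To put “several gigawatts” in perspective, a quick conversion to annual energy helps. The 4 GW figure and capacity factors below are illustrative assumptions, not numbers from the deal:

```python
# Rough power math for a multi-gigawatt data center build-out.
# GIGAWATTS and the capacity factors are illustrative assumptions.

GIGAWATTS = 4.0          # assumed sustained load ("several gigawatts")
HOURS_PER_YEAR = 8760
CAPACITY_FACTOR = 0.85   # assumed average utilization of that capacity

annual_twh = GIGAWATTS * HOURS_PER_YEAR * CAPACITY_FACTOR / 1000
print(f"~{annual_twh:.0f} TWh of electricity per year")

# For scale: a large ~1 GW reactor at an assumed 90% capacity factor.
reactor_twh = 1.0 * HOURS_PER_YEAR * 0.90 / 1000
print(f"one large reactor ~{reactor_twh:.1f} TWh/year, "
      f"so this load is roughly {annual_twh / reactor_twh:.1f} reactors' worth")
```

Under these assumptions the load lands in the tens of terawatt-hours per year, the equivalent of several large power plants running flat out, which is why procurement, permitting and grid interconnection become the binding constraints.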

Tech companies are rushing to sign long-term power purchase agreements (PPAs) for solar and storage, invest in advanced geothermal, and revive nuclear options. OpenAI’s chief executive holds personal investments in energy startups such as Oklo, Helion and Exowatt, a sign of where the ecosystem is heading even if the company itself has been less vocal than some peers about direct procurement. For its part, Oracle can aggregate demand and sign decades-long contracts that individual software companies probably would not want to hold, aligning incentives in this transaction.

Why the market mispriced it — and what to watch

Investors got at least three critical things wrong: Oracle’s AI-ready architecture and partnerships; the speed with which inference costs have come to outstrip training spend; and the centrality of power procurement in the AI stack. Put them together and a non-consensus vendor suddenly emerges as the linchpin. The market response reflects a clear understanding that AI capacity will be won by whoever can stitch together chips, networks, real estate and electrons at once.

Key signals to watch next: firm GPU delivery schedules and cluster sizes; disclosures on contracted megawatts and new data center sites; signs that OpenAI can pass through costs via enterprise API growth; and any cross-cloud patterns that connect Oracle capacity with other hyperscale infrastructure.

If those pieces fall into place, the surprise rally will look less like a spike and more like a reset of the AI infrastructure playbook.
