OpenAI's appetite for compute has outgrown any single chip supplier.
The company has signed a multi-gigawatt deal for advanced accelerators from AMD, a blunt acknowledgment that Nvidia alone cannot keep pace with OpenAI's demand for compute. The deal includes warrants that could hand OpenAI up to roughly 10% of AMD as deployment milestones are met, even as the lab says its Nvidia orders will keep growing.
- Why OpenAI Is Going With AMD to Scale Fast and Reduce Risk
- What the Deal Buys for OpenAI, and When It Arrives
- Nvidia Is Still the One With All the Cards
- The Scale Task Is Now a Power and Infrastructure Task
- Software Gravity and the ROCm Ecosystem Conundrum
- Funding the Land Grab for Compute at Hyperscale
- What to Watch Next in OpenAI’s Two-Supplier Strategy
Why OpenAI Is Going With AMD to Scale Fast and Reduce Risk
The choice is primarily about capacity and time to deploy. According to The Wall Street Journal, each gigawatt of new AI compute can cost tens of billions of dollars. OpenAI's leadership has publicly floated targets as ambitious as adding a gigawatt of AI infrastructure per week. No single supplier can deliver at that pace, especially amid shortages of high-bandwidth memory, advanced packaging, and data center power.
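A quick back-of-envelope sketch shows why those two figures collide; the $25 billion-per-gigawatt number below is an illustrative assumption inside the Journal's "tens of billions" range, not a reported figure.

```python
# Back-of-envelope: annualized capex if OpenAI ever hit "a gigawatt per week".
# COST_PER_GW_USD is an illustrative assumption within the WSJ's
# "tens of billions of dollars" range, not a reported number.
COST_PER_GW_USD = 25e9          # assumed all-in cost per gigawatt
GW_PER_WEEK = 1                 # leadership's stated aspiration
WEEKS_PER_YEAR = 52

annual_capex = COST_PER_GW_USD * GW_PER_WEEK * WEEKS_PER_YEAR
print(f"Implied annual capex: ${annual_capex / 1e12:.1f} trillion")
# -> Implied annual capex: $1.3 trillion
```

On those assumptions, a gigawatt a week implies spending on the order of a trillion dollars a year, which is why supplier diversification is a necessity rather than a preference.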
A diversified supplier base also mitigates risk and strengthens negotiating leverage. It lets OpenAI match hardware to workload: massive-scale training, low-latency inference, and dedicated pipelines each benefit from different memory footprints, interconnects, and software stacks. AMD's Instinct platform, which this deal expands to include the MI450, has been gaining traction with hyperscalers on exactly those angles.
What the Deal Buys for OpenAI, and When It Arrives
OpenAI intends to deploy an initial 1 gigawatt of AMD Instinct MI450 GPUs starting in the second half of 2026, under a broader plan that can scale to 6 gigawatts over time. As part of the contract, OpenAI receives warrants to acquire up to 160 million AMD shares at $0.01 per share as performance and delivery conditions are met; think of this as aligning both sides toward execution at scale.
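To see how 160 million shares maps to the roughly 10% stake cited above, here is a minimal sketch; the share count and market price are illustrative assumptions, since both move daily, and the stake math ignores dilution from the newly issued shares.

```python
# Rough math on the AMD warrant: 160M shares at a $0.01 strike.
# SHARES_OUTSTANDING and MARKET_PRICE are illustrative assumptions.
WARRANT_SHARES = 160_000_000
STRIKE = 0.01                   # dollars per share
SHARES_OUTSTANDING = 1.62e9     # assumed AMD share count
MARKET_PRICE = 165.0            # assumed AMD share price, dollars

stake = WARRANT_SHARES / SHARES_OUTSTANDING      # pre-dilution, for scale
intrinsic = WARRANT_SHARES * (MARKET_PRICE - STRIKE)
print(f"Fully vested stake: ~{stake:.1%}")          # -> ~9.9%
print(f"Intrinsic value: ~${intrinsic / 1e9:.0f}B")  # -> ~$26B
```

Under those assumptions, full vesting is worth tens of billions of dollars to OpenAI, which is the point: AMD is paying in upside for execution, not in cash.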
The cost profile is staggering. AMD told The Wall Street Journal that one gigawatt of leading-edge AI compute could require tens of billions of dollars once accelerators, networking, storage, facilities, and power are counted. That scale also fits the industry's broader shift to multi-campus "AI factories" rather than single data halls.
Nvidia Is Still the One With All the Cards
OpenAI's push into AMD does not supplant its Nvidia roadmap. The company recently outlined a separate deal with Nvidia for at least 10 gigawatts of capacity, plus adoption of Nvidia's next-generation Vera Rubin design. Nvidia still sets the market standard for performance, ecosystem depth, and software maturity.
OpenAI's leaders have stressed that AMD is additive, not a replacement for Nvidia. Patrick Moorhead, industry analyst with Moor Insights & Strategy, described the strategy as an incentive for AMD to scale so its GPUs can be deployed more broadly across data centers, a move designed to ensure a true second source at high volume.
The Scale Task Is Now a Power and Infrastructure Task
Compute is colliding with electricity. OpenAI is reportedly constructing a Texas campus spanning an area the size of dozens of football fields and targeting about 1.2 gigawatts of compute. For reference, the Hoover Dam's generating capacity is roughly 2 gigawatts. These comparisons are rough, but the point is clear: future AI buildouts ride as much on power, cooling, and grid interconnects as they do on chip supply.
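For a sense of scale, converting 1.2 gigawatts of continuous draw into annual energy is simple arithmetic; the household figure below uses an assumed U.S. average of about 10,500 kWh per year.

```python
# Convert the reported 1.2 GW of continuous draw into annual energy.
# AVG_HOUSEHOLD_KWH_PER_YEAR is an assumed U.S. average, for scale only.
SITE_GW = 1.2
HOURS_PER_YEAR = 8760
AVG_HOUSEHOLD_KWH_PER_YEAR = 10_500

annual_twh = SITE_GW * HOURS_PER_YEAR / 1000                 # GWh -> TWh
households = SITE_GW * 1e6 * HOURS_PER_YEAR / AVG_HOUSEHOLD_KWH_PER_YEAR
print(f"~{annual_twh:.1f} TWh/year")                         # -> ~10.5
print(f"~{households / 1e6:.1f}M homes' annual usage")       # -> ~1.0
```

Run continuously, a single campus of that size consumes about as much electricity each year as a million American homes.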
Network fabrics and memory matter just as much. Heterogeneous clusters spanning hundreds of thousands of GPUs live or die on interconnect bandwidth and software efficiency. Nvidia's NVLink and InfiniBand stack the deck today. AMD counters with high-bandwidth memory configurations, its Infinity architecture, and growing support for Ethernet-based AI fabrics, an area where hyperscalers keep investing to lower cost per bit and avoid vendor lock-in.
Software Gravity and the ROCm Ecosystem Conundrum
Nvidia's CUDA toolkit still holds immense sway over developers. The closest counterpart so far is AMD's ROCm, backed by cloud providers that operate Instinct-based clusters. Microsoft has highlighted MI-series deployments on its cloud, and MLCommons benchmarks have begun to include competitive Instinct submissions. OpenAI's direct engagement with AMD should accelerate the kernel optimizations, compiler work, and framework integrations that remain.
The larger lesson: at hyperscale, software portability is pricing power. If models and pipelines can move between vendors with little friction, OpenAI can arbitrage price, availability, and performance across regions and suppliers.
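That portability is plausible in practice because PyTorch's ROCm builds expose the same torch.cuda device API as its CUDA builds, so device-agnostic code runs on either vendor's GPUs unchanged. A minimal sketch:

```python
import torch

# PyTorch's ROCm builds reuse the torch.cuda namespace, so the same
# device-agnostic code runs on Nvidia (CUDA) or AMD (ROCm) GPUs.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# torch.version.hip is a string on ROCm builds and None on CUDA builds.
backend = "ROCm" if getattr(torch.version, "hip", None) else "CUDA"
if device.type == "cuda":
    print(f"Running on {torch.cuda.get_device_name(0)} via {backend}")

model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)        # identical call path on either vendor's hardware
print(y.shape)      # -> torch.Size([8, 4096])
```

Custom kernels and low-level optimizations are where the vendors still diverge, which is exactly the gap OpenAI's engagement with ROCm would be expected to close.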
Funding the Land Grab for Compute at Hyperscale
The numbers invite scrutiny. Beyond AMD and Nvidia, OpenAI has an eye-popping long-term compute deal with Oracle. Investors are asking how these commitments will be funded and repaid. According to reporting by The Information, OpenAI generated roughly $4.3 billion in revenue while spending about $2.5 billion in the first half of the year; impressive growth, but small beside a multihundred-billion-dollar industrial plan.
Look for inventive structures: equity-linked supply agreements, capacity pre-purchases, utility partnerships, and power purchase agreements. The AMD deal, with equity tied to product delivery, suggests that financing AI infrastructure is becoming intertwined with the technology stack itself.
What to Watch Next in OpenAI’s Two-Supplier Strategy
Delivery schedules and HBM supply will dictate how quickly MI450 clusters get deployed. Utilization will hinge on software maturity, especially ROCm performance in large-scale training and inference. And grid access will gate every new campus. If OpenAI hits its milestones with both Nvidia and AMD, the market may finally reach a bona fide two-vendor equilibrium at the high end of AI compute, with pricing power gradually tilting toward buyers for the first time since 2016.