Nvidia is laying the groundwork to contribute up to $100 billion to OpenAI, taking a central investor role as the onetime nonprofit AI research lab transforms itself into a new kind of for-profit business. Under a letter of intent signed by both parties, the companies laid out plans for OpenAI’s next generation of AI infrastructure, framed initially as an intention from OpenAI “to bring U.S.-sized compute resources”: about 10 gigawatts of Nvidia-powered systems, earmarked for training and inference and deployed over time.
The deal would make Nvidia the preferred strategic compute and networking provider to OpenAI’s “AI factory” buildout, representing a major push of dedicated capacity for frontier model development.
- Why 10 gigawatts is a massive figure for AI infrastructure
- OpenAI’s pivot away from a single cloud provider
- What Nvidia gains from this OpenAI infrastructure deal
- The money, the chips, or both? How funds may be structured
- Energy, supply chain and policy barriers
- What to watch next in the Nvidia–OpenAI partnership

Though the specifics are still being worked out, the companies describe the plan as “complementary” to OpenAI’s existing partnerships with cloud and telecom powerhouses, including previously announced deals with Microsoft, Oracle and SoftBank, while supporting newer mega-campus projects like the much-discussed “Stargate.”
Why 10 gigawatts is a massive figure for AI infrastructure
Ten gigawatts is an eye-popping goal. Individual hyperscale data centers today typically run between a few tens and a few hundreds of megawatts, so 10 GW implies dozens of large sites or several ultra-large campuses. It is roughly comparable to the electricity consumption of several million households and would put OpenAI’s footprint on a scale with some of the biggest compute programs ever undertaken.
Translating that power into hardware, even conservative estimates point to millions of high-end accelerators at full buildout. If a next-generation GPU draws in the neighborhood of 1 kilowatt under load, and IT equipment accounts for most of site power consumption, 10 GW is enough capacity to supply on the order of 6-10 million accelerators over time, possibly fewer depending on networking, storage and cooling overhead. That scale redefines the economics of training multimodal models, long-context agents and autonomous systems.
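The arithmetic behind that 6-10 million range is simple enough to sketch. The snippet below is an illustrative back-of-envelope calculation, not a figure from either company; the roughly 1 kW per accelerator and the share of site power reaching IT equipment are assumptions made only for the sake of the example.

```python
# Back-of-envelope: how many accelerators could 10 GW of site power support?
# Assumptions (illustrative only): ~1 kW drawn per GPU under load, and IT
# equipment receiving 60-90% of total site power after cooling, networking
# and storage overhead.

SITE_POWER_GW = 10
GPU_POWER_KW = 1.0  # assumed per-accelerator draw under load

for it_fraction in (0.6, 0.75, 0.9):
    it_power_kw = SITE_POWER_GW * 1e6 * it_fraction  # 1 GW = 1e6 kW
    gpus = it_power_kw / GPU_POWER_KW
    print(f"IT share {it_fraction:.0%}: ~{gpus / 1e6:.1f} million accelerators")
```

Under those assumptions the output lands between roughly 6 and 9 million accelerators, consistent with the rough range cited above.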
It would also accelerate the adoption of liquid cooling, advanced power distribution and high-speed fabrics like Nvidia’s NVLink and Spectrum-X Ethernet to keep utilization high and interconnect bottlenecks low. The networking layer is where it really counts: at this scale, topology and software orchestration can matter as much as raw chip count.
OpenAI’s pivot away from a single cloud provider
The investment is OpenAI’s most explicit bet yet on a multiparty infrastructure approach. Microsoft remains its largest backer and a critical distribution channel via Azure, but both sides have signaled flexibility for OpenAI to build more capacity with other partners. Working closely with Nvidia diversifies supply during a period of high demand and helps mitigate allocation risk for leading accelerators and high-bandwidth memory.
OpenAI has already partnered with Oracle on large GPU clusters and explored ambitious co-development projects. Tapping SoftBank’s networks and data center plans could extend its reach into new regions and telco-edge zones. The Nvidia partnership anchors this patchwork with assured access to the latest hardware as the company chases larger training runs and faster model refresh cycles.
What Nvidia gains from this OpenAI infrastructure deal
For Nvidia, the payoff is strategic lock-in. A multi-year, multi-gigawatt commitment underwrites a demand backlog for its Blackwell-generation platforms and follow-ons, along with networking, software and services. It also reinforces Nvidia’s larger argument that AI factories (standardized, “composable” compute plants) are infrastructure in their own right, akin to power stations or semiconductor fabs.
There is coopetition risk: hyperscalers and specialized AI clouds are both customers and competitors. Even so, anchoring a flagship partner like OpenAI across its model roadmap helps Nvidia shape reference architectures and keep its full-stack moat (silicon, interconnects, CUDA software and enterprise tools) squarely in place.
The money, the chips, or both? How funds may be structured
The companies have not detailed how the commitment of as much as $100 billion will be structured. Expect a combination of:
- Direct hardware purchases
- Capacity reservations
- Cloud credits
- Systems financing
- Revenue-sharing structures tied to hosted offerings

Those commitments can drive down unit costs, smooth delivery schedules and give each side bargaining leverage in tight memory, packaging and networking supply chains.
The structure also establishes clear yardsticks for investors monitoring capex cadence and fleet readiness.
Energy, supply chain and policy barriers
Power is the binding constraint. Securing 10 GW will require a jigsaw of grid interconnects, on-site generation and long-duration power purchase agreements. Grid improvements, high-voltage transformers and cooling infrastructure take years to build out. The International Energy Agency has cautioned that electricity demand from data centers could nearly double in the near term, making it harder to secure enough capacity in certain regions.
On the component side, HBM supply and advanced packaging (CoWoS-class) remain gating factors. Memory vendors have been racing to build out HBM3E and next-generation HBM lines, but output has to keep pace with GPU shipments. Any kink in that chain will ripple through delivery timelines.
Regulators are also watching. Antitrust watchdogs in the US and Europe have focused on cloud concentration and control of key compute resources. A deal tying a dominant AI chip supplier to a top model developer will draw scrutiny over market power, interoperability and fair access for smaller competitors.
What to watch next in the Nvidia–OpenAI partnership
Key signals include the final terms of the pact, site selection and permitting, announcements on power procurement and orders for long-lead components. Listen for mentions of Blackwell-scale clusters, liquid-cooled rack densities and new orchestration software built to manage multimillion-GPU fleets.
Also watch the knock-on effects: how hyperscalers price GPU instances, whether other model labs ink analogous capacity deals, and how quickly OpenAI moves to train its next wave of frontier-scale systems.
Should the parties sign, this could reset not just the pace of AI progress but also the economics of who can play at the cutting edge.
