
Microsoft makes a deal with OpenAI to create AI chips

By Gregory Zuckerman
Last updated: November 13, 2025 1:04 pm
Technology
7 Min Read

Microsoft is beginning to make good on its promise to ease its AI chip shortage by leaning on OpenAI’s silicon efforts, a move that pools engineering talent and shortens time to market. Microsoft is licensing OpenAI’s custom chip designs — created in conjunction with Broadcom — to build its own AI accelerators, according to Bloomberg and a recent interview with CEO Satya Nadella, all while retaining access to OpenAI’s models through 2032.

The bet is practical: building leading-edge accelerators is punishingly hard, capital-intensive, and constrained by foundry and packaging bottlenecks. Instead of playing catch-up endlessly, Microsoft is leveraging a partner already working at the bleeding edge and folding that work into Azure’s data center roadmap.

Table of Contents
  • Why Microsoft Needs Help with Custom AI Chips
  • How the OpenAI Deal with Microsoft Actually Works
  • What Broadcom and OpenAI Bring to Microsoft’s Chips
  • How Google and Amazon’s Strategies Compare with Microsoft
  • Risks and Watch-Outs for Microsoft’s OpenAI Chip Deal
  • What the OpenAI Chip Deal Means for Azure Customers
A hand holds up a Microsoft Azure Maia 100 chip.

Why Microsoft Needs Help with Custom AI Chips

Microsoft’s in-house push — Azure Maia accelerators for AI training and inference, Cobalt CPUs for general compute — came later than its peers’ programs. Google has iterated its TPU line for years, and Amazon runs production workloads on Trainium and Inferentia in addition to Graviton CPUs. Those programs aren’t just about performance; they lock in supply and shave total cost of ownership for hyperscale AI.

Meanwhile, demand for accelerators has long exceeded supply. Nvidia’s data center GPUs remain the workhorse for AI training, and HBM memory and advanced 2.5D packaging have been gating factors. Industry analysts such as TrendForce continue to flag limited HBM and CoWoS supply — shortages that can slow rollouts even with abundant budgets. For cloud companies, custom silicon is as much a purchasing strategy as it is a technology move.

How the OpenAI Deal with Microsoft Actually Works

In a revised partnership, OpenAI is designing AI chips with Broadcom, a longtime maker of custom ASICs and networking gear — and Microsoft will get intellectual property rights to those designs that it can use in Azure. Nadella has noted that as OpenAI pushes forward at the system level — incorporating compute, memory, and interconnect — Microsoft will take those advances, and then build upon them for its cloud.

The deal retains Microsoft’s use of OpenAI’s models until 2032, but it creates a significant exception for OpenAI’s consumer hardware efforts. That exception indicates OpenAI wants to keep the freedom to tinker with devices — potentially ranging from AI-first gadgets to edge accelerators — without getting in the way of Microsoft’s Surface or Azure hardware roadmaps.

What Broadcom and OpenAI Bring to Microsoft’s Chips

Broadcom’s custom silicon group has a great deal of experience in creating accelerators and the ultra-high-bandwidth fabrics that weave them together. Combine that with OpenAI’s workload perspective — from gigantic model pretraining to latency-sensitive inference — and you get a design optimized for real-world AI efficiency, not just benchmark victories.

For Microsoft, putting that architecture into Azure means tighter integration with networking stacks, storage, and software runtimes, all of which could reduce the energy consumed per token. Even modest improvements add up at hyperscale; internal analyses at cloud providers typically point to double-digit TCO gains when moving some workloads to in-house or partner chips. Nvidia and AMD will still be important, but every optimized rack that relies less on hard-to-find third-party GPUs is capacity reclaimed.

A Microsoft Azure Maia 100 chip.

How Google and Amazon’s Strategies Compare with Microsoft

Google’s TPUs underpin core products such as Search and YouTube recommendations, giving the company a feedback loop connecting model architecture to chip design. Amazon pitches Trainium and Inferentia as the price-performance choice for generative AI and has iterated swiftly with Trainium2. Both have also stressed control over supply and the ability to optimize silicon for their software stacks.

Microsoft’s path is different, but potentially faster. Rather than building an in-house ASIC team to parity, it is effectively renting a proven pipeline: OpenAI’s model-driven design decisions plus Broadcom’s execution. If it works, Microsoft shortens the cycle from idea to cloud instances and dodges the multi-year misfires that first-party silicon can suffer.

Risks and Watch-Outs for Microsoft’s OpenAI Chip Deal

There are trade-offs. Handing a chunk of the cloud silicon roadmap to a partner creates dependencies that could range from helpful to harmful — especially as OpenAI navigates its own research priorities, timelines, and potential devices. The new carve-out for consumer hardware, unless clearly defined on paper, could also cause occasional friction with the rest of Microsoft’s ecosystem.

On the supply side, the effort still relies on foundry partners such as TSMC for advanced nodes, and on HBM and advanced packaging. Broadcom’s relationships are well positioned to help, but memory and packaging capacity remains a wildcard. And while custom chips can save money in the long run, the upfront non-recurring engineering and validation costs are significant.

What the OpenAI Chip Deal Means for Azure Customers

Look for a more varied set of AI training and inference instances, including Nvidia- and AMD-based SKUs as well as accelerators Microsoft adopts from OpenAI’s designs. That variety should help shorten waitlists, stabilize prices, and give customers more options to pair models with silicon. Microsoft has previously stated that AI services alone are contributing several points to Azure growth; a more predictable accelerator supply is essential to maintaining that momentum.

The near-term reality does not change: Nvidia and AMD remain at the core of Azure’s AI scale-out. But if Microsoft can turn OpenAI–Broadcom designs into production hardware on an 18–24 month cycle, its chip “problem” shifts from one of supply and demand to one of choice — a far more favorable position in a market where compute capacity is often the deciding factor in who lands the next big AI workload.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
FindArticles © 2025. All Rights Reserved.