Nscale, a young AI hyperscaler, has signed an expansive deal with Microsoft to install nearly 200,000 Nvidia GB300 GPUs across data centers in the United States and Europe—marking one of the company's first major expansions of Azure-ready capacity as demand for compute continues to spike.
The buildout will run through Nscale-operated sites and a joint venture with industrial investor Aker, one of the company's strategic backers, anchoring capacity in territories that offer the power, cooling, and network proximity that large-scale AI workloads demand.

What the Nscale–Microsoft AI infrastructure deal covers
Roughly half of the GPUs—104,000—are headed for an Ionic Digital data center in Texas over the next 12 to 18 months as Nscale ramps that campus toward its targeted full build-out capacity of 1.2 gigawatts. That footprint makes the site one of the largest announced AI campuses in North America by planned capacity.
In Europe, Nscale said it would deploy 12,600 GPUs at the Start Campus in Sines, Portugal, beginning in Q1 2026. Another 23,000 GPUs will go to a planned campus in Loughton, England, starting in 2027, while the remainder are earmarked for Microsoft's AI center in Narvik, Norway—sites selected for energy availability, cooling efficiency, and proximity to network routes.
The agreement "solidifies" Nscale's position as a go-to partner for hyperscale GPU deployments, the company says—an audacious claim for a firm founded in 2024, yet one supported by the deal's scale, geographic reach, and projected speed of execution.
Why it's a big deal for Microsoft and Nscale
For Microsoft, the deal taps a boutique builder to increase AI capacity for services such as Azure OpenAI Service and Copilot while broadening its supply base. The company has vowed to ramp up its AI infrastructure worldwide, and has emphasized plans for green energy procurement and cooling efficiency as it grows.
For Nscale, the deal validates its capital-intensive business model. The company has so far attracted over $1.7 billion in investment from strategic partners such as Aker, Nokia, and Nvidia, as well as from investors including Sandton Capital Partners, G Squared, and Point72. Citing an interview with CEO Josh Payne, the Financial Times reported the company may go public as soon as next year—a bold timeline that would test investor appetite for pure-play AI-infrastructure providers.
Power and the realities of supply chains
Getting 200,000 next-generation GPUs racked and powered is as much a logistics and energy problem as it is a procurement win. The International Energy Agency has cautioned that data center electricity demand could nearly double by mid-decade, and the Uptime Institute continues to identify grid access, transformer lead times, and cooling water availability as major bottlenecks for hyperscalers.

The choice of Narvik, with its abundant hydropower and lower ambient temperatures, and of Sines, located near subsea cables and a deepwater port, demonstrates that real estate is no longer the only factor in siting such installations—power and climate are just as critical.
On the silicon front, TrendForce has observed that HBM packaging and advanced substrates remain in short supply, so delivery schedules and roll-out yields will face intense scrutiny in any GPU megaproject.
A crowded market emerges for AI chips and compute
The Nscale–Microsoft deal lands amid a surge of multi-gigawatt AI chip commitments. Recent weeks have brought reports of OpenAI booking significant capacity on AMD processors, alongside a separate large-scale deal with Nvidia. At the same time, AI cloud providers such as CoreWeave and Lambda have raised billions to build Nvidia-focused clouds, while established infrastructure providers such as Oracle and Equinix are creating GPU-dense offerings to serve enterprise demand.
The strategic lesson: compute is the currency of the new AI economy. Companies that can secure their supply of accelerators, power, and sovereign-grade connectivity will set the pricing, availability, and performance bands for the next wave of generative AI applications.
What to watch next for Microsoft and Nscale buildout
Execution risk now moves front and center. Key milestones include GPU delivery cadence, grid connections at the Texas and UK sites, and environmental permits and water-use questions in Europe. Expect headlines around how fast Microsoft can absorb the capacity into Azure, what mix of Nvidia hardware generations is involved, and how much Microsoft sets aside for its own AI workloads versus customers'.
Equally telling will be Nscale's ability to hold its timetables under supply constraints. Should it hit those targets, the company will have gone from upstart to central player in the AI buildout in a matter of years—evidence that speed, siting, and silicon access matter as much as, if not more than, sheer scale.