An undisclosed US customer has placed a $300 million order for AMD Instinct MI350X GPU servers that use lab-grown diamonds to move heat away from the chips, according to Akash Systems, the Peter Thiel-backed startup supplying the cooling technology. The purchase signals how thermal management is fast becoming a strategic lever for AI data centers chasing more performance per watt.
Inside The Unusual Deal For Diamond-Cooled AMD GPU Servers
Akash says the buyer is keeping a low profile, but the scope suggests a hyperscaler, AI lab, or a large developer outfitting multiple high-density racks. Taiwan-based MiTAC Computing will manufacture the servers, integrating a synthetic diamond component between each GPU and the heat sink—a new layer in the cooling stack designed to spread heat more efficiently than conventional metals.
AMD’s Instinct MI350X accelerators anchor the build, positioning the system squarely at large-scale training and inference where per-rack power now routinely stretches past traditional limits. Akash declined to share deployment timing or product photos, noting the solution is distinct from previously shown diamond plates.
Why Lab-Grown Diamonds Are Used For Advanced GPU Cooling
Lab-grown diamond—typically fabricated via chemical vapor deposition (CVD) rather than mined—has the highest known thermal conductivity of any bulk material, roughly 2,000 W/m·K versus copper's roughly 400 W/m·K. Akash cites heat removal up to five times faster than copper, the industry's go-to heat spreader. In practice, a diamond layer can flatten hot spots and shuttle heat into a heat sink or cold plate more uniformly, keeping multi-hundred-watt GPUs in their top performance envelope and reducing throttling under heavy loads.
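The conductivity gap can be made concrete with the 1-D conduction formula R = t / (k · A). The sketch below uses textbook conductivity values and a hypothetical spreader geometry (2 mm thick over a 40 mm × 40 mm die); the power figure is illustrative, not a spec for the MI350X.

```python
# Back-of-envelope comparison of a copper vs. diamond heat spreader.
# Conductivities are textbook values; the geometry and power are
# illustrative assumptions, not MI350X specifications.

def slab_resistance(thickness_m, conductivity_w_mk, area_m2):
    """1-D conduction resistance R = t / (k * A), in K/W."""
    return thickness_m / (conductivity_w_mk * area_m2)

area = 0.040 * 0.040   # 40 mm x 40 mm contact area, in m^2
thickness = 0.002      # 2 mm spreader, in m

k_copper = 400         # W/(m*K), typical for copper
k_diamond = 2000       # W/(m*K), typical for CVD diamond

r_cu = slab_resistance(thickness, k_copper, area)
r_dia = slab_resistance(thickness, k_diamond, area)

# Temperature rise across the spreader for an illustrative 750 W package:
power = 750.0
print(f"Copper:  {r_cu * 1e3:.2f} mK/W -> {power * r_cu:.2f} K rise")
print(f"Diamond: {r_dia * 1e3:.2f} mK/W -> {power * r_dia:.2f} K rise")
```

With these inputs the diamond slab's resistance is one fifth of copper's, which is where the "five times faster" framing comes from; the absolute kelvins saved depend entirely on thickness, area, and load.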
Research groups and startups are racing to industrialize this idea beyond bespoke parts. IEEE Spectrum has profiled efforts to create wafer-scale diamond films, while Diamond Foundry has discussed bonding thin diamond layers to the backs of silicon wafers to pull heat out at the die level, an approach also covered by The New York Times. The engineering trick is not only conductivity; it is mastering interfaces—minimizing thermal resistance where diamond meets the package and the heat sink.
The AI Power And Cooling Problem This Approach Targets
As AI clusters expand, so do power and cooling budgets. Racks of modern accelerators can draw 60–100 kW, driving facilities to adopt direct-to-chip liquid cooling, warm-water loops, and, increasingly, immersion. The Uptime Institute’s latest global survey places average PUE at about 1.58, leaving meaningful headroom if operators can raise coolant inlet temperatures or maintain performance at higher ambient conditions.
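PUE (power usage effectiveness) is total facility power divided by IT equipment power, so the survey's 1.58 average means 0.58 W of overhead per watt of compute. A minimal sketch of that headroom, assuming a hypothetical 10 MW IT load and an assumed improved PUE of 1.30:

```python
# PUE = total facility power / IT equipment power.
# The 10 MW IT load and the improved 1.30 PUE are assumptions for
# illustration; 1.58 is the survey-average figure cited in the article.

it_load_mw = 10.0

def facility_power(it_mw, pue):
    """Total facility draw implied by an IT load at a given PUE."""
    return it_mw * pue

baseline = facility_power(it_load_mw, 1.58)
improved = facility_power(it_load_mw, 1.30)

savings_mw = baseline - improved
mwh_saved = savings_mw * 8760  # hours per year
print(f"Overhead saved: {savings_mw:.1f} MW -> {mwh_saved:,.0f} MWh/year")
```

Every 0.01 of PUE improvement on a 10 MW IT load is about 100 kW of continuous overhead removed, which is why warmer coolant set points are worth chasing.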
That’s where diamond enters the picture. By evacuating heat faster at the chip interface, data centers can run throttle-free at warmer set points, trimming chiller and pump energy, boosting compute density, or both. Even a modest reduction in thermal throttling can translate into more tokens per second or shorter training runs, compounding into lower total cost of ownership over a GPU’s life.

Partners, Precedent, And Claims Underpinning The Diamond Push
Akash frames the MiTAC partnership as a way to scale production quickly, while AMD executives have pointed to the combination of higher density and energy efficiency as a clear customer priority. Akash also says its diamond-based materials have flown in satellite radios, enabling units that are roughly 3x smaller and use 60% less power—claims the company attributes to more effective heat spreading at the component level.
There is a real-world data point: Akash announced shipments of diamond-cooled Nvidia GPU systems to NxtGen, a major sovereign cloud provider in India, indicating the technology is leaving the lab. For this new order, Akash emphasizes a dedicated supply chain for customized lab-grown diamonds at prices detached from jewelry markets, arguing that energy savings and higher uptime offset added material costs.
Engineering And Economic Hurdles To Diamond Heat Spreaders
Diamond’s properties are exceptional, but execution matters. Thermal interface materials between the die, diamond, and heat sink can become bottlenecks if surface flatness, bond quality, or pressure aren’t finely controlled. Coefficient of thermal expansion mismatches must be managed to ensure long-term reliability under load cycles. Operators will watch field metrics such as sustained GPU clocks, error rates, and coolant delta-Ts to validate benefits at rack scale.
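The interface concern can be modeled as resistances in series from die to coolant. All values below are illustrative assumptions, not measured data; the point is that with a high-conductivity spreader, the bond and TIM layers can dominate the thermal budget.

```python
# Sketch of a die-to-coolant thermal stack as series resistances.
# Every number here is an assumed, illustrative value in K/W.

stack = {
    "die-to-spreader interface": 0.020,
    "spreader (diamond)":        0.001,  # near-negligible due to high k
    "spreader-to-cold-plate TIM": 0.020,
    "cold plate to coolant":     0.015,  # convection into the liquid loop
}

r_total = sum(stack.values())   # series resistances simply add
power = 750.0                   # W, illustrative accelerator power
delta_t = power * r_total       # die temperature rise above coolant

print(f"Total R: {r_total:.3f} K/W; die sits {delta_t:.0f} K above coolant")
for name, r in stack.items():
    print(f"  {name}: {r / r_total:.0%} of the budget")
```

In this toy stack the two interface layers account for roughly 70% of the total resistance, which is why the article stresses surface flatness, bond quality, and mounting pressure over raw conductivity.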
On the economics, electricity and cooling can account for a sizable share of AI TCO once hardware is racked and powered—especially over multi-year depreciation. If diamond layers enable higher inlet temperatures or greater server density without resorting to immersion retrofits, the payback can be compelling. The $300 million commitment suggests at least one buyer has run that math and decided it pencils out.
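A hypothetical version of that math, with every input an assumption chosen purely for illustration (the article discloses none of the deal's unit economics):

```python
# Hypothetical payback sketch for a diamond-cooled fleet.
# All inputs are assumed values for illustration only.

servers = 1_000
premium_per_server = 5_000.0   # $ assumed extra cost of diamond cooling
kw_saved_per_server = 2.0      # kW assumed cooling + throttling savings
price_per_kwh = 0.10           # $ assumed industrial electricity rate

annual_savings = servers * kw_saved_per_server * 8760 * price_per_kwh
total_premium = servers * premium_per_server
payback_years = total_premium / annual_savings
print(f"Annual savings: ${annual_savings:,.0f}; "
      f"payback in ~{payback_years:.1f} years")
```

Under these assumptions the premium pays back in under three years, inside a typical GPU depreciation window; with smaller energy savings or pricier diamond, the case weakens accordingly.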
What Comes Next For Diamond-Cooled AMD GPU Deployments
The industry will be looking for independently verified performance and energy data once systems land: reduced throttling under stress tests, higher rack densities within existing envelopes, and measurable facility-level savings. If those numbers hold, diamond could become a standard element alongside vapor chambers and cold plates—another rung on the cooling ladder for the AI era.
