CoreWeave CEO Michael Intrator is defending the AI industry's "circular" investment and purchasing arrangements against criticism that they amount to financial engineering, arguing instead that they are how industry leaders are collaborating their way through a historic compute bottleneck. Speaking at Fortune's Brainstorm AI summit in San Francisco, Intrator framed the interlocking connections among chip suppliers, cloud providers, and developers of artificial intelligence as pragmatic cooperation to bring capacity online faster.
At the center of the dispute are interlocking deals in which a small handful of powerful companies fund one another and lock in multiyear, multibillion‑dollar commitments to scarce GPUs, power, and data center space. Critics say the loops, in which a supplier can also be an investor and a customer, among other permutations, can mask true demand and risk. Intrator's answer was direct: companies are "working together" to close a vast mismatch between supply and demand, and that coordination is necessary to keep AI advancement on course.

Intrator’s case for collaboration in AI infrastructure
Intrator portrayed CoreWeave's model as a contrast to traditional cloud buildouts. The company finances expensive Nvidia GPU fleets, often pledging the hardware as collateral, and matches them with long‑term customer commitments. "They're just educated at a different level," he said, noting that the approach is built for a market moving faster than traditional capital cycles and that some volatility comes with creating a new template for infrastructure.
He also said he had made peace with the market whiplash in CoreWeave's shares since its public debut, adding that macro uncertainty and heavy investment make for sharp swings. Comparisons of GPU‑backed borrowing to mortgage‑bond structures have drawn attention, and investors latched onto the leverage after CoreWeave sold more debt to supercharge its data centers, a decision that sent the stock down roughly 8%. Intrator framed that short‑term pressure as the cost of building capacity amid a once‑in‑a‑generation compute shortage.
Why AI investment circularity raises regulatory eyebrows
Regulators and analysts have grown increasingly concerned about AI's tangled capital stack. The Department of Justice and the Federal Trade Commission have both signaled their intent to scrutinize strategic partnerships and cross‑investments in foundational AI, and the UK's Competition and Markets Authority has likewise examined whether such tie‑ups can skew competition. The worry is that related‑party transactions and prepayments could distort demand signals, concentrate negotiating power, and hide economic risk.
Look across the ecosystem: Nvidia is both a supplier to and an investor in several AI infrastructure players; hyperscale clouds sign take‑or‑pay agreements and grant cloud credits as part of larger partnerships; model labs secure capacity through preorders funded by strategic backers who are also large customers. There is not necessarily anything wrong with this, but it blurs the separation between buyer and seller. Accounting experts and credit analysts, including at the large rating agencies, have pushed for greater disclosure around related‑party transactions, backlog quality, and the durability of cash flows tied to strategic deals.
CoreWeave’s expansion playbook for the AI supercycle
CoreWeave's ascent tracks the AI compute supercycle, a sustained period of outsized demand for, and investment in, computing infrastructure. Originally founded as a crypto mining firm, the company has repositioned itself as a GPU cloud for enterprise AI and bleeding‑edge labs. It has partnerships with Microsoft, OpenAI, Nvidia, and Meta, and it has made a string of acquisitions to beef up its developer tooling and orchestration stack, among them OpenPipe, Marimo, and Monolith. CoreWeave has also made overtures to the federal market, hoping to provide infrastructure to U.S. agencies and the defense industrial base.
Financing remains aggressive by design. Asset‑backed facilities secured by GPUs, long‑dated customer pre‑buys, and take‑or‑pay contracts de‑risk massive capex for data center buildouts, power procurement, and network rollouts. Critics point to fragility if demand cools; supporters counter that utilization has stayed high even as model sizes, context windows, and inference volumes scale up. Industry analysts such as Dell'Oro Group and Synergy Research have documented explosive growth in AI infrastructure spend, with data center investment climbing faster than spending on traditional cloud workloads.
What to watch next for AI infrastructure and policy
Three pressure points will decide the verdict on “working together”:
- Transparency: better reporting on related‑party revenue and backlog composition, including how much committed business comes from within the group via contract minimums, could reassure markets that growth isn't a circular mirage.
- Supply chain resilience: continued availability of top‑tier GPUs, power capacity, and skilled operations talent will determine whether promised scale arrives on time.
- Regulatory outcomes: FTC, DOJ, and UK CMA guidance may redraw lines around strategic investments and exclusivity.
Signals for investors to watch:
- Utilization rates
- Committed vs. optional capacity
- Customer diversification
- Cash interest coverage
- Cost‑of‑capital trends as the buildout grows
If cooperation indeed brings down the cost per unit of compute and increases reliability for AI developers, CoreWeave's defense of round‑trip deals may look prescient. If not, the same interlocks that hastened the buildout could magnify risk on the way down.