Nvidia has translated its dominance in AI hardware into a far-reaching investment strategy, planting stakes in the startups most likely to shape the next era of computing. While earning record revenue and cash flow in its data center business, the company isn't just selling GPUs; it is buying optionality across the AI stack, from core model labs to data infrastructure to applied robotics.
PitchBook, a data firm, tallies about 50 venture deals for Nvidia so far this year, more than last year's count of 48. That figure does not include NVentures, the company's official venture arm, which has also ramped up significantly, according to the same data. The goal, Nvidia says, is to grow the AI ecosystem by supporting "game changers and market makers." The pattern now looks strategic, not opportunistic.

Nvidia investments extend the competitive moat
Unlike a traditional corporate VC, Nvidia often invests off its balance sheet and seeks to co-invest with top-tier funds, pairing capital with go-to-market support, early hardware access, and CUDA-optimized software stacks. Startups, in return, tend to pledge to develop on Nvidia platforms or purchase capacity from partners running its chips. The result is a flywheel: funding draws startups into Nvidia's ecosystem, whose growth lures more developers, models, and spending onto Nvidia hardware.
That playbook shows up in markets with growing blocks of compute demand (foundation models, AI cloud instances, data pipelines, and high-throughput networks), where each new chip generation is both absorbed by the market and justified by it.
Foundation model stakes signal platform power
Nvidia has taken stakes in rival frontier labs, a hedge that also serves as a demand driver for its chips. The company joined OpenAI's monumental fundraise, with reports in the New York Times and elsewhere putting Nvidia's initial check at somewhere around nine figures, and has crafted a much broader strategic partnership to build out a gigantic AI computing cluster.
It also backed xAI in a multibillion-dollar raise and, according to Bloomberg, has pledged additional backing for the equity piece of a prospective round to fund infrastructure purchases. In Europe, Nvidia has repeatedly invested in Mistral AI, including in a multibillion-euro financing that valued the company in the double-digit billions. In the enterprise model category, it backed Cohere's late-stage round, which valued the company in the high single-digit billions.
Beyond the headline names, Nvidia has invested in nascent research labs and model-first startups such as Perplexity and Reka. It backed a large funding round for Reflection AI, a young lab building a lower-cost competitor to premium closed models, and joined the multibillion-dollar seed round for Thinking Machines Lab, headed by the former CTO of another major lab. The throughline: a diversified model portfolio that ensures that whichever model wins, Nvidia's silicon and software sit at the core.
AI infrastructure and the rise of data factories
On the infrastructure side, Nvidia is effectively co-creating the AI factory layer. It was an early investor in CoreWeave and remains a major shareholder now that the GPU-cloud provider has grown into a public company. It invested in Lambda's massive Series D to scale GPU cloud for training and inference, and led a round for Together AI, which builds cloud-native tooling for model development.
Data remains the other bottleneck. Nvidia invested in Scale AI's billion-dollar round, led by cloud and consumer giants, to accelerate data labeling and orchestration. It backed Weka for AI-native data management and Ayar Labs, which designs optical interconnects to move data faster while drastically cutting energy per bit. It also backed Enfabrica, a networking-silicon startup focused on easing I/O and memory-bandwidth bottlenecks in AI clusters.

There is a mirror bet on physical capacity. Nvidia also took part in Crusoe's big raise to build out its energy-focused data centers, supported Firmus Technologies as it builds a green AI facility in Tasmania, and joined several financings for Nscale, which is developing sites across Europe connected to hyperscale AI projects. These investments complement Nvidia's supply obligations and put it near the front of the line when limited power and real estate are allocated.
Robotics, autonomy and the next wave of applied AI
Applied AI is where compute turns into revenue, and Nvidia has helped incubate a number of leaders. Figure AI raised its latest round at a valuation just under $40 billion, with Nvidia participating again after an earlier investment as the humanoid robotics race intensifies. In autonomy, Nvidia joined Waabi's hefty Series B and cut a new check to Nuro, a delivery-focused autonomous-vehicle company.
In media and productivity, Nvidia invested in Runway, a generative-video company used by studios and creators. In healthcare, it added to Hippocratic AI's financing to develop specialized models for patient-facing workflows. And in advanced computing, Nvidia supported SandboxAQ's expansion into large quantitative models used in chemistry, finance, and cybersecurity, workloads that map neatly onto GPU acceleration.
Not every bet is linear. Nvidia led a giant round for Inflection, only to watch the startup's founders and key IP move to a hyperscaler through a generous licensing agreement, underscoring how blurry the lines between startups and strategic partners have become in today's AI market. Yet even outcomes like these ultimately steer more high-value, latency-sensitive AI workloads onto GPU infrastructure.
What to watch next for Nvidia’s sweeping AI bets
Three dynamics now matter most. First, capacity: Bloomberg and others have reported multibillion-dollar, multiyear infrastructure plans between Nvidia and frontier labs; how fast those turn into delivered compute will govern model progress and revenue cadence. Second, interoperability: investments in networking and optical I/O, such as Enfabrica and Ayar Labs, aim to relieve the cluster bottlenecks that increasingly limit performance per dollar. Third, policy: regulators in the United States and Europe are scrutinizing AI supply chains and partnerships, and Nvidia's broad reach as supplier, investor, and ecosystem convener will keep it in their sights.
For founders, the message is easy enough to read: If your product makes GPU clusters busier, more efficient, or harder to part with, Nvidia wants a piece of it.
For an industry still wrestling with the early outlines of superintelligence, the company's capex-driven venture model (call it checks, chips, and channels) is one of the most potent forces in AI right now, perhaps the tech industry's ultimate meta-bet.