Nvidia’s investment footprint has grown alongside its dominance in AI chips, and the company is now a crucial investor shaping where state-of-the-art models, tooling, and compute get built. According to a PitchBook tally, Nvidia was involved in about 67 venture transactions over the past year, up from 54 during the previous period, a count that excludes deals by its formal venture arm, NVentures. The thesis is simple: back “game changers and market makers” that “expand the AI pie” while supporting demand for Nvidia’s platforms.
What is unusual about this investment spree isn’t just its scale, but how tightly capital, compute, and customers are braided together. Many of the largest transactions include off-balance-sheet purchase agreements for Nvidia-powered systems, converting equity checks into long-term, real-world demand for GPUs, networking, and software. It’s a vertical strategy, played out in public through stakes in model labs, data infrastructure, chips, robots, and even energy.

A Plan to Own the AI Stack Through Strategic Bets
Nvidia’s product portfolio aligns neatly with the layers where AI creates value. At the frontier-model tier, it has participated in mega-rounds for OpenAI, Anthropic, and xAI, and in earlier-stage rounds for Mistral, Reflection AI, Thinking Machines Lab, Imbue, and Reka AI. All of these bets involve scaling compute-hungry workloads, and those workloads map directly onto the company’s most recent platform roadmaps.
At the developer and application layer, Nvidia has supported Cursor and Poolside for AI coding, Perplexity for AI search, and generative media players like Runway and Germany’s Black Forest Labs. Enterprise LLMs and tooling are funded through Cohere, Together AI, Weka, Scale AI, and Kore.ai — pieces that revolve around data pipelines, model customization, and GPUs at scale.
Infrastructure is another pillar. Nvidia has funded GPU cloud operators and data-center builders such as CoreWeave, Lambda, Crusoe, Nscale, and Firmus Technologies. Investments in Ayar Labs and Enfabrica target bandwidth and interconnect bottlenecks. The focus is clear: make sure capacity exists, and ensure it is tuned for the Nvidia ecosystem.
Follow the Money and the Compute Behind AI Deals
The headline numbers tell a tale of scale. Nvidia made a first-time investment in OpenAI as part of a $6.6 billion round and also signed a framework agreement to coordinate future infrastructure investments, according to company disclosures and press coverage. As part of a broader deal, it agreed to invest up to $10 billion in Anthropic, which has laid out multibillion-dollar spending plans for cloud compute, including Nvidia-based systems. Comparable arrangements are reportedly taking shape around xAI, too, in which equity is exchanged for more Nvidia gear.
Mistral AI raised a roughly $2 billion round with Nvidia’s backing, supporting open-weight models in Europe. Cursor landed a multibillion-dollar Series D at a sharply higher valuation, with Nvidia transitioning from customer to shareholder as code assistants become the norm, rather than a supplement, for software teams. Cohere’s Series D took its valuation to nearly $7 billion, validating Nvidia’s enterprise LLM bet.

On the infrastructure front, Crusoe has raised about $1.4 billion at a $10 billion valuation to build out AI data centers; Nscale and Firmus are building capacity tied to large-scale model deployment; Lambda’s funding fuels additional GPU cloud capability; and earlier CoreWeave backing confirms Nvidia’s early role in the rise of specialized GPU clouds. In applied AI, Figure AI closed a second round of more than $1 billion at a reported $39 billion valuation, and Waabi and Nuro moved autonomy forward with Nvidia as a repeat backer, even as Nuro’s valuation fell by about 30% from its peak, a reminder of how unevenly value is captured within the AI space.
The Playbook Behind the Checks and Partnerships
It is less spray-and-pray, more system design. Investments tend to come with technical alignment around CUDA, networking topologies, and now whole platforms such as Grace Blackwell. Startups get earlier access, engineering support, and credibility with enterprise buyers. Nvidia, in turn, gains visibility into future demand, co-designs workloads that show off its newest silicon, and seeds software moats around its SDKs and inference stacks.
Reporting from both PitchBook and Bloomberg suggests an increasing portion of rounds involve “circular” elements: equity that goes to finance the very infrastructure the startup will end up consuming. In practical terms, Nvidia is financing both sides of the AI boom: it is not only providing the picks and shovels, it now owns a stake in a good portion of the miners.
Where the Risks Are in Nvidia’s AI Investment Web
The coupling of supplier power with ecosystem ownership raises questions. Rivals may chafe at what they see as preferential allocation of supply, and regulators may probe how equity, supply, and exclusivity interact when the GPU market is constrained. There’s also classic venture risk: not every lab or application will sustain its valuation, and leadership changes, acquihire-style deals, or market pivots can rewrite trajectories for portfolio companies. The Inflection saga, rapid scaling followed by a talent-and-IP-licensing pivot that sent the roadmap back to the drawing board, is a cautionary tale.
Another risk centers on energy and networking. The demand for power, cooling, and bandwidth is pulling AI deeper into fusion, optics, and grid partnerships. Nvidia’s investments in Commonwealth Fusion and Ayar Labs suggest that compute progress may be gated less by cores and more by electrons and photons.
What to Watch Next as Nvidia Deepens AI Bets
Look for more capital around data-center build-out in power-rich geographies, more bets on small-footprint models operating efficiently on next-gen accelerators, and more enterprise platforms transitioning from copilots to fully agentic workflows. As NVentures and corporate investing proceed in tandem, Nvidia’s real edge may be the same one that led to it becoming the go-to AI chip supplier: its capacity for co-architecting the future alongside the very startups attempting to invent it.
