Naveen Rao is returning with a new AI hardware venture, and he is going big, sources tell me. The former Databricks AI chief has been in talks to raise roughly $1 billion at a $5 billion valuation for Unconventional, Inc., maker of what he has described as an entirely new kind of computer for intelligence. Andreessen Horowitz is leading the round, with Lightspeed and Lux Capital participating, according to people familiar with the process.
The raise will be tranched, with hundreds of millions already in the bag and the company getting to work before the full close, the sources say. Databricks itself also plans to invest, a notable link between one of the largest AI software platforms and an ambitious new hardware stack. Bloomberg was first to report that Databricks will be investing.

A Bid to Remix the AI Computer From the Ground Up
Rao later publicly confirmed the effort on X, describing his goal as “rethinking the foundations of a computer” and pursuing “Brain-Scale Efficiency without the biological baggage.” Unconventional hasn’t detailed its architecture, but people familiar with the plans describe a system that pairs custom silicon with tightly integrated server infrastructure, positioning the company to compete directly in high-end accelerators and systems.
It’s that target that puts Unconventional in Nvidia’s blast zone; Nvidia’s accelerators and software have become the de facto standard for training and inference at scale. Its data center business, the company’s fastest-growing division and the linchpin of its future ambitions, now runs at tens of billions of dollars annually, according to company filings, thanks to demand from hyperscalers and AI-native startups. Breaking into this market takes more than fast chips, though; it takes interconnects, memory bandwidth, packaging, networking and a software stack that developers trust.
Investors Pursue a Full-Stack Play in AI Infrastructure
Andreessen Horowitz’s interest reflects a larger thesis: the next big gains in AI will come not only from bigger models but also from better infrastructure. Lightspeed and Lux Capital, early backers of Rao’s previous companies, bring deep hardware and systems experience. MosaicML, which Rao founded in 2020 and sold to Databricks for $1.3 billion, raised a relatively modest $33.7 million before that exit, according to PitchBook data. Before that, he co-founded Nervana Systems, which Intel acquired for a reported price of more than $400 million.
The capital intensity here is of another order. Tape-out costs for advanced-node designs can run into the hundreds of millions of dollars, while high-yield packaging capacity such as CoWoS is concentrated among a handful of foundries and OSATs. Analysts at firms like McKinsey have projected that AI-specific silicon will become a multi-hundred-billion-dollar category by decade’s end, which implies room for multiple winners, but only for those that can line up supply, ecosystem and customers.
Why Beating Nvidia Takes More Than a Fast Chip
Hardware speed by itself isn’t enough to displace the incumbents. Nvidia’s CUDA and cuDNN stack, its ecosystem of libraries, and its prevalence across MLPerf submissions create substantial developer lock-in. Successful challengers usually have to pass three tests at once:
- Drop-in compatibility with popular frameworks (a minimal sketch of what that means follows this list)
- Demonstrated, end-to-end system-level wins on real workloads
- Reliable supply at scale
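To make the first test concrete, here is a minimal sketch of what “drop-in” means in practice: the same PyTorch training step runs unchanged, and only the device string selects the hardware. Everything here uses stock PyTorch; the idea of a vendor-registered backend name is an assumption for illustration and reflects nothing Unconventional has announced.

```python
import torch
import torch.nn as nn

def train_step(device_name: str = "cpu") -> float:
    """One training step; only the device string changes across hardware."""
    device = torch.device(device_name)
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# "cpu" and "cuda" work today; a drop-in challenger would aim to make the same
# call work with its own (hypothetical) registered backend name, nothing else changed.
print(train_step("cpu"))
```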

Unconventional’s “biological efficiency” framing suggests memory-centric, sparsity-native execution or even neuromorphic-style approaches. Any of these could cut energy per operation, an increasingly urgent concern as inference overtakes training in aggregate compute demand. Industry reports from MLCommons and academic surveys indicate that memory movement has become the dominant line item in power budgets at large inference scales; reducing that cost would be a credible wedge for a new player.
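To see why data movement is the lever, consider an illustrative back-of-envelope calculation. The per-operation energies below are rough, order-of-magnitude figures of the kind commonly cited in the architecture literature (hundreds of picojoules per off-chip DRAM byte versus single-digit picojoules per FP32 operation), not measurements of any particular chip, and the workload assumes a batch-size-1 decode step of a 7B-parameter FP16 model.

```python
# Illustrative order-of-magnitude energy figures, not measurements of any chip.
PJ_PER_FP32_OP = 4.0      # arithmetic: a few picojoules per FP32 op
PJ_PER_DRAM_BYTE = 150.0  # off-chip DRAM: roughly two orders of magnitude more

def energy_split(flops: float, dram_bytes: float) -> dict:
    """Rough split between arithmetic energy and off-chip data-movement energy."""
    compute_j = flops * PJ_PER_FP32_OP * 1e-12
    memory_j = dram_bytes * PJ_PER_DRAM_BYTE * 1e-12
    return {
        "compute_J": round(compute_j, 3),
        "memory_J": round(memory_j, 3),
        "memory_share": round(memory_j / (compute_j + memory_j), 3),
    }

# Batch-size-1 decode of a 7B-parameter FP16 model: ~14 GB of weights streamed
# per token for only ~14 GFLOPs of math (about 2 FLOPs per parameter).
print(energy_split(flops=14e9, dram_bytes=14e9))
# -> memory_share comes out around 0.97, i.e. data movement dominates the power
#    budget, which is exactly the cost a memory-centric architecture would attack.
```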
Rao’s History and Strategic Angles for Unconventional
Rao spent two and a half years as VP of AI at Databricks, which is valued at around $100 billion and generates multibillion-dollar recurring revenue. That experience, combined with his previous exits, gives Unconventional instant credibility with engineering talent and early customers. Should Databricks join as an investor, it would also open the door to tight integrations across training pipelines, model serving and data tooling, an attractive proposition for enterprises accustomed to stitching together separate stacks.
The tranched approach implies disciplined milestones: initial system prototypes, early silicon bring-up and software maturity checkpoints. For investors, it’s a risk-managed way to finance an expensive journey; for the company, it pairs runway with hard engineering proof points.
What to Watch Next as Unconventional Ramps Its Plans
The first meaningful markers will come quickly:
- The first public technical paper or developer preview
- Evidence of framework compatibility beyond headline PyTorch support
- A foundry and packaging partner announcement
- A launch customer willing to pilot production work
And of course, transparent benchmarks, preferably MLPerf submissions or independently audited results, will be essential to cut through the marketing noise.
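Until formal MLPerf submissions or audited numbers exist, the minimum to ask of any demo is a harness with warm-up, explicit synchronization and a stated batch size. The sketch below is a generic, vendor-neutral example in plain PyTorch; the model is a stand-in and reflects nothing about Unconventional’s hardware.

```python
import time
import torch
import torch.nn as nn

def measure_throughput(model, batch, device="cpu", warmup=10, iters=50):
    """Samples/sec after warm-up, with explicit synchronization on GPU devices."""
    model = model.to(device).eval()
    batch = batch.to(device)
    with torch.no_grad():
        for _ in range(warmup):                # warm-up: caches, JIT, clock ramp
            model(batch)
        if str(device).startswith("cuda"):
            torch.cuda.synchronize()           # don't time queued, unfinished work
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        if str(device).startswith("cuda"):
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return iters * batch.shape[0] / elapsed

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
print(f"{measure_throughput(model, torch.randn(32, 1024)):.1f} samples/sec (cpu)")
```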
If Unconventional can pair a truly differentiated architecture with a pragmatic software story, it could land on the short list of credible Nvidia alternatives. Given the hundreds of billions of dollars in collective capex pledged by hyperscalers and AI-native companies, a new full-stack player can plausibly matter. The question is not whether there’s demand; it’s whether this particular new design can turn biological inspiration into enterprise-grade, reproducible gains at scale.
