As AI build-outs strain the grid, Peak XV Partners has led a $15 million Series A in Bengaluru-based C2i Semiconductors, betting that smarter power delivery — not just more GPUs — will unlock the next phase of data-center growth. The round, joined by Yali Deeptech and TDK Ventures, brings C2i’s total funding to $19 million and underscores a fast-forming consensus: power is the new bottleneck in AI infrastructure.
C2i, short for control, conversion, and intelligence, is developing a plug-and-play “grid-to-GPU” power platform that treats conversion, control, packaging, and telemetry as one integrated system. By redesigning how electricity is stepped down and routed from the data-center bus to the processor, the startup says it can materially cut losses, shrink cooling overhead, and lift effective GPU utilization, all without forcing operators to re-architect their entire stack.
- Why Power Is the New Bottleneck for AI Data Centers
- C2i’s Grid-to-GPU Approach for Data Center Power Delivery
- Proof Points and Go-To-Market for C2i’s Power Platform
- Why Investors Are Paying Attention to AI Power Delivery
- India’s Semiconductor Moment in Power Electronics Design
- What to Watch Next for C2i’s Data Center Power Strategy

Why Power Is the New Bottleneck for AI Data Centers
Data centers are on a steep power trajectory that silicon alone can’t flatten. BloombergNEF has projected that data-center electricity demand could nearly triple over the next decade, while Goldman Sachs Research estimates a 175% surge by 2030 from 2023 levels, akin to adding another top-10 power-consuming nation to the grid. The International Energy Agency has warned that global data-center electricity use could roughly double by mid-decade, driven by AI and digitalization.
Inside facilities, the crunch is not just about how much power is available, but how efficiently it is converted and delivered. High-voltage AC must be rectified and stepped down multiple times through UPS systems, busbars, voltage regulators, and point-of-load converters before it reaches GPUs. Each hop wastes energy. Industry averages suggest 15–20% losses across this chain, and those losses scale directly with AI rack densities now exceeding 100 kW — with liquid-cooled deployments pushing higher.
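How those per-hop losses compound is easy to see with a back-of-the-envelope calculation. The stage efficiencies below are illustrative assumptions, not measured figures for any particular facility:

```python
# Illustrative efficiencies for each conversion hop between the grid
# and the GPU. All values are assumptions for demonstration only.
stages = {
    "UPS (double conversion)": 0.96,
    "Distribution / busbar": 0.99,
    "Rack-level rectification": 0.97,
    "Intermediate bus converter": 0.97,
    "Point-of-load regulator": 0.93,
}

total_efficiency = 1.0
for stage, efficiency in stages.items():
    total_efficiency *= efficiency

loss_fraction = 1.0 - total_efficiency
print(f"End-to-end efficiency: {total_efficiency:.1%}")  # 83.2%
print(f"Lost in conversion:    {loss_fraction:.1%}")     # 16.8%
```

Multiplying five stages that are each 93–99% efficient lands squarely in the 15–20% loss range cited above, which is why shaving even a point or two per stage matters at megawatt scale.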
Efficiency targets are tightening, yet progress has slowed. Uptime Institute surveys show average PUE hovering around the mid-1.5s, with AI clusters often faring worse due to extreme densities and thermal loads. In parallel, interconnection queues in major markets have stretched to years, according to analyses from national labs, making “use power better” a near-term imperative while “get more power” remains a multi-year slog.
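For readers unfamiliar with the metric, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment. A hypothetical breakdown (all overhead figures assumed) shows how a facility lands in the mid-1.5s:

```python
# Hypothetical power breakdown for a 1 MW IT load; all overhead
# figures are assumptions chosen to illustrate a mid-1.5s PUE.
it_load_mw = 1.00          # power actually delivered to servers and GPUs
cooling_mw = 0.40          # chillers, air handlers, pumps
conversion_loss_mw = 0.12  # UPS and power-distribution losses
misc_mw = 0.03             # lighting, security, office loads

total_facility_mw = it_load_mw + cooling_mw + conversion_loss_mw + misc_mw
pue = total_facility_mw / it_load_mw
print(f"PUE = {pue:.2f}")  # 1.55
```

Because conversion losses and the cooling needed to remove them both sit in the numerator, better power delivery improves PUE twice over.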
C2i’s Grid-to-GPU Approach for Data Center Power Delivery
C2i’s thesis is that the power path should be engineered as a single control surface rather than a daisy chain of parts from different vendors. The company is co-founded by former Texas Instruments power leaders Ram Anant, Vikram Gakhar, Preetam Tadeparthy, and Dattatreya Suryanarayana, alongside Harsha S. B and Muthusubramanian N. V — a team steeped in high-efficiency conversion, magnetics, and advanced packaging.
Their platform integrates silicon controllers and converters with system-level firmware, real-time telemetry, and packaging optimized for high-current, low-impedance delivery to accelerators. By co-optimizing rectification stages, intermediate DC buses, point-of-load regulation, and thermal design, C2i targets cutting end-to-end power losses by roughly 10 percentage points, or about 100 kW saved for every megawatt consumed. That cascades into smaller cooling loads, improved power budgets per rack, and better uptime under the transient spikes common in AI training.
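C2i’s headline number can be sanity-checked with simple arithmetic. The loss fractions and cooling factor below are assumptions for illustration, not company data:

```python
# Hypothetical 10 MW AI hall; loss fractions and the cooling factor
# are illustrative assumptions, not C2i-published figures.
it_power_mw = 10.0
loss_before = 0.18   # assumed end-to-end conversion loss today
loss_after = 0.08    # after a ~10-percentage-point reduction

conversion_saved_mw = it_power_mw * (loss_before - loss_after)
# Heat that is never generated never has to be cooled; assume
# 0.3 W of cooling energy avoided per W of heat eliminated.
cooling_saved_mw = conversion_saved_mw * 0.3

print(f"Conversion savings: {conversion_saved_mw:.2f} MW")  # 1.00 MW
print(f"Cooling savings:    {cooling_saved_mw:.2f} MW")     # 0.30 MW
```

One megawatt saved per ten consumed is exactly the 100 kW-per-megawatt figure C2i cites, before counting the cooling knock-on.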
Crucially, the company positions its solution as “plug-and-play” at the system level. For operators, that means dropping in a known-good power delivery backbone that can qualify faster than bespoke, multi-vendor integrations. For hyperscalers, it promises granular telemetry — down to rail-level behavior at the GPU — to tune workloads against real electrical constraints.

Proof Points and Go-To-Market for C2i’s Power Platform
C2i expects initial silicon returns in the near term, followed by joint validation with data-center operators and hyperscalers. Executives say multiple customers have already requested performance data, setting up a relatively short feedback loop to prove the savings and reliability at scale. The startup has assembled roughly 65 engineers in India and is establishing customer-facing operations in the U.S. and Taiwan to support early deployments and manufacturing partnerships.
Power delivery has historically been one of the slowest-moving layers in the data-center stack, dominated by incumbents and long qualification cycles. That inertia is precisely why system-level integration could be disruptive: a single vendor accountable for silicon, control, packaging, and compliance can compress time-to-validate — if the technology clears efficiency, thermal, safety, and reliability gates.
Why Investors Are Paying Attention to AI Power Delivery
For investors, the math is compelling. At AI-scale power levels, even a 5–10% improvement can translate to millions of dollars saved per site annually and accelerate revenue by enabling more GPUs per megawatt. C2i’s leaders say sites could see 10–30% lower energy-related costs when factoring in downstream cooling and capacity gains, a claim that, if validated, would unlock tens of billions of dollars in cumulative savings across the sector.
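The order of magnitude is easy to verify. The site size, electricity price, and improvement fraction below are all assumptions for illustration:

```python
# Back-of-the-envelope annual savings for a hypothetical AI site.
# All inputs are assumptions, not figures from C2i or its investors.
site_draw_mw = 50.0     # average facility draw
improvement = 0.07      # midpoint of the 5-10% range above
usd_per_mwh = 70.0      # assumed wholesale electricity price
hours_per_year = 8760

saved_mwh = site_draw_mw * improvement * hours_per_year
annual_savings_usd = saved_mwh * usd_per_mwh
print(f"Annual savings: ${annual_savings_usd:,.0f}")  # $2,146,200
```

At larger campuses or higher tariffs the figure scales linearly, which is how per-site millions aggregate into the sector-wide billions the company projects.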
Peak XV’s participation alongside TDK Ventures — the corporate VC arm of a global magnetics and components leader — signals confidence in both the silicon roadmap and the supply-chain path to production. The bet aligns with a broader shift in venture focus from pure compute to systems that relieve power, cooling, and networking constraints.
India’s Semiconductor Moment in Power Electronics Design
C2i also reflects a maturing semiconductor design ecosystem in India. The country has a deep bench of analog and power electronics talent, while design-linked incentive programs have lowered the cost and risk of tape-outs for startups. With more global chip design teams based in India and increasing local capability in advanced packaging and validation, building world-class power solutions from Bengaluru is no longer a stretch goal — it’s an emerging norm.
What to Watch Next for C2i’s Data Center Power Strategy
The near-term milestones are clear: silicon performance against claimed loss reductions, reliability under AI-style transient loads, and integration ease in brownfield facilities. Watch for metrics such as measured bus-to-GPU efficiency, thermal headroom reclaimed, and net PUE impact. If C2i’s platform performs, it won’t just shave megawatts; it could reset how AI data centers budget power, plan capacity, and price workloads in an era when every watt counts.
