
Niv-AI Exits Stealth To Boost GPU Power Efficiency

By Gregory Zuckerman
Last updated: March 17, 2026 2:07 pm
Technology

Niv-AI has emerged from stealth with $12 million in seed financing and a focused mission: squeeze more usable performance out of power-hungry GPU clusters by measuring and shaping electricity use at millisecond resolution. The company says AI data centers are leaving performance on the table as operators throttle clusters to stay within power envelopes, a drag that can hit as high as 30% during peak activity. With hyperscale buildouts colliding with grid constraints, the promise of turning “stranded watts” into work is drawing outsized attention.

Founded in Tel Aviv by CEO Tomer Timor and CTO Edward Kizis, Niv-AI is positioning its technology as an intelligence layer between AI workloads, facility infrastructure, and the grid. Backers include Glilot Capital, Grove Ventures, Arc VC, Encoded VC, Leap Forward, and Aurora Capital Partners. Early deployments with design partners are underway, with the company targeting initial rollouts in several US data centers in the coming months.

Table of Contents
  • Why GPU Power Spikes Strain Data Centers
  • Inside Niv-AI’s Millisecond-Scale Power Control Approach
  • What Niv-AI’s Power Smoothing Could Unlock for AI Data Centers
  • Funding, Deployment Timeline, and Early Pilot Plans
  • The Competitive Context for Power-Aware AI Operations
  • The Bottom Line on Reclaiming GPU Power Headroom

Why GPU Power Spikes Strain Data Centers

Modern AI clusters run thousands of GPUs in lockstep. During training, they swing rapidly between compute phases and communication-heavy bursts—think all-reduce or all-to-all exchanges—causing sharp, millisecond-scale spikes in power draw. When many accelerators synchronize, those spikes add up at the rack and room level. To avoid tripping breakers, breaching utility limits, or overpaying demand charges, operators keep headroom by capping GPU power or dialing back concurrency.
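To see why synchronization matters, here is a toy simulation, with illustrative per-GPU wattages and timings rather than measured figures, of aggregate draw when 1,000 GPUs burst in phase versus phase-shifted:

```python
def aggregate_peak(n_gpus, base_w, burst_w, burst_len, period, offsets):
    """Peak combined draw (watts) over one period for GPUs that alternate
    between a base compute draw and a communication burst, each GPU's
    burst starting at its phase offset (all times in milliseconds)."""
    peak = 0
    for t in range(period):
        draw = 0
        for off in offsets:
            in_burst = (t - off) % period < burst_len
            draw += burst_w if in_burst else base_w
        peak = max(peak, draw)
    return peak

n, base, burst = 1000, 400, 700   # illustrative watts per GPU
period, blen = 100, 20            # 20 ms burst every 100 ms step

in_phase = aggregate_peak(n, base, burst, blen, period, [0] * n)
staggered = aggregate_peak(n, base, burst, blen, period,
                           [i * period // n for i in range(n)])
print(in_phase, staggered)  # 700000 460000
```

In this idealized model the staggered peak equals the time-averaged draw: phase-shifting alone flattens the profile completely, which is the intuition behind shifting collective operations out of lockstep.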

Today’s flagship accelerators can pull 700W–1,000W apiece under load, and next-gen parts are trending higher. Multiplied across pods, halls, and campuses, a few milliseconds of overshoot can cascade into thermal and electrical constraints. Operators often rely on battery energy storage and UPS systems to ride through spikes, but those buffers are costly and finite. Industry groups like Uptime Institute have warned that grid interconnect lead times are stretching and that power availability, not floor space, is increasingly the gating factor for capacity. Meanwhile, the International Energy Agency projects global data center electricity use could approach or exceed 1,000 TWh by 2026, with AI workloads a fast-growing share.

Inside Niv-AI’s Millisecond-Scale Power Control Approach

Niv-AI’s first step is visibility. The startup is deploying rack-level sensors capable of millisecond sampling to capture the real power signatures of different AI workloads across GPUs, NICs, and supporting gear. That granularity aims to fill a gap left by device telemetry, which can be too coarse or inconsistent for facility-grade control.
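One thing millisecond sampling enables, sketched here generically rather than as Niv-AI’s actual pipeline, is catching sustained excursions above a facility limit that second-scale averaging would smear away:

```python
def spike_events(samples_w, limit_w, min_ms=3):
    """Find runs of at least min_ms consecutive 1 ms power samples above
    limit_w. Returns (start, end) index pairs; end is exclusive."""
    events, run_start = [], None
    for t, w in enumerate(samples_w):
        if w > limit_w:
            if run_start is None:
                run_start = t
        else:
            if run_start is not None and t - run_start >= min_ms:
                events.append((run_start, t))
            run_start = None
    if run_start is not None and len(samples_w) - run_start >= min_ms:
        events.append((run_start, len(samples_w)))
    return events

# Illustrative 1 ms rack-level trace: one 4 ms spike, one 2 ms blip
trace = [900] * 5 + [1200] * 4 + [900] * 5 + [1200] * 2 + [900] * 3
print(spike_events(trace, limit_w=1000))  # [(5, 9)]
```

A one-second average of this trace would sit well under the limit, which is why facility-grade control needs finer-grained data than typical device telemetry provides.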

On top of that data, Niv-AI is building a software layer to forecast, flatten, and synchronize power loads without sacrificing throughput. The system is designed to integrate with common schedulers and orchestration stacks—such as Slurm and Kubernetes—as well as GPU management frameworks like Nvidia’s DCGM. Tactics can include power-aware job placement, phase-shifting collective operations, and dynamic adjustments to GPU power limits, turning “peaky” workloads into steadier profiles so operators can safely reclaim capacity.
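Power-aware job placement, one of the tactics named above, can be illustrated with a minimal greedy sketch. The rack names, budgets, and per-job wattage estimates are hypothetical, and a production scheduler plugged into Slurm or Kubernetes would be far more involved:

```python
def place_jobs(jobs, racks, budget_w):
    """Greedy power-aware placement: assign each job (keyed by estimated
    peak watts, largest first) to the least-loaded rack that still has
    budget; defer jobs that fit nowhere."""
    load = {r: 0 for r in racks}
    placement, deferred = {}, []
    for job, watts in sorted(jobs.items(), key=lambda kv: -kv[1]):
        rack = min(load, key=load.get)  # least-loaded rack first
        if load[rack] + watts <= budget_w:
            load[rack] += watts
            placement[job] = rack
        else:
            deferred.append(job)
    return placement, deferred

jobs = {"train-a": 120_000, "train-b": 90_000, "infer-c": 30_000}
print(place_jobs(jobs, ["r1", "r2"], budget_w=130_000))
```

The same budget check generalizes to dynamic power caps: instead of deferring a job, a controller could admit it at a reduced per-GPU power limit and relax the cap as headroom returns.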

What Niv-AI’s Power Smoothing Could Unlock for AI Data Centers

Even modest smoothing can be material. In a 10MW hall that routinely holds 10–20% headroom, recovering just a few points translates to additional racks of usable compute or faster job completion. At current training costs, single-digit gains in usable power or cluster throughput can save millions of dollars across large runs. It also curbs demand spikes that trigger premium tariffs, lowers battery cycling, and helps facilities remain within contracted limits.
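The arithmetic behind that claim is straightforward; the per-accelerator figure below is an illustrative assumption, not a quoted spec:

```python
hall_mw = 10.0        # hall capacity
recovered_pts = 0.05  # reclaim 5 percentage points of held-back headroom
gpu_kw = 1.0          # assumed ~1 kW per accelerator incl. overhead

extra_kw = hall_mw * 1000 * recovered_pts
extra_gpus = int(extra_kw / gpu_kw)
print(extra_gpus)  # 500
```

Five hundred additional accelerators in a single 10MW hall, without new grid capacity, is roughly an extra pod’s worth of compute.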


The opportunity aligns with a broader industry push to treat power as a first-class optimization target alongside FLOPS and memory bandwidth. At Nvidia’s GTC, company leadership underscored that unused watts are effectively lost revenue in AI factories. If Niv-AI can prove that millisecond-aware control reliably lifts effective utilization without destabilizing systems, it will tap into urgency that spans hyperscalers, model labs, and colocation providers.

Funding, Deployment Timeline, and Early Pilot Plans

The $12 million seed round gives Niv-AI resources to scale sensor deployments, expand integrations, and validate results with early adopters. The company says it will have operational pilots in several US facilities within six to eight months, with findings feeding a predictive model meant to function as a power “copilot” for site reliability and facilities engineers. Valuation details were not disclosed.

The Competitive Context for Power-Aware AI Operations

Niv-AI’s pitch sits between two established camps. On one side are power and cooling incumbents—Schneider Electric, Eaton, Vertiv—with hardware-centric solutions. On the other are software schedulers and GPU orchestration tools, including platforms like Run:AI, that raise utilization but typically focus on compute, not facility-grade power dynamics. Cloud providers also build proprietary systems, yet independent tooling that spans OEMs and sites has appeal for enterprises running heterogeneous fleets.

Regulatory and grid realities add urgency. North American and European grid operators have flagged reliability risks from rapid load growth in data center hubs, and interconnect queues can stretch for years. A layer that coordinates GPU workloads with real-time facility limits—and, eventually, with utility signals—could open doors to demand response revenue and faster capacity turn-up without new substations.

The Bottom Line on Reclaiming GPU Power Headroom

GPUs are getting faster, but the grid is not. By measuring power where it actually fluctuates and orchestrating workloads to fit within tight envelopes, Niv-AI aims to convert electrical headroom into AI performance. If pilots confirm stable gains—even in the 5–10% range—the approach could become a standard tool in the kit for operators chasing higher throughput without waiting on new power.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.