
Huawei introduces AI SuperPoD as Nvidia blocked in China

By Bill Thompson
Last updated: October 25, 2025 12:07 pm
Technology · 7 Min Read

Huawei is launching a new AI interconnect it claims can stitch together thousands of devices into a single cluster, a pitch aimed squarely at Nvidia just as Nvidia's GPUs are being driven from the Chinese market. The company's SuperPoD Interconnect ties together as many as 15,000 AI chips, including Huawei's own Ascend series, into a single high-throughput fabric designed for training and serving frontier-scale models.

Why this interconnect matters for large-scale AI

Today's AI performance is as much about chip-to-chip communication as it is about raw FLOPs. Huawei's SuperPoD targets the bottlenecks that bog down large-model training, which depends on high-bandwidth, low-latency communication across thousands of accelerators: all-reduce operations, parameter sharding, memory offload. It is pitted squarely against NVLink and the other Nvidia scaling technologies that currently form the foundation of most top AI clusters.

Big-batch training of 70B-100B+ parameter models can require several thousand H100-class GPUs, based on public disclosures from leading AI labs. That makes coherently pooling smaller accelerators ever more valuable, particularly when access to the very latest chips, or indeed the wafers to make them, is restricted.
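
A textbook ring all-reduce cost model, using purely illustrative numbers rather than Huawei or Nvidia specifications, shows why fabric bandwidth and latency dominate at this scale:

```python
def ring_allreduce_seconds(n_devices: int, grad_bytes: float,
                           link_gbps: float, hop_latency_s: float) -> float:
    """Textbook ring all-reduce cost: 2*(n-1) steps, each moving
    grad_bytes/n per device over the link, plus a per-hop latency term."""
    bw_bytes_per_s = link_gbps * 1e9 / 8
    steps = 2 * (n_devices - 1)
    per_step_bytes = grad_bytes / n_devices
    return steps * (per_step_bytes / bw_bytes_per_s + hop_latency_s)

# Hypothetical run: a 70B-parameter model with FP16 gradients (~140 GB),
# 4,096 accelerators on 400 Gb/s links with 5 microseconds per hop.
t = ring_allreduce_seconds(4096, 140e9, 400.0, 5e-6)
print(f"{t:.2f} s per full gradient sync")
```

Doubling link bandwidth roughly halves the bandwidth term, while the latency term grows linearly with cluster size, which is why the interconnect, not the individual chip, decides how far a cluster can usefully scale.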

A strategic opening as Nvidia gets blocked

Huawei's announcement comes as Chinese authorities impose a fresh ban on the country's tech firms purchasing Nvidia hardware, including Nvidia's locally tailored server offerings. That shuts the market leader in AI accelerators out of new deployments in China; Nvidia reportedly supplies more than 80% of the global AI accelerator market, according to several analyst firms.

With Nvidia out of the picture, cloud providers and enterprises from Beijing to Shenzhen will want an alternative that can scale. Already deeply embedded in China's telecom and data center infrastructure, Huawei is offering SuperPoD as a prefabricated backbone for Ascend-powered clusters spanning public clouds, internet platforms, and government compute centers.

Inside Huawei’s Ascend stack and its supporting tools

Huawei's AI offering centers on Ascend accelerators, the CANN operator stack, and the MindSpore framework, paired with toolchains that support PyTorch and other mainstream ecosystems. Ascend chips generally lag Nvidia's latest on peak performance, but brokerage research out of China has routinely pegged the flagship 910B at A100-class levels for many FP16 workloads: short of H100/B200 territory, yet competitive when scaled out.

The SuperPoD Interconnect relies on high-performance, lossless Ethernet and RDMA fabrics, in contrast to Nvidia's roots in InfiniBand. That is consistent with broader industry trends: firms that track data center networking, such as Dell'Oro Group, have charted a rising wave of Ethernet-based AI fabrics as vendors improve congestion control and collective offload. Huawei's argument is that a tuned Ethernet fabric can deliver deterministic performance at exascale without locking customers into proprietary technology.

[Image: A presenter on stage before a screen reading "Groundbreaking SuperPoD Interconnect: Leading a New Paradigm for AI Infrastructure" and "Unveiling the world's most powerful SuperPoDs and SuperClusters."]

Engineering trade-offs: Fabric, software, and yield

The real test is end-to-end throughput on production workloads. Interconnect bandwidth and latency must sustain the all-to-all traffic patterns prevalent in transformer training. Software maturity, spanning graph compilation, kernel fusion, mixed-precision correctness, and framework adapters, can mean the difference between performance you can realize and performance that exists only on a datasheet. According to Huawei, its stack optimizes collective operations and memory orchestration to maintain high utilization across large rings and meshes.
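
The mixed-precision correctness point is concrete: naively accumulating many FP16 values stalls once the running sum grows large, which is why training frameworks keep FP32 master accumulators. A minimal NumPy illustration (a generic toy, not Huawei's stack):

```python
import numpy as np

# 100,000 small FP16 "gradient" values that should sum to about 30,000.
grads = np.full(100_000, 0.3, dtype=np.float16)

naive = np.float16(0.0)
for g in grads:                  # pure-FP16 accumulation: once the sum
    naive = np.float16(naive + g)  # exceeds ~1024, adding 0.3 rounds to 0

master = grads.astype(np.float32).sum()  # FP32 master accumulation

print(float(naive), float(master))  # the naive sum stalls far below the true total
```

The same effect appears in large all-reduce trees and fused kernels, which is why numerics are as much a fabric-software problem as a chip problem.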

Supply chain resilience is another dimension. Analysts have pointed to yield and packaging limitations for advanced accelerators made in China. If Huawei can ship SuperPoD-ready systems at volume, complete with liquid cooling, 200 kW-class racks, and a power delivery chain optimized for AI density, it could keep domestic providers from falling short of capacity targets or running afoul of energy-efficiency rules. With national planning guidelines pushing PUE targets to the 1.2–1.3 range for new builds, data center efficiency policy has become a live topic among Chinese industry stakeholders.
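
For context on those targets, PUE is simply total facility power divided by IT equipment power. A quick sketch using the 200 kW rack figure from above (illustrative arithmetic, not measured data):

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """PUE = total facility power / IT equipment power,
    so total power = IT load * PUE."""
    return it_load_kw * pue

rack_it_kw = 200.0           # a 200 kW-class AI rack
for pue in (1.2, 1.3, 1.6):  # policy-target range vs. an older-build baseline
    overhead = facility_power_kw(rack_it_kw, pue) - rack_it_kw
    print(f"PUE {pue}: {overhead:.0f} kW of cooling/power overhead per rack")
```

At rack densities this high, the gap between a 1.2 and a 1.6 PUE is tens of kilowatts of overhead per rack, which is why efficiency rules bite hardest on AI buildouts.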

What it means for China’s AI buildout under sanctions

For hyperscalers and internet platforms, the math under sanctions shifts from "fastest single GPU" to fastest system per dollar and per watt. If SuperPoD can keep large Ascend clusters busy, Chinese companies building LLMs, recommender systems, and video-generation pipelines can keep advancing without relying on Nvidia's ecosystem.
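
That per-dollar, per-watt framing can be made concrete with a toy comparison. All numbers below are hypothetical, chosen only to show how a pool of cheaper, slower chips can win on one axis while losing on another:

```python
def system_scores(tokens_per_s: float, cost_usd: float, power_w: float):
    """Normalize cluster-level throughput by cost and by power,
    instead of comparing single-chip peak numbers."""
    return tokens_per_s / cost_usd, tokens_per_s / power_w

# Hypothetical clusters (illustrative figures only):
few_fast  = system_scores(1.0e6, 50e6, 10e6)  # fewer, faster chips
many_slow = system_scores(0.9e6, 35e6, 12e6)  # more, slower chips pooled

print("tokens/s per $ and per W:", few_fast, many_slow)
```

In this toy case the pooled cluster delivers more throughput per dollar but less per watt, exactly the trade buyers must price once the fastest single GPU is off the table.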

Software portability will be the concern for developers. The more closely Huawei tracks mainstream frameworks and toolchains, the easier it becomes to port PyTorch graphs, inference runtimes, and quantization flows. Early case studies from local clouds and research institutes, ideally with transparent training times, tokens processed, and energy metrics, will be the key signals of how competitive the platform really is.

The bottom line on Huawei’s SuperPoD and Nvidia’s absence in China

Huawei's SuperPoD Interconnect is a timely bid to build Nvidia-scale fabric inside China. It won't erase the performance gap overnight, but if the company delivers strong networking, a proven software stack, and stable supply, it could become the backbone of domestic AI clusters, just when that market needs one.

The three metrics that matter most for investors, builders, and policymakers to track:

  1. Cluster utilization on real LLM training runs
  2. Ready-to-use systems shipping at scale
  3. Ecosystem momentum across toolchains and partners
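
The first metric is commonly reported as model FLOPs utilization (MFU): achieved training FLOP/s divided by the cluster's theoretical peak. A minimal sketch of the calculation, using hypothetical throughput and peak figures rather than Ascend specifications:

```python
def mfu(model_params: float, tokens_per_s: float,
        n_devices: int, peak_flops_per_device: float) -> float:
    """Model FLOPs utilization: achieved FLOP/s over cluster peak FLOP/s,
    using the common ~6 FLOPs per parameter per token approximation."""
    achieved = 6.0 * model_params * tokens_per_s
    return achieved / (n_devices * peak_flops_per_device)

# Hypothetical run: 70B params, 1M tokens/s on 4,096 chips at 300 TFLOP/s peak.
print(f"MFU: {mfu(70e9, 1e6, 4096, 300e12):.1%}")
```

Published MFU from real SuperPoD deployments, alongside shipment volumes and toolchain adoption, would tell the competitive story far better than peak-FLOPs marketing.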
Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.
FindArticles © 2025. All Rights Reserved.