Imagine linking thousands of AI chips across dozens of server cabinets so they behave like one enormous computer. At HUAWEI CONNECT 2025, Huawei presented an infrastructure design that stitches separate processors into a single logical machine, a move that could alter how organizations assemble and expand large-scale artificial intelligence systems.
The company calls the concept SuperPoD. Rather than leaving servers to operate as isolated units, SuperPoD groups thousands of processing elements and coordinates them as if they were a single server, giving the combined cluster the ability to act and adapt in a unified manner.
At the center of the design is UnifiedBus (UB). Yang Chaobin, Huawei’s Director of the Board and CEO of the ICT Business Group, said Huawei built the SuperPoD architecture on the UnifiedBus interconnect protocol and that the approach tightly links physical servers so they can “learn, think, and reason like a single logical server.”
Network engineers often face two recurring limits when scaling AI clusters: the tradeoff between bandwidth and distance, and link reliability as systems grow. Traditional copper cabling delivers very high bandwidth but only over short distances, typically spanning a couple of cabinets. Optical fiber can carry signals much farther, yet its reliability tends to degrade as distance and system size increase.
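A back-of-envelope calculation makes the distance side of that tradeoff concrete. The sketch below is purely illustrative: the copper reach, fiber run length, and signal speed are generic assumptions, not Huawei figures.

```python
# Back-of-envelope: why cable reach matters at rack scale.
# All figures here are illustrative assumptions, not Huawei specifications.

C_FIBER_M_PER_S = 2.0e8   # signal speed in optical fiber (~2/3 the speed of light)
COPPER_REACH_M = 3.0      # typical passive copper cable reach (assumed)
FIBER_RUN_M = 100.0       # a plausible cross-row optical run (assumed)

def propagation_delay_ns(distance_m: float) -> float:
    """One-way signal propagation delay over fiber, in nanoseconds."""
    return distance_m / C_FIBER_M_PER_S * 1e9

print(f"Copper reach (~{COPPER_REACH_M:.0f} m) spans only adjacent cabinets.")
print(f"A {FIBER_RUN_M:.0f} m fiber run adds ~{propagation_delay_ns(FIBER_RUN_M):.0f} ns one way.")
```

Even a 100-meter fiber run costs around 500 nanoseconds of one-way propagation delay, which is why any architecture spanning dozens of cabinets must treat latency as a first-order design constraint.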
Eric Xu, Huawei’s Deputy Chairman and Rotating Chairman, framed the solution around the OSI model, describing a protocol stack that layers fault protection throughout the link. He said the company has built reliability into every layer of the interconnect, from the physical layer up through the network and transport layers, adding that 100-nanosecond-level fault detection and protection switching on optical paths makes intermittent optical-module problems effectively invisible to applications.
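A rough calculation suggests why a 100-nanosecond switching window is small enough to hide from applications. The per-link rate below is an assumed figure chosen for illustration, not a published Huawei specification.

```python
# How much traffic is exposed during a 100 ns detect-and-switch window?
# The 400 Gb/s per-link rate is an assumption for illustration only.

LINK_RATE_BPS = 400e9        # assumed per-link optical rate (bits/s)
SWITCH_WINDOW_S = 100e-9     # 100 ns fault detection + protection switching

bits_in_flight = LINK_RATE_BPS * SWITCH_WINDOW_S
print(f"Traffic exposed per link: {bits_in_flight / 8:.0f} bytes")
# ~5,000 bytes at the assumed rate: small enough that link-level recovery
# can absorb the event without applications ever observing a fault.
```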
The Atlas 950 SuperPoD is the headline product that demonstrates this architecture. Huawei said one implementation can aggregate up to 8,192 Ascend 950DT chips. Xu gave performance figures for the system: “8 EFLOPS in FP8 and 16 EFLOPS in FP4. Its interconnect bandwidth will be 16 PB/s.” He added that the interconnect capacity of a single Atlas 950 SuperPoD would exceed the entire globe’s peak internet bandwidth by more than a factor of ten.
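Those quoted figures permit some derived arithmetic. The divisions in the sketch below use only the numbers Xu cited; the per-chip estimate assumes the total is spread evenly across all 8,192 chips, and the global-bandwidth line simply states the ceiling the ten-times claim implies.

```python
# Sanity-checking the headline Atlas 950 figures (inputs from Xu's quotes).

CHIPS = 8192
FP8_EFLOPS = 8.0          # system FP8 throughput, exaFLOPS
INTERCONNECT_PB_S = 16.0  # system interconnect bandwidth, PB/s

# Evenly dividing system throughput implies roughly 1 PFLOPS FP8 per chip.
per_chip_fp8_pflops = FP8_EFLOPS * 1000 / CHIPS
print(f"Implied FP8 per chip: {per_chip_fp8_pflops:.2f} PFLOPS")

# "More than 10x the globe's peak internet bandwidth" implies a global
# peak below 16 / 10 = 1.6 PB/s.
print(f"Implied global peak ceiling: {INTERCONNECT_PB_S / 10:.1f} PB/s")
```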
The physical and latency numbers are striking. An Atlas 950 SuperPoD occupies 160 cabinets across roughly 1,000 square meters: 128 compute cabinets and 32 communications cabinets linked entirely by optical interconnects. Total system memory reaches 1,152 terabytes, and Huawei reports end-to-end system latency of around 2.1 microseconds.
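The published totals also imply some per-cabinet and per-chip ratios. The arithmetic below assumes chips and memory are distributed evenly across the compute cabinets, which Huawei has not confirmed; the latency line is a simple consistency check against the facility's size.

```python
# Derived ratios from the published Atlas 950 figures. The even-spread
# assumption is ours; the inputs are Huawei's announced totals.

CHIPS = 8192
COMPUTE_CABINETS = 128
MEMORY_TB = 1152
LATENCY_S = 2.1e-6
C_FIBER_M_PER_S = 2.0e8   # signal speed in fiber (~2/3 the speed of light)

print(f"Chips per compute cabinet: {CHIPS // COMPUTE_CABINETS}")        # 64
print(f"Memory per chip: {MEMORY_TB * 1000 / CHIPS:.0f} GB (decimal)")  # ~141
# 2.1 microseconds is enough time for a signal to traverse ~420 m of fiber,
# plausible for cabling runs inside a ~1,000 m^2 hall.
print(f"Fiber distance covered in 2.1 us: {LATENCY_S * C_FIBER_M_PER_S:.0f} m")
```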
Huawei plans a larger follow-on, the Atlas 960 SuperPoD, which will scale to 15,488 Ascend 960 chips in 220 cabinets over about 2,200 square meters. Xu said it will offer “30 EFLOPS in FP8 and 60 EFLOPS in FP4, and come with 4,460 TB of memory and 34 PB/s interconnect bandwidth.”
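Placing the two generations side by side shows how much of the projected gain comes from adding chips versus making each chip faster. The ratios below are simple arithmetic on the announced figures.

```python
# Generation-over-generation ratios, computed from the announced figures.

atlas_950 = {"chips": 8192,  "fp8_eflops": 8.0,  "mem_tb": 1152, "bw_pb_s": 16.0}
atlas_960 = {"chips": 15488, "fp8_eflops": 30.0, "mem_tb": 4460, "bw_pb_s": 34.0}

for key in atlas_950:
    print(f"{key}: {atlas_960[key] / atlas_950[key]:.2f}x")
# Chip count grows ~1.9x while FP8 throughput grows ~3.75x, implying
# roughly a 2x per-chip gain from the Ascend 950DT to the Ascend 960.
```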
The SuperPoD concept also extends beyond specialized AI training and inference. The company introduced the TaiShan 950 SuperPoD, a general-purpose variant powered by Kunpeng 950 processors, aimed at replacing legacy mainframes and midrange systems in enterprise settings. Xu positioned it for financial institutions, noting that the TaiShan 950 SuperPoD working with distributed GaussDB can serve as an alternative and “replace — once and for all — mainframes, mid-range computers, and Oracle’s Exadata database servers.”
A major announcement at the event was the decision to publish UnifiedBus 2.0 technical specifications as open standards. Huawei framed the move as a way to bring others into the architecture’s ecosystem. Xu acknowledged limits on semiconductor process-node advances on the Chinese mainland and argued that practical, available process nodes must be the basis for sustainable compute deployments.
Yang described the open stance as a deliberate strategy to widen participation: the company will follow an open-hardware and open-source-software path intended to help partners create industry-specific SuperPoD solutions, accelerate developer innovation, and grow an active partner network.
Huawei said it will open hardware and software components to partners. Hardware slated for sharing includes NPU modules, air-cooled and liquid-cooled blade servers, AI accelerator cards, CPU boards, and cascade cards. On the software side, the company committed to fully open-sourcing CANN compiler tools, Mind series application kits, and the openPangu foundation models by December 31, 2025.
Practical deployments already exist. More than 300 Atlas 900 A3 SuperPoD units were shipped in 2025 and have been installed for upwards of 20 customers across sectors that include internet companies, financial services, carriers, utilities, and manufacturers. Those early systems provide field validation for the architecture and the interconnect protocol under live loads and mixed workloads.
For China’s domestic AI build-out, the strategy addresses a particular constraint: limited access to the most advanced semiconductor process nodes. By designing an architecture that can scale with widely available chips and by opening specifications and components, Huawei is offering a path for local players to contribute to and commercialize large-scale AI infrastructure without relying exclusively on bleeding-edge fabrication.
On the international stage, the company’s open-architecture approach presents an alternative to closed, vertically integrated platforms sold by other major suppliers. Whether an open SuperPoD community can match the performance, deployment simplicity, and commercial strength of established proprietary stacks will be judged as partners, customers, and competitors run larger installations and test operational economics at scale.
The SuperPoD design rethinks how massive compute pools are connected, managed, and expanded. By publishing the specifications and releasing modules and toolchains, Huawei has opened its interconnect model and building blocks to the industry; the market’s response to that availability will shape competitive dynamics across AI infrastructure suppliers and service providers.

