
Cisco Debuts 51.2 Tbps Router to Tackle AI Data Center Interconnect Bottleneck

DATE: 10/9/2025 · STATUS: LIVE

Cisco’s 8223 offers 51.2 Tbps via the Silicon One P200, aiming to link AI workloads across data centers that are running out of space, power and cooling.


Cisco announced its 8223 routing system on October 8, positioning the product as a fixed router capable of 51.2 terabits per second and designed to link AI workloads across multiple data centers. At the heart of the platform is the new Silicon One P200 chip, which Cisco says answers a mounting constraint in the AI world: how to keep increasing compute capacity when data centers run out of space, power and cooling headroom.

The move follows similar pushes from other silicon makers. Broadcom introduced its “Jericho 4” StrataDNX switch/router chips in mid‑August; the parts began sampling then and offer 51.2 Tbps of aggregate bandwidth, backed by HBM memory for deep packet buffering to tame congestion. Two weeks later, Nvidia revealed its Spectrum‑XGS scale‑across network, a name that plays off Broadcom’s StrataXGS family, which includes the Trident and Tomahawk switch ASICs. Nvidia signed CoreWeave as an anchor customer but provided limited technical detail on the Spectrum‑XGS ASICs. Cisco’s 8223 and P200 now put the company squarely in a three‑way contest for this market.

Large AI training jobs and other intensive ML workloads demand thousands of high‑performance processors working together, producing huge power draw and heat. Data centers are encountering hard limits on footprint, power delivery and cooling capacity, and those constraints are motivating new approaches to infrastructure design.

“AI compute is outgrowing the capacity of even the largest data center, driving the need for reliable, secure connection of data centers hundreds of miles apart,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group.

Until recently the industry leaned on two basic strategies: scale up by adding more capability inside individual systems, or scale out by putting more systems into the same facility. Both approaches are reaching practical limits. Physical racks are finite, power grids provide a cap on electricity, and cooling systems struggle to shed the heat generated by dense GPU clusters.

That reality is creating demand for a third model: scale‑across, in which AI workloads are distributed across multiple data centers that may sit in different metropolitan areas or even different states. The tradeoff is that interfacility links become a major chokepoint. AI traffic patterns differ from typical data center flows: training runs produce massive, bursty transfers followed by quieter intervals. If the network between facilities cannot absorb those spikes, GPU fleets sit idle waiting for data, which wastes costly compute cycles and drives up project timelines and budgets.

Most traditional routers were not built to handle that mix of extreme throughput, intelligent buffering and power efficiency at the same time. Many products favor raw port speed or advanced traffic shaping, yet struggle to deliver both with acceptable power draw. For AI interconnect use cases, operators want all three traits in a single platform.

Cisco’s 8223 targets that need. The product comes in a compact three‑rack‑unit chassis and offers 64 ports of 800‑gigabit connectivity, a density Cisco says is the highest available in a fixed routing system today. The vendor claims packet processing in excess of 20 billion packets per second and interconnect throughput that can scale toward three exabytes per second.
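The headline bandwidth figure follows directly from the port math. A quick back‑of‑envelope check (illustrative arithmetic only, not vendor data):

```python
# 64 ports x 800 Gbps per port = aggregate system bandwidth in Tbps.
ports = 64
gbps_per_port = 800

total_tbps = ports * gbps_per_port / 1_000  # Gbps -> Tbps
print(total_tbps)  # 51.2
```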

Its standout feature is deep buffering, made possible by the P200 silicon. Buffers act as temporary holding areas for data — like a reservoir that catches runoff during heavy storms. When training jobs produce sudden traffic surges, the 8223’s large buffers can absorb those spikes and smooth flow so GPU clusters do not idle waiting on data transfers.
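The principle can be shown with a toy queue model (an illustrative sketch, not Cisco's implementation): a link drains packets at a fixed rate, and the buffer's depth decides whether a burst is absorbed or dropped.

```python
# Toy model: a link drains its queue at a fixed rate each time step;
# arrivals that overflow the buffer are dropped.
def simulate(buffer_capacity, arrivals, drain_rate):
    """Return total packets dropped for a given buffer depth."""
    queued = 0
    dropped = 0
    for burst in arrivals:
        queued += burst
        if queued > buffer_capacity:
            dropped += queued - buffer_capacity  # overflow is lost
            queued = buffer_capacity
        queued = max(0, queued - drain_rate)     # line rate drains the queue
    return dropped

# Bursty AI-style traffic: heavy spikes followed by quiet intervals.
traffic = [100, 0, 0, 120, 0, 0, 0, 90, 0, 0]

print(simulate(buffer_capacity=40,  arrivals=traffic, drain_rate=30))  # 190 dropped
print(simulate(buffer_capacity=200, arrivals=traffic, drain_rate=30))  # 0 dropped
```

The average load here is well under the drain rate; only the burstiness causes loss, which is exactly the case deep buffering addresses.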

Power efficiency matters in this context. As a 3RU device, the 8223 delivers what Cisco describes as “switch‑like power efficiency” and maintains routing functionality, a critical trait when data centers are already under stress from power budgets. The system supports 800G coherent optics, allowing connections that span up to 1,000 kilometers between sites, which is useful for placing compute where space and power are available while keeping clusters linked.
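Distance still carries a latency cost that operators must budget for. Assuming light travels through optical fiber at roughly 200,000 km/s (about two thirds of c; an approximation, not a vendor figure), a 1,000 km link adds about 5 ms of one‑way propagation delay:

```python
# Rough propagation-delay estimate for a long-haul fiber link.
distance_km = 1_000
fiber_speed_km_per_s = 200_000  # ~2/3 the speed of light in vacuum

one_way_ms = distance_km / fiber_speed_km_per_s * 1_000
round_trip_ms = 2 * one_way_ms
print(one_way_ms, round_trip_ms)  # 5.0 10.0
```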

Hyperscalers and large cloud providers are already working with the Silicon One family. Microsoft, an early adopter, says the approach has proven useful across a range of applications.

Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft, said the common ASIC architecture “has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments.”

Alibaba Cloud plans to build the P200 into its eCore architecture. Dennis Cai, vice president and head of network infrastructure at Alibaba Cloud, said the chip “will allow us to expand into the Core network, replacing traditional chassis‑based routers with a cluster of P200‑powered devices.”

Service provider Lumen is also evaluating the platform. Dave Ward, chief technology officer and product officer at Lumen, said the company is “exploring how the new Cisco 8223 technology may fit into our plans to improve network performance and roll out superior services to our customers.”

Adaptability is another factor that matters to operators. Networking requirements for AI are changing rapidly as new protocols and standards emerge. Conventional hardware often requires replacement or costly upgrades to pick up new features. The P200’s programmability gives operators a route to support evolving protocols by updating silicon behavior in place, which matters when individual routing systems represent large capital investments and standards are still shifting.

Long‑distance interconnects also raise security questions. Cisco built line‑rate encryption into the 8223 that uses post‑quantum‑resilient algorithms, aiming to address potential future threats from quantum computers. The system ties into Cisco’s observability tools for detailed monitoring, which helps engineers spot and fix issues fast.

Broadcom and Nvidia are already pressing their own solutions in the scale‑across networking market, so Cisco faces established rivals. Yet the company brings long experience in enterprise and service provider networking, a Silicon One portfolio that dates back to 2019, and existing relationships with hyperscalers that have adopted its silicon.

The 8223 ships initially with open‑source SONiC support, with IOS XR slated for later release. Cisco says the P200 will be offered across multiple platform types, including modular systems and the Nexus family, giving operators deployment choices that may reduce the risk of vendor lock‑in as they assemble distributed AI infrastructure.

Whether Cisco’s route becomes the standard for AI data center interconnect will depend on more than raw specs: the winning approach will likely be the one that pairs capable silicon with the most complete software, support and integration ecosystem around it.
