Cisco Systems (CSCO) unveiled a new networking chip aimed at speeding data through large data centers, a product that could compete against offerings from Broadcom (AVGO) and Nvidia (NVDA).
Cisco said Silicon One G300 — a 102.4 terabit-per-second, or Tbps, switching chip — can power gigawatt-scale AI clusters for training, inference, and real-time agentic workloads, while maximizing graphics processing unit, or GPU, utilization with a 28% improvement in job completion time.
The Cisco Silicon One G300 will power new Cisco N9000 and Cisco 8000 systems, which are designed for hyperscalers, neoclouds, sovereign clouds, service providers, and enterprises.
The Silicon One G300, G300-powered systems, and optics will ship this year, according to the company.
The company noted that the new systems are available in a fully liquid-cooled design that, along with new optics, enables customers to improve energy efficiency by nearly 70%.
In addition, the company enhanced its data center networking architecture called Nexus One to make it easier for enterprises to operate their AI networks on-premises or in the cloud.
“As AI training and inference continues to scale, data movement is the key to efficient AI compute; the network becomes part of the compute itself. It’s not just about faster GPUs – the network must deliver scalable bandwidth and reliable, congestion-free data movement,” said Martin Lund, executive vice president of Cisco’s Common Hardware Group. “Cisco Silicon One G300, powering our new Cisco N9000 and Cisco 8000 systems, delivers high-performance, programmable, and deterministic networking – enabling every customer to fully utilize their compute and scale AI securely and reliably in production.”
Last month, Nvidia unveiled its next-generation AI computing platform, Vera Rubin, which features several key networking and infrastructure components. In June 2025, Broadcom started shipping its Tomahawk 6 networking switch series.
Separately, Cisco (CSCO) announced a suite of capabilities to help enterprises securely adopt AI technology while maintaining agent integrity and control of agentic interactions.
These features include AI Bill of Materials, or BOM, which provides centralized visibility and governance for AI software assets, including model context protocol, or MCP, servers and third-party dependencies, to secure the AI supply chain.
The features also include MCP Catalog, which discovers, inventories, and helps manage risk across MCP servers and registries spanning public and private platforms, strengthening AI governance; advanced algorithmic red teaming, which expands the scope of AI security assessments; and real-time agentic guardrails to keep agents and applications safe.