NVIDIA Pivots to Optical Interconnects as Copper Hits Physical Limits, Plans 1,000+ GPU Systems by 2028
Key Takeaways
- NVIDIA plans to deploy systems with more than 1,000 GPUs by 2028 using photonic interconnects, up from the current 72-GPU NVL72 rack
- Copper's 1.8 TB/s bandwidth ceiling and effective cable runs of only a few feet necessitate a transition to optical technology for larger AI systems
- Breakthroughs in co-packaged optics (CPO) have solved the power consumption problem that made pluggable optics impractical, enabling efficient scale-up
Summary
NVIDIA is accelerating its shift toward optical interconnects and co-packaged optics (CPO) technology to overcome the bandwidth and physical limitations of copper cabling in its GPU clusters. At GTC, CEO Jensen Huang announced plans to pack more than 1,000 GPUs into single systems by 2028, marking a significant evolution from the current Grace Blackwell NVL72 rack that houses 72 GPUs. The GPU giant has invested billions in optical and interconnect specialists including Marvell, Coherent, and Lumentum to secure supply chains and accelerate the deployment of these next-generation systems.
Copper interconnects, while cost-effective and reliable, are approaching their limits: bandwidth tops out at 1.8 TB/s, and cable runs are constrained to just a few feet before signal degradation sets in. Pluggable optics initially presented a viable alternative but would have added roughly 20,000 watts of power per rack. Recent breakthroughs in co-packaged optics, which embed optical engines directly into switch ASICs, have dramatically reduced that power overhead and enabled NVIDIA to integrate CPO directly into its Spectrum Ethernet and Quantum InfiniBand switches in 2025, making large-scale optical deployments economically feasible.
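As a rough illustration of the power argument, the article's 20,000-watt figure for pluggable optics can be turned into a back-of-envelope estimate. The CPO power-advantage factor below is an assumption for illustration only, not an NVIDIA-published specification:

```python
# Back-of-envelope: extra optics power per rack.
# The 20,000 W pluggable-optics figure comes from the article;
# the ~3.5x CPO power advantage is an assumed, illustrative factor.

PLUGGABLE_EXTRA_WATTS_PER_RACK = 20_000   # per the article
CPO_POWER_ADVANTAGE = 3.5                 # assumed improvement factor

cpo_extra_watts = PLUGGABLE_EXTRA_WATTS_PER_RACK / CPO_POWER_ADVANTAGE
savings_per_rack = PLUGGABLE_EXTRA_WATTS_PER_RACK - cpo_extra_watts

print(f"CPO extra power per rack: ~{cpo_extra_watts:,.0f} W")
print(f"Savings vs pluggables:    ~{savings_per_rack:,.0f} W per rack")
```

Multiplied across the thousands of racks in a large AI datacenter, savings on this order explain why pluggable optics were judged impractical at scale while CPO was not.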
Editorial Opinion
NVIDIA's pivot to optical interconnects represents an inevitable but strategically crucial evolution in AI infrastructure. As the company scales from 72 to potentially 1,000+ GPUs per system, the physical laws of copper transmission make optics not just preferable but essential. The timing is critical—by combining CPO breakthroughs with aggressive capital deployment into specialized suppliers, NVIDIA is positioning itself to own the next generation of AI compute architecture before competitors fully realize the necessity. This move will likely reshape the entire datacenter networking landscape.