NVIDIA DGX Station Now Available: High-Performance Grace Blackwell Workstations Enter Market
Key Takeaways
- DGX Station workstations with Grace Blackwell architecture are now shipping and available for order from NVIDIA partners
- Final specifications include 252GB HBM3e (reduced from the originally planned 288GB) and a 72-core Grace CPU, with significant networking capability via dual 400Gbps ports
- The system is power-constrained to 1.6kW to fit within standard 120V outlet limits while delivering server-grade AI compute performance
Summary
NVIDIA has officially launched the DGX Station, a workstation-sized system built around the Grace Blackwell architecture that was first announced at GTC 2025. The system is now available for order through NVIDIA partners as of GTC 2026, bringing server-grade AI compute capabilities to individual workstations. The DGX Station features a 72-core Grace CPU paired with a Blackwell Ultra GPU, 252GB of HBM3e memory, and 496GB of LPDDR5X system memory, positioned as a significant step up from both the smaller DGX Spark and traditional PCIe-based x86 workstations.
The system includes enterprise-grade connectivity with dual 400Gbps ConnectX-8 networking ports, four PCIe Gen5 M.2 SSD slots, and three PCIe Gen5 expansion slots. However, the shipping specifications represent a change from NVIDIA's original 2025 announcement—the HBM3e memory was reduced from the initially planned 288GB to 252GB, suggesting NVIDIA is using salvaged B300 chips with one HBM3e stack disabled. The complete system is designed to operate within a 1.6kW power envelope, the maximum supported by standard North American 120V outlets, demonstrating NVIDIA's effort to pack maximum compute into a desktop-form-factor device.
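The stack arithmetic is consistent with the salvage theory. A quick sanity check, assuming a B300-class part carries eight HBM3e stacks of 36GB each (the stack count and per-stack capacity are inferred from the 288GB figure, not confirmed by NVIDIA):

```python
# Hypothetical B300 HBM3e configuration inferred from the announced 288 GB total.
FULL_STACKS = 8
STACK_GB = 288 // FULL_STACKS  # 36 GB per stack under this assumption

full_capacity = FULL_STACKS * STACK_GB            # 288 GB, as announced at GTC 2025
salvaged_capacity = (FULL_STACKS - 1) * STACK_GB  # one stack disabled

reduction_pct = 100 * (full_capacity - salvaged_capacity) / full_capacity

print(salvaged_capacity)  # 252 -- matches the shipping DGX Station spec
print(reduction_pct)      # 12.5 -- percent of capacity lost with one stack off
```

Disabling exactly one of eight stacks lands precisely on the shipping 252GB figure, which is why the salvaged-die explanation is plausible even without official confirmation.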
NVIDIA maintains strict specification requirements for OEM partners building DGX Station systems, ensuring consistency across vendors.
Editorial Opinion
The DGX Station represents NVIDIA's pragmatic approach to extending Blackwell performance down to the workstation segment, though the late memory reduction from 288GB to 252GB raises questions about yields or supply chain pressures. While the 1.6kW thermal envelope is impressive engineering, the 12.5% memory capacity reduction versus full B300 servers, and the likely corresponding loss of bandwidth if one HBM3e stack is disabled, may limit performance on certain workloads. Nevertheless, the rigorous specification control and enterprise-grade networking make this a compelling option for AI researchers and professionals who need more than a discrete GPU setup but less than a full server deployment.
