NVIDIA DGX Station Now Available: Grace Blackwell Workstations Enter Market with Salvaged Chip Configuration
Key Takeaways
- DGX Station workstations featuring Grace Blackwell processors are now shipping, with partner orders beginning at GTC 2026
- Shipping systems use salvaged B300 chips with 252GB of HBM3e memory, 12% less than originally planned because only seven of eight stacks are enabled
- System design maximizes compute density within a 1.6kW power envelope, approaching server-class performance in a desktop-sized form factor
Summary
NVIDIA's DGX Station workstations are finally available to order, marking the market arrival of the desktop-class Grace Blackwell system the company announced at GTC 2025. Each machine packs NVIDIA's Blackwell Ultra GPU and a 72-core Grace CPU soldered directly to the motherboard, paired with 252GB of HBM3e memory and 496GB of LPDDR5X RAM. The shipping specifications, however, represent a notable downgrade from the original 2025 plans: NVIDIA has shifted to salvaged B300 chips with only seven of eight HBM3e stacks enabled, yielding 12% less memory capacity and bandwidth than full GB300 server parts.
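The "12% less" figure follows directly from disabling one of the eight HBM3e stacks. A quick sketch of the arithmetic, assuming 36GB stacks (eight such stacks make up the full GB300 part's 288GB):

```python
# Capacity math for the salvaged B300 configuration.
# Assumption: 8 stacks x 36 GB = 288 GB on the full GB300 server part.
FULL_STACKS = 8
PER_STACK_GB = 36

full_capacity = FULL_STACKS * PER_STACK_GB              # 288 GB
salvaged_capacity = (FULL_STACKS - 1) * PER_STACK_GB    # 252 GB, matching the shipping spec

# Bandwidth scales with active stacks the same way, hence the matching cut.
reduction = 1 - salvaged_capacity / full_capacity       # 0.125, i.e. ~12%
print(f"{salvaged_capacity} GB, {reduction:.1%} reduction")
```

Note that 1/8 is strictly a 12.5% cut; the "12%" in NVIDIA's messaging is simply that figure rounded down.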
The DGX Station is designed to bridge the gap between NVIDIA's smaller DGX Spark and traditional x86 workstations, positioning itself as close to a Grace Blackwell server as a single-workstation form factor allows. Each system includes high-speed networking via dual 400Gbps ConnectX-8 Ethernet ports, along with comprehensive I/O, including PCIe Gen5 slots and M.2 SSD support. Power consumption is capped at 1.6kW—the maximum draw for a standard North American 120V outlet—demonstrating NVIDIA's aggressive thermal engineering. For users requiring graphics output, NVIDIA offers a limited set of add-in options, including RTX Pro 2000, 4000, and 6000 cards. All systems run NVIDIA's proprietary DGX OS.
NVIDIA maintains strict specifications for its partners and supplies the proprietary DGX OS software stack alongside the hardware.
Editorial Opinion
The DGX Station represents NVIDIA's pragmatic approach to bringing server-grade AI compute to workstation users, though the shift to salvaged chip configurations raises questions about yield challenges in Blackwell production. While the memory reduction is technically notable, the system's positioning at the intersection of accessibility and performance could prove valuable for AI researchers and developers who need more than a traditional x86 workstation but lack data center infrastructure. The aggressive 1.6kW power budget demonstrates impressive engineering, though it also signals potential thermal or clock-speed tradeoffs compared to rack-mounted variants.