LEVIATHAN SYSTEMS

NVIDIA GB200 NVL72 GPU Infrastructure & Deployment_

72 Blackwell GPUs per rack at 120 kW with mandatory liquid cooling. The GB200 NVL72 represents the first generation of NVIDIA's Blackwell architecture at rack scale.

What Is the NVIDIA GB200?_

The NVIDIA GB200 NVL72 is the first Blackwell-architecture GPU system designed for rack-scale AI training. Each rack contains 72 Blackwell GPUs (192 GB HBM3e per GPU) paired with 36 Grace CPUs, connected via NVLink 5.0, which provides roughly 1.8 TB/s of bidirectional bandwidth per GPU within the rack.

At approximately 120 kW per rack with mandatory 100% direct liquid cooling, the GB200 NVL72 requires purpose-built data center infrastructure. Traditional air-cooled facilities cannot support this platform without significant upgrades to power distribution and cooling systems.

Technical Specifications_

Specification       GB200 NVL72
--------------------------------------------------------
Architecture        Blackwell
GPU Memory          192 GB HBM3e per GPU
GPUs per Rack       72 (NVL72 configuration)
TDP per Rack        ~120 kW
Rack Weight         ~1,360 kg (~3,000 lbs)
Interconnect        NVLink 5.0, InfiniBand NDR
Networking          400GbE / 800GbE
Cooling             100% Direct Liquid Cooling (mandatory)
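
For quick capacity planning, these per-rack figures roll up into a few useful aggregates. The sketch below is a minimal arithmetic roll-up of the specifications above, not an official NVIDIA datasheet calculation.

```python
# Aggregate rack-level figures derived from the GB200 NVL72 spec table above.
# These are simple arithmetic roll-ups, not official NVIDIA aggregate ratings.

GPUS_PER_RACK = 72
HBM_PER_GPU_GB = 192      # HBM3e per GPU
RACK_TDP_KW = 120         # approximate rack TDP

total_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000   # ~13.8 TB of HBM3e per rack
avg_power_per_gpu_kw = RACK_TDP_KW / GPUS_PER_RACK     # ~1.67 kW per GPU slot, including
                                                       # its share of CPUs, switches, pumps

print(f"Total HBM3e per rack: {total_hbm_tb:.1f} TB")
print(f"Average power per GPU slot: {avg_power_per_gpu_kw:.2f} kW")
```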

Deployment Considerations_

Power Distribution

At 120 kW per rack, each rack requires dedicated high-capacity feeds; standard 208 V circuits are insufficient. Plan for 480 V three-phase distribution with bus plugs or overhead busway.
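
As a rough sizing aid, the standard three-phase current formula I = P / (√3 × V × PF) gives the feeder current per rack. The sketch below assumes a 0.95 power factor and an 80% continuous-load derating; both are illustrative assumptions to be verified against the actual PSU specifications and local electrical code.

```python
import math

# Rough 3-phase feeder sizing for one ~120 kW rack at 480 V line-to-line.
# Power factor and derating are illustrative assumptions, not platform ratings.

RACK_LOAD_W = 120_000
LINE_VOLTAGE_V = 480          # line-to-line, 3-phase
POWER_FACTOR = 0.95           # assumed
CONTINUOUS_DERATE = 0.80      # assumed 80% continuous-load rule

line_current_a = RACK_LOAD_W / (math.sqrt(3) * LINE_VOLTAGE_V * POWER_FACTOR)
min_feeder_rating_a = line_current_a / CONTINUOUS_DERATE

print(f"Line current per rack: {line_current_a:.0f} A")        # ~152 A
print(f"Minimum feeder rating: {min_feeder_rating_a:.0f} A")   # ~190 A
```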

Structured Cabling

The platform uses 400GbE/800GbE networking alongside an InfiniBand NDR fabric. High-fiber-count MPO/MTP trunks are required to support the dense interconnect topology.
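
Trunk counts fall out of a simple fiber budget. In the sketch below, every input (per-rack port count, fibers per port, trunk strand count, spare fraction) is a hypothetical placeholder; the real numbers come from the cluster's InfiniBand/Ethernet design and the optics actually selected.

```python
import math

# Back-of-envelope MPO/MTP trunk count for one rack's structured cabling.
# All inputs are hypothetical placeholders -- substitute figures from the
# actual network design before ordering cable.

ports_per_rack = 54       # assumed optical ports leaving the rack
fibers_per_port = 8       # assumed strands per port (e.g. 4 Tx + 4 Rx)
fibers_per_trunk = 144    # assumed strand count of one MPO/MTP trunk cable
spare_fraction = 0.25     # spare capacity for growth and failed strands

fibers_needed = ports_per_rack * fibers_per_port * (1 + spare_fraction)
trunks_needed = math.ceil(fibers_needed / fibers_per_trunk)

print(f"Fiber strands needed (with spares): {fibers_needed:.0f}")
print(f"144-fiber trunks per rack: {trunks_needed}")
```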

Liquid Cooling

100% direct liquid cooling is mandatory; no air-cooled option exists. The facility must provide coolant distribution units (CDUs), chilled-water loops, and rack-level manifolds.
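
Loop and CDU sizing start from the heat-balance relation Q = ṁ × c_p × ΔT. The sketch below assumes a 10 °C supply/return temperature rise and water-like coolant properties; the actual values depend on the CDU and loop design.

```python
# Rough coolant flow-rate estimate for one ~120 kW rack, from Q = m_dot * c_p * dT.
# The 10 C delta-T and water-like coolant properties are assumptions, not
# figures from the platform's cooling specification.

HEAT_LOAD_KW = 120
DELTA_T_C = 10                 # assumed supply/return temperature rise
CP_KJ_PER_KG_K = 4.18          # specific heat of water (assumed coolant)
DENSITY_KG_PER_L = 1.0         # approximate density of water

mass_flow_kg_s = HEAT_LOAD_KW / (CP_KJ_PER_KG_K * DELTA_T_C)   # ~2.9 kg/s
volume_flow_l_min = mass_flow_kg_s / DENSITY_KG_PER_L * 60     # ~172 L/min

print(f"Mass flow: {mass_flow_kg_s:.2f} kg/s")
print(f"Volumetric flow: {volume_flow_l_min:.0f} L/min (~{volume_flow_l_min / 3.785:.0f} GPM)")
```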

Ready to Deploy Your GPU Infrastructure?_

Tell us about your project. We’ll respond within 48 hours with a scope assessment and timeline estimate.