Nvidia SaturnV supercomputer takes HPC efficiency to new heights

Fast cars carry a mystique that fuel-efficient ones do not. The latter, though, are far more practical in just about every case. The same might go for high-performance computing, where raw speed usually makes the news but efficiency means much lower operating costs. That was exactly Nvidia's goal with its DGX SaturnV, which the company announced this week as the world's most efficient supercomputer.

Nvidia says the SaturnV is 42% more efficient than last year's most efficient machine. Where last year's winner managed 6.67 GFLOPS/W, the SaturnV pulled 9.46 GFLOPS/W. Nvidia also compares the SaturnV against the current list: it roughly matches the Camphor 2 supercomputer in performance while beating it by 2.3 times in energy efficiency.
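The claimed 42% improvement falls straight out of the two efficiency figures quoted above; a quick back-of-the-envelope check:

```python
# Sanity-check the efficiency improvement quoted in the article.
prev_best = 6.67   # GFLOPS/W, last year's most efficient system
saturnv = 9.46     # GFLOPS/W, DGX SaturnV

improvement = (saturnv - prev_best) / prev_best
print(f"{improvement:.0%}")  # about 42%
```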

The SaturnV is made up of 125 DGX-1 deep learning systems, each of which has eight Tesla P100 cards inside. That's 1,000 cards, each of which can perform FP16 calculations at 21.2 TFLOPS. For comparison, a GTX 1080 performs FP16 calculations at only 138 GFLOPS. Nvidia is banking heavily on the power of machine learning, which these DGX-1 systems are designed for. Nvidia offers up examples like modeling new combustion engines and fusion reactors as potential uses for the SaturnV. The DGX-1 itself is already in the field, with groups like OpenAI, Stanford, New York University, and BenevolentAI using them for research. Nvidia itself uses the DGX-1 for designing the autonomous driving software included in the Drive PX 2.
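Taking the article's per-card figure at face value, the aggregate peak FP16 throughput works out as follows (a rough sketch; real sustained performance will be lower):

```python
# Rough aggregate peak FP16 throughput for the SaturnV,
# using only the per-card figure quoted in the article.
systems = 125            # DGX-1 systems in the cluster
gpus_per_system = 8      # Tesla P100 cards per DGX-1
fp16_tflops_per_gpu = 21.2

total_gpus = systems * gpus_per_system
total_pflops = total_gpus * fp16_tflops_per_gpu / 1000
print(total_gpus, total_pflops)  # 1000 GPUs, 21.2 peak FP16 PFLOPS
```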

The DGX SaturnV might not be the fastest supercomputer this year—it ranks a not-too-shabby 28th on the TOP500 list—but its incredible efficiency will make the system far more practical for many of the applications that companies and universities will be using it for in the coming years.
