We still don't know for sure whether it will make its way into a graphics card, but the biggest variant of Nvidia's Kepler GPU architecture doesn't lack for customers. The GK110 chip, which we previewed earlier this year, is making its debut today as the primary flops contributor to Titan, a massive new supercomputer at Oak Ridge National Labs.
Titan consists of 200 cabinet-sized Cray XK7 supercomputers. All told, it includes 18,688 nodes, with each node comprising a 16-core AMD Opteron processor and an Nvidia Tesla K20 GPU. Nvidia tells us Titan can achieve over 20 petaflops of peak performance, and over 90% of those flops come from its GK110 GPUs.
If those big numbers aren't impressive enough for you, consider this. Titan is an upgrade of Oak Ridge's prior supercomputer, Jaguar, which consumed seven megawatts of power in order to achieve two petaflops of throughput. Titan consumes a little more power, at nine megawatts, in the same physical space, yet it peaks at ten times the flops. That's a huge claimed increase in power efficiency from one generation to the next.
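Those claims are easy to sanity-check. Here's a quick back-of-the-envelope calculation using only the figures quoted above; the per-node and flops-per-watt numbers it derives are our own arithmetic, not official specs:

```python
# Sanity-checking Titan's claimed numbers against Jaguar's.
# Input figures come from Nvidia and Oak Ridge as reported above;
# everything derived from them is rough arithmetic, not a spec.

NODES = 18_688
TITAN_PEAK_PFLOPS = 20.0   # peak, per Nvidia
TITAN_POWER_MW = 9.0
GPU_SHARE = 0.90           # "over 90%" of flops from the GK110 GPUs

JAGUAR_PFLOPS = 2.0
JAGUAR_POWER_MW = 7.0

# Rough per-node throughput, and the GPU's share of it
per_node_tflops = TITAN_PEAK_PFLOPS * 1000 / NODES
gpu_tflops = per_node_tflops * GPU_SHARE

# Flops-per-watt improvement over Jaguar
titan_eff = TITAN_PEAK_PFLOPS / TITAN_POWER_MW    # PF per MW
jaguar_eff = JAGUAR_PFLOPS / JAGUAR_POWER_MW
improvement = titan_eff / jaguar_eff

print(f"per node: {per_node_tflops:.2f} TFLOPS, ~{gpu_tflops:.2f} from the GPU")
print(f"efficiency gain: {improvement:.1f}x")
```

That works out to roughly 1.07 teraflops per node, with about 0.96 teraflops coming from the Tesla K20, and a roughly 7.8x improvement in flops per watt over Jaguar.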
Titan's most basic component is a blade for the Cray XK7. Each blade houses four nodes, or four Opteron-plus-Tesla instances, aboard a low-profile, snap-in module. In order to fit into this form factor, the Tesla K20 is mounted on a custom card that's shorter than a traditional graphics card—and nearly square. Each Tesla card includes 6GB of dedicated GPU memory. You can see four of those cards to the right in the picture above, with the copper heatsinks atop the GK110 GPUs. Across the middle of the blade are the heatsinks for the 16-core Opterons, and on the left are the two Cray Gemini interconnect units that attach the blade to the rest of the system.
Incidentally, the Opteron processors used in the system are dual-chip CPUs based on the Bulldozer microarchitecture. We asked Sumit Gupta, General Manager for Tesla Accelerated Computing at Nvidia, why those CPUs were chosen for this project, given the Xeon's current dominance in the HPC space. Gupta offered an interesting insight into the decision. He told us the contracts for Titan were signed between two and three years ago, and "back then, Bulldozer looked pretty darn good."
Titan is what's known as an "open science" supercomputer, because researchers from across the nation are free to request time on the system for their projects. Accordingly, Nvidia and Oak Ridge have been working to accelerate some key applications for scientific computing, using both the CUDA and OpenACC APIs. Materials science application WL-LSMS is already purportedly running at a sustained rate of over 10 petaflops. Not surprisingly, Oak Ridge is swimming in proposals from scientists hoping to use Titan, to the tune of triple the time available.
Gupta declined to speculate on where Titan might land on supercomputing's Top 500 list, but he noted that the next revision of the list will be released on November 12, so we'll have to keep an eye on it.
GK110 GPUs may be shipping to other customers under embargo, but the Tesla K20 hasn't reached general availability yet. As we learned back in May, supercomputing and HPC clusters could potentially soak up the entire supply of GK110 chips—likely at some very nice prices, we should note—leaving few or none left over for GeForce cards. Nvidia may keep a limited quantity of GK110s in reserve, though, just in case the upcoming Radeon refresh steals away the consumer graphics performance crown.