Nvidia's Tesla P100 will slot into PCI Express in Q4 2016


— 9:49 AM on June 20, 2016

Nvidia's Tesla P100 used a special mezzanine connector for its proprietary NVLink interface when it arrived in April. That connector isn't found in typical servers and workstations, leaving HPC customers out of the Pascal party unless they purchase one of Nvidia's DGX-1 systems. Today, Nvidia is making the Tesla P100 accessible to more traditional third-party systems with a PCIe 3.0 version of the card.

The GP100 GPU used in the PCIe version of the Tesla P100 appears to be identical to the one aboard the NVLink P100. Nvidia reduced the boost clock of this card to 1303 MHz, down from 1480 MHz in the NVLink card. That move reduces the card's single-precision performance to "only" 9.3 TFLOPS, versus the 10.6 TFLOPS of the NVLink P100. That dialing-back also trims the card's TDP by 50W, to 250W.
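For the curious, those peak-FP32 figures fall straight out of the clock speeds. Here's a quick back-of-the-envelope sketch, assuming the PCIe card keeps the same 3584 enabled CUDA cores as the NVLink Tesla P100 (Nvidia hasn't broken out the shader count for this card, so treat that as our assumption):

```python
# Rough peak-FP32 math for the Tesla P100 variants. The 3584-core figure is
# the NVLink P100's shader count; we're assuming the PCIe card matches it.
def peak_fp32_tflops(cuda_cores, boost_clock_mhz):
    # Each CUDA core can retire one fused multiply-add (2 FLOPs) per clock.
    return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

print(peak_fp32_tflops(3584, 1303))  # ~9.3 TFLOPS for the PCIe card
print(peak_fp32_tflops(3584, 1480))  # ~10.6 TFLOPS for the NVLink card
```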

16GB of HBM2 memory is still the default configuration on the PCIe P100, although Nvidia will offer a 12GB version of the card, as well. The 12GB card appears to simply drop or disable one stack of HBM, reducing the memory interface width from 4096 bits to 3072 bits. Accordingly, the 12GB card's memory bandwidth falls from 720GB/s to 540GB/s. Either card will still be a compute monster, though.
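The bandwidth numbers work the same way: cut the interface width by a quarter and bandwidth drops by a quarter. Here's a similar sketch; the per-pin data rate below (roughly 1.4 Gbps) is backed out from the quoted 720GB/s figure rather than taken from an Nvidia spec sheet, so treat it as an assumption:

```python
# Back-of-the-envelope HBM2 bandwidth from bus width and per-pin data rate.
# The ~1.4 Gbps rate is inferred from the 720GB/s figure, not an official spec.
def hbm2_bandwidth_gbs(bus_width_bits, data_rate_gbps=1.406):
    return bus_width_bits * data_rate_gbps / 8  # convert bits/s to bytes/s

print(hbm2_bandwidth_gbs(4096))  # ~720 GB/s with four HBM2 stacks (16GB card)
print(hbm2_bandwidth_gbs(3072))  # ~540 GB/s with three stacks (12GB card)
```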

The launch of the GeForce GTX 1080 was such a momentous event that it's easy to forget that Pascal actually launched with GP100. Our comment threads and forums have produced much speculation about a possible forthcoming release of GP100 in a more desktop-friendly form factor. This card isn't that release, obviously—Tesla cards have no video connections—but the standard interface might bring GP100 that much closer to a Quadro card of some kind. Nvidia says the PCI Express GP100 will be available in Q4 this year.

   