Nvidia readies a PCIe version of the Tesla V100

Nvidia created a stir back in May when it introduced its Volta GPU compute architecture and a stack of products based on the preposterously large 815 mm² V100 chip. The company announced yesterday that the V100 product line will expand by one when the PCIe version of the Tesla V100 compute card starts shipping by the end of the year. The card will join a suite of previously-announced products that use Nvidia's proprietary NVLink interconnect.

The PCIe Tesla V100 is just a bit tamer than its brethren, a byproduct of a TDP cut to 250 W from the 300 W figure of the other V100 products. The throughput specifications are about 6.7% lower across the board, suggesting that only the clock rates changed. The card still packs an arsenal of 5120 stream processors capable of delivering a peak of 7 TFLOPS of double-precision floating-point arithmetic, up to 14 TFLOPS of single-precision FP, and as much as 112 TFLOPS when doing deep-learning work on its 640 tensor cores. For reference, the full-fat NVLink Tesla V100 can deliver up to 7.5 TFLOPS of double-precision FP, 15 TFLOPS of single-precision FP, and 120 TFLOPS from its tensor cores.
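The clock-only inference is easy to check: if nothing but the clock rate changed, every throughput figure should shrink by the same factor. A quick sketch, using 7.5 TFLOPS as the NVLink card's double-precision spec (Nvidia's announced figure from May):

```python
# Sketch: test whether the PCIe V100's spec cuts are consistent with a
# clock-rate-only change. If only the clock dropped, every throughput
# number should scale by the same ratio.

nvlink = {"fp64_tflops": 7.5, "fp32_tflops": 15.0, "tensor_tflops": 120.0}
pcie   = {"fp64_tflops": 7.0, "fp32_tflops": 14.0, "tensor_tflops": 112.0}

ratios = {name: pcie[name] / nvlink[name] for name in nvlink}
for name, r in ratios.items():
    print(f"{name}: {r:.4f} ({(1 - r) * 100:.1f}% lower)")
```

Every ratio comes out to 0.9333, i.e. a uniform cut of roughly 6.7%, which is exactly what a lower boost clock on an otherwise-identical chip would produce.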

The bandwidth to the rest of the system is chopped down quite a bit, plummeting from the second-generation NVLink's mind-boggling 300 GB/s to the more pedestrian 32 GB/s of a PCIe 3.0 x16 link. The on-package memory is 16 GB of HBM2 in a setup similar to the NVLink version of the V100, offering the same 900 GB/s of bandwidth over a 4096-bit interface.
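Those memory figures hang together, too. Dividing the stated bandwidth by the stated bus width gives a per-pin data rate right around HBM2's 1.75 Gbps operating point (that reference rate is our own addition, not from Nvidia's announcement):

```python
# Sketch: back out the per-pin HBM2 data rate implied by the article's figures.
bus_width_bits = 4096   # HBM2 interface width, as stated
bandwidth_gb_s = 900    # total bandwidth in GB/s, as stated

# Convert GB/s to Gbit/s, then divide across the pins of the interface.
per_pin_gbps = bandwidth_gb_s * 8 / bus_width_bits
print(f"{per_pin_gbps:.3f} Gbit/s per pin")  # prints "1.758 Gbit/s per pin"
```

That works out to about 1.76 Gbit/s per pin, consistent with HBM2 stacks running near 1.75 Gbps rather than the format's 2.0-Gbps ceiling.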

The PCIe Tesla V100 is arriving at roughly the same time as AMD's Radeon Vega Frontier Edition cards, and some comparisons are inevitable despite the difference in architecture. The memory speed, capacity, and the single- and double-precision FLOPS specs aren't terribly far off from one another, though rumors suggest that AMD's cards will require a big chunk of power. The second half of the year should be an interesting time for GPU computing.

Nvidia didn't offer pricing information for the PCIe version of the Tesla V100, though you can bet it will be exquisitely expensive. The company says the cards will be available before the end of the year from Nvidia reseller partners and system manufacturers including Hewlett Packard Enterprise. We're unsure whether that means the cards will only be available as part of new systems or whether they'll be sold individually. In any case, stay tuned.

Comments closed
    • Mr Bill
    • 2 years ago

    It's ambiguous to talk about the Tesla V100 in terms of single- and double-precision TFLOPS when on the “Vega-powered Instinct MI25” post you are reporting 32-bit and 16-bit TFLOPS. Please pick one or the other. Myself, I would pick the number of bits as more informative than single and double.

    • renz496
    • 2 years ago

    [quote<]The memory speed, capacity, and the single- and double-precision FLOPS specs aren't terribly far off from one another[/quote<] Isn't Vega 10's FP64 already confirmed to be configured the same way as Fiji's (1/16 of SP performance)? [url<]https://videocardz.com/70440/amd-announces-radeon-instinct-mi25-specifications[/url<]

      • jts888
      • 2 years ago

      Yeah, Vega 20 sometime in 2018 is supposed to be the 1/2-rate fp64 card with 64 (N)CUs/4096 ALUs, whereas Vega 10 just focuses on fp16 and fp32.

    • chuckula
    • 2 years ago

    Sorry Nvidia, not Epyc enough. Try again later.
