
Nvidia readies up a PCIe version of the Tesla V100

Wayne Manion

Nvidia created a stir back in May when it introduced its Volta GPU compute architecture and a stack of products based on the preposterously large 815 mm² V100 chip. The company announced yesterday that the V100 product line will expand by one when a PCIe version of the Tesla V100 compute card starts shipping by the end of the year. The card will join a suite of previously-announced products that use Nvidia's proprietary NVLink interconnect.

The PCIe Tesla V100 is just a bit tamer than its brethren, a byproduct of a TDP cut to 250 W from the 300 W figure of the other V100 products. The specifications are about 6.5% lower across the board, suggesting that only the clock rates changed. The card still packs an arsenal of 5120 stream processors capable of delivering a peak of 7 TFLOPS of double-precision floating-point arithmetic, up to 14 TFLOPS of single-precision FP, and as much as 112 TFLOPS when doing deep-learning work on 640 tensor cores. For reference, the full-fat NVLink Tesla V100 can deliver up to 7 TFLOPS of double-precision FP, 15 TFLOPS of single-precision FP, and 120 TFLOPS from its tensor cores.
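Those figures line up with the usual peak-throughput formula of shader count × two FMA operations per clock × clock speed, which is where the clock-rates-only explanation comes from. A minimal back-of-envelope sketch follows; the implied clocks are our own estimates, not numbers Nvidia has quoted:

# Rough estimate of the boost clocks implied by the peak FP32 figures,
# assuming the usual peak = shaders x 2 ops/clock x clock. The derived
# clock values are illustrative estimates, not figures quoted by Nvidia.
SHADERS = 5120        # stream processors in GV100
OPS_PER_CLOCK = 2     # one fused multiply-add per shader per cycle

def implied_clock_ghz(peak_tflops):
    return peak_tflops * 1e12 / (SHADERS * OPS_PER_CLOCK) / 1e9

print(f"PCIe:   ~{implied_clock_ghz(14):.2f} GHz")   # ~1.37 GHz
print(f"NVLink: ~{implied_clock_ghz(15):.2f} GHz")   # ~1.46 GHz
print(f"Gap:    {(1 - 14/15) * 100:.1f}%")           # ~6.7%, roughly the across-the-board cut noted above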

The bandwidth to the rest of the system is chopped down quite a bit, plummeting from the second-generation NVLink's mind-boggling 300 GB/s to a more pedestrian 32 GB/s. The on-package memory is 16 GB of HBM2 in the same configuration as the NVLink version of the V100, offering the same 900 GB/s of bandwidth over a 4096-bit interface.
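Both round numbers fall out of simple interface math. A quick sketch, assuming a PCIe 3.0 x16 link counted in both directions and HBM2 running at roughly 1.75 Gb/s per pin (our assumptions, not figures Nvidia has broken out):

# Approximate interface math behind the 32 GB/s and 900 GB/s figures.
# PCIe 3.0 x16 and a ~1.75 Gb/s HBM2 pin rate are assumed for illustration.
pcie_lane = 8 * (128 / 130) / 8      # 8 GT/s with 128b/130b encoding: ~0.985 GB/s per lane, one way
pcie_x16 = pcie_lane * 16 * 2        # ~31.5 GB/s counting both directions
hbm2 = 4096 * 1.75 / 8               # 4096-bit interface: ~896 GB/s

print(f"PCIe 3.0 x16: ~{pcie_x16:.1f} GB/s")  # rounds to the quoted 32 GB/s
print(f"HBM2:         ~{hbm2:.0f} GB/s")      # close to the quoted 900 GB/s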

The PCIe Tesla V100 is arriving at roughly the same time as AMD's Radeon Vega Frontier Edition cards, and some comparisons are inevitable despite the difference in architecture. The memory speed, memory capacity, and single- and double-precision FLOPS specs aren't terribly far off from one another, though rumors suggest that AMD's cards will require a big chunk of power. The second half of the year should be an interesting time for GPU computing.

Nvidia didn't offer pricing information for the PCIe version of the Tesla V100, though you can bet it will be exquisitely expensive. The company says the cards will be available before the end of the year from Nvidia reseller partners and system manufacturers including Hewlett Packard Enterprise. We are unsure whether that means the cards will only be available as part of new systems or whether they'll be sold individually. In any case, stay tuned.
