Big Kepler fulfills potential in Nvidia’s Quadro K6000 graphics card

Nvidia’s GK110 GPU is the big daddy of the Kepler generation. This seven-billion-transistor monster underpins the GeForce GTX Titan graphics card and the compute-oriented Tesla K20 family. Now, the chip has made its way into a workstation-focused product: the Quadro K6000.

Although Nvidia has a handful of Kepler-based Quadros, this is the first to tap the GK110 GPU. It’s also the first implementation to use all 15 of the chip’s SMX units. The GeForce and Tesla cards derived from the same silicon have at least one SMX unit disabled, leaving them with no more than 2688 ALUs. With all of its GPU resources intact, the Quadro K6000 has 2880 shader ALUs—nearly double the number present in Nvidia’s previous Quadro flagship. Here’s how the K6000 stacks up against its predecessor.

Model         GPU    ALUs  Peak SP rate  Memory size  Memory bandwidth  TDP
Quadro K6000  GK110  2880  5.2 TFLOPS    12GB         288GB/s           225W
Quadro K5000  GK104  1536  2.2 TFLOPS    4GB          173GB/s           122W
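
As a quick check, the ALU figures fall out of the SMX counts. Here's a minimal Python sketch; the 192-ALUs-per-SMX figure and GK104's eight-SMX total are Kepler architectural details assumed here, not numbers quoted above:

```python
# Each Kepler SMX packs 192 single-precision ALUs, and GK104 carries
# 8 SMX units (both assumed architectural details, not quoted above).
ALUS_PER_SMX = 192

k6000_alus = 15 * ALUS_PER_SMX  # GK110 with all 15 SMX units enabled
titan_alus = 14 * ALUS_PER_SMX  # GeForce/Tesla parts with one SMX disabled
k5000_alus = 8 * ALUS_PER_SMX   # GK104's full complement

print(k6000_alus)  # 2880
print(titan_alus)  # 2688
print(k5000_alus)  # 1536
```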

The Quadro K6000 promises 2.4 times the single-precision throughput of the K5000. Nvidia doesn’t quote double-precision figures, but expect the new hotness to be a substantial improvement on that front. The Quadro K5000’s GK104 GPU is limited to crunching double-precision math at just 1/24th the rate of single-precision work. In the Tesla K20 series, the GK110’s DP throughput is about one third its SP rate.
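
To make those ratios concrete, here is a back-of-the-envelope sketch using the SP figures from the table above. The one assumption is that the K6000 inherits the Tesla K20 series' 1/3 DP rate, which Nvidia hasn't confirmed:

```python
# Peak single-precision rates from the spec table, in TFLOPS.
k6000_sp = 5.2  # GK110
k5000_sp = 2.2  # GK104

k5000_dp = k5000_sp / 24  # GK104 runs DP at 1/24th its SP rate
k6000_dp = k6000_sp / 3   # assumes GK110's 1/3 Tesla rate carries over

print(f"K5000 DP: ~{k5000_dp * 1000:.0f} GFLOPS")  # ~92 GFLOPS
print(f"K6000 DP: ~{k6000_dp * 1000:.0f} GFLOPS")  # ~1733 GFLOPS
```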

Thanks to a wider memory interface and speedy GDDR5 RAM, the Quadro K6000 offers substantially more memory bandwidth than the K5000. With 12GB of memory onboard, it also has a lot more RAM overall. Nvidia says the extra memory is needed to accommodate the larger data sets being used by artists, designers, and folks in the oil-and-gas industry.
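
The bandwidth figures are just bus width times effective data rate. In the sketch below, the 384-bit (GK110) and 256-bit (GK104) interface widths are assumptions based on the chips' known configurations, and the data rates are back-solved from the quoted numbers:

```python
def gddr5_bandwidth(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bytes per transfer times transfer rate."""
    return bus_width_bits / 8 * data_rate_gtps

# Interface widths are assumed; the article doesn't quote them.
print(gddr5_bandwidth(384, 6.0))  # K6000: 288.0 GB/s
print(gddr5_bandwidth(256, 5.4))  # K5000: 172.8 GB/s (~173 GB/s)
```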

Despite its impressive horsepower, the Quadro K6000 has a modest 225W TDP. That may be 103W more than the K5000’s thermal envelope, but it’s 25W less than the rating attached to the GeForce GTX Titan. The Tesla K20X, which has half the memory of the new Quadro and 24% less SP throughput, is rated for 235W.
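
Those TDP figures also make for an interesting efficiency comparison. Here's a quick perf-per-watt sketch using only numbers quoted above; note the K20X's SP rate is derived from the "24% less" statement rather than taken from a spec sheet:

```python
# (peak SP GFLOPS, TDP in watts), taken or derived from the article.
cards = {
    "Quadro K6000": (5200, 225),
    "Quadro K5000": (2200, 122),
    "Tesla K20X":   (5200 * (1 - 0.24), 235),  # 24% less SP than the K6000
}

for name, (gflops, tdp) in cards.items():
    print(f"{name}: {gflops / tdp:.1f} GFLOPS/W")
# Quadro K6000: 23.1 GFLOPS/W
# Quadro K5000: 18.0 GFLOPS/W
# Tesla K20X: 16.8 GFLOPS/W
```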

The Quadro K6000 is scheduled to start shipping in September or October. Nvidia hasn’t revealed pricing details yet, but you can bet the card won’t be cheap. If you’re at the Siggraph conference in Anaheim this week, you’ll be able to see the K6000 being demoed at Nvidia’s booth.

Comments closed
    • Bensam123
    • 6 years ago

    Milk that old architecture gooood…

    This shall continue looking like a spectacular monster till whenever AMD decides to release their new cards this fall…

      • Airmantharp
      • 6 years ago

      Does AMD have new, larger silicon coming? They’re pretty far behind in raw resources…

        • Bensam123
        • 6 years ago

        I thought that’s what the whole 8xxx series was that was delayed?

        Or was that a joke about silicon use and the raw resources for it…? XD

          • Airmantharp
          • 6 years ago

          Honest question, actually. Didn’t know if they were just going to push re-spins of their current stuff or actually have a larger part in the pipeline.

            • Bensam123
            • 6 years ago

            I’m pretty sure the 8xxx was supposed to be an entirely new generation from the ground up based on what TR was reporting. Then this spring they pushed the launch back, added a few cards to the 7xxx series in the meantime, and introduced some 8xxx mobile parts which are based on the 7xxx generation (which is what they normally do for mobile parts).

            • jihadjoe
            • 6 years ago

            I believe the 8000-series turned into a complete rebadge of the 7000 series, which is probably why they decided to make the 8000-series OEM only and just extend the life of the 7000 series for retail. Good decision, IMO. Certainly less confusing for would-be buyers.

            From AMD’s own site:
            [url<]http://www.amd.com/us/products/desktop/graphics/8000/Pages/8970.aspx[/url<]

            Notice how everything is exactly the same as with the 7970.

            • Bensam123
            • 6 years ago

            They haven’t updated that page in a long time… It was posted on the forums a while ago too.

            Looking at the Wikipedia article, the 8xxx series is a rebadge, but the 9xxx series is supposed to be released this fall…? Which makes it extremely awkward, seeing as the 8xxx series never really had any penetration (and I haven’t seen it being put in OEM systems).

            I honestly don’t think any of the sources for this information are reliable, but there isn’t really anything else to go off of.

            [url<]http://en.wikipedia.org/wiki/Radeon_HD_8000_Series[/url<]

      • chuckula
      • 6 years ago

      Yeah Bensam, you go on believing that Nvidia has abandoned all development of all GPUs other than the ones it already has on the market.

      Meanwhile, where is AMD’s next generation again? So AMD is in no way “milking” its old architecture, since the 7970 just came out on Tuesday or something, right?
      Wait, did you say that they are waiting on TSMC to get its act together with 20nm lithography?
      And that Nvidia’s next generation chips are also waiting on TSMC?

      Well, obviously this means that only AMD would ever be smart enough to design a next-generation chip so obviously Nvidia* will soon be bankrupt!

      * And since the new AMD fanboy schtick is to say that Intel is doomed when commenting on stories that have nothing at all to do with Intel, I’m certain that Nvidia releasing a workstation card is all the smoking-gun evidence we need to prove that Broadwell is cancelled and that Intel has abandoned any hopes of ever producing new chips too.

      • MathMan
      • 6 years ago

      You make it sound as if that’s a bad thing?

        • Bensam123
        • 6 years ago

        It isn’t?

        Generally as a consumer it’s a bad thing…

          • chuckula
          • 6 years ago

          Bensam123: Choice is bad unless I approve of the companies that give you the “choice”.

      • beck2448
      • 6 years ago

      Nvidia dominates the PRO workstation market in graphics at 80%. That won’t change any time soon.

    • albundy
    • 6 years ago

    and folks in the oil-and-gas industry? what’s that about?

      • Airmantharp
      • 6 years ago

      They’re the people that actually use these things :).

      • Firestarter
      • 6 years ago

      seismic studies, analyzing what the sensors read in order to hit that multi-billion dollar oil/gas pocket

    • internetsandman
    • 6 years ago

    12GB of memory? If this were a ‘consumer’-class card (a consumer with bottomless pockets), it could probably run a game across six 4K displays simultaneously

    Imagine how much of the world map you could see in Civilization V *drools*

    In all seriousness though, having no familiarity with the kind of workloads that this card is built to handle, I’d love to just be a spectator to a project that makes full use of all of these resources

      • Srsly_Bro
      • 6 years ago

      These aren’t gaming cards….

        • internetsandman
        • 6 years ago

        That was the joke. That’s why I said ‘if this was a consumer card’

    • keltor
    • 6 years ago

    For those with DP questions, basically note that for GPGPU type stuff, AMD=DP, Nvidia=SP and Phi=StuffThatPerformsLikeCrapOnAMDNvidia

    We have some special noise reduction code that we run on Phi due to the issues we faced getting the code to run fast enough on AMD/Nvidia – there are some performance limits there that don’t work for all code.

      • lycium
      • 6 years ago

      Very interesting – what sort of work do you do? Was the Xeon Phi better for caching reasons?

    • mark625
    • 6 years ago

    Double precision is a key consideration for workstation-class graphics. The fact that Nvidia doesn’t quote DP figures implies that DP is a weakness for this card. I seem to recall that the DP rate is 1/24th of the SP rate for Kepler, meaning only 217 Gigaflops of DP throughput. That seems incredibly weak compared to the AMD W9000 with 1,000 Gigaflops of DP throughput.

    Maybe I’m wrong and the K6000’s DP rate is double that. It would still be less than half what the AMD card can provide. Do people really spend thousands of dollars on workstation cards just to do SP work?
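
    For what it’s worth, both scenarios are easy to check against the article’s 5.2 TFLOPS SP figure; a quick sketch:

    ```python
    sp_gflops = 5200  # K6000 peak SP rate, per the article

    print(sp_gflops / 24)  # ~217 GFLOPS, if GK110 ran DP at GK104's 1/24 rate
    print(sp_gflops / 3)   # ~1733 GFLOPS, at the 1/3 rate quoted for Tesla K20
    ```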

      • Deanjo
      • 6 years ago

      DP is hardly a “weak point” on this card. They also do quote the DP numbers: 1,732 Gigaflops in DP mode, 5,196 in SP.

      [url<]http://www.nvidia.com/content/PDF/line_card/6660-nv-prographicssolutions-linecard-july13-final-lr.pdf[/url<]

      • Scrotos
      • 6 years ago

      Um, from the article you just skimmed:

      [i<]The Quadro K5000's GK104 GPU is limited to crunching double-precision math at just 1/24th the rate of single-precision work. In the Tesla K20 series, the GK110's DP throughput is about one third its SP rate.[/i<]

      So why do you think that the K6000’s DP rate would be weak? Maybe you’re wrong and the rate is double 1/24th of the previous gen? What? Did you just look at the pictures? 😀

      • Silus
      • 6 years ago

      Here’s your problem: you don’t know how to read.
      The article itself quite clearly mentions that GK110’s DP rate is 1/3 of its SP rate. GK110 is unmatched in DP rate.

      Reading is your friend.

        • Scrotos
        • 6 years ago

        I think keltor would disagree with your “GK110 is unmatched in DP rate” declaration. But maybe he just hasn’t played with the new Nvidia stuff?

          • Silus
          • 6 years ago

          keltor can disagree all he wants. There’s no professional graphics card that matches GK110 in DP throughput.

            • Scrotos
            • 6 years ago

            I know there’s like 2 people who post here who actually do HPC coding for a living. From keltor’s response, it seemed like he was one of them.

            Now you or I can quote specs until we’re blue in the face (and I’m not disagreeing with you, btw), but I would always defer to either benchmarks or anecdotes from someone actually using the products in question in a field where their abilities are being used.

            If keltor’s got real-world experience that says nvidia’s DP rate is much worse than they claim, I’d be inclined to believe it. If he’s just talkin’ out of his ass, though, yeah, nevermind. 🙂

            • Deanjo
            • 6 years ago

            I have real-world experience with them as well. I think keltor is comparing the GK104s, and in that case he would be right. With the GK110s, however, it’s a different story. It also depends on whether he is utilizing OpenCL or CUDA. CUDA code does improve performance on Nvidia cards, and their OpenCL performance is hampered. There are also a ton of other factors, such as the type of calculations being done, code optimization for the target hardware, etc.

    • rootbear
    • 6 years ago

    I think you meant seven billion transistor monster.

      • Scrotos
      • 6 years ago

      Looks like he fixed it. Or maybe he meant 7000 million like in their GPU review tables!

    • kravo
    • 6 years ago

    Whenever I see one of these beasts I always start to wonder if it would make any sense to have one for gaming only and absolutely no scientific use. Just to see how long it takes until new games make it lag. Would it last 3-4 years?

    Not that I’d spend money on something that costs a small fortune and that I couldn’t even use to its full potential.

    But it would still be nice to see one incorporated in one of TR’s reviews.

      • Scrotos
      • 6 years ago

      I think the card would default to using Quadro drivers instead of consumer ones, the difference being that the pro/workstation drivers are optimized for different workloads and wouldn’t have any game-specific tweaks and shortcuts. I could see games running slower on a beast like this just due to the drivers. Looks like it’s hard for a single site to get review samples of both pro and consumer cards, though:

      [url<]http://www.cgchannel.com/2012/07/gpu-review-quadro-5000-vs-geforce-gtx-580/[/url<]

      Not really gaming-centric but at least a data point.

      • beck2448
      • 6 years ago

      Dedicated gaming cards like Titan or the 780 will outperform it in games.

      • Bensam123
      • 6 years ago

      This is just a Titan with some workstation stuff tacked on; you wouldn’t see a big performance increase over a Titan while playing games, if one at all. It’d become obsolete just as fast as the Titan would, which is to say with the next generation of cards. And that’s putting aside the workstation drivers it’d use, which aren’t optimized for gaming at all…

        • torquer
        • 6 years ago

        Actually just for the record, workstation users rarely upgrade with every generation. Since the cards are vastly more expensive and generational increases don’t have that much impact, they hold onto them for far longer. I speak as someone with experience in the broadcast industry and the IT needs that go along with it.

        That being said, the hardware is generally somewhat equivalent, but clock speeds are often lower and the drivers have had a lot more development and certification. The reason for this is simple: workstation users need accurate rendering more than fast rendering. That’s why these cards generally have ECC memory and other accuracy-enhancing measures not found in gaming cards at either a hardware or software level. Chances are you won’t give a damn if a pixel isn’t colored absolutely correctly, but if you work at Pixar it’s a much bigger deal. It’s even more important when you’re using these for GPU compute, as an error in your calculations can literally mean life or death.

          • Bensam123
          • 6 years ago

          Yeah… they don’t hold onto them because they’re as fast as the next generation, they hold onto them because it’s cost prohibitive to constantly upgrade them.

          OP was talking about using it in a gaming application, not for a render farm though. That’s what my post was in response to.

      • jihadjoe
      • 6 years ago

      No sense at all. Last time I saw prices on big Quadros you could buy four Titans for the price of the GF110 based one.

    • CampinCarl
    • 6 years ago

    I wasn’t too impressed at first, but the bump in RAM makes this very attractive. Even if your data sets are only 2GiB, you can fit quite a few copies in RAM and still have plenty of ‘work’ area. Verrrry interesting.

      • Airmantharp
      • 6 years ago

      What’s impressive is that they can put that much RAM on a card; that means they can do it on a consumer card when these incoming console ports arrive with assets that can make use of all that RAM on the new consoles :).

      I’m going to need at least a pair of 8GB cards to run high settings at 4k…

        • Visigoth
        • 6 years ago

        They always could’ve put more RAM in consumer cards, but it would bump up the prices needlessly. But I agree, with the new consoles having more RAM, this will be a great push for PC hardware companies to start thinking about increasing RAM specs.

          • CampinCarl
          • 6 years ago

          Not just bump the prices, but manufacturing and design costs as well. These K6000s will probably cost about the same as their K20 siblings ($3K). That is more than three times the cost of even the GeForce Titan; nVidia (and AMD) can absorb the production and design costs when charging $3K. But for a consumer card that has to cost $500 or less? Not likely. At least, not while keeping their margins where they like them.

            • Airmantharp
            • 6 years ago

            It’s a Quadro; they could strap 12GB of memory to a Titan and charge $1500 for it, and they’d sell out.

          • Airmantharp
          • 6 years ago

          You got it!

          Not worried about it just yet: many/most of the upcoming console games will be highly scalable, as they’ll be released on the current as well as future consoles, so even in my case, with a pair of 2GB cards pushing 2560×1600, I’m not worried.

          But my next GPU setup will be able to handle next-generation games with live assets that could make use of 5GB or 6GB of memory for graphics, so it’s nice to see that Nvidia has already done what little work would be needed to get cards with enough memory out the door. I’ll go ahead and throw out 8GB as a definition of ‘enough’.

    • dpaus
    • 6 years ago

    Was going to express feigned outrage over the outrageous power usage, but, awww – fuggedaboutit…

      • OneArmedScissor
      • 6 years ago

      Outrageous? Why, it’s only as much as an AMD CPU!
