Nvidia refreshes the Quadro M6000 with 24GB of RAM

Some fun new toys are on the horizon for gamers, but Nvidia isn't neglecting workstation builders in the meantime. The fine folks over at Anandtech report that Nvidia is refreshing its Quadro M6000 with 24GB of GDDR5 RAM. By doubling the memory capacity of last year's model, the company has leapfrogged the 16GB of RAM available in AMD's FirePro W9100. That gives Nvidia temporary bragging rights around the workstation-graphics water cooler.

Aside from the memory increase, this Quadro M6000 appears to be identical to the 12GB version. It's still powered by the 28-nm GM200 GPU with an 1140MHz boost clock. It has the same 3072 CUDA cores and 192 texture units, and it still has a 250W TDP. According to Anandtech, Nvidia is providing some new options for controlling the card's temperature and clock speed. These controls are meant to keep GPU temperatures below the threshold where thermal throttling kicks in.

Speaking of things that have stayed the same, the Quadro M6000 24GB carries a $5000 price tag, just like the 12GB M6000. The card will be available sometime this week.

Comments closed
    • Srsly_Bro
    • 3 years ago

    This memory is also supposed to be 8 Gbps, up from 7 Gbps.
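
    The GM200 GPU in the M6000 drives its memory over a 384-bit bus, so per-pin transfer rates convert directly into peak bandwidth. A quick back-of-the-envelope sketch (the 8 Gbps figure is the commenter's claim above, not a confirmed spec):

```python
# Peak memory bandwidth = per-pin rate (Gbps) x bus width (bits) / 8 bits per byte.
# GM200's 384-bit bus is a known spec; the 7 vs. 8 Gbps rates come from the comment.
def peak_bandwidth_gbs(rate_gbps, bus_width_bits=384):
    return rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(7))  # 336.0 GB/s -- the original 12GB M6000
print(peak_bandwidth_gbs(8))  # 384.0 GB/s -- if the rumored 8 Gbps memory holds
```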

    • torquer
    • 3 years ago

    24GB of VRAM ought to be enough for anybody.

    • Mikael33
    • 3 years ago

    Is that enough rams for 4k gaming?

      • Star Brood
      • 3 years ago

      This should be enough RAM for 4K: https://wallpaperscraft.com/image/2006_dodge_ram_2500_mega_cab_96891_3840x2400.jpg

      • Krogoth
      • 3 years ago

      More than enough.

    • NTMBK
    • 3 years ago

    Hope this gets reflected through the lineup. I could do with a 16GB M5000.

    • DancinJack
    • 3 years ago

    That is a strange move considering how close we are to Pascal. I wonder if that signals delays?

      • the
      • 3 years ago

      We could be close to consumer Pascal. The high-end chip may wait until the end of the year due to its expected size (huge) and HBM2. In other words, expect the GDDR5(x?)-based GP104 before the HBM2-based GP100.

        • DancinJack
        • 3 years ago

        I knew that, and figured as much; it still seems odd.

          • the
          • 3 years ago

          nVidia learned their lesson with Fermi not to go to a new process node with an extremely large chip. The GTX 480 was a bit of a fiasco. Thus the GK106/GK110 and GM206/GM200 had the smaller chip come up first.

            • Airmantharp
            • 3 years ago

            They also found that they could make their new smaller chips as fast as their old bigger chips for a while there, which certainly helps ;).

            • Voldenuit
            • 3 years ago

            The GT 220 and 240 came out 6 months before the 480 and were on 40 nm, so nvidia had already learned (from the 5800 Ultra days) not to mix new processes and architectures. Unfortunately, that wasn’t enough that time.

            Big Fermi had troubles because it was a very large chip, combined with a new memory type and controller (nvidia was late following AMD onto the GDDR5 train).

            I think the lesson you allude to still holds, though. On a new process or architecture, it’s probably a good idea to launch with smaller midrange parts first, as yields will be higher with smaller dies, and that’s where most of the market is, anyway. It may be nice to have bragging rights with a flagship product, but if you screw up your flagship, it casts a shade over your entire family of products.

            • the
            • 3 years ago

            You forget that the GTX 480’s launch was delayed by at least three months. PAX East in March of 2010 was not supposed to be the launch date. Even then, it didn’t hit limited retail until April. nVidia was hoping to launch consumer versions at CES 2010 and showed off prototype hardware in October 2009.

      • Brok
      • 3 years ago

      I guess either NVIDIA has no plans for a prompt release of a Pascal-based Quadro, or it doesn’t want the M6000 to look too bad in comparison.

      • Krogoth
      • 3 years ago

      Nvidia is just trying to sell off GM200 chips. They aren’t that great at general compute, but they are excellent for graphical processing.

        • Flapdrol
        • 3 years ago

        I thought they were quite good at general compute, just suck at double precision.

          • Krogoth
          • 3 years ago

          Most HPC applications require double-precision.

          If you only need single precision then any run of the mill GPGPU will suffice.
