Nvidia teases a Titan X Collector’s Edition graphics card

Just minutes ago, Nvidia tweeted out a portentous video teasing some kind of Titan X Collector's Edition graphics card. Relive that 13 seconds with me:

Some strategic pauses in the action revealed the name of this card, and it appears to be the first major redesign of Nvidia's reference cooler shroud since the GTX 1080 Founders Edition broke cover a year and a half ago. RGB LEDs also appear to be along for the ride. Whether this is a kind of capstone for Pascal GPUs or the debut of a consumer Volta chip remains to be seen, but I'd put my money on the former for now. We'll keep our eyes peeled for more details as they surface.

Comments closed
    • CScottG
    • 2 years ago

    The 1070 Ti and now this...

    -squeezing that last little bit of Pascal out of the “tube”.

    https://www.youtube.com/watch?v=nIOoNuUzFjQ

    • DeadOfKnight
    • 2 years ago

    In half a year they will announce a new chip that’s half the size and power draw with similar performance for half the price. Only a true collector would want this.

      • shank15217
      • 2 years ago

      I doubt that. Volta is a compute-focused GPU, and that's the next gen.

        • renz496
        • 2 years ago

        It will be similar to Pascal. GV100 will be the compute Volta; anything else will be tweaked more for gaming purposes.

          • the
          • 2 years ago

          GV100 definitely is not a gaming chip. It currently ranks as the largest mass-production chip ever built at 815 mm^2. That is more than GP102 and GP104 combined. nVidia pretty much had to accept that insane die size to meet contractual deadlines for some supercomputing projects. I strongly suspect we'll see a GV110 or GV200 chip a year from now on a new process node with pretty much the same specs but a more reasonable ~600 mm^2 die size. GV104 will probably launch on a new process node and be the gaming-focused card. GV102 will be the really high-end chip and arrive 6 to 9 months later for Quadros and a Titan V card. After that, nVidia will likely play wait-and-see on when a hypothetical GTX 1180 Ti using GV102 would arrive, based on AMD's offerings.

    • the
    • 2 years ago

    Could be a GP100-based card. Not much room for the standard GP102 Titan to improve. nVidia may have an excess of GP100 inventory now that GV100 chips are making their way into supercomputers.

      • techguy
      • 2 years ago

      GP100 only has 3584 CUDA cores; GP102 has 3840. Other than additional memory bandwidth by way of uber-expensive HBM2, GP100 brings nothing to the table for graphics workloads that GP102 does not already bring. GP100 has a higher DP compute rate, which does nothing for graphics (gaming) workloads.

        • Wren
        • 2 years ago

        I’m pretty sure GP100 also has 3840 CUDA cores (some are disabled on existing cards using it).

          • techguy
          • 2 years ago

          Even if it does, the point stands. Gaming workloads won’t execute significantly faster on GP100 than GP102.

            • the
            • 2 years ago

            You have a lot more memory bandwidth on GP100 than on GP102. ROP count is purportedly higher too, at 128, though I haven't actually seen that confirmed anywhere, as the only GP100 card with ROPs enabled has been the Quadro GP100. This would benefit 4K and higher-resolution gaming.

            Games that make use of half-precision formats should also see a performance gain, as half-precision throughput is twice that of single precision on GP100. Half-precision throughput on GP102 is far lower than single precision.

            • Laykun
            • 2 years ago

            Games that use half precision will be few and far between at this point in time, since it requires explicit developer effort to implement. Although if anyone has more information on this, I'd love to be proven wrong. The usefulness of half-precision floats in games is somewhat limited and its influence is blown a bit out of proportion, although I could see it making a big difference for low-end cards where you'll want to turn the settings down for some “acceptable” artifacting. That's not to say gains can't be had on higher-end cards, but they'll be difficult wins if you want to maintain quality.

            • the
            • 2 years ago

            Most developers went with 32-bit single precision because there was no difference between half-precision and single-precision performance. Basically, half precision was performed as a single-precision operation and then cast down to the correct 16-bit value. Until recently the only real reason to use half precision was to save on memory storage. Now we're just starting to see where some performance gains on the PC side can happen.

            Certain engines like Unity, which have strong mobile backing, already support half precision, as that is more of a standard format on mobile. It wouldn't take much to utilize half precision on the PC. I want to say that Unreal also has similar 16-bit support now too.

            • renz496
            • 2 years ago

            Using FP16 in games is not much of a problem. The bigger issue is how to use it properly and optimize for it. And with the way console and PC games look today, the usefulness of FP16 becomes even more limited. For mobile games FP16 made much more sense because their graphics are not as complex as games on home consoles or PC.

            Also, to gain performance from FP16 the hardware needs to be configured in a certain way. Personally, I suspect the way FP16 is configured on Vega is one of the reasons why, despite being slightly bigger than GP102, Vega can only compete evenly with GP104 in the majority of games.
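As a concrete illustration of the packed FP16 math discussed above, here is a minimal CUDA sketch; the kernel name, array names, and the a*a + b operation are illustrative assumptions, not taken from any engine mentioned in the thread, and it assumes a GPU with double-rate FP16 such as GP100:

    #include <cuda_fp16.h>

    // Packed FP16: each __half2 holds two 16-bit values, and one __hfma2
    // instruction performs a fused multiply-add on both lanes. On hardware
    // with double-rate FP16 (e.g. GP100), this is where the "2x single-
    // precision throughput" figure comes from.
    __global__ void fma_half2(const __half2 *a, const __half2 *b,
                              __half2 *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            out[i] = __hfma2(a[i], a[i], b[i]);  // out = a*a + b, per lane
        }
    }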

          • renz496
          • 2 years ago

          I think they are. It's just that at launch they disable some CUDA cores to improve yield. If Nvidia releases a GP100-based Titan with its full compute capability intact, it will sell like hot cakes.

        • Chrispy_
        • 2 years ago

        Has that been confirmed? I always thought GP102 was the GDDR5X variant of GP100, and that GP100 was just 3584 cores for yield reasons.

          • the
          • 2 years ago

          GP100 does have 3840 shader ALUs per nVidia's white paper:
          https://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf

          There are a couple of major differences in GP100 vs. the rest of the Pascal lineup. The compute ratios are different: double precision runs at half the single-precision rate, and half precision runs at twice the single-precision rate. This is not true on the other Pascal cards. GP100 does not support SLI but rather NVLink, which passes some coherency information instead of just synchronization data, so conceptually multi-GPU scaling should be superior on GP100. Eight GP100s are possible in a single system, but it appears that the PCIe card-based boards are limited to just four (GP100 Teslas have a mezzanine option that permits more GPUs).

          • renz496
          • 2 years ago

          GP100 is compute Pascal. GP102 is gaming Pascal. They are not the same chip at all. GP100's die size is 610 mm^2; GP102 is 471 mm^2.

    • derFunkenstein
    • 2 years ago

    Someone on Twitter wondered aloud about a Star Wars tie-in or theme. While that would be super weird, the video did kind of give that Star Destroyer “under the hull” POV from the opening scene in Episode 4.

    • Philldoe
    • 2 years ago

    Collector's Edition. wtf. Who is stupid enough to collect a $1200 GPU?

      • derFunkenstein
      • 2 years ago

      YouTubers who film in front of hardware shelves like this guy: https://www.youtube.com/watch?v=M_hyxCw_LA8

        • floodo1
        • 2 years ago

        Well played

      • kcarlile
      • 2 years ago

      Biologists who want to do machine learning but don’t want to pay for Quadros or Teslas. Trust me on this one.

      • swaaye
      • 2 years ago

      Worth $12 in 10 years.

    • chuckula
    • 2 years ago

    Well that’s 13 seconds of your life you’re never getting back.

      • Chrispy_
      • 2 years ago

      More. I skipped around the video looking for a product shot, couldn’t find one, watched all 13 seconds, then wasted some time typing out this complaint in the comments box.

      Total damage: WELL OVER A MINUTE. I could have spent that time drinking some coffee!

        • drfish
        • 2 years ago

        Don’t worry, you can get that time back after you buy the card and benchmark marginally faster[?]

          • Chrispy_
          • 2 years ago

          Good point, I’ll take three!
