Nvidia unveils a Pascal-powered Titan X with 11 TFLOPS on tap

Nvidia unveiled the Pascal-powered version of its Titan X uber-card this evening. The new card features 3584 stream processors running at a 1417MHz base clock and a 1531MHz boost clock. The company promises 11 TFLOPS of single-precision performance within a 250W board power envelope.
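That headline figure falls straight out of the shader count and boost clock; a quick sanity check (assuming the usual two FLOPs per core per clock from fused multiply-adds):

[code<]
# Peak FP32 throughput = cores x 2 FLOPs/clock (FMA) x boost clock
cores = 3584
boost_ghz = 1.531
print(cores * 2 * boost_ghz / 1000)  # ~10.97 TFLOPS, which Nvidia rounds to 11
[/code<]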

While details of the silicon itself are light right now, this new Titan X is without a doubt our first look at a "bigger Pascal" chip, which the press has dubbed GP102 in the absence of more info. That GPU has a 384-bit path to 12GB of GDDR5X memory running at 10 GT/s. Nvidia says it built this chip with 12 billion transistors on the same 16-nm FinFET process it's using to fabricate its other consumer Pascal cards. The new Titan X will be available August 2 for $1200 from Nvidia's online store.
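For those keeping score, that memory configuration works out to 480 GB/s of peak bandwidth:

[code<]
# Peak bandwidth = (bus width / 8 bits per byte) x per-pin data rate
bus_bits, gt_per_s = 384, 10
print(bus_bits / 8 * gt_per_s)  # 480.0 GB/s, vs. 320 GB/s on a GTX 1080
[/code<]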

Comments closed
    • DancinJack
    • 3 years ago

    So, Nvidia says the GP102 chip in this card is 471mm². Apparently the size difference between GP100 and GP102 comes primarily from the HPC features (NVLink, FP16/FP64, etc.). Nvidia says the primary focus of this chip is FP32 and INT8. /shrug

    So not really the Titan incarnations we’ve seen in the past.

    • Shoki
    • 3 years ago

    This card convinced me to stop waiting for an EVGA 1080. Ordered the 1070 FTW.

    • tootercomputer
    • 3 years ago

    To whom is this card directed?

      • smilingcrow
      • 3 years ago

      Metrosexual pygmies with a college degree and an intact virginity.

        • ImSpartacus
        • 3 years ago

        Yep, I’ve already ordered three.

          • Redocbew
          • 3 years ago

          So you’re the one who ninja’d the last three from me.

      • tipoo
      • 3 years ago

      In addition to the two good answers above: budget Quadro/Tesla users.

        • tootercomputer
        • 3 years ago

        That's what I was wondering: CAD, workstation. I have a brother-in-law who has purchased Quadro cards for many years; he does a lot of professional CAD work. His cards have always cost $1k or more.

        I've never understood how Quadro cards differ from GeForce cards; supposedly they use the same GPU.

          • tipoo
          • 3 years ago

          Depending on the range, Quadros can be pretty much the same silicon as GeForces, but with Quadro drivers tested against pro applications. You also get pro-level customer support; you don't want to be waiting on consumer customer support when each hour is costing you a lot of money.

          On the higher end of the Quadro range, a lot more of the compute capability is left enabled: those cards have a higher double-precision rate than consumer cards, and some have ECC memory.

          The original Titan had the same double-precision rate (1/3) as high-end Quadros, as I recall, but then the Maxwell generation cut out a lot of the compute. See the newer Titan X getting whooped by the older Titan in DP below:

          [url<]http://images.anandtech.com/graphs/graph9059/72530.png[/url<]

          • chuckula
          • 3 years ago

          You aren't paying for better silicon; you are paying for a certification process for the CAD software.

          CAD developers don’t care about FPS (within reason) or other metrics that you care about when playing games. They care about making sure that what they see on the screen is absolutely an accurate representation of what they have designed with a very high level of precision.

    • Mikael33
    • 3 years ago

    But will it play Minesweeper at 4k?

    • Voldenuit
    • 3 years ago

    The Titan X is not targeted at gamers, even though some gamers will pay to get it.

    Companies and educational institutions heavily invested into neural network and deep learning algorithms will probably find it a bargain.

    Because of this, as a gamer (and not an AI researcher), I take far less issue with its $1200 price tag than with the markups on gaming cards (FE tax, retailers price-gouging on the 1080).

      • Billstevens
      • 3 years ago

      Their real research cards are the Tesla line, aren't they? Or do they kind of push this as a budget workstation GPU?

      Their website only talks about gaming under the Titan line… I know this is fully capable of doing some serious research work too. But then again so is the 1080.

      Though I am sure the extra VRAM goes a long way.

        • brucethemoose
        • 3 years ago

        I think Nvidia pushed the original Titan as a prosumer card, only to discover that a sizable number of gamers will drop over $1k for a single GPU. So now they market it as a gamer card.

          • JustAnEngineer
          • 3 years ago

          The Titan Z, based on Kepler, had a whole lot of DP compute capability. Maxwell and Pascal have only a fraction of those compute resources. These are gaming GPUs first and foremost.

            • tipoo
            • 3 years ago

            The Pascal one should do better than the last two gens here, but we'll see with reviews. The original GTX Titan does still stomp the X, though.

          • Krogoth
          • 3 years ago

          Titans were originally pitched as an "ultra-high" gaming GPU that used silicon that didn't meet Quadro/Tesla standards.

          It is akin to the "Extreme Edition" chips from Intel, which are just Xeons that didn't hit the TDP/clockspeed grades Intel wanted.

          It is just a happy accident that Nvidia left most of the compute logic intact for the Kepler Titans (they just lack ECC support and certifications for CAD and other professional 3D applications).

          Maxwell Titans, on the other hand, were a fully enabled GM200 chip with double the memory of a 980Ti.

            • ImSpartacus
            • 3 years ago

            Yeah, I think Titan is just Nvidia’s halo card at this point. It no longer really makes sense for prosumers in the way that the original Titan did.

        • MathMan
        • 3 years ago

        Here’s what happens when a leading deep learning researcher at a very big company needs a bit more calculation power: he buys *another* 80 Titan X.

        [url<]https://forum.beyond3d.com/posts/1862141/[/url<]

        A bit further down he mentions that he has 2 PFLOPS of Titan X FP32 compute power. That's 300+ Titan X machines.

      • Mr Bill
      • 3 years ago

      Can you use workstation drivers with this card?

        • Krogoth
        • 3 years ago

        If you use hacked firmware and drivers.

          • Mr Bill
          • 3 years ago

          QED it actually IS targeted at gamers.

            • MathMan
            • 3 years ago

            You don’t need hacked drivers for deep learning.

            • tipoo
            • 3 years ago

            [url<]http://www.anandtech.com/show/10510/nvidia-announces-nvidia-titan-x-video-card-1200-available-august-2nd[/url<]

            "Finally, NVIDIA has clarified the branding a bit. Despite GeForce.com labeling it "the world's ultimate graphics card," NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming. Though gaming is certainly possible - and I fully expect they'll be happy to sell you $1200 gaming cards - the tables have essentially been flipped from the past Titan cards, where they were treated as gaming first and compute second. This of course opens the door to a proper GeForce branded GP102 card later on, possibly with neutered INT8 support to enforce the market segmentation."

            So a little of both: they do market it as a gaming card, but they expect a lot of compute use. You don't need pro drivers for all compute work.

            • Krogoth
            • 3 years ago

            You do need professional drivers and firmware if you want to use professional-tier software, though. The license cost for said software is much greater than the cost of a Quadro/Tesla card.

            Titan makes more sense for general compute hobbyists and students, where the cost of a genuine Quadro/Tesla/FirePro is too much while regular GeForces/Radeons don't cut it.

            • tipoo
            • 3 years ago

            That’s what I was getting at. Some users will just want as much compute grunt as they can get without paying for Firepro support and driver testing, as they may not even use professional programs, rather, doing more custom/academic things themselves.

            • Mr Bill
            • 3 years ago

            The Pro Duo does not come with the workstation license?

            • Mr Bill
            • 3 years ago

            Ah, [url=http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-Pro-Duo-Workstation-Driver-for-Windows.aspx<]they do[/url<] and its free.

    • Mr Bill
    • 3 years ago

    A single-GPU solution will probably game more smoothly than the Radeon Pro Duo. So this is pointed more at gamers than professionals, in my opinion. The Radeon Pro Duo at $1500 is 25% more costly but, at 16.38 TFLOPS, has 50% more FP32 performance.

    [url=https://techreport.com/review/30037/amd-radeon-pro-duo-bridges-the-professional-consumer-divide<]AMD Radeon Pro Duo bridges the professional-consumer divide[/url<]

    Edit1: Originally misread TFLOPS from table, now corrected.
    Edit2: Added link to TR's Radeon Pro Duo article with specs table.
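    A minimal sketch of that ratio, using launch prices and advertised peak figures (paper FLOPS only; a dual-GPU card rarely delivers its combined peak in games):

    [code<]
    # Headline FP32 throughput per dollar at launch prices
    for name, tflops, usd in [("Titan X (Pascal)", 11.0, 1200),
                              ("Radeon Pro Duo", 16.38, 1500)]:
        print(name, round(tflops / usd * 1000, 1), "GFLOPS per dollar")
    # Pro Duo: ~49% more TFLOPS for 25% more money, on paper
    [/code<]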

    • geekl33tgamer
    • 3 years ago

    Without competition, this is how we arrive at $1200. Nvidia: The way it's meant to be PAID.

      • HisDivineOrder
      • 3 years ago

      Kudos for the play on words. However, I have to ask: are you blaming nVidia for AMD not competing? 😉

    • cynan
    • 3 years ago

    [quote<]The company promises 11 TFLOPS...[/quote<]
    So going by this metric alone, just over 2x the GPU horsepower of the RX 480 for 5x the price. Now that's TITANic value!

      • chuckula
      • 3 years ago

      I only have one question: HOW MANY PHASES?!!??

        • ImSpartacus
        • 3 years ago

        [url=http://i.imgur.com/XS5LK.gif<]I understood that reference.[/url<]

      • Mr Bill
      • 3 years ago

        Or for $1499, you could buy the 16.38 TFLOP [url=https://techreport.com/review/30037/amd-radeon-pro-duo-bridges-the-professional-consumer-divide<]Radeon Pro Duo[/url<]. A 25% higher cost for 50% more performance.

        Edit: my bad, misread 16.38 as 18.5; huh, wonder how that happened

        • chuckula
        • 3 years ago

        Where the hell did you find an 18.5 TFlop Radeon Pro Duo?
        Because AMD would like to see one.
        Their version is only 16.38 TFlops, you see.
        [url<]https://techreport.com/review/30037/amd-radeon-pro-duo-bridges-the-professional-consumer-divide[/url<]
        AMD is very curious about these rogue Radeon Pro Duo manufacturers.

          • Mr Bill
          • 3 years ago

          +++ You are positively correct sir! I have corrected my posts.

        • Andrew Lauritzen
        • 3 years ago

        Or, you could collect like 12 HD4870s from the dumpster and do it for FREE! Infinite perf/$!

      • JustAnEngineer
      • 3 years ago

      [quote=”cynan”<] Now that's a TITANic value! [/quote<]
      [url=https://www.youtube.com/watch?v=kgv7U3GYlDY<]What savings![/url<]

        • Mr Bill
        • 3 years ago

        nearly spewed my coffee, the link was just what I was thinking.

    • chuckula
    • 3 years ago

    So in a span of 67 days, from May 27 to August 2, Nvidia has released four new silicon designs for commercial sale, and five products when you count the GP104 variants.
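    (The span checks out against the 2016 calendar:)

    [code<]
    from datetime import date
    print((date(2016, 8, 2) - date(2016, 5, 27)).days)  # 67
    [/code<]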

      • anotherengineer
      • 3 years ago

      You mean TSMC and board partners have produced several new saleable products for Nvidia??

        • chuckula
        • 3 years ago

        And if AMD’s smart then TSMC and board partners will produce several new saleable Vega products for AMD to cut out GloFo.

          • anotherengineer
          • 3 years ago

          Maybe. But I guess that depends on whether they are smart, whether their design is done, and whether TSMC has some space and time for them.

          • raddude9
          • 3 years ago

          You're assuming that just because a process is better for GPUs, it's also better for CPUs. Why? Every company's process has its own set of advantages and disadvantages.

            • chuckula
            • 3 years ago

            [quote<]You're assuming that just because a process is better for GPUs, it's also better for CPUs.[/quote<]
            No, I'm not. I'm saying that Vega -- which is a GPU -- would be much better off if it's not starting with a noticeable disadvantage at GloFo.

            I never said anything about whether GloFo's process is worthwhile for CPUs. I never said anything about Zen. We can have that conversation closer to Zen's launch, when real information about its actual power consumption and performance is available.

    • WaltC
    • 3 years ago

    Sounds like the people who want to buy these things will need them for cerebral processing hookups to help their (obviously deficient) brains to think rationally…;) Talk about “sucker food,” nVidia can surely dish it out…;)

      • smilingcrow
      • 3 years ago

      So affluent gamers who buy the best are inherently irrational!
      People buying $1700 Intel CPUs when they aren't able to harness more than half of that performance may be clueless, but at least with a GPU you can harness it all, unless you're running at 1280×720. 🙂

      • xand
      • 3 years ago

      Why would you object to the existence of a better performing product, even if the price/performance ratio is worse?

      You do realise you don't have to buy it, and if you're right and no one should buy it, well, nVidia would probably stop updating the TITAN line.

      However, nVidia is still updating the TITAN line, so it looks like you’re wrong.

    • stefem
    • 3 years ago

    Those numbers are interesting; maybe NVIDIA will disable some parts of the chip.

    If this iteration of Pascal has 3584 CUDA cores partitioned into SMs still composed of 128 cores each, we end up with 28 SMs, which does not fit well with GPCs made of 5 SMs unless two of them are disabled.

      • jihadjoe
      • 3 years ago

      Check out the [url=http://videocardz.com/62649/nvidia-quadro-p6000-and-p5000-pictured<]Quadro P6000 info[/url<]. It's GP102 (presumably the same chip as this) but with 3840 CUDA cores. If that's the fully enabled chip, then that's 30 SMs, and it fits very well with 5 SMs per GPC.
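      The arithmetic, assuming GP102 keeps GP104's 128-cores-per-SM, 5-SMs-per-GPC organization (not confirmed at the time):

      [code<]
      # Hypothetical GP102 layout if Pascal's consumer SM/GPC ratios hold
      cores_per_sm, sms_per_gpc = 128, 5
      full_sms = 3840 // cores_per_sm       # 30 SMs -> 6 GPCs of 5 (Quadro P6000)
      titan_sms = 3584 // cores_per_sm      # 28 SMs, i.e. two SMs fused off
      print(full_sms, full_sms // sms_per_gpc, titan_sms)  # 30 6 28
      [/code<]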

    • leor
    • 3 years ago

    I wonder why they didn’t just go with HBM2 if they were already going this far.

      • PrincipalSkinner
      • 3 years ago

      HBM2 is rarer than hens tit nowadays. It’s next year’s high end memory.

        • Magic Hate Ball
        • 3 years ago

        I thought it was hen’s teeth.

        But yeah, you’re right.

          • PrincipalSkinner
          • 3 years ago

          Hens don’t have either so it makes it OK to say that.

    • Kougar
    • 3 years ago

    $1200 bucks but not even worth giving it its own model name…

    • I.S.T.
    • 3 years ago

    …What the hell? This was supposed to be the review for the GTX 1070, not the Pascal Titan X article?

    Dammit Firefox, what did you do now

    • anotherengineer
    • 3 years ago

    I guess Nvidia is able to push out all these cards so fast since they don’t have to share TSMC capacity with AMD???

      • chuckula
      • 3 years ago

      They get whatever leftovers TSMC has available after Apple does its thing.

        • MathMan
        • 3 years ago

        It doesn’t really work that way.

        A fab will always try to balance its allocation between customers. It's better to give all customers 90% of what they need and leave them somewhat disgruntled (they will stay) than to give priority to one very big customer and watch the others go to a competitor. (And for this generation, Samsung's process is a valid competitor.)

        It’s just smart business.

          • SHOES
          • 3 years ago

          I suspect that Chuckula is on point. While your theory (assuming you're guessing like the rest of us) seems logical, it seems to me that here in the real world, logic tends to escape most large corporate entities.

            • MathMan
            • 3 years ago

            I’m not entirely guessing. Whenever my company had a chip that became an unexpected high-runner and we ran out of wafers, our CEO would do the pilgrimage to Hsinchu to get more. And he’d always get a good part of what he was asking for.

            There’s also the fact that not all companies have peak demand at the same time. Even Apple will have periods where wafer demand is considerably lower than during peak. That’s where you need your other customers.

            Really large companies usually became really large because they were applying real-world logic better than their competitors, especially to their core product: the ability to deliver silicon to those who need it.

    • Chrispy_
    • 3 years ago

    Six years ago, Nvidia’s 500+mm^2 products sold for $350-500.

    This is what no competition at the high end causes. We saw it with Intel, and now Nvidia are taking the mickey too.

    People will argue that the 1080 and 1070 are the high end, but they're really not. They have die sizes and manufacturing costs similar to old midrange champions like the GTX 460 and 9800GT. Lack of competition from AMD means that Nvidia can get away with selling their midrange products for $699 now.

    $1200?
    Damn I miss healthy competition πŸ™

      • brucethemoose
      • 3 years ago

      To be fair, Nvidia gets to do this because they’re first to market.

      They do compete with AMD these days… they just do it months after their product launches.

      • chuckula
      • 3 years ago

      [quote<]Six years ago, Nvidia's 500+mm^2 products sold for $350-500.[/quote<] Yeah, but how much did 500 mm^2 of wafer space cost on a 40nm process 6 years ago?

        • Waco
        • 3 years ago

        Don’t bring logic into this!

      • maxxcool
      • 3 years ago

      The process technology to etch those wafers, however, is vastly different and much more expensive.

        • jihadjoe
        • 3 years ago

        This. Sometime during the whole 28nm thing, both Nvidia and AMD said something about cost per transistor only marginally coming down with each new process, because wafer costs keep going up node to node.

          • maxxcool
          • 3 years ago

          Indeed. It is very close to leveling off in terms of cost-to-performance gains. I imagine that unless something magical happens and you can align atoms on the cheap (that IS a joke), GPU costs are going to start going up for those in need of Moar-epenis in a few more years.

      • Blytz
      • 3 years ago

      Plenty of times flagship cards were $600-700 a decade ago, though. Pretty sure the cost of living has moved up to cover that difference.

        • Chrispy_
        • 3 years ago

        The G80 and G70 launched at $599; are you saying the cost of living has doubled since 2006?

          • f0d
          • 3 years ago

          there was the 8800 ultra at about $900

          edit: $830+
          [url<]http://www.anandtech.com/show/2222[/url<]

            • Chrispy_
            • 3 years ago

            a) That wasn’t a launch product for G80.
            b) That review you linked [i<]specifically criticizes the cost[/i<] for what is effectively a 6% or 10% overclock. You're right that it was expensive, it was Nvidia's first attempt at jacking up the price for no reason other than "they could". In my opinion, the reason "they could" be greedy was because their competition from ATi was already 7 months late with their 2900-series and leaked benchmarks were already showing that the 7-month-old GTX was still faster than the upcoming 2900XT. Which takes me nicely back to my original point; Without competition, Nvidia get unbelievably greedy. $1200 greedy in this case.

            • Voldenuit
            • 3 years ago

            [quote<]Which takes me nicely back to my original point: without competition, Nvidia get unbelievably greedy. $1200 greedy in this case.[/quote<]
            Without competition, anyone gets greedy. The dual-core Athlon 64 X2s were going for $300 back when Intel had no answer to them, and the 5850 and 5870 launched $100 higher than the 4850 and 4870 did.

            • f0d
            • 3 years ago

            $300?
            when they were released they were $537-$1001
            [url<]https://techreport.com/review/8295/amd-athlon-64-x2-processors[/url<]

            nvidia are just doing what any company would do if they had the chance; it's just that amd hasn't had the products to be able to price so high for a while now.
            you can bet that if amd had a superior solution that nvidia had no answer to, they would also price as high as they think they could sell them for

            • Voldenuit
            • 3 years ago

            Oof. That makes me feel really good about that Opteron 165 I got for ~$330 and OC’ed to 3 GHz, then.

          • smilingcrow
          • 3 years ago

          The Titan cards are akin to Intel's X99 platform: re-positioned workstation/server parts aimed at prosumers, so there is nothing from 10 years ago to compare with.

          Die sizes are irrelevant as a means of comparison.
          Look at high-end consumer CPUs today versus 10 years ago, and the die sizes are smaller today.
          It gets more expensive to produce on smaller nodes, so it's not surprising the chips are smaller.

          Of course competition would help consumers, but the market has changed from 10 years ago as well. Without the high-end prosumer chips putting a ceiling on pricing, it's possible that the high-end consumer chips would have drifted up even more in price, but right now that isn't really possible. And remember that Intel and AMD used to sell $1,000 consumer CPUs over 10 years ago.
          But Intel and Nvidia have both pushed the envelope this year, with prosumer chips now going well above $1,000, especially from Intel.
          The good news is that at least AMD has a chance of making some money at the high end when they release products, as that, in the long term, is what is needed to make them competitive, which will bring prices back down.
          So high prices now are a good thing! 🙂

      • smilingcrow
      • 3 years ago

      Considering that an iPhone 6s Plus 128GB is $949 direct from the manufacturer, this seems like good value in comparison.

    • chuckula
    • 3 years ago

    To those of you who don't like the price [me included], there's a simple solution: get AMD to actually launch Vega so the GTX-1080Ti can launch a month earlier.

      • jts888
      • 3 years ago

      Do we even have any hard information on whether there will be multiple Vega models at this point?

      Last I heard, the top (and possibly only) chip was stated to be HBM2, and I don’t see Vega coming out any time soon unless AMD wanted to push out a similar 384b GDDR5(X) variant.

        • chuckula
        • 3 years ago

        AMD has stated that there are two Vega chip models.

        Considering what Polaris does and doesn’t cover, I would guess that “small” Vega is their answer to the GP-104 and that “big” Vega is their answer to these GP-102 parts. Big Vega is pretty much confirmed to be HBM2 equipped, although I’m not sure if small Vega is also HBM2 or using GDDR5X.

        As a side note, it’s interesting to speculate on whether Polaris could support GDDR5X if you really wanted to push a version with that memory.

          • RAGEPRO
          • 3 years ago

          [quote<]As a side note, it's interesting to speculate on whether Polaris could support GDDR5X if you really wanted to push a version with that memory.[/quote<]Could be interesting for a refresh cycle if Globalfoundries' process improves. Polaris 10 at, say, 1.5 GHz or so could probably make use of the extra bandwidth from 10 Gbps memory. AMD should bring back the old ATI "Pro" branding.

            • chuckula
            • 3 years ago

            [quote<]AMD should bring back the old ATI "Pro" branding.[/quote<] Or even.... (β€’_β€’) ( β€’_β€’)>βŒβ– -β–  (βŒβ– _β– ) [url=http://www.cnet.com/products/ati-rage-pro-turbo-agp-2x-4mb-deskpro-en-series-compaq-cto-onl/specs/<]Rage Pro [/url<]

            • the
            • 3 years ago

            Rage Fury MAXX

          • Pancake
          • 3 years ago

          Vic and Vincent Vega.

    • torquer
    • 3 years ago

    Cool.

    • rudimentary_lathe
    • 3 years ago

    It’s OK R9 270, I still love you.

    • BurntMyBacon
    • 3 years ago

    The naming scheme is too obfuscated. From now on I shall refer to the titans as follows:
    Titan X[sub<]m[/sub<] - Maxwell
    Titan X[sub<]p[/sub<] - Pascal

      • DoomGuy64
      • 3 years ago

      Corsair Titan XMP Fatal1ty edition. Ram specifically tuned for gamers.

      • chuckula
      • 3 years ago

      Oooh, subscript in BBCode.
      Very slick.

    • PrincipalSkinner
    • 3 years ago

    They should have priced it at $1723 and been done with it.

      • BurntMyBacon
      • 3 years ago

      Are you sure they couldn’t have gotten $1726 out of it? Surely they can beat Intel.

      • cynan
      • 3 years ago

      Don’t worry. That’ll be the street price of the Founder’s Edition when it first hits.

        • smilingcrow
        • 3 years ago

        Maan, this ain’t no street card dude, it’s an exclusive VIP only, buy direct from the Source so you are gonna get some pure uncut shit maan.

        As for being uncut: since they are selling direct, nobody else is taking a cut of the profits, so on top of the $200 price hike NV also pockets the OEM and retailer cut.
        Wow, they will have a massive margin on these compared to an OEM card selling at a grand at retail.

    • Spunjji
    • 3 years ago

    This is what happens when we don't have competition at the high end. 😐

    Also, what’s with all the relativism? I keep seeing posts (not just here) working out some particular circumstance in which this looks like good value. Self-deception as an art form.

    Ah well. I can’t afford one, so I’m not the target market, so I guess I should just shut up and let the big boys play.

      • chuckula
      • 3 years ago

      Yeah, where were your comments on the Radeon Pro Duo that costs $300 more than this thing?

        • BurntMyBacon
        • 3 years ago

        That is also what happens when you don’t have anything with which to compete at the high end. ;’)

    • Tristan
    • 3 years ago

    Should be named Titan G (reedy)

    • bfar
    • 3 years ago

    So these will only be available directly from Nvidia? Sounds to me like they won't (or can't) make many of these for general distribution. A halo product in virtually every respect.

      • stefem
      • 3 years ago

      Or maybe they want to avoid price inflation by retailers while obtaining a higher margin in the process.

      • smilingcrow
      • 3 years ago

      Unless it has good FP64 performance, it's kinda niche at this price.
      So it seems a smart move to sell direct and reap the profits from the small pool of buyers.

      Nvidia maximises profits and they are greedy.
      AMD flirts with financial disaster through incompetence and they are the good guys.
      Rewarding failure is not a good attitude.

        • stefem
        • 3 years ago

        It will be a great card for deep learning (that's why they presented it at a deep learning developer event) and will be really fast for gaming too.

          • jts888
          • 3 years ago

          Using individual workstation cards for DL is dubious at best.

          99+% of users will never, ever need it, and those who do would be better served by a chassis or rack full of enterprise grade cards.
          The rare slice of people who primarily game but want to maybe play around with DL stuff once or twice aren’t really going to care about getting a 2x speedup by using fp16 calculations anyway.

            • stefem
            • 3 years ago

            It is not; people are using even Tegra processors for deep learning.
            Also, the card (like the rest of the Pascal consumer line-up) doesn't offer double-rate FP16; it runs FP16 at 1/128 rate.
            But given the incredible INT8 performance, this card will be a perfect match for deep learning inferencing.

    • Unknown-Error
    • 3 years ago

    Insane specs, but a confusing name.

      • JustAnEngineer
      • 3 years ago

      Some marketing jerk at NVidia deserves bad things for this latest product naming shenanigan. NVidia has long mastered the art of intentionally-confusing product names, but this may take the cake.

        • slowriot
        • 3 years ago

        I think the recent "Adaptive VSync" feature takes the cake. I've yet to see someone get that 100% right. By that I mean I've seen people describe the right feature with the wrong name and/or mix it up with Adaptive Sync. Frankly, I think it was Nvidia's intention to cause confusion. They could have used "Dynamic VSync" or, more accurate than either, "Selective VSync."

      • Liron
      • 3 years ago

      To be fair, the bad name was the last generation, which should have been the Titan IX; here the card just finally caught up with its proper name.

      • travbrad
      • 3 years ago

      They should have called it Titan ONE. That seems to be the thing to do these days (Xbox One, Battlefield 1, etc)

      • chuckula
      • 3 years ago

      Well, Nvidia has said that they aren’t trying to destroy AMD anymore.

      It looks like they are now intruding into Intel territory with an assault on Intel’s bastion of confusing names.

    • mkk
    • 3 years ago

    Queue them in pairs for them Youtubers.

    1060/480 Festivus, for the rest of us.

    • Zizy
    • 3 years ago

    Cheap. I thought the Titan of this gen was going to be at least $1500 if not $2000, given the insane prices of other cards. This is possibly better perf/$ than the current prices of the 1080, and you get a higher-end GPU plus some "pro" features (INT8, hopefully FP16 too) as well.
    But I don't expect to see a 1080Ti given the price of this one.

      • Airmantharp
      • 3 years ago

      Why not?

      There’s still plenty of room in the lineup.

    • DancinJack
    • 3 years ago

    NOW LET’S SEE DAT 1080Ti WOOOOOOOOOOOO

      • HisDivineOrder
      • 3 years ago

      That’s what a new Titan means. I guess the 1080 sales dried up and it was time to try to lure in the holdouts with the dream of a 1080 Ti…

      Elsewhere, 1080 buyers snap their fingers and mutter, “Had highest end for a whole month!”

    • ClickClick5
    • 3 years ago

    Now in the Nintendo NX.

      • sweatshopking
      • 3 years ago

      The new, slower-than-Xbone, casual-players-only (according to Ubisoft and Nintendo) cheap console? I doubt it.

        • tipoo
        • 3 years ago

        If it has Polaris and roughly the GPU execution power of the first XBO, I'll be pleasantly surprised. After all the Wii U speculation threads of "1 TFLOP, minimum, no way it's lower… They can't even buy a part under 600 GFLOPS anymore… OK, here's the die shot, 300, right, can't be lower? …And it's 172 GFLOPS. Crap." I'm keeping expectations low in hopes of being pleasantly surprised.

        Matching the first iteration of these consoles would still put them in a better spot, power-wise, than they have been in years. The Wii U GPU was over 10x slower than the PS4's. If the PS4.5 is 2.2x as powerful as the PS4, and the NX lands between the XBO and PS4, that'll be closer than anything since the Gamecube.

          • sweatshopking
          • 3 years ago

          Yeah, but it’ll still suck

            • Kretschmer
            • 3 years ago

            Sorry, but your opinions carry little weight WITHOUT CAPS.

            • sweatshopking
            • 3 years ago

            NX IS DUMB AND WILL ALMOST CERTAINLY BE TERRIBLE.

    • ultima_trev
    • 3 years ago

    I hope GP102 overclocks well, because a 2 GHz GTX 1080 would be pretty darn close to this in terms of theoretical performance.

      • bfar
      • 3 years ago

      Well, the 1070 can boost up to 2GHz on its own, but a 1080 can't. I wouldn't hold out hope for the bigger chip.

        • floodo1
        • 3 years ago

        My 1080 runs at around 2150MHz.

        • jihadjoe
        • 3 years ago

        TPU has tested a bunch of custom 1080s, and every single one has reached 2GHz. Actually every single Pascal card they had has reached 2GHz.

        GPU | Core | Memory
        MSI GeForce GTX 1060 Gaming X 6 GB | 2139 MHz | 2435 MHz
        NVIDIA GTX 1070 FE | 2101 MHz | 2380 MHz
        EVGA GTX 1070 SC | 2088 MHz | 2370 MHz
        MSI GTX 1070 Gaming X | 2101 MHz | 2420 MHz
        NVIDIA GTX 1070 FE | 2088 MHz | 2330 MHz
        Palit GTX 1080 GameRock | 2114 MHz | 1400 MHz
        ASUS GTX 1080 STRIX | 2114 MHz | 1400 MHz
        Gigabyte GTX 1080 G1 Gaming | 2050 MHz | 1405 MHz
        MSI GTX 1080 Gaming X | 2050 MHz | 1400 MHz
        NVIDIA GTX 1080 FE | 2114 MHz | 1450 MHz

          • HisDivineOrder
          • 3 years ago

          The power of manufacturer-supplied cards at work in the real world.

          • smilingcrow
          • 3 years ago

          Are those peak or sustained frequencies? That must make a difference.
          I'm not up on modern GPU OCing, which is why I ask.

            • jihadjoe
            • 3 years ago

            Peak for the FEs, I think, because they hit 83°C, but sustained for the custom cards if you raise the power limit. Thermal throttling happens once the card exceeds 82°C.

            Here’s a youtube review showing sustained 2GHz OC on an EVGA 1080FE while playing Witcher 3. The reviewer raised the fan speed to 80% to keep temps just under 82Β°C and avoid throttling:
            [url<]https://www.youtube.com/watch?v=YCQcdeS4qFg[/url<]

    • sweatshopking
    • 3 years ago

    Meh

      • f0d
      • 3 years ago

      krogoth?
      did you take over SSK’s account?
      totally unimpressed and no caps – something dodgy is going on here

    • djayjp
    • 3 years ago

    Obviously the (relatively) low FLOPS (despite a much higher core count and similar clock speed) means this emphasizes double precision and is not a gaming card (hence the conspicuous lack of GeForce branding). Don't get all hot n bothered, guys. Wait for the GTX 1080Ti at the end of the year or so to fight Vega for much less money and higher performance.

    • evilpaul
    • 3 years ago

    Those marked up 1080s just became a better value!

      • Spunjji
      • 3 years ago

      It looks even better if you compare it to the previous gen Titan X and pretend there was no die shrink in between, too πŸ˜€

        • BurntMyBacon
        • 3 years ago

        There was a die shrink!?! [b<]Hogwash!!![/b<]

    • brucethemoose
    • 3 years ago

    Any word on how big the GP102 die is, and how much this is cut down (if at all)?

    • Vaughn
    • 3 years ago

    Come on Nvidia that price is too low surely you can do better.

      • brucethemoose
      • 3 years ago

      Don’t worry. It’ll be perpetually OOS at first, and you’ll get a nice $200+ markup on top of that.

    • wingless
    • 3 years ago

    I was fine until I read the price. TWELVE….HUNNED….HOLY….S***!

      • jihadjoe
      • 3 years ago

      You know what? It doesn’t actually sound that expensive to me, so I guess the intended effect of Founder’s Edition pricing has worked.

        • Airmantharp
        • 3 years ago

        Me neither, but this is half gamer card and half professional card.

        Four-digit price tags on professional gear aren't unheard of, and this is just barely that. Look up the price for a comparable Quadro or Tesla, and get back to me ;).

          • Spunjji
          • 3 years ago

          Half gamer card, half pro card, half absurd overpriced rip-off…

            • Airmantharp
            • 3 years ago

            There are situations where having this much power on a single gaming card can make sense- and while it is certainly expensive, you’re spending as much (or more) on what you’re driving with it.

            Ditto for professional uses; in that world, it’s a downright bargain!

          • jihadjoe
          • 3 years ago

          Agreed, if this is 1/2DP and you need the compute then it’s actually a bargain.

          IIRC even entry level FirePros and Quadros are at least $4000.

            • chuckula
            • 3 years ago

            If this thing had 1/2DP then my guess is Nvidia would have broadcast it loud & clear in the marketing announcement. My initial guess is that it’s also 1/32 DP just like regular Pascal cards.

            Now, there might be a cut-down version of the GP100 that does the 1/2DP coming at some point in the future, but that’s probably a 2017 product.

            • Jeff Kampman
            • 3 years ago

            The transistor count suggests this is not going to be a 1/2 DP card.

    • LocalCitizen
    • 3 years ago

    where does that leave the 1080ti?
    3072:192:96 with 384-bit 10GHz GDDR5X for $699 (non-FE) in October?

      • ImSpartacus
      • 3 years ago

      I don't think any of that has been confirmed.

      It’s fair to assume that a 1080 Ti will eventually exist and it’ll likely be gp102-based, but I don’t believe that we have any solid info on price or configuration.

      • guardianl
      • 3 years ago

      1080 Ti –

      3072:192:96 with 384bit 10GHz GDDR5x (maybe with 320 bit bus)
      Approx. 8 GB RAM (Micron has GDDR5X modules with power of 2 + 50% sizes, they didn’t make those just hoping someone would use them)
      $899 with $999 FE

      I would pretty much bet money on those specs/price.

        • LocalCitizen
        • 3 years ago

        1080ti details are not set because they're waiting for vega info to show, and will then tweak

        power of 2 + 50% is interesting. 8GB is the minimum (since the 1080 has 8) and 12GB might be too much (since the Titan has 12GB), so if it's 6GB + 50% = 9GB, that's a good number.

        i think 899 / 999 is too high, but it all depends on vega.

        also, pascal titan x is clearly a limited production product right now, thus only available from nvidia.com.

        it's interesting to wonder why gp102 is available so soon. there's no pressure from amd. TSMC 16nm yield is probably fabulous. nvidia can supply all 3 chips if vega challenges. (4 chips including gp100)

        of course nvidia wants to delay 1080ti as much as possible because gp104 is selling so well right now, there’s no need to lower their prices.

        changing my prediction a bit:
        vega comes out Q1 ’17,
        1080ti out around the same time at between 699 and 799 (non FE),
        price cuts on 1080, 1070
        price cuts on titan x pascal, if vega challenges.
        1080ti has 9GB if it uses Micron power of 2 + 50% chips
        or has 10GB on 320 bit bus.

        surprise release of titan x pascal is a move against ati in a game only they understand

          • ImSpartacus
          • 3 years ago

          9GB of VRAM on the 1080 Ti is unlikely for a couple reasons.

          If you use 12Gb GDDR5X on a crazily cut-down 192-bit bus (6 chips), then you get to 9GB of VRAM, but you only have 240 GB/s of bandwidth with 10 Gb/s GDDR5X. If Micron or someone somehow gets 11 or 12 Gb/s chips into mass production, that's still a max of 288 GB/s. That would be enough bandwidth for something like a 1070.

          If you use 6Gb GDDR5X on a 384-bit bus (12 chips), then you get to 9GB of VRAM, but then you look at everything and notice that it’s identical to the Titan X’s configuration except you’re using 6Gb chips rather than 8Gb chips. Same bandwidth, etc. So you naturally go, “Why not just use the slightly older (and therefore not super expensive) 8Gb GDDR5X chips?”

          Oh yeah, and then you notice that Micron, Sammy & Hynix aren’t showing 6Gb or 12Gb GDDR5X in their catalogs yet, so there’s that…

          Otherwise, I feel ya. 10GB of GDDR5X on a 320-bit bus is plausible and Nvidia will probably delay the 1080 Ti as long as possible so as to ruin AMD’s day weeks before big Vega drops just like the 980 Ti and Fury X.

        • ImSpartacus
        • 3 years ago

        I was actually also on the 320-bit bandwagon earlier today, because it would allow a 10GB VRAM capacity that would fit cleanly between the 1080's 8GB and the Titan X's 12GB. It would also provide a perfectly usable 400 GB/s of bandwidth.

        However, Nvidia could use much cheaper and more easily available GDDR5 on a 384-bit bus, get 384 GB/s (almost 400 GB/s!), and have 12GB of VRAM capacity.

        I don’t think we’ll see the 6Gb or 12Gb GDDR5X yet. Yes, Micron [url=http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory<]plans to make them[/url<]. However, [url=https://www.micron.com/products/dram/gddr/gddr5-part-catalog#/<]they don't show up on Micron's catalog at all[/url<] (not even as "sampling"). The only GDDR5X is in the 8Gb capacity (10 GB/s, 11 GB/s & 12 GB/s). Also, what if big Vega drops with three stacks of HBM? That would allow for 12GB of VRAM. No way Nvidia would risk being outdone by that.
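        For reference, a quick sketch of the capacity/bandwidth arithmetic behind the configurations discussed in this thread (GDDR5/5X chips are 32 bits wide, so chip count = bus width / 32):

        [code<]
        # (density per chip in Gb, bus width in bits, per-pin rate in Gb/s)
        configs = [("12Gb GDDR5X, 192-bit", 12, 192, 10),
                   ("6Gb GDDR5X, 384-bit",   6, 384, 10),
                   ("8Gb GDDR5X, 320-bit",   8, 320, 10),
                   ("8Gb GDDR5,  384-bit",   8, 384,  8)]
        for label, density, bus, rate in configs:
            chips = bus // 32
            print(label, chips * density / 8, "GB,", bus / 8 * rate, "GB/s")
        # 9 GB/240 GB/s, 9 GB/480 GB/s, 10 GB/400 GB/s, 12 GB/384 GB/s
        [/code<]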

    • DrCR
    • 3 years ago

    A pretty good deal for a durable good that will outlast your lifetime and your great-grandkids could use someday.

      • JustAnEngineer
      • 3 years ago

      Note that the comments do not have a [sarcasm] tag.

      On the other hand, this new Titan X (Pascal) will be a [b<]better[/b<] relative gaming value than the old Titan X (Maxwell) that they are still selling. For compute, there may be better alternatives. NVidia drastically reduced the compute capabilities in their GPUs when they went from Kepler (Titan Z, for those keeping track) to Maxwell.

    • chuckula
    • 3 years ago

    Somebody needs to tell Nvidia that those shield tablets make lousy web servers.

    OK: Finally got the blog to load with the specs:

    So forget words. Here are its numbers:

    11 TFLOPS FP32
    44 TOPS INT8 (new deep learning inferencing instruction)
    12B transistors
    3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
    Up to 60% faster performance than previous TITAN X
    High performance engineering for maximum overclocking
    12 GB of GDDR5X memory (480 GB/s)

    All of this happened because some guy was drinking with Jen Hsun and made a bet or something.

    Actually, the raw numbers are an OK step up from a GTX-1080, but the single-precision TFLOP count is only 22% higher than the stock-clocked GTX-1080's. Bigger improvements (50%) come in memory bandwidth and even moar capacity. The 8-bit integer instructions are also a gimmick for weight values at nodes in deep convolutional networks; they aren't going to help your games run any faster.
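    Putting rough numbers on that step-up (using Nvidia's headline figures for both cards):

    [code<]
    # New Titan X vs. stock GTX 1080, headline peaks
    titan = {"TFLOPS": 11.0, "GB/s": 480, "VRAM GB": 12}
    gtx1080 = {"TFLOPS": 9.0, "GB/s": 320, "VRAM GB": 8}
    for key in titan:
        print(key, f"+{(titan[key] / gtx1080[key] - 1) * 100:.0f}%")
    # TFLOPS +22%, bandwidth +50%, capacity +50%
    [/code<]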

      • f0d
      • 3 years ago

      trading in your 1080 for one?
      if you do, can i have your address and what times you won't be home?

        • chuckula
        • 3 years ago

        Lol, I’m not trading in for that.

        I’m actually quite happy with the GTX 1080 that cost me exactly what a GTX-980Ti or R9 Fury X cost about 6 weeks ago and even has a factory OC.

        We all knew that “big” Pascal would show up and that it would cost a boatload when it did. The only real surprise here is that it showed up sooner than most people (myself included) actually expected.

          • Spunjji
          • 3 years ago

          And it cost more than expected, too (attempts at shilling by NVIDIA fans floating even more ludicrous prices to "fall back from" notwithstanding).

            • chuckula
            • 3 years ago

            Radeon Pro Duo: $1500
            Titan X 2.0: $1200

            I don’t have a problem with you saying that the new Titan X is overpriced.
            I do have a massive problem with you pretending that only Nvidia charges too much for a product, especially when the $1200 Titan X is a bargain compared to its competition from a certain company that apparently can’t be criticized because they released one part for one market segment that’s not a bad value while being an actively bad value everywhere else.

            • BurntMyBacon
            • 3 years ago

            About this: [quote<]I do have a massive problem with you pretending that only Nvidia charges too much for a product ...[/quote<]
            To be fair, there is no mention of any other company's products being a bad or a good value in this post, but I'll let Spunjji make his own case. As for myself, I'm an equal opportunity criticizer. The fact that there is a much worse value to be had in no way changes the value of the product in question. It only changes the perception of that value by the people comparing it to the other product.

            • bjm
            • 3 years ago

            Hey look, the pot is calling the kettle black.

            • DoomGuy64
            • 3 years ago

            The Pro Duo is a dual chip on a stick with a watercooler. Not exactly cheap to make, so I wouldn’t be calling it overpriced compared to a Titan X. They are both expensive, but the Duo is clearly more expensive to make. I’m not saying anything about the Duo is positive, just that it is obviously more expensive to make.

            They’re both overpriced niche products which have no business being compared to mainstream cards. People buy these because they’re either someone with money to burn, or a professional who can write off these cards as a business expense. At least the Pro Duo lets you use AMD’s workstation drivers, dunno if Nvidia lets you do the same.

            • Mr Bill
            • 3 years ago

            [quote<] At least the Pro Duo lets you use AMD's workstation drivers[/quote<] That certainly makes the cost worthwhile for a "prosumer" compared to an actual workstation card like the $6000 [url=http://www.anandtech.com/show/10209/amd-announces-firepro-s9300-x2<]AMD FireProβ„’ S9300 x2 Server GPU[/url<].

            • Mr Bill
            • 3 years ago

            A 25% higher cost for 50% more FP32 performance. As the other posts make clear, I was not suggesting the Pro Duo was a better gaming card.

            edit: misread TFLOPS, now corrected. Thanks, Chuck

            • chuckula
            • 3 years ago

            You just keep on believing that.

            [Edit: I’m not even going into your assumptions about how well the Fury Pro actually performs BTW, but your numbers are literally not right. The Fury Pro isn’t an 18.5 TFlop card, it’s a 16.38 TFlop card according to AMD’s own advertising.

            [url<]https://techreport.com/review/30037/amd-radeon-pro-duo-bridges-the-professional-consumer-divide[/url<] ]

            • Mr Bill
            • 3 years ago

            +++ because I like your positive attitude.

            • Mr Bill
            • 3 years ago

            My bad, I misread the table. It's corrected now. Thanks again for your positive attitude.

            • anotherengineer
            • 3 years ago

            Two Chevy Malibus will be more money than one Camaro and have less performance.

            I find in the silicon world you pay for performance and also for quantity; that is, unless fabs are now offering buy one wafer, get one wafer free?? High-end products, niche products, etc. always have a big e-peen tax on them.

            • HisDivineOrder
            • 3 years ago

            Did it really surprise you? Really?

            Because it didn’t surprise me. Rumors had pegged it anywhere between $1200-1500. Plus, what has nVidia released lately that’s actually hit MSRP?

            The 1080 came out, promised at $599, but it’s really $699. The 1070 came out, promised at $379, but it’s really $449. The 1060 came out, promised at $249, but it’s really $299.

            The only thing truly surprising about the Titan is if it actually releases at that $1200 price point and doesn’t sneak up to $1400 “just cuz.”

        • Srsly_Bro
        • 3 years ago

        It won’t matter, without organs you aren’t going far.

      • stefem
      • 3 years ago

      Are INT8 OPS numbers available for other Pascal GPUs?

        • chuckula
        • 3 years ago

        I'm honestly not sure. I think they would be, unless this is a slightly different version of the Pascal architecture that turns on those instructions while they aren't activated in other Pascal releases.

        All they are doing is byte-length integer operations [just like a 1980s NES did], and from what little I know of convolutional networks, it's almost all simple addition/subtraction operations that modify the "weight" values of different nodes in the deep convolutional networks that are popular in AI.

        • MathMan
        • 3 years ago

        GP104 has the same DP2A and DP4A instructions, at the same speed factor relative to FP32 as GP102.
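        For anyone wondering what these instructions do, here's a rough Python sketch of DP4A's semantics (illustrative only; the real instruction takes four 8-bit values packed into each 32-bit register and handles signedness explicitly):

        [code<]
        # DP4A: 4-way dot product of packed 8-bit ints, accumulated into a
        # 32-bit int, in one instruction
        def dp4a(a, b, c):
            assert len(a) == len(b) == 4
            return c + sum(x * y for x, y in zip(a, b))

        print(dp4a([1, 2, 3, 4], [10, 20, 30, 40], 0))  # 300
        # Four multiply-adds per instruction is how an 11 TFLOPS FP32 part
        # can advertise 44 TOPS of INT8 (a 4x factor)
        [/code<]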

          • stefem
          • 3 years ago

          Nice, didn’t know that

      • tipoo
      • 3 years ago

      “High performance engineering for maximum overclocking”

      HIGH PERFORMANCE ENGINEERS, NOW ON POWER THIRST

    • bjm
    • 3 years ago

    I wasn’t keeping track of the rumors, but I’m impressed at how quickly they’re going to be shipping this. August 2nd! I’m aiming to build a new system toward the end of this year and this is going to be on the top of my list.

      • beck2448
      • 3 years ago

      The one percenters will buy every last one or two because they can. I don’t like SLI but even so it will put up ridiculous numbers.

      • Spunjji
      • 3 years ago

      You are part of the problem. Thank you.

        • bjm
        • 3 years ago

        Ya know, I’ve never understood this point of view. I’ve only ever bought low to midrange hardware up until now. During all that time, never did I think that those who bought the top-end hardware were a “problem” because they could afford the hardware that I could not. But hey, not my problem. πŸ™‚

          • BurntMyBacon
          • 3 years ago

          It's a problem for those who buy at the high end: it encourages nVidia to keep pushing the limits of how much they can charge. For those who buy the (relatively) cheaper, lower-margin cards that don't have as much wiggle room, it can actually be a good thing, as every higher-margin sale subsidizes the R&D costs that went into both products.

        • Kretschmer
        • 3 years ago

        What, he’s problematically subsidizing the R&D of our mid-range cards with his huge margins? Good luck and godspeed, my pecuniary-heavy friend!

      • nanoflower
      • 3 years ago

      Um, who said they will be shipping this? πŸ˜‰

      Seriously, given the trouble they have keeping 1070s/1080s in stock, I expect the Titan X to be in short supply for at least a few months. If you are waiting till the end of the year to purchase one, then maybe you won't have any problems, but the people trying to find one at launch are likely to have serious issues.

        • bjm
        • 3 years ago

        Well, according to nVidia's own announcement, their store will have them available on August 2nd. But true, availability will likely be scarce. I'm aiming for a December build, but still not quite sure of the specs.

        I probably jumped the gun a bit on my original post, but being that Vega has a vague rumored release date of “1H 2017”, I’m not sure if I’m willing to wait.

    • LocalCitizen
    • 3 years ago

    ooo aaah
    but unfortunate timing tho. (almost) everyone is focused on that guy with the shiny hair.

    • CScottG
    • 3 years ago

    Oh yes. The scalping will be mighty with this one.

    • travbrad
    • 3 years ago

    [url=http://vignette3.wikia.nocookie.net/2001/images/4/41/A_itmes.jpg/revision/latest?cb=20111116211153<]It's finally here![/url<]

    • chuckula
    • 3 years ago

    I think Jen Hsun just dropped the mic and walked off the stage.

      • cynan
      • 3 years ago

      And enthusiasts holding out for any sort of price/performance value in the high-end GPU segment just dropped their wallets back into their pockets and walked off to vainly queue for a 1080Ti.

      • anotherengineer
      • 3 years ago

      Not sure if it’s worth the “slow clap” or not.

      Think I will wait for the Pascal Titan X Duo, x2 in SLI.

      Then he can drop the mic and walk off the stage 😉

    • derFunkenstein
    • 3 years ago

    Ooh… *boing*

    *embarrassed giggling*

    Yes this is vaguely fanboyish but my goodness. I would sell f0d’s remaining body parts for one just to have it.

    • f0d
    • 3 years ago

    here take this arm and a leg and a kidney and a lung
    ill take one

      • chuckula
      • 3 years ago

      PLEASE FOR THE LOVE OF F0D’S ORGANS TELL US THAT NGREEDIA TOOK OUT THE SLI SUPPORT!

        • CScottG
        • 3 years ago

        ALL SHALL KNEEL BEFORE f0d! ..or what’s left of him. (ewww.)

        • Neutronbeam
        • 3 years ago

        SSK know you're still doing this? You're co-opting his brand!

          • sweatshopking
          • 3 years ago

          IMITATION IS THE HIGHEST FORM OF FLATTERY. HE LOVES ME. IT IS HOW HE SHOWS IT.

        • DPete27
        • 3 years ago

        If the arm, leg, kidney, and lung were all from the same side, then surely they took f0d’s SLI support.

        • curtisb
        • 3 years ago

        Nope…click the “View full specs” link on this page:

        [url<]http://www.geforce.com/hardware/10series/titan-x-pascal[/url<]

        It's listed as supported. There goes his other arm, leg, kidney, and lung.

      • BurntMyBacon
      • 3 years ago

      You dropped your spleen. You can use it for a water block or something.

      • Liron
      • 3 years ago

      Wouldn’t it be more practical to give them both legs and keep the arm, for when you use the card?

      • CuttinHobo
      • 3 years ago

      It’s just a flesh wound…
