
Nvidia unveils a Pascal-powered Titan X with 11 TFLOPS on tap

Nvidia unveiled the Pascal-powered version of its Titan X uber-card this evening. The new card features 3584 stream processors running at a 1417MHz base clock and a 1531MHz boost clock. The company promises 11 TFLOPS of single-precision performance from a board power of 250W.

While details of the silicon itself are light right now, this new Titan X is without a doubt our first look at a "bigger Pascal" chip, which the press has dubbed GP102 in the absence of more info. That GPU has a 384-bit path to 12GB of GDDR5X memory running at 10 GT/s. Nvidia says it built this chip with 12 billion transistors on the same 16-nm FinFET process it's using to fabricate its other consumer Pascal cards. The new Titan X will be available August 2 for $1200 from Nvidia's online store.
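As a sanity check, the quoted 11 TFLOPS and the memory bandwidth both fall out of simple arithmetic on the published specs. A quick sketch (the function names are ours, and it assumes the usual convention of two FP32 operations per CUDA core per clock via fused multiply-add):

```python
# Back-of-the-envelope math on the published Titan X specs.
# Assumes 2 FP32 ops per CUDA core per clock (FMA counting convention).

def peak_fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

def memory_bandwidth_gbps(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_width_bits / 8 * transfer_rate_gtps

print(round(peak_fp32_tflops(3584, 1531), 2))  # 10.97 -- the "11 TFLOPS" claim
print(memory_bandwidth_gbps(384, 10))          # 480.0 GB/s
```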

Responses to “Nvidia unveils a Pascal-powered Titan X with 11 TFLOPS on tap”

  1. It is not; people are using even Tegra processors for deep learning.
    Also, the card (like the rest of the Pascal consumer line-up) doesn’t offer double-rate FP16 performance; FP16 runs at 1/128 rate.
    But given the incredible INT8 performance, this card will be a perfect match for deep-learning inferencing.

  2. Ah, [url=<]they do[/url<] and it’s free.

  3. That’s what I was getting at. Some users will just want as much compute grunt as they can get without paying for Firepro support and driver testing, as they may not even use professional programs, rather, doing more custom/academic things themselves.

  4. You do need professional drivers and firmware if you want to use professional-tier software, though. The license cost for said software is much greater than the cost of a Quadro/Tesla card.

    Titan makes more sense for general compute hobbyists and students, where the cost of a genuine Quadro/Tesla/FirePro is too much while regular GeForces/Radeons don’t cut it.

  5. Check out the [url=<]Quadro P6000 info[/url<]. GP102 (presumably the same chip as this) but with 3840 CUDA cores. If that's the fully enabled chip, then that's 30 SMs, which fits very well with 5 SMs per GPC.

  6. [url<][/url<] "Finally, NVIDIA has clarified the branding a bit. Despite labeling it "the world’s ultimate graphics card," NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming. Though gaming is certainly possible - and I fully expect they'll be happy to sell you $1200 gaming cards - the tables have essentially been flipped from the past Titan cards, where they were treated as gaming first and compute second. This of course opens the door to a proper GeForce branded GP102 card later on, possibly with neutered INT8 support to enforce the market segmentation." So a little of both, they do market it as a gaming card, but they expect a lot of compute use. You don't need pro drivers for all compute work.

  7. So, Nvidia says the GP102 chip in this card is 471mm2. Apparently the size differences between GP100 and GP102 are primarily the HPC features (NVLink, FP16/FP64 etc). Nvidia says the primary focus of this chip is FP32 and INT8. /shrug

    So not really the Titan incarnations we’ve seen in the past.

  8. I’m not entirely guessing. Whenever my company had a chip that became an unexpected high-runner and we ran out of wafers, our CEO would do the pilgrimage to Hsinchu to get more. And he’d always get a good part of what he was asking for.

    There’s also the fact that not all companies have peak demand at the same time. Even Apple will have periods where wafer demand is considerably lower than during peak. That’s where you need your other customers.

    Really large companies usually became really large because they were using real world logic better than their competitors. Especially when applied to their core product: the ability to deliver silicon to those who need it.

  9. Indeed. It is very close to leveling off in terms of cost-to-performance gains. I imagine that unless something magical happens and you can align atoms on the cheap (that IS a joke), GPU costs are going to start going up for those in need of Moar-epenis in a few more years.

  10. I suspect that Chuckula is on point. While your theory (assuming you’re guessing like the rest of us) seems logical, it seems to me that here in the real world, logic tends to escape most large corporate entities.

  11. It doesn’t really work that way.

    A fab will always try to balance its allocation between customers. It’s better to leave all customers somewhat disgruntled with 90% of what they need (they will stay) than to give priority to a very big one and watch the others go to a competitor. (And for this generation, the Samsung process is a valid competitor.)

    It’s just smart business.

  12. Here’s what happens when a leading deep learning researcher at a very big company needs a bit more calculation power: he buys *another* 80 Titan X.

    [url<][/url<] A bit further down he mentions that he has 2 PFLOPS of Titan X FP32 compute power. That's 300+ Titan X machines.

  13. Apparently AMD really does have a lock on HBM1 availability and maybe first pick at HBM2.

  14. Oof. That makes me feel really good about that Opteron 165 I got for ~$330 and OC’ed to 3 GHz, then.

  15. Yeah, I think Titan is just Nvidia’s halo card at this point. It no longer really makes sense for prosumers in the way that the original Titan did.

  16. You’re surprised? Really?

    We’ve known that GP102 would exist for over six months and given that knowledge, people have predicted a 384-bit GDDR5X setup for around 4-5 months now. Once GP104 had a 256-bit GDDR5X setup, it basically sealed the deal.

    It provides enough bandwidth while fitting into a setup very similar to the last 250W GPU.

    HBM is just too expensive & supply-limited to catch on in the consumer market. AMD is going to try it with Vega, but they haven’t exactly been executing very well as of late, so I’m a little nervous for them.

  17. [quote<]You're assuming that just because a process is better for GPU's it's also better for CPU's.[/quote<] No I'm not. I'm saying that Vega -- which is a GPU -- would be much better off if it's not starting with a noticeable disadvantage at GloFo. I never said anything about whether GloFo's process is worthwhile for CPUs. I never said anything about Zen. We can have that conversation closer to Zen's launch, when real information about what its actual power consumption and performance numbers look like is available.

  18. It’s not even silly money if you think of it in context of other hobbies. Track days, for instance. [url=<]Good trackday tires[/url<] easily go for over $250 a corner, and you can finish off a set in just two track days. One if you're really going for it.

  19. This. Sometime during the whole 28nm thing both Nvidia and AMD said something about cost per transistor only marginally coming down with each new process because the wafer cost keeps going up node to node.

  20. You’re assuming that just because a process is better for GPUs it’s also better for CPUs. Why? Every company’s process has its own set of advantages and disadvantages.

  21. $300?
    When they were released, they were $537-$1001.
    [url<][/url<] Nvidia are just doing what any company would do if they had the chance; it's just that AMD hasn't had the products to be able to price so high for a while. You can bet that if AMD had a superior solution that Nvidia had no answer to, they would also price as high as they think they could sell them for.

  22. [quote<]Which takes me nicely back to my original point; Without competition, Nvidia get unbelievably greedy. $1200 greedy in this case.[/quote<] Without competition, anyone gets greedy. The dual-core Athlon X64s were going for $300 back when intel had no answer to them, and the 5850 and 5870 were $100 more expensive than the 4850 and 4870 launched at.

  23. You aren’t paying for better silicon, you are paying for a certification process for the CAD software.

    CAD developers don’t care about FPS (within reason) or other metrics that you care about when playing games. They care about making sure that what they see on the screen is absolutely an accurate representation of what they have designed with a very high level of precision.

  24. Depending on the range of Quadros, they can be pretty much the same silicon as Geforces, but with Quadro drivers tested against pro applications. You also get pro level customer support, you don’t want to be waiting on consumer customer support if each hour is costing you a lot of money.

    On the higher end of Quadros, they stop disabling a lot more of the compute capabilities and have a higher double precision rate than consumer cards, and some have ECC memory.

    The original Titan had the same double-precision rate (1/3rd) as high-end Quadros, as I recall, but then the Maxwell generation cut out a lot of the compute – see the newer Titan X getting whooped by the older Titan in DP below:


  25. That’s what I was wondering. CAD, workstation. I have a brother-in-law who has purchased Quadro cards for many years; he does a lot of professional CAD work. His cards have always cost 1k or more.

    I’ve never understood how quadro cards differ from geforce cards; supposedly they use the same gpu.

  26. The pascal one should do better than the last two gens here, but we’ll see with reviews. The original GTX Titan does still stomp the X though.

  27. Titans were originally pitched as an “ultra-high-end” gaming GPU that used silicon that didn’t meet Quadro/Tesla standards.

    It is akin to “Extreme Edition” chips from Intel which are just Xeons that didn’t hit TDP/clockspeed grades that Intel wanted.

    It is just a happy accident that Nvidia left most of the compute logic intact for Kepler Titans (just lacks ECC support and certifications for CAD and other professional 3D applications).

    Maxwell Titans, on the other hand, were a fully enabled GM200 chip with double the memory of a 980 Ti.

  28. Maybe. But I guess that depends if they are smart, have their design done, and TSMC has some space and time for them.

  29. Considering that an iPhone 6s Plus 128GB is $949 direct from the manufacturer, this seems good value in comparison.

  30. a) That wasn’t a launch product for G80.
    b) That review you linked [i<]specifically criticizes the cost[/i<] for what is effectively a 6% or 10% overclock. You're right that it was expensive; it was Nvidia's first attempt at jacking up the price for no reason other than "they could". In my opinion, the reason "they could" be greedy was that their competition from ATi was already 7 months late with the 2900 series, and leaked benchmarks were already showing that the 7-month-old GTX was still faster than the upcoming 2900 XT. Which takes me nicely back to my original point: without competition, Nvidia get unbelievably greedy. $1200 greedy, in this case.

  31. The Titan cards are akin to Intel’s X99 platform: re-positioned workstation/server parts aimed at prosumers, so there is nothing from 10 years ago to compare with.

    Die sizes are irrelevant as a means of comparison.
    Look at high-end consumer CPUs today versus 10 years ago and the die sizes are smaller today.
    It gets more expensive to produce on smaller nodes, so it’s not surprising the chips are smaller.

    Of course competition would help consumers, but the market has changed from 10 years ago as well. Without the high-end prosumer chips putting a ceiling on pricing, it’s possible that the high-end consumer chips would have drifted up even more in price, but right now that isn’t so possible. And remember that Intel and AMD used to sell $1,000 consumer CPUs over 10 years ago.
    But Intel and Nvidia have both pushed the envelope this year, with prosumer chips now going well above $1,000, especially from Intel.
    The good news is that at least AMD has a chance of making some money at the high end when they release products, as that in the long term is what is needed to make them competitive, which will bring prices back down.
    So high prices now are a good thing! 🙂

  32. there was the 8800 ultra at about $900

    edit: $830+

  33. The G80 and G70 launched at $599. Are you saying the cost of living has doubled since 2006?

  34. Plenty of flagship cards were $600-700 a decade ago, though. Pretty sure we’ve moved up in the cost of living to cover that difference.

  35. And if AMD’s smart then TSMC and board partners will produce several new saleable Vega products for AMD to cut out GloFo.

  36. Well, according to nVidia’s own announcement, their store will have them available on August 2nd. But true, availability will likely be scarce. I’m aiming for a December build, but still not quite sure of the specs.

    I probably jumped the gun a bit on my original post, but being that Vega has a vague rumored release date of “1H 2017”, I’m not sure if I’m willing to wait.

  37. The process technology to etch those wafers, however, is vastly different and much more expensive.

  38. The Titan Z, based on Kepler, had a whole lot of DP compute capability. Maxwell and Pascal have only a fraction as much of the compute resources. These are gaming GPUs first and foremost.

  39. [quote<]Six years ago, Nvidia's 500+mm^2 products sold for $350-500.[/quote<] Yeah, but how much did 500 mm^2 of wafer space cost on a 40nm process 6 years ago?

  40. To be fair, Nvidia gets to do this because they’re first to market.

    They do compete with AMD these days… they just do it months after their product launches.

  41. I think Nvidia pushed the original Titan as a prosumer card, only to discover that a sizable number of gamers will drop over $1k for a single GPU. So now they market it as a gamer card.

  42. Or, you could collect like 12 HD4870s from the dumpster and do it for FREE! Infinite perf/$!

  43. [quote=”cynan”<] Now that's a TITANic value! [/quote<] [url=<]What savings![/url<]

  44. Their real research cards are the Tesla line aren’t they? Or do they kind of push this like a budget workstation GPU.

    Their website only talks about gaming under the Titan line… I know this is fully capable of doing some serious research work too. But then again so is the 1080.

    Though I am sure the extra Vram goes a long way.

  45. 9GB of VRAM on the 1080 Ti is unlikely for a couple of reasons.

    If you use 12Gb GDDR5X chips on a crazily cut-down 192-bit bus (6 chips), then you get to 9GB of VRAM, but you only have 240 GB/s of bandwidth with 10 Gb/s GDDR5X. If Micron or someone somehow gets 11 or 12 Gb/s chips in mass production, then that’s still a max of 288 GB/s. That would be enough bandwidth for something like a 1070.

    If you use 6Gb GDDR5X chips on a 384-bit bus (12 chips), then you get to 9GB of VRAM, but then you look at everything and notice that it’s identical to the Titan X’s configuration except you’re using 6Gb chips rather than 8Gb chips. Same bandwidth, etc. So you naturally go, “Why not just use the slightly older (and therefore not super expensive) 8Gb GDDR5X chips?”

    Oh yeah, and then you notice that Micron, Sammy & Hynix aren’t showing 6Gb or 12Gb GDDR5X in their catalogs yet, so there’s that…

    Otherwise, I feel ya. 10GB of GDDR5X on a 320-bit bus is plausible and Nvidia will probably delay the 1080 Ti as long as possible so as to ruin AMD’s day weeks before big Vega drops just like the 980 Ti and Fury X.
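The capacity and bandwidth figures in the comment above follow from chip count, per-chip density, and per-pin rate. A small sketch of the two hypothetical 9GB configurations (the helper names are ours; it assumes GDDR5X's 32-bit-per-chip interface, densities in gigabits, rates in Gb/s per pin):

```python
# Hypothetical 9GB GDDR5X configurations from the comment above.
# Each GDDR5X chip has a 32-bit interface; density is per chip, in gigabits.

def vram_gb(chips: int, density_gbit: int) -> float:
    """Total VRAM capacity in GB."""
    return chips * density_gbit / 8

def bandwidth_gbps(chips: int, rate_gbps: float, chip_bus_bits: int = 32) -> float:
    """Aggregate memory bandwidth in GB/s."""
    return chips * chip_bus_bits * rate_gbps / 8

# 6 x 12Gb chips on a 192-bit bus:
print(vram_gb(6, 12), bandwidth_gbps(6, 10))    # 9.0 GB, 240.0 GB/s

# 12 x 6Gb chips on a 384-bit bus -- same bandwidth as 12 x 8Gb chips:
print(vram_gb(12, 6), bandwidth_gbps(12, 10))   # 9.0 GB, 480.0 GB/s
```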

  46. I was actually also on the 320-bit bandwagon earlier today, because it would allow a 10GB VRAM capacity that would cleanly fit between the 1080’s 8GB and the Titan X’s 12GB. It would also provide a perfectly usable 400 GB/s of bandwidth.

    However, NVidia could use much cheaper & easily available GDDR5 on a 384-bit bus and get 384 GB/s (almost 400GB/s!) and have 12GB of VRAM capacity.

    I don’t think we’ll see the 6Gb or 12Gb GDDR5X yet. Yes, Micron [url=<]plans to make them[/url<]. However, [url=<]they don't show up on Micron's catalog at all[/url<] (not even as "sampling"). The only GDDR5X is in the 8Gb capacity (10 GB/s, 11 GB/s & 12 GB/s). Also, what if big Vega drops with three stacks of HBM? That would allow for 12GB of VRAM. No way Nvidia would risk being outdone by that.

  47. [quote<] At least the Pro Duo lets you use AMD's workstation drivers[/quote<] That certainly makes the cost worthwhile for a "prosumer" compared to an actual workstation card like the $6000 [url=<]AMD FirePro™ S9300 x2 Server GPU[/url<].

  48. Peak for the FEs, I think, because they hit 83°C, but sustained for the custom cards if you raise the power limit. Thermal throttling happens once the card exceeds 82°C.

    Here’s a youtube review showing sustained 2GHz OC on an EVGA 1080FE while playing Witcher 3. The reviewer raised the fan speed to 80% to keep temps just under 82°C and avoid throttling:

  49. You mean TSMC and board partners have produced several new saleable products for Nvidia??

  50. Two Chevy Malibus will be more money than one Camaro and have less performance.

    I find that in the silicon world you pay for performance, and for quantity also; that is, unless fabs are now offering buy 1 wafer, get 1 wafer free?? High-end products, niche products, etc. always have a big e-peen tax on them.

  51. My bad, I misread the table. It’s corrected now. Thanks again for your positive attitude.

  52. Are those peak or sustained frequencies as that must make a difference?
    I’m not up on modern GPU OCing which is why I ask.

  53. [quote<]AMD should bring back the old ATI "Pro" branding.[/quote<] Or even.... (•_•) ( •_•)>⌐■-■ (⌐■_■) [url=<]Rage Pro [/url<]

  54. [quote<]As a side note, it's interesting to speculate on whether Polaris could support GDDR5X if you really wanted to push a version with that memory.[/quote<]Could be interesting for a refresh cycle if Globalfoundries' process improves. Polaris 10 at, say, 1.5 GHz or so could probably make use of the extra bandwidth from 10 Gbps memory. AMD should bring back the old ATI "Pro" branding.

  55. Did it really surprise you? Really?

    Because it didn’t surprise me. Rumors had pegged it anywhere between $1200-1500. Plus, what has nVidia released lately that’s actually hit MSRP?

    The 1080 came out, promised at $599, but it’s really $699. The 1070 came out, promised at $379, but it’s really $449. The 1060 came out, promised at $249, but it’s really $299.

    The only thing truly surprising about the Titan is if it actually releases at that $1200 price point and doesn’t sneak up to $1400 “just cuz.”

  56. Kudos for the play on the words. However, I have to ask. Are you blaming nVidia for AMD not competing? 😉

  57. Where the hell did you find an 18.5 TFlop Radeon Pro Duo?
    Because AMD would like to see one.
    Their version is only 16.38 TFlops, you see. [url<][/url<] AMD is very curious about these rogue Radeon Pro Duo manufacturers.

  58. The Titan X is not targeted at gamers, even though some gamers will pay to get it.

    Companies and educational institutions heavily invested into neural network and deep learning algorithms will probably find it a bargain.

    Because of this, as a gamer (and not an AI researcher), I take far less issue with its $1200 price tag than with the markups on gaming cards (FE tax, retailers price gouging on the 1080).

  59. A single-GPU solution will probably game more smoothly than the Radeon Pro Duo. So, this is pointed more at gamers than professionals, in my opinion. Whereas the Radeon Pro Duo at $1500 is 25% more costly, but at 16.38 TFLOPS has roughly 50% more FP32 performance.

    [url=<]AMD Radeon Pro Duo bridges the professional-consumer divide[/url<] Edit1: Originally misread TFLOPS from table, now corrected. Edit2: Added link to TR's Radeon Pro Duo article with specs table.

  60. Without competition, this is how we arrive at $1200 . Nvidia: The way it’s meant to be PAID.

  61. You just keep on believing that.

    [Edit: I’m not even going into your assumptions about how well the Fury Pro actually performs BTW, but your numbers are literally not right. The Fury Pro isn’t an 18.5 TFlop card, it’s a 16.38 TFlop card according to AMD’s own advertising.

    [url<][/url<] ]

  62. A 25% higher cost for 50% more FP32 performance. As other posts make clear, I was not suggesting the Pro Duo was a better gaming card.

    edit: misread TFLOPS, now corrected, Thanks Chuck

  63. AMD has stated that there are two Vega chip models.

    Considering what Polaris does and doesn’t cover, I would guess that “small” Vega is their answer to the GP104 and that “big” Vega is their answer to these GP102 parts. Big Vega is pretty much confirmed to be HBM2-equipped, although I’m not sure if small Vega is also HBM2 or using GDDR5X.

    As a side note, it’s interesting to speculate on whether Polaris could support GDDR5X if you really wanted to push a version with that memory.

  64. Or for $1499, you could buy the 16.38 TFLOP [url=<]Radeon Pro Duo[/url<]. A 25% higher cost for 50% more performance. Edit: my bad, misread 16.38 as 18.5; huh, wonder how that happened

  65. TPU has tested a bunch of custom 1080s, and every single one has reached 2GHz. Actually every single Pascal card they had has reached 2GHz.

    Card, max GPU clock, max memory clock:
    MSI GeForce GTX 1060 Gaming X 6 GB 2139 MHz 2435 MHz
    NVIDIA GTX 1070 FE 2101 MHz 2380 MHz
    EVGA GTX 1070 SC 2088 MHz 2370 MHz
    MSI GTX 1070 Gaming X 2101 MHz 2420 MHz
    NVIDIA GTX 1070 FE 2088 MHz 2330 MHz
    Palit GTX 1080 GameRock 2114 MHz 1400 MHz
    ASUS GTX 1080 STRIX 2114 MHz 1400 MHz
    Gigabyte GTX 1080 G1 Gaming 2050 MHz 1405 MHz
    MSI GTX 1080 Gaming X 2050 MHz 1400 MHz
    NVIDIA GTX 1080 FE 2114 MHz 1450 MHz

  66. Do we even have any hard information on whether there will be multiple Vega models at this point?

    Last I heard, the top (and possibly only) chip was stated to be HBM2, and I don’t see Vega coming out any time soon unless AMD wanted to push out a similar 384b GDDR5(X) variant.

  67. Using individual workstation cards for DL is dubious at best.

    99+% of users will never, ever need it, and those who do would be better served by a chassis or rack full of enterprise grade cards.
    The rare slice of people who primarily game but want to maybe play around with DL stuff once or twice aren’t really going to care about getting a 2x speedup by using fp16 calculations anyway.

  68. [quote<]The company promises 11 TFLOPs...[/quote<] So going by this metric alone, just over 2x the GPU horsepower of the RX 480 for 5x the price. Now that's TITANic value!

  69. It will be a great card for deep learning; that’s why they presented it at a deep-learning developer event. And it will be really fast for gaming, too.

  70. So in a span of 67 days from May 27 to August 2 Nvidia has released four new silicon designs for commercial sale and five products when you consider the GP104 variants.

  71. 1080 Ti details are not set because nvidia is waiting for vega info to show, and will then tweak

    power of 2 + 50% is interesting. 8GB is the minimum (since the 1080 has 8GB); 12GB might be too much (since Titan has 12GB), so if it’s 6GB + 50% = 9GB, that’s a good number.

    i think 899 / 999 is too high, but it all depends on vega.

    also, pascal titan x is clearly a limited production product right now, thus only available from

    it’s interesting as to why gp102 is available so soon. there’s no pressure from amd. TSMC 16nm yield is probably fabulous. nvidia can supply all 3 chips if vega challenges. (4 chips including gp100)

    of course nvidia wants to delay 1080ti as much as possible because gp104 is selling so well right now, there’s no need to lower their prices.

    changing my prediction a bit:
    vega comes out Q1 ’17,
    1080ti out around the same time at between 699 and 799 (non FE),
    price cuts on 1080, 1070
    price cuts on titan x pascal, if vega challenges.
    1080ti has 9GB if it uses Micron power of 2 + 50% chips
    or has 10GB on 320 bit bus.

    surprise release of titan x pascal is a move against ati in a game only they understand

  72. Why would you object to the existence of a better performing product, even if the price/performance ratio is worse?

    You do realise you don’t have to buy it, and if you’re right and no-one should buy it, well, nVidia would probably stop updating the TITAN line.

    However, nVidia is still updating the TITAN line, so it looks like you’re wrong.

  73. So affluent gamers who buy the best are inherently irrational!
    People buying $1700 Intel CPUs when they aren’t able to harness more than half of that performance may be clueless, but at least with a GPU you can harness it all, unless they are running at 1280×720. 🙂

  74. Unless it has good FP64 performance, it’s kinda niche at this price.
    So it seems a smart move to sell direct and reap the profits from the small pool of buyers.

    Nvidia maximises profits, and they are greedy.
    AMD flirts with financial disaster through incompetence, and they are the good guys.
    Rewarding failure is not a good attitude.

  75. Sounds like the people who want to buy these things will need them for cerebral processing hookups to help their (obviously deficient) brains to think rationally…;) Talk about “sucker food,” nVidia can surely dish it out…;)

  76. Or maybe they want to avoid price inflation by retailers while obtaining higher margin in the process

  77. Those numbers are interesting, maybe NVIDIA will disable some parts of the chip.

    If this iteration of Pascal has 3584 CUDA cores partitioned into SMs still composed of 128 cores each, we end up with 28 SMs, which does not fit well with GPCs made of 5 SMs unless two of them are disabled.
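The SM arithmetic above can be spelled out; a minimal sketch assuming consumer Pascal's 128 CUDA cores per SM and the hypothesized 5 SMs per GPC:

```python
# SM count math for a 3584-core GP102, assuming 128 cores per SM
# (the consumer-Pascal layout) and 5 SMs per GPC.
cores, cores_per_sm, sms_per_gpc = 3584, 128, 5

sms = cores // cores_per_sm
print(sms)                # 28
print(sms % sms_per_gpc)  # 3 -- 28 SMs don't fill whole 5-SM GPCs

# A fully enabled 30-SM chip (6 GPCs x 5 SMs) with 2 SMs disabled fits:
print(6 * sms_per_gpc - 2)  # 28
```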

  78. HBM2 is rarer than hen’s teeth nowadays. It’s next year’s high-end memory.

  79. Nope…click the “View full specs” link on this page:

    [url<][/url<] It's listed as supported. There goes his other arm, leg, kidney, and lung.

  80. What, he’s problematically subsidizing the R&D of our mid-range cards with his huge margins? Good luck and godspeed, my pecuniary-heavy friend!

  81. Um, who said they will be shipping this? 😉

    Seriously, given the trouble they have with keeping stock of 1070s/1080s I expect the Titan X to be in short supply for at least a few months. If you are waiting till the end of the year to purchase one then maybe you won’t have any problems finding one but the people trying to find them at launch are likely to have serious issues finding them.

  82. Well, Nvidia has said that they aren’t trying to destroy AMD anymore.

    It looks like they are now intruding into Intel territory with an assault on Intel’s bastion of confusing names.

  83. They should have called it Titan ONE. That seems to be the thing to do these days (Xbox One, Battlefield 1, etc)

  84. …What the hell? This was supposed to be the review for the GTX 1070, not the Pascal Titan X article?

    Dammit Firefox, what did you do now

  85. If this thing had 1/2DP then my guess is Nvidia would have broadcast it loud & clear in the marketing announcement. My initial guess is that it’s also 1/32 DP just like regular Pascal cards.

    Now, there might be a cut-down version of the GP100 that does the 1/2DP coming at some point in the future, but that’s probably a 2017 product.

  86. I guess Nvidia is able to push out all these cards so fast since they don’t have to share TSMC capacity with AMD???

  87. Six years ago, Nvidia’s 500+mm^2 products sold for $350-500.

    This is what no competition at the high end causes. We saw it with Intel, and now Nvidia are taking the mickey too.

    People will argue that the 1080 and 1070 are the high end, but they’re really not. They have similar die size and manufacturing cost as the old midrange champions like the GTX 460 and 9800GT. Lack of competition from AMD means that Nvidia can get away with selling their midrange products for $699 now.

    Damn I miss healthy competition 🙁

  88. To those of you who don’t like the price [me included] there’s a simple solution: Get AMD to actually launch Vega so the GTX-1080Ti can launch a month earlier.

  89. Agreed, if this is 1/2DP and you need the compute then it’s actually a bargain.

    IIRC even entry level FirePros and Quadros are at least $4000.

  90. GP104 has the same DP2A and DP4A instructions, with the same speed factor relative to FP32 as GP102.

  91. I think the recent “Adaptive VSync” feature takes the cake. I’ve yet to see someone get that 100% right; I’ve seen people describe the right feature with the wrong name and/or mix it up with Adaptive Sync. Frankly, I think it was Nvidia’s intention to cause confusion. They could have used “Dynamic VSync” or, more accurately than either, “Selective VSync.”

  92. There are situations where having this much power on a single gaming card can make sense- and while it is certainly expensive, you’re spending as much (or more) on what you’re driving with it.

    Ditto for professional uses; in that world, it’s a downright bargain!

  93. To be fair, the bad name was the last generation, which should have been the Titan IX; here the card just finally caught up with its proper name.

  94. Wouldn’t it be more practical to give them both legs and keep the arm, for when you use the card?


  96. If the arm, leg, kidney, and lung were all from the same side, then surely they took f0d’s SLI support.

  97. Maan, this ain’t no street card dude, it’s an exclusive VIP only, buy direct from the Source so you are gonna get some pure uncut shit maan.

    As for being uncut: since they are selling direct, nobody else is taking a cut of the profits, so as well as a $200 price hike, NV also gets the OEM and retailer cut.
    Wow, they will have a massive margin on these compared to an OEM card selling at a grand at retail.

  98. The Pro Duo is a dual chip on a stick with a watercooler. Not exactly cheap to make, so I wouldn’t be calling it overpriced compared to a Titan X. They are both expensive, but the Duo is clearly more expensive to make. I’m not saying anything about the Duo is positive, just that it is obviously more expensive to make.

    They’re both overpriced niche products which have no business being compared to mainstream cards. People buy these because they’re either someone with money to burn, or a professional who can write off these cards as a business expense. At least the Pro Duo lets you use AMD’s workstation drivers, dunno if Nvidia lets you do the same.

  99. Not sure if it’s worth the “slow clap” or not.

    Think I will wait for the Pascal Titan X Duo, x2 in SLI.

    Then he can drop the mic and walk of the stage 😉

  100. And enthusiasts holding out for any sort of price/performance value in the high end GPU segment just dropped their wallets back into their pockets and walked off to vainly queue for a 1080Ti .

  101. Don’t worry. That’ll be the street price of the Founder’s Edition when it first hits.

  102. “High performance engineering for maximum overclocking”


  103. If that has Polaris and roughly the GPU execution power of the first XBO I’ll be pleasantly surprised. After all the Wii U speculation threads of “1Tflop, minimum, no way it’s lower…They can’t even buy a part under 600Gflops anymore…Ok, here’s the die shot, 300, right, can’t be lower? …And it’s 172Gflops. Crap.” I’m leaving expectations low in hopes of being pleasantly surprised.

    Matching the first iteration of these consoles would still put them in a better spot power wise than they have been in years. The Wii U GPU was over 10x slower than the PS4 one. If the PS4 .5 is 2.2x as powerful as the PS4, and the Wii U is between the XBO and PS4, that’ll be closer than anything since the Gamecube.

  104. 1080 Ti –

    3072:192:96 with 384bit 10GHz GDDR5x (maybe with 320 bit bus)
    Approx. 8 GB RAM (Micron has GDDR5X modules with power of 2 + 50% sizes, they didn’t make those just hoping someone would use them)
    $899 with $999 FE

    I would pretty much bet money on those specs/price.
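    For what it's worth, the bandwidth arithmetic behind those speculated configurations is simple (a minimal sketch; the bus widths and 10 GT/s rate are this comment's guesses, not confirmed specs):

```python
def memory_bandwidth_gbs(bus_width_bits, transfer_rate_gtps):
    # GB/s = (bus width in bytes) x (transfer rate in GT/s)
    return bus_width_bits / 8 * transfer_rate_gtps

print(memory_bandwidth_gbs(384, 10))  # 480.0 GB/s, matching the new Titan X
print(memory_bandwidth_gbs(320, 10))  # 400.0 GB/s for the narrower bus
```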

  105. I’m honestly not sure. I think they would be unless this is a slightly different version of the Pascal architecture that turns on those instructions while they aren’t activated in other Pascal releases.

    All they are doing is byte-length integer operations [just like a 1980’s NES did] and from what little I know of convolutional networks it’s almost all simple addition/subtraction operations that modify the “weight” values of different nodes in these deep convolutional networks that are popular in AI.
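    For context, Pascal's INT8 path boils down to 4-wide byte dot products accumulated into a 32-bit integer (exposed in CUDA as the `__dp4a` intrinsic). A scalar sketch of one such multiply-accumulate step, with made-up example values:

```python
def dp4a(a, b, acc):
    """4-way int8 dot product with 32-bit accumulation -- the core
    multiply-accumulate behind the card's INT8 inferencing throughput."""
    assert len(a) == len(b) == 4
    for x, y in zip(a, b):
        acc += x * y
    return acc

# One output value of a quantized convolution is a long chain of these:
print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], 0))  # 5 - 12 - 21 + 32 = 4
```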

  106. Are you sure they couldn’t have gotten $1726 out of it? Surely they can beat Intel.

  107. About this:[quote<]I do have a massive problem with you pretending that only Nvidia charges too much for a product ...[/quote<] To be fair, there is no mention of any other company's products being a bad or a good value in this post, but I'll let Spunjji make his own case. As for myself, I'm an equal opportunity criticizer. The fact that there is a much worse value to be had in no way changes the value of the product in question. It only changes the perception of that value by the people comparing it to the other product.

  108. That is also what happens when you don’t have anything with which to compete at the high end. ;’)

  109. Radeon Pro Duo: $1500
    Titan X 2.0: $1200

    I don’t have a problem with you saying that the new Titan X is overpriced.
    I do have a massive problem with you pretending that only Nvidia charges too much for a product, especially when the $1200 Titan X is a bargain compared to its competition from a certain company that apparently can’t be criticized because they released one part for one market segment that’s not a bad value while being an actively bad value everywhere else.

  110. The naming scheme is too obfuscated. From now on I shall refer to the titans as follows:
    Titan X[sub<]m[/sub<] - Maxwell
    Titan X[sub<]p[/sub<] - Pascal

  111. Yeah, where were your comments on the Radeon Pro Duo that costs $300 more than this thing?

  112. It’s a problem for those who buy at the high end. It encourages nVidia to continue pushing the limits of how much they can charge. For those who buy the (relatively) cheaper lower margin cards that don’t have as much wiggle room, it can actually be a good thing as every higher margin sale subsidizes the R&D costs that went into both products.

  113. Ya know, I’ve never understood this point of view. I’ve only ever bought low to midrange hardware up until now. During all that time, never did I think that those who bought the top-end hardware were a “problem” because they could afford the hardware that I could not. But hey, not my problem. 🙂

  114. I don’t think any of that has been confirmed.

    It’s fair to assume that a 1080 Ti will eventually exist and that it’ll likely be GP102-based, but I don’t believe we have any solid info on price or configuration.

  115. It looks even better if you compare it to the previous gen Titan X and pretend there was no die shrink in between, too 😀

  116. This is what happens when we don’t have competition at the high-end. 😐

    Also, what’s with all the relativism? I keep seeing posts (not just here) working out some particular circumstance in which this looks like good value. Self-deception as an art form.

    Ah well. I can’t afford one, so I’m not the target market, so I guess I should just shut up and let the big boys play.

  117. Some marketing jerk at NVidia deserves bad things for this latest product naming shenanigan. NVidia has long mastered the art of intentionally-confusing product names, but this may take the cake.

  118. Note that the comments do not have a [sarcasm] tag.

    On the other hand, this new Titan X (Pascal) will be a [b<]better[/b<] relative gaming value than the old Titan X (Maxwell) that they are still selling. For compute, there may be better alternatives. NVidia drastically reduced the compute capabilities in their GPUs when they went from Kepler (Titan Z, for those keeping track) to Maxwell.

  119. And cost more than expected, too – attempts at shilling by NVIDIA fans inserting even more ludicrous prices to “fall back from” disregarded.

  120. Well, the 1070 can boost up to 2GHz on its own, but a 1080 can’t. I wouldn’t hold out hope for the bigger chip.

  121. So these will only be available directly from Nvidia? Sounds to me like they won’t (or can’t) make many of these for general distribution. A Halo product in virtually every respect.

  122. The new, slower than xbone, for casual players only (according to ubisoft and Nintendo) cheap console? I doubt it.

  123. Cheap. I thought the Titan of this gen was going to be at least $1500, if not $2000, given the insane prices of other cards. This is possibly better perf/$ than the current prices of the 1080, you get a higher-end GPU, and you also get some “pro” features (INT8, hopefully FP16 too).
    But I don’t expect to see a 1080 Ti given the price of this one.

  124. That’s what a new Titan means. I guess the 1080 sales dried up and it was time to try to lure in the holdouts with the dream of a 1080 Ti…

    Elsewhere, 1080 buyers snap their fingers and mutter, “Had highest end for a whole month!”

  125. Me either- but this is half gamer card and half professional card.

    Four-digit price tags on professional gear aren’t unheard of, and this is just barely that. Look up the price for a comparable Quadro or Tesla, and get back to me ;).

  126. I hope GP102 overclocks well because a 2 GHz GTX 1080 would be pretty darn close to this in terms of theoretical performance.
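    The arithmetic behind that comparison (a quick sketch; the GTX 1080 has 2560 CUDA cores, and peak FP32 assumes one FMA, i.e. 2 FLOPs, per core per clock):

```python
def fp32_tflops(cores, clock_ghz):
    # Peak FP32 = cores x clock (GHz) x 2 FLOPs (one fused multiply-add per cycle)
    return cores * clock_ghz * 2 / 1000.0

print(fp32_tflops(2560, 2.0))    # GTX 1080 pushed to 2 GHz: 10.24 TFLOPS
print(fp32_tflops(3584, 1.531))  # new Titan X at boost: ~10.97 TFLOPS
```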

  127. krogoth?
    did you take over SSK’s account?
    totally unimpressed and no caps – something dodgy is going on here

  128. Obviously the (relatively) low flops (despite a much higher core count and similar clock speed) means this emphasizes double precision and is not a gaming card (hence the conspicuous lack of GeForce branding). Don’t get all hot n bothered, guys. Wait for the GTX 1080 Ti at the end of the year or so to fight Vega for much less $ and higher performance.

  129. You know what? It doesn’t actually sound that expensive to me, so I guess the intended effect of Founder’s Edition pricing has worked.

  130. The one percenters will buy every last one or two because they can. I don’t like SLI but even so it will put up ridiculous numbers.

  131. Don’t worry. It’ll be perpetually OOS at first, and you’ll get a nice $200+ markup on top of that.

  132. Any word on how big the GP102 die is, and how much this is cut down (if at all)?

  133. where does that leave 1080ti?
    3072:192:96 with 384-bit 10GHz GDDR5X for $699 (non-FE) in October?

  134. Lol, I’m not trading in for that.

    I’m actually quite happy with the GTX 1080 that cost me exactly what a GTX-980Ti or R9 Fury X cost about 6 weeks ago and even has a factory OC.

    We all knew that “big” Pascal would show up and that it would cost a boatload when it did. The only real surprise here is that it showed up sooner than most people (myself included) actually expected.

  135. A pretty good deal for a durable good that will outlast your lifetime and your great-grandkids could use someday.

  136. trading in your 1080 for one?
    if you do can i have your address and what times you wont be home?

  137. Somebody needs to tell Nvidia that those shield tablets make lousy web servers.

    OK: Finally got the blog to load with the specs:

    So forget words. Here are its numbers:

    11 TFLOPS FP32
    44 TOPS INT8 (new deep learning inferencing instruction)
    12B transistors
    3,584 CUDA cores at 1.53GHz (versus 3,072 cores at 1.08GHz in previous TITAN X)
    Up to 60% faster performance than previous TITAN X
    High performance engineering for maximum overclocking
    12 GB of GDDR5X memory (480 GB/s)

    All of this happened because some guy was drinking with Jen Hsun and made a bet or something.

    Actually, the raw numbers are an OK step up from a GTX-1080 but the single-precision TFLOP count is only 22% higher than the stock-clocked GTX-1080. Bigger improvements (50%) to memory bandwidth and even moar capacity. The 8-bit integer instructions are also a gimmick for weight values at nodes in deep convolutional networks that aren’t going to help your games run any faster.
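    A quick check on those spec-sheet numbers (a minimal sketch; core counts and boost clocks are taken from the list above and Nvidia's published GTX 1080 specs, and the exact percentage gaps shift depending on which boost clock you plug in):

```python
def fp32_tflops(cores, boost_ghz):
    # Peak FP32 = cores x boost clock (GHz) x 2 FLOPs (one FMA per core per cycle)
    return cores * boost_ghz * 2 / 1000.0

for name, cores, ghz in [("Titan X (Pascal)", 3584, 1.531),
                         ("Titan X (Maxwell)", 3072, 1.075),
                         ("GTX 1080 (stock)", 2560, 1.733)]:
    print(f"{name}: {fp32_tflops(cores, ghz):.2f} TFLOPS")
```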

  138. I wasn’t keeping track of the rumors, but I’m impressed at how quickly they’re going to be shipping this. August 2nd! I’m aiming to build a new system toward the end of this year and this is going to be on the top of my list.

  139. ooo aaah
    but unfortunate timing tho. (almost) everyone is focused on that guy with the shiny hair.

  140. [url=<]It's finally here![/url<]


  142. Ooh… *boing*

    *embarrassed giggling*

    Yes this is vaguely fanboyish but my goodness. I would sell f0d’s remaining body parts for one just to have it.

Renee Johnson
