Rumor: a GP102 GeForce Titan and GTX 1080 Ti are in the works

Nvidia may have revealed the GTX 1080 and GTX 1070 recently, but it's a fair bet the lineup of 10-series graphics cards won't stop with those two models. A new rumor from the prodigious leakers at ChipHell suggests a GeForce GTX 1080 Ti and a new GTX Titan are both on the way, along with a GTX 1060 at the lower end of the market. Here's ChipHell's rumored spec table: 

If ChipHell's information is in the ballpark, the "big Pascal" for consumer GeForces will be called GP102 (as friend of the site Rys Sommefeldt guessed in his analysis of the Pascal architecture). Presumably, the new GTX Titan will get a fully-enabled version of this chip, and it could have 3840 stream processors on a 478-mm² die. That's more SPs than Nvidia enables on the GP100 chip that drives the Tesla P100, but it appears a GP102 Titan will maintain the same 1/32 FP64 throughput as the GM200 chip in the Titan X. We're guessing GP102 will omit dedicated FP64 hardware as part of its slimming-down compared to the enormous 610-mm² GP100 die.

If these rumors hold, 1/32 FP64 throughput isn't terribly surprising to see on a card like this. Nvidia positioned the similarly-provisioned GeForce Titan X as a card for deep-learning research, where getting large data sets close to the GPU seems to be more important than the ability to perform extremely precise calculations. That card's 12GB of RAM was its major selling point over the slightly cut-down GTX 980 Ti and its 6GB of RAM. We didn't find the GTX 980 Ti appreciably slower in games when we put it through our review wringer.

Given that positioning of the Titan X, a powerful GPU paired with lots of memory may be the name of the game for a Pascal Titan, as well. If ChipHell's source is correct, such a chip will have a similar memory subsystem to that of the GTX 1080—there's just more of it to go around. Again, if the shaky ground we're standing on here holds, this chip will have a 384-bit path to a whopping 24GB of GDDR5X RAM. The site guesses that such a chip would have 480-576 GB/s of aggregate memory bandwidth.
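For a quick sanity check on that estimate, here's the back-of-the-envelope math. This is a minimal sketch: the 10-12 Gbps per-pin rates are implied by the site's bandwidth guess rather than stated outright, and the helper function is just for illustration.

```python
# Peak memory bandwidth (GB/s) = bus width in bytes (bits / 8) * per-pin data rate (Gbps)
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# Rumored GP102 Titan: 384-bit bus with GDDR5X at 10-12 Gbps per pin
print(peak_bandwidth_gbs(384, 10))   # 480.0 GB/s
print(peak_bandwidth_gbs(384, 12))   # 576.0 GB/s

# For comparison, the GTX 1080's 256-bit bus at 10 Gbps
print(peak_bandwidth_gbs(256, 10))   # 320.0 GB/s
```

Those endpoints line up with ChipHell's 480-576 GB/s range.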

Humongous memory capacity aside, the differences between this purported Titan and its GTX 1080 Ti stablemate aren't that large. While the rumored GTX 1080 Ti may have "only" 3456 stream processors, it would appear to share the same lofty 1507MHz base and 1621MHz boost clocks as the Titan. Its memory subsystem appears similar, too—the only major difference is this card's purported 12GB of GDDR5X RAM.

Meanwhile, ChipHell also thinks a GTX 1060 will be joining the lineup beneath the GTX 1070 and GTX 1080. This cut-down card appears to use the same GP104 GPU as its higher-end brethren, but it may end up with only 1280 stream processors, fewer texturing resources, and fewer ROPs. The chip is purportedly clocked at about the same speeds as the GTX 1070—its 1545MHz base and 1658MHz boost speeds are only a few dozen MHz off the more endowed chip's—and it could have a 192-bit path to 6GB of GDDR5 memory, down from the GTX 1070's 256-bit interface and 8GB pool of RAM.

ChipHell didn't speculate about pricing or timelines for these rumored cards, but given what we know about Pascal GPUs so far, these rumors don't seem too far off base. If the prices are right, these cards could be cause for excitement in both the entry-to-mid-range and highest-end segments of the market. As always, we'll just have to see whether there's fire to go with all the smoke.

Comments closed
    • Herem
    • 4 years ago

    I was in 2 minds about getting a GTX 1080 but it’s still not quite powerful enough for smooth 4k gaming. It looks like the 1080ti should be enough of an upgrade to achieve this!

    What with the GTX 1080ti and AMD’s Vega now both probably coming this year I think I can wait a little bit longer before pulling the trigger.

      • Bacardi
      • 4 years ago

I hate to burst your bubble, but I have a GTX 1080 Founders Edition running BF4 on a 4K screen at around 90 fps, which is more than enough for any 4K screen. I ran a normal Fire Strike test in 3DMark with this card and the system not overclocked and got a score of 19105, which is better than 98% of other results. 3DMark's target for a 4K gaming computer is a score of 17805, so I am well past that with a GTX 1080 Founders Edition. So please, if you're going to say something about a new card, test it before you talk like everyone else does!

    • Leader952
    • 4 years ago

A great big bunch of Pascals. Sixty-two, in fact.

[url<]http://videocardz.com/60289/breaking-news-aida64-developers-confirm-pascal-gp102-gp106-gp107-and-gp108[/url<]

Pascal GP100 (GP100GL-A, GP100-B, GP100GL-B): 19 different device IDs
Pascal GP102 (GP102GL-A, GP102-A, GP102-B, GP102GL-B): 11 different device IDs
Pascal GP104 (GP104GL-A, GP104-A, GP104-B, GP104GL-B): 15 different device IDs
Pascal GP106 (GP104GL-A, GP104-A, GP104-B, GP104GL-B): 9 different device IDs
Pascal GP107 (GP107GL-A, GP107-A, GP107-B): 7 different device IDs
Pascal GP108 (GP108-A): 1 device ID

    • USAFTW
    • 4 years ago

    This should finally be able to crack 1080 Pi.

    • Milo Burke
    • 4 years ago

    One issue with the rumor: Titan is the old name.

    Nvidia will name the Pascal equivalent the GeForce GTX Jaeger.

      • chuckula
      • 4 years ago

      And AMD will name the High-End Vega Meister.

        • Milo Burke
        • 4 years ago

        I’ve always pictured Nvidia’s Titan cards shaped more like this: [url<]http://i56.photobucket.com/albums/g175/azaraelishanti/titanae.jpg[/url<]

          • BurntMyBacon
          • 4 years ago

          Confirmed. The next nVidia halo card to be named Titan AE. Trademark legal proceedings to follow.

        • Milo Burke
        • 4 years ago

        Because, with DX12, you’d want a Jaeger and a Meister?

        I was thinking more along the lines of visual imagery. And all the drift-compatible references we could make with SLI.

          • chuckula
          • 4 years ago

          Well, if Jaeger didn’t perform that well you could always call it a Jaegerbomb.

        • anotherengineer
        • 4 years ago

Just need some Red Bull for some Jaeger Bombs!!

      • September
      • 4 years ago

      That’s the biggest unknown with this rumor – the actual marketing names. They will come up with something new, Titan P? Titan Z?

      Also I really doubt that a cut down GP104 will be the vanilla GTX 1060. It would usually be called the GTX 1060 Ti, and I would expect a bit more out of it.

    • Hattig
    • 4 years ago

    “While the rumored GTX 1080 Ti may have “only” 3456 stream processors”

3200 is far, far more likely. That fits in with the 640-shader differential on all the other products, and it also matches up with how many shaders are in Nvidia's logical units.

    In addition, this is looking likely for Q1 2017, so don’t get your hopes up.

    And no HBM2! What’s going on?

      • ImSpartacus
      • 4 years ago

      Why do you need hbm if gddr5x provides enough bandwidth for a sliver of the cost?

        • Ninjitsu
        • 4 years ago

        I’m more curious why GP100 needs HBM but GP102 doesn’t. I suppose it’s the FP64.

          • Leader952
          • 4 years ago

          You just answered your own question.

          The reason is that the GP100 is all about compute (HPC). HPC needs all the bandwidth it can get whereas gaming does not.

          • ImSpartacus
          • 4 years ago

          I’m just a layman, so I unfortunately can’t do justice to a good question like that. However, if I had to guess, I’d say that gddr5x couldn’t provide the necessary 750 GB/s in bandwidth without being a total power hog.

For context, Anandtech did a handy little memory math table here with ballpark power consumptions: [url<]http://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps[/url<]

With 10 gbps gddr5x, you need a gigantic 512-bit bus to get to 640 GB/s, still short of the P100's 750 GB/s. And the gddr5x would need 30+W while hbm is probably in the ballpark of 15-20W. When you're already cranked all the way to 300W on one gpu, every bit counts.

        • Anomymous Gerbil
        • 4 years ago

        Doesn’t HBM(2) provide lower latency as well? No idea if that’s especially important though.

          • ImSpartacus
          • 4 years ago

          I don’t think that’s as important in the consumer gaming space (as you suggested). The general consensus is that GDDR5 gives up latency for bandwidth as compared to the DDR3 that it’s based on. That kind of trade-off is generally a good idea for GPUs.

            • homerdog
            • 4 years ago

            GDDR5 vs DDR3 latency is about the same.

            • Ninjitsu
            • 4 years ago

            Afaik that’s incorrect. GDDR is tuned for graphics, allows higher latency for higher bandwidth, because GPUs care less about latency.

            EDIT: Although you say “about the same” so that’s vague enough to be correct lol.

            • homerdog
            • 4 years ago

CAS latency is ~10-12 ns on both, varying slightly depending on the SKU.

            • Ninjitsu
            • 4 years ago

Indeed seems to be the case, referring to Wikipedia at least. I stand corrected, I suppose – thanks.

            • homerdog
            • 4 years ago

            No problem =)

            • ImSpartacus
            • 4 years ago

            Really? I never looked into that. Why isn’t gddr5 used as system memory?

Or to phrase it another way, why isn't ddr3 system memory remotely as quick as gddr5?

            • Krogoth
            • 4 years ago

GDDRx is completely different from DDRx on the physical layer. GDDRx is soldered onto the PCB and its traces are much shorter than DDRx's. Each GDDRx channel needs its own dedicated memory controller. This setup is what allows GDDRx to obtain much higher I/O throughput and tighter latency than DDRx. The cost is that GDDRx is not modular. If you want to add more memory chips, you need to put in additional traces and memory controllers for them.

            • ImSpartacus
            • 4 years ago

It sounds like there are circumstances where gddr could be used as system memory.

We see all sorts of laptops and other sff machines where system memory is permanently affixed to the system board. They are often very compact, so the dram is probably much closer to everything else.

In that kind of situation, why can't we use gddr5 as system memory?

Pretty much every machine that's limited to integrated graphics is generally like that. And I know that amd's top apus are often limited by bandwidth in many use cases. Why hasn't amd used their graphics know-how to make this a reality? I know there are rumors of that kind of project, but they obviously fell through.

I guess what I'm getting at is that your description seems too good to be true.

            • Ninjitsu
            • 4 years ago

            PS4 has GDDR as system memory.

            Apart from that I’m pretty sure GDDR isn’t used for CPUs because of latency.

            • the
            • 4 years ago

This is an interesting case, as Linux has been shoehorned onto the PS4. We can now measure latency from within the system* and compare it to roughly equivalent systems using DDR3.

*Technically the second time, as this could have been done via the Xeon Phi, but I'm not aware that anyone actually has done so.

            • ImSpartacus
            • 4 years ago

Yeah, I knew the ps4 did it. Though the "latency" seems inconsistent, which makes me wonder why gddr5 isn't more common in "compact" systems that already give up the expandability that gddr5 can't support.

            • jts888
            • 4 years ago

The physical interface between GDDR5 and DDR3/4 is different enough that a single memory controller can't efficiently be used to control both, and CPU manufacturers won't want to spend the huge up-front costs for a processor that couldn't deal with both inline memory modules and PCB-soldered DRAM chips.

For chips whose form-factor targets will never have discrete modules (tiny mobile CPUs), GDDR has way too high a base power draw, and manufacturers will want to use something like LPDDR anyway.

        • jts888
        • 4 years ago

        Power efficiency is the big purported reason. If you can simply eliminate several tens of Watts from memory IO, the power budget can be pushed towards more compute on the GPU.

Also note that an HBM2 GPU is expected to have roughly double (1 TB/s vs. ~500 GB/s) the bandwidth of a 384-bit GDDR5X card, and what is "enough" bandwidth is a pure function of any individual task, not a magical absolute value determined by the GPU compute specs alone.

      • jihadjoe
      • 4 years ago

      3456 is nice and sequential though! I hope they do it just for the lulz

        • Voldenuit
        • 4 years ago

        “Because 7-8-9”.

      • Leader952
      • 4 years ago

[quote<]And no HBM2! What's going on?[/quote<]

GDDR5X at 12 Gbps is more than enough for GP102 (both "Titan P" and GTX 1080 Ti), and the costs are much lower than HBM2 and interposer costs.

[url<]http://www.anandtech.com/show/10193/micron-begins-to-sample-gddr5x-memory[/url<]

    • HERETIC
    • 4 years ago

    Yields must be really sh*t if they’re bringing 1060 off GP104.
    That’s 50% broken…………..
I was expecting the 1060 to be 1/2 of a 104 die later in the year, and with board makers' OC, performance to be in between a 970 and 980, with 4GB of ram and power sub-100 watts….
    time will tell……………………………..

      • NoOne ButMe
      • 4 years ago

Or they need to fill a gap in the market until GP106 arrives. If Polaris comes in at $350/$275 for something around a 1070 and one tier slower, Nvidia may rather want to do a part to stop potential market-share bleeding.

I think there was a mobile part in the 900 series (965M?) which used half a GM204 for a few months until GM206 launched.

    • hubick
    • 4 years ago

I can see upgrading to handle games coming out over the next few years, where I'm primarily concerned about keeping a steady 90Hz in VR or 30-60Hz at 4K, and I can see the stream processors helping with that. But I can't imagine too many games supplying textures where the 1080 Ti's 12GB of RAM would result in fetching from main memory often enough that the Titan would yield any real benefits (all else being equal) at normal quality settings?

      • DeadOfKnight
      • 4 years ago

      You would be correct. Think of Nvidia’s Titans like Intel’s Extreme Edition processors. Huge premium for that little bit of extra performance just to say you have the best that money can buy. Totally not worth it. The odd thing is that from what I’ve seen, most people who have that kind of money tend to upgrade frequently. It makes more sense to get a Titan if you plan to use it for 5+ years. If you are going to upgrade every couple years, don’t bother with the high end.

        • Kretschmer
        • 4 years ago

It never makes sense to get a Titan-class GPU, unless money is no object.

If you're willing to spend $X every Y years on a Titan, you'll see much better longevity by spending $X/2 every Y/2 years. E.g. $500 every three years will get you a lot more than $1,000 every six years.

          • Krogoth
          • 4 years ago

Titan GPUs (Kepler units) do make sense if you are a GPGPU enthusiast looking for Quadro/Tesla-like GPGPU performance for a fraction of the cost and if you don't mind missing out on ECC support and "prosumer" software certifications.

    • mike941
    • 4 years ago

    Nvidia should call the new Titan the Titentan.

    • Tristan
    • 4 years ago

Lol, using GP104 for the 1060 looks like a huge waste, with just 50% of shaders active.

      • DeadOfKnight
      • 4 years ago

      Yeah this rumor reeks of yield issues similar to Fermi. No wonder we’re getting a 560 Ti class chip for $700.

        • Prestige Worldwide
        • 4 years ago

        The milking continues

    • Tristan
    • 4 years ago

Let them also make a GP101, a real high-end chip for graphics, with 600mm², 5120 shaders (without FP64), and 16GB HBM2, for $1500-2000. Push the tech to the max.

      • ImSpartacus
      • 4 years ago

      So Nvidia sells one HBM-packing 600mm2 GPU for like $10k a pop and now you’re expecting them to design & ship [i<]another[/i<] 600mm2 GPU that'll only sell for ~$2k? Fat chance.

        • Tristan
        • 4 years ago

GP102 for $999 has the same FP32 perf as GP100 for $10K. GP100 has many advanced features like FP64, ECC, NVLink, and a special form factor, and they charge the extra $9K for those features, not for FP32 perf alone.

          • ImSpartacus
          • 4 years ago

          <500mm2 GDDR5X-packing GP102 coming out in 2017 for $999 is a lot different from the monster 600mm2 HBM-packing GP100 coming out in 2016. The ECC & NVLink aren’t the biggest cost drivers here.

          Nvidia is [u<]literally[/u<] doing everything that it possibly could do to control costs without seriously harming gaming performance.

            • NoOne ButMe
            • 4 years ago

Are you telling me expecting Nvidia to eat an extra $100+ BoM cost is not reasonable?! ;D
I guess $2-2.50 per GB for GDDR5X ($1.25-1.75 for GDDR5) and $6-7 per GB for HBM2, with a hypothetical 1080 Ti at 12GB of HBM (3072-bit bus) and a 16GB Titan, plus $20-40 extra for the interposer.
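To put rough numbers on that delta, here's a quick sketch using nothing but the guessed per-GB prices above (all of these are the commenter's estimates, not confirmed costs, and the variable names are just for illustration):

```python
# Guessed component prices, in dollars per GB (not confirmed figures)
gddr5x_per_gb = (2.00, 2.50)    # $/GB for GDDR5X
hbm2_per_gb   = (6.00, 7.00)    # $/GB for HBM2
interposer    = (20.0, 40.0)    # guessed extra cost of an HBM2 interposer/package

# Rumored 24GB GDDR5X Titan vs. a hypothetical 16GB HBM2 Titan
gddr5x_titan = [24 * p for p in gddr5x_per_gb]                        # [$48, $60]
hbm2_titan   = [16 * p + i for p, i in zip(hbm2_per_gb, interposer)]  # [$116, $152]

print(gddr5x_titan, hbm2_titan)   # roughly a $55-105 memory-BoM gap on these guesses
```

So the "extra $100+" figure sits at the high end of these guesses, but it's in the right neighborhood.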

          • the
          • 4 years ago

Actually, some of the markets GP100 is targeted at are primarily single-precision based. Those markets still want features like ECC and NVLink, though, and are willing to pay the price premium.

        • BurntMyBacon
        • 4 years ago

[quote<]So Nvidia sells one HBM-packing 600mm2 GPU for like $10k a pop and now you're expecting them to design & ship another 600mm2 GPU that'll only sell for ~$2k? Fat chance.[/quote<]

I'm getting old, so my memory is a little fuzzy. Could you remind me exactly how big the GM200 on the Titan X is? Sure, the Titan X doesn't have HBM, and we all know HBM can't be used on a chip without raising the cost 900%, so perhaps you'd rather compare to the Fury X.

As I see it, nVidia is selling GP100 for $10k for the same reasons that it sells Quadro cards for several thousand dollars (well, there is also NVLink, but Tristan didn't ask for that). Sure, yields on the new process tech will have a large impact on price, but yields invariably improve over the lifetime of the process. By the time a theoretical GP101 would hit the market, there is no reason to believe a 600mm2 chip couldn't be fabricated for double what it currently costs on the previous process tech, where it's been done twice.

That all said, I still don't believe it [b<][i<]will[/i<][/b<] happen any time soon. I think they'll save consumer-grade 600mm2 chips for Volta or even later.

For those who don't know, GM200 (Titan X) weighs in at 601mm2 and Fiji (Fury X) weighs in at 596mm2 without the interposer.

          • ImSpartacus
          • 4 years ago

          That’s not a fair comparison. 28nm was mature as fuck by the time both camps decided to hit the reticle limit in their final 28nm GPUs.

          Isn’t GP100 like the biggest 16nm chip ever at the moment? And yet, it is Nvidia’s first 16nm chip.

          I think we’ve all forgotten how much of a challenge it is to move to a new process. Fermi is no longer fresh in people’s minds.

            • BurntMyBacon
            • 4 years ago

[quote<]28nm was mature as **** by the time both camps decided to hit the reticle limit in their final 28nm GPUs.[/quote<]

Of course it was mature. I didn't say it wasn't. In fact, I specifically said that a theoretical GP101 wouldn't happen any time soon. The 780 Ti and 980 Ti took a good while to come to market.

[quote<]Isn't GP100 like the biggest 16nm chip ever at the moment? And yet, it is Nvidia's first 16nm chip.[/quote<]

[quote<]I think we've all forgotten how much of a challenge it is to move to a new process.[/quote<]

I fully agree that moving to a new process node is challenging (even more so than most here realize), but I'm not sure how pointing out that nVidia's very first 16nm chip hit the reticle limit supports that. Frankly, it is an incredible feat to get high enough yields on such a chip this early on to market it to anyone. It is, however, proof that it is doable, even if expensive to do. Given who they are marketing it to, I think if you assume similar profit margins to their Quadro cards, you'll have a reasonable guess as to how much of that expense is fabrication / assembly / software / support / profit / etc. Also consider that while process improvements will increase yields throughout the lifetime of the process, the largest gains come earlier (more low-hanging fruit, so to speak). It is entirely reasonable to think that by the time we're at or nearing the second-generation 16nm architecture, the cost to manufacture such a chip could be down to double the cost of manufacturing a similarly sized chip on a fully mature process.

[quote<]Fermi is no longer fresh in people's minds.[/quote<]

That's a perfect example of what I'm talking about. Remember how 500-series yields improved over the 400 series. That's the kind of time frame and improvement I'm suggesting here. Nothing that hasn't been done before.

[i<]Edit:
[quote<]That's not a fair comparison.[/quote<]
You're right. There is no fair direct comparison, so I chose the closest thing in size available and compensated for the lack of maturity with time to allow the process to mature.
End Edit[/i<]

Note: I'm not suggesting that nVidia will ever make such a chip. I doubt they see enough of a market to spin a second 600mm2 design for such a small corner of the market. If they did, they probably would have spun a second 600mm2 Maxwell chip for the FP64 crowd. All I'm saying is that it is plausible that they could if they really wanted to.

    • Demetri
    • 4 years ago

    And in AMD rumor news…

[url<]http://videocardz.com/60253/amd-radeon-r9-480-3dmark11-benchmarks[/url<]

480X looks to be just about on par with the base Fury.

      • ImSpartacus
      • 4 years ago

      It’s still pretty up in the air. Even videocardz didn’t want to share those leaks (read the article).

      • bfar
      • 4 years ago

If those are engineering samples we might not be looking at the final clocks. But if AMD sells what we're seeing here between $200-250, it could be a very popular product.

      • Redundant
      • 4 years ago

      Anytime AMD is mentioned I turn my AC down

    • NoOne ButMe
    • 4 years ago

1060 seems plausible to me. It lets them get crazy salvage off of the wafers until GP106 is ready (Septemberish?), then have a 1060 Ti to keep the salvage up without having to artificially cut down chips.

      • ImSpartacus
      • 4 years ago

      I doubt even a fresh process like this one is churning out enough faulty GP104s to satisfy demand for a mid-range card like that.

[i<]Maybe[/i<] we get a 192-bit GP104-based 1060 Ti (GK104-style), but I figure that's about it. A GP106-based 1060 could easily make do with a 128-bit bus if it's packing GDDR5X. And that would allow it to be pretty freaking tiny & economical (presumably economical enough to justify the GDDR5X in the top SKU).

        • NoOne ButMe
        • 4 years ago

I agree, but I see it like this:
GP106 is coming later. If it's still a long way off, Nvidia can do a more cut-down GP104 at the specs of a full GP106, at the price a 1060 Ti was planned for to start. Then, once GP106 is ready, Nvidia announces a price drop and a 1060 Ti that replaces the 1060 at its price.

If Polaris is good enough then it doesn't matter and Nvidia is forced into the 1060 Ti. I hope this is the case, as it benefits all GPU consumers more!

          • ImSpartacus
          • 4 years ago

          If Polaris 10 is priced aggressively enough, then yeah, maybe Nvidia might do something weird like that.

          But in general, I think there’s a reason that Nvidia starts at the top of their lineup and cascades down. It forces people to “buy up” into more gpu muscle than they would’ve normally wanted.

Nvidia would be undermining that profitable strategy if they released a gp104-based card that was cut down ALL the way to the eventual gp106.

          Though as you said, if Polaris 10 is aggressive enough, Nvidia might be thrown off balance.

            • BurntMyBacon
            • 4 years ago

[quote<]If Polaris 10 is priced aggressively enough, then yeah, maybe Nvidia might do something weird like that. [/quote<]

I doubt nVidia will do it then either. There have been more than a few cases where nVidia didn't respond or only mildly responded to an aggressive ATi price cut.

[quote<]I doubt even a fresh process like this one is churning out enough faulty GP104s to satisfy demand for a mid-range card like that. [/quote<]

Neither do I. At least not in the mid/long term.

            • NoOne ButMe
            • 4 years ago

If GP104 has to fill the gap it should only be 2-3 months, not long term. As stated, the goodwill from replacing it with a much faster 1060 Ti may also make it worth it?

    • Ninjitsu
    • 4 years ago

    While these look plausible, and indeed there were rumours of three GP104 variants, I will be shocked if the 1080 Ti and Titan release this year. I’m also admittedly surprised by GDDR5X on the Titan, and the severely cut back FP64. I suppose this was needed to hit yields.

There is one discrepancy, though – the 1070 has a base clock of 1506 MHz and a boost of 1683 MHz, according to GeForce.com.

    Anyway, if all this is true, we have:
    1060 ~ 970
    1070 ~ 980 Ti
    1080 ~ 1.3x 980 Ti
    1080 Ti ~ 1.65x 980 Ti, (1.25x 1080)
    Titan (Pascal) ~ 1.9x 980 Ti

This further leads me to think that Vega in October may be legit, that AMD's new cards may be quite good, and that prices are going to fall by the end of this year.

      • NoOne ButMe
      • 4 years ago

Why? Why spend valuable die area on crap you don't need like FP64 and NVLink? Plus, even turned-off transistors still draw some power…

      Sadly going GDDR5x instead of HBM kills some of the area savings. But still much cheaper overall than HBM.

        • Deanjo
        • 4 years ago

[quote<]Why? Why spend valuable die area on crap you don't need like FP64 and NvLink?[/quote<]

Because that is the "killer feature" that brought on the birth of the Titan series and was the reason behind it. Otherwise it is really nothing different than a traditional x80 Ti release.

          • chuckula
          • 4 years ago

          NvLink was never intended for PCs [even workstation PCs]. Ever.
          FP64 is useful for some workstation cards, but those will be Quadros, not consumer parts.

            • Deanjo
            • 4 years ago

I was referring more to the FP64, and again, FP64 is what set the Titan on its path for people that wanted FP64 capabilities without the added cost of going to a Quadro or the need for items like ECC.

$1000 for a Titan is far more affordable than $6000-$8000 for the Quadro variant, and that is why the original Titan sold like hotcakes.

            • NoOne ButMe
            • 4 years ago

Saving money while still being able to make a $1000 card and a $650-750 card takes precedence over making some people happy about saving $5K+.

            • Deanjo
            • 4 years ago

            For those people there is the x80 Ti line.

            • NoOne ButMe
            • 4 years ago

            And the Titan X?
            All I see is “I want Nvidia to give me FP64 for cheap” and Nvidia has no reason to do so.

The only pattern I see with Titan cards so far is that they cost $1000+ per GPU. After Nvidia gave full FP64 support to the first GK110 Titan, they couldn't take it away from the Titan Black and Titan Z without huge backlash. With GM200 and going forward, I predict they'll never deliver a die that could have strong FP64 support if they can avoid it.

            • Deanjo
            • 4 years ago

            Which begs the question why even bother offering a Titan then when they have the x80 Ti series. Again, FP64 was really the only thing that separated it from the x80Ti line. With the last generation you could see them downplaying that as it was not optimized for heavy compute (which is also why the 980 Ti line blew away Titan X sales).

            The original description for the Titan was:

[quote<]GeForce GTX TITAN Black

GeForce GTX TITAN Black is a masterpiece of engineering. Starting with the award-winning GTX TITAN, the Black edition adds 6 GB of frame buffer memory, spectacular performance, [b<]double precision[/b<] and amazing thermals and acoustics. GTX TITAN Black is the ultimate gaming GPU for a pure gaming experience—the perfect balance of sleek design, uncompromising performance, and state-of-the-art technologies.[/quote<]

It was a key feature, and the only real feature that separated it from the 780 Ti. If FP64 wasn't needed, then people grabbed the 780 Ti, as it had all the same capabilities except for FP64 performance. nVidia would have a very good reason to offer FP64 performance: it adds value to the Titan line that would persuade people to purchase a premium product over their otherwise identical x80 Ti line for a few hundred dollars more, at the same cost of production.

            • chuckula
            • 4 years ago

            There’s a difference:
The original Kepler Titan offered FP64 as a bonus because the chip already had the 64-bit hardware anyway. It was market segmentation, and unlike some fanboys I note that there's nothing wrong with that.

            However, these purported GP-102 chips are a different beast: They aren’t going to offer strong 64-bit not because of some market segmentation strategy but because the silicon literally doesn’t have the 64-bit hardware available [at least not a large amount]. As they discussed in the Pascal architecture article, the 64-bit ALUs are literally separate hunks of silicon from the “regular” 32-bit hardware. Not a problem on a huge & expensive P100 HPC chip, but it becomes a much bigger deal on a piece of silicon that sells to consumers. Stripping out all those 64-bit ALUs and associated hardware is a great way to get a smaller chip that’s just as good or better at regular graphics, even though it suffers in HPC workloads.

            • Deanjo
            • 4 years ago

            And again then there is no need for a Titan line at all. That is the role that the x80 Ti line can fill.

It would be pretty bad that my 3-year-old Titans can still slaughter the latest and greatest, offering 6X the performance in FP64. Guess it just makes my purchase three years ago even that much better of a deal.

            • NoOne ButMe
            • 4 years ago

There is a need for a Titan line: being able to sell a card for a few hundred more dollars than an x80 Ti card. There is a market willing to pay over $1000 per card, and it is in Nvidia's interest to serve that market and make an extra few hundred bucks per sale.

            • Voldenuit
            • 4 years ago

            There’s a difference between buying an overpriced gaming card and buying a consumer workstation card for GPGPU applications.

Not every startup or small business can afford a $10k GP100. If anything, the more gaming-centric workloads of the Titan X and rumored GP102 indicate that nvidia does not see a business opportunity for broader GPGPU adoption.

            • Krogoth
            • 4 years ago

Kepler-based Titans seem to be nothing more than a happy accident for GPGPU enthusiasts. Nvidia wanted to off-load excess GK110 stock and/or silicon that didn't reach the TDP levels desired for their Quadro and Tesla tiers.

They experimented with extra GK110 chips under the "Titan" moniker and pitched them as an "ultra high-end" gaming solution like Intel's own "Extreme Edition" chips. GPGPU enthusiasts saw them as a "poor man's" Quadro and Tesla. They didn't mind missing out on ECC support and "prosumer-tier" certifications.

A Maxwell "Titan" with full FP64 support didn't exist because of yield issues, and because Maxwell was really designed with gaming in mind, not compute. The Maxwell "Titan" was just a fully enabled GM200 chip with double the memory capacity of a 980 Ti (a GM200 chip with two SMM blocks disabled).

It is possible that Nvidia may attempt to repeat something similar to the GK110 "Titans" with GP100 and "GP102" chips that didn't meet Quadro and Tesla standards or are just excess stock.

            • Freon
            • 4 years ago

            Can you share this sales data? I’m curious how many they really sold.

            • Deanjo
            • 4 years ago

Let's put it this way: according to my guy at NCIX, they ordered as many Titans and Titan Blacks as they could get their hands on during their production run and could not keep them in stock. When they did come in stock, they sold out the same day if they were not already pre-purchased. The Titan X they never had a problem keeping in stock, because they rarely sold any.

Even take a look on eBay: original Titans and Titan Blacks are hard to come by, and when they do sell, they still fetch premium prices.

        • Anovoca
        • 4 years ago

[quote<] Sadly going GDDR5x instead of HBM kills some of the area savings. But still much cheaper overall than HBM. [/quote<]

Titan and Ti cards are built to be spare-no-expense products. Going GDDR5X instead of HBM2 definitely sounds more like a supply-and-demand decision than a marketing one.

          • NoOne ButMe
          • 4 years ago

By the start of 2017, HBM2 volume should be in full swing. Whenever Vega launches, HBM2 supply shouldn't be the issue.

The issue becomes cost, and whether you have the technology working properly.

          • the
          • 4 years ago

Yield and cost would be another set of explanations. Even with ample supply of HBM2 and GPU dies being manufactured, there is the extra step of bonding both to an interposer. There [i<]could[/i<] be a bottleneck here that'd prevent high-volume production. So far we only have AMD's Fury line, GP100 (which is shipping in VERY limited quantities in 2016), and a few specialized designs making use of HBM today.

This extra step also increases costs, not only from the bonding process but because the larger interposer also needs to be manufactured. Thankfully, current interposers can use ancient manufacturing processes (65 nm, for example) without impact, since they don't contain any logic. Even in the spare-no-expense premium world of Titans, these extra steps will eat away at profits, which is something nVidia is reluctant to give up. Alternatively, they could just increase prices, but the market may not accept that change. nVidia is likely waiting for HBM costs to come down before making the technology more mainstream.

      • ImSpartacus
      • 4 years ago

What surprises you about GDDR5X on a Titan? The 384-bit bus allows for some [i<]very[/i<] marketable VRAM capacities (and the price point can support all of those pricey GDDR5X chips). People seriously eat that VRAM shit up. It's amazing. Furthermore, it allows a halved-capacity 1080 Ti with 50% more VRAM than GP104. The 384-bit G*#00/256-bit G*#04 combo worked very, very well for Nvidia in Maxwell & Kepler. Why should they stop now?

EDIT: I read that again, and it sounds like you're assuming that the Titan Pascal & 1080 Ti would be GP100-based. That's what GP102 is for, bro. Cut out all of the stuff that gamers don't need (FP64, HBM, etc) and you're golden.

[url=http://www.dvhardware.net/article63551.html<]We had leaks of the Pascal codenames in late 2015[/url<]. So far, I still trust them.

        • Ninjitsu
        • 4 years ago

        Yeah I had assumed that the Titan would get FP64 for prosumer stuff. :/

        I had also expected this much later when they were done milking the supercomputer market.

        EDIT: Wasn’t necessarily expecting a full GP100 for the Titan (i.e. with all SMs enabled).

          • jts888
          • 4 years ago

          High FP64 consumer parts just gut the workstation/professional/HPC market, where the entry level pricing is never under $1k/card and often obscenely higher.

          Hawaii was full 2:1 FP32:FP64 but AMD kept the Radeons locked to 8:1 IIRC.

          If Nvidia went the same route with GP100 (and I really don’t see it as a design very suitable for gaming), they would almost certainly keep the FP64 capacity locked down to at least as great an extent.

            • Ninjitsu
            • 4 years ago

See Deanjo's reply – FP64 was the reason for the Titan in the first place.

            EDIT: What you’re saying is true – I’m just elaborating as to why I was expecting HBM and FP64 on the Titan.

            • jts888
            • 4 years ago

Yeah, the Titan (561 mm^2 GK110 = uncut 780(Ti)) was designed for 3:1 FP32:FP64, but remember that this was the 2nd generation of Kepler and the 2nd round of 28 nm GPUs.

I can understand selling a monstrous 600 mm^2 die full of FP64 ALUs and whatnot to the HPC market when you can get thousands of dollars apiece, but I'm skeptical about the consumer/prosumer market supporting something comparable at the appropriate relative pricing they would demand.

When the 1080 is going to be pretty much $700 for little more than a die-shrunk and overclocked Maxwell, a 50% beefier GPU and memory pool would start at $1k on an equal price/performance basis before you even got into the new/exotic stuff.

          • ImSpartacus
          • 4 years ago

I thought the same thing, but the $700 1080 debut price tells me that Nvidia won't be in a position to sell a GP100-based Titan for anything near $1000 in 2016. But as we get into 2017, GP102 rears its head to make GP100 obsolete in the consumer space.

      • Sabresiberian
      • 4 years ago

      I think they’ll be released this year, primarily because they are going with GDDR5X and not HBM2, which I think is due to availability more than anything else. Of course there are other possible reasons; Nvidia could consider that HBM2 offers no real performance advantage at this point (over GDDR5X), and of course HBM2 will likely be significantly more expensive.

      I also think they’ll want to counter Vega as soon as they can – assuming of course that Vega performs significantly better than Pascal.

    • DPete27
    • 4 years ago

    GTX 1060 – 192 bit memory (shudder) reminds me of my GTX660. The GTX 660 was also rumored to be a GK104 but came out as GK106.

      • derFunkenstein
      • 4 years ago

      the 192-bit bus with 6GB of RAM at least means that it’s 3x 64-bit memory controllers with an equal amount of memory each. No weird partitioning like the 660 (or the 550Ti before it).

        • ImSpartacus
        • 4 years ago

        That’s what makes me think it’s not legit.

        Polaris 10 is strongly rumored to sport 8GB of VRAM and it’s going to compete directly with the 1060 (and that general area of the market).

        Nvidia has a storied history with artificially inflating VRAM capacities (970, 660 Ti, 660, 550 Ti, etc). And, on the whole, Nvidia has been wildly successful against their more genuine/honest competitor. So why are we expecting Nvidia to stop doing what’s made them so wealthy?

      • Airmantharp
      • 4 years ago

      It shouldn’t matter whether it’s a GP104 cut in half, or a half-size GP106, so long as the clocks, CUs, and memory bandwidth are all there.

    • DancinJack
    • 4 years ago

    YESSSSSSSSSS GIMME GIMME

    • Klyith
    • 4 years ago

    1060 feels a bit weak — seems like nvidia is ok with letting AMD keep the low-end market.

      • DancinJack
      • 4 years ago

      Yeah, GTX 970 perf is “low-end.”

        • Dysthymia
        • 4 years ago

        It will be at some point.

      • willmore
      • 4 years ago

      Yeah, I was looking at that, too. That’s the product segment where I’m likely to buy my next card. Half of the chip disabled? Ouch.

Looks like I need to see what AMD comes up with this time. Last cycle my purchase went to AMD as well, with an HD 7850, because nVidia didn't have anything competitive at the time. I just always seem to purchase in the cycle where the red/green balance is tipped. Well, there were a *lot* of green cards before that. 😉

      • chuckula
      • 4 years ago

      Yeah, AMD truly owns the low-end like nobody else.

        • DancinJack
        • 4 years ago

        lol +3

        • Concupiscence
        • 4 years ago

        Withdrawn, not worth it

        • ImSpartacus
        • 4 years ago

        fkin savage

        • DPete27
        • 4 years ago

        I dunno. I was/am pretty impressed with the GTX950.

      • derFunkenstein
      • 4 years ago

The 1060 (if this is true) would represent a pretty huge step up over the 960, at least. You have to get past the CUDA core count – those clock speeds mean it'll be around 40% faster than even an OC'd 960 without accounting for all the extra resources (50% more ROPs, for example), and add to that >50% more memory bandwidth because of the wider bus.

      • Voldenuit
      • 4 years ago

      1280 SPs at 1545/1658 MHz should (assuming equal efficiency) work out to be about as fast as a GTX 970 or thereabouts.
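(For reference, a rough peak-FP32 sketch of that comparison, using the rumored 1060 clocks and the GTX 970's stock specs; real-world boost behavior and architectural efficiency will shift these numbers, so treat it as ballpark only. The helper function below is just for illustration.)

```python
# Peak FP32 throughput in TFLOPS: shaders * 2 ops per clock (FMA) * clock speed
def peak_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz * 1e6 / 1e12

rumored_1060 = peak_tflops(1280, 1658)   # ~4.2 TFLOPS at the rumored boost clock
gtx_970      = peak_tflops(1664, 1178)   # ~3.9 TFLOPS at the 970's stock boost clock

print(f"Rumored GTX 1060: {rumored_1060:.1f} TFLOPS")
print(f"GTX 970:          {gtx_970:.1f} TFLOPS")
```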

      If you look at Polaris 10, its rumored specs and speed also put it in roughly the same bracket as a R9 390.

      So both nvidia and AMD look to be offering similar performance to the 970/390 at a price bracket that’s hopefully $100 cheaper than the MSRP of the 970/390.

      That’s not too bad for midrange; it’s not OMG +70% performance that the 1080 offers, but it’s more than enough for 1080/1440p gaming, and maybe even enough to hit minimum specs for VR.

        • ImSpartacus
        • 4 years ago

        Bingo, this man gets it.

        The Rift/Vive VR min spec (290/970) drops down to mid-range in 2016. Polaris 10 & GP106 are both almost certainly going to have SKUs based around barely meeting the VR min spec for the lowest possible cost.

        AMD already [url=https://youtu.be/p010lp5uLQA?t=16m<]officially stated that Polaris would drive down the price point of the VR min spec[/url<] in order to expand the market of people that can buy VR headsets. This should be news to no one.

          • NoOne ButMe
          • 4 years ago

If saving $50 on the VR-spec card (I think a cut-down P10 will be about $250 at the lowest, and a 970 is around $300 now) is what lets you get a $600 VR headset, I wonder why you couldn't afford it to start with.

If PSVR works with PC, I start to see potential in cutting $50 off the price…

            • ImSpartacus
            • 4 years ago

            It’s not the launch price that will expand the market, it’s the ability to cut the price later because the chip will eventually become cheaper to make on a 14/16nm process.

            Because as you said, 970s are already available for less than $300. A $250 replacement isn’t that big of a deal. But what about when that replacement slowly drifts down to $200 after a couple months?

Also, for amd, Hawaii is pretty expensive to produce. It's a decent-sized die with a giant memory bus. That's why Roy Taylor's VR LA speech touched on amd's costs. Polaris 10 really helps with that. Likewise, the 970 is already sorta thrifty for Nvidia, so it doesn't help them to the same magnitude.

            • NoOne ButMe
            • 4 years ago

Even $100 cheaper seems to me not enough, as currently the cheapest confirmed PC VR headset is $600. The cost of entry being $100 lower does not hurt, but I don't think it helps much.

            • DPete27
            • 4 years ago

            I don’t think fabrication yields play into pricing as much as supply and demand.

            • NoOne ButMe
            • 4 years ago

            when the fab has more demand than supply it does.

            • terranup16
            • 4 years ago

Less about the price of VR today, more about the market capacity for VR tomorrow. Someone who buys a 1060 or 480 for a MOBA or another 1080p game that isn't super demanding, but where the headroom or >60 fps is nice to have, can in two years buy a cheaper Vive/Rift headset and get into VR.

            No arguments that we’d expect VR min spec to increase year over year, but I think there is a strong push from the headset manufacturers and investing developers to make sure that a GTX 970 works adequately well with most VR titles even if the settings are “low”. If that is achieved, nV and AMD get a kickback of sorts because that $200 GPU purchaser playing games that only needed a cheaper GPU is now enjoying content that can easily scale beyond what the top-end GPU setups are capable of, so that user has a stronger interest to make their next card a $350+ purchase.

        • muxr
        • 4 years ago

        > If you look at Polaris 10, its rumored specs and speed also put it in roughly the same bracket as a R9 390.

        No they don’t. Even the rumors don’t suggest this level of performance from the P10 parts. We simply don’t have a clue what the performance will be like.

        People extrapolate R9 390 performance because AMD said they will provide VR experience at mainstream prices, and everyone just assumed it’s R9 390 levels from that statement.

        We don’t even know how many shaders the top P10 part will have, so we can’t even begin to predict the performance.

          • NoOne ButMe
          • 4 years ago

Polaris has major changes in its internal parts. It could have much higher performance per CU or clock much faster.

When I first heard the rumored core counts for GM107 I was surprised; then it turned out it had much higher per-core performance and higher clock speeds.

      • trek205
      • 4 years ago

How does the 1060 look like any less of an upgrade over the 960 than the 1080 is over the 980? In fact it looks better, as the 960 was exactly half of everything, whereas here the 1060 has a 192-bit bus. Bottom line is that the 1060 should be an even slightly bigger upgrade over its predecessor than the 1080 over its.

        • Klyith
        • 4 years ago

        You’re right, the 1080-70-60 is following almost the exact same pattern as the 900 series.

        I guess I just don’t like nvidia’s market segmentation. It feels like they’re trying to push the higher price cards by sandbagging the cheaper one. The price/performance curve for their lineup is much flatter than the normal PC gear. Which is fair I guess, you get what you pay for. But I need a new video card and I don’t want to pay $400.

        Hopefully polaris is good so we can get some competition back. AMD at least still fills that $300 sweet spot.

      • ImSpartacus
      • 4 years ago

      The thing that makes me call “bunk” is the 6GB of VRAM. Polaris 10 will be the direct competitor of the 1060 and it’s rumored to have 8GB of VRAM.

      Nvidia has shown that it’s willing to play games to maintain marketable VRAM capacities (970, 660 Ti, 660, and that’s just recent history).

      Shady as it might be, it’s worked well for Nvidia and I don’t see them stopping any time soon.

        • kmm
        • 4 years ago

        If it has 3/4 of the ROPs, 3/4 of the memory bus, and 3/4 of the memory, what kind of games do they need to play here?

          • ImSpartacus
          • 4 years ago

          The same games that they played with the 660 ti, which was also the heavily cut-down third variant of a G*104 part. The similarities are enormous.

          And yet, [url=http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/2<]the 660 ti still had 2gb of vram, just like its 670 and 680 cousins even though it only had 3/4 of its memory bus to play with[/url<]. The 660 ti was a popular card. It was very marketable against 2gb competitors. The hypothetical gp104-based 1060 ti needs to compete against 8gb Polaris 10-based cards. What makes you think Nvidia will deviate from the successful strategy that worked in the Kepler days?

        • synthtel2
        • 4 years ago

        How would 6 GB not be enough? Doubly so because it’ll be a cheap minimum VR spec card (VR doesn’t take as much VRAM as equivalent normal loads). There are plenty of cards out there right now with similar performance, 4 GB VRAM, and no trouble at all. If you mean it’ll be bad for marketing to have a card with less VRAM than the competition, they’ve done it before.

          • ImSpartacus
          • 4 years ago

          For performance and practical reasons, it’s plenty.

          But we all know that those aren’t the only considerations that a gpu maker looks at when they plan a chip. Unfortunately, vram capacity is one of the most important marketing bullet points for a graphics card.

At just about every chance in recent history, Nvidia went out of their way to jury-rig enough vram on their cards so as to compete on capacity (970, 660 ti, 660, 550 ti, etc). I simply don't see them stopping now in the face of an 8gb Polaris 10.

      • anotherengineer
      • 4 years ago

1280 shaders, same as the old radeon 7870, which has aged very well.

And exactly the ballpark I'm looking at for a card, as long as it's in the $180-$220 range.

      • Hattig
      • 4 years ago

      It’s a stop-gap product until GP106 is available.

      • Freon
      • 4 years ago

50% more memory bus width, better bandwidth efficiency, 50% more ROPs than the 960, and ~30% more clock speed.

      By these specs I would expect it to at least match the 970 and likely beat it. That’s not good?

        • Ninjitsu
        • 4 years ago

        70% over the 960 would have been better! 😛

      • Krogoth
      • 4 years ago

It is perfect for 2-megapixel gaming for current and future DX12 content. You need something more if you want to go 4-megapixel gaming and beyond.

    • chuckula
    • 4 years ago

Is there a big Pascal coming in the future? No doubt.
Is it this rumored GP102 specifically? Definite doubt.
Is it coming this year? High level of doubt.

      • Srsly_Bro
      • 4 years ago

      AMD’s Vega could speed the roll-out of new products.

        • nanoflower
        • 4 years ago

        That would depend on whether Nvidia was holding back their top of the line cards to milk the suckers/customers that are willing to pay a premium for the 1080 and then upgrade to the 1080TI/Titan.

          • Srsly_Bro
          • 4 years ago

          lol “depend on whether.”

          TWIMTBM

          The Way It’s Meant to Be Milked®

          • ImSpartacus
          • 4 years ago

          “Depend on”? Dude, they definitely are.

          Nvidia knows how to keep the shareholders happy.

The existence of GP100 shows that Nvidia could probably pull a Fermi and debut Pascal in the form of a "big" chip. But they learned from Fermi, and their post-Fermi strategy of leading with the G*#04 part has been ridiculously successful. With AMD in a corner, forced to make their debut 14/16nm parts attractive to Apple & the 4K consoles, Nvidia doesn't need to shake up their strategy.

        • Kretschmer
        • 4 years ago

        Wouldn’t Vega itself have to be on time, though? I highly doubt that we’ll see it before Q2 2017…

      • ImSpartacus
      • 4 years ago

      I think the idea of a GP102 makes sense. GP100 is expensive, but Nvidia still needs a big Pascal for gamers.

      Though admittedly, it’s ridiculous to think that we’re getting it in 2016.

      • Kougar
      • 4 years ago

Why? Titan / Ti models seem to show up 6-8 months after the new-gen launch, which puts it right at Nov/Dec, the time frame NVIDIA likes to launch stuff for the holidays. If the card isn't going to use HBM2 then there isn't anything delaying it. Volta is due next year, so there isn't much point in delaying a Titan model further.

        • ImSpartacus
        • 4 years ago

Volta is due next year in the same way that big Polaris was a sure thing for Q1 2016. There are probably strings attached.

        Nvidia didn’t create a custom form factor for gp100 just to abandon it. If Volta comes on a new process, then there’s a good chance that it will debut in the same manner that gp100 debuted, that is, in a server blade that costs a hundred grand.

        But we’ll see. Who knows?

      • yogibbear
      • 4 years ago

      God dammit. Do I… get a 1080GTX… or do I wait?!?!?!?!……………

      How long until a 2080GTX Ti with HBM2? Might wait for that!

      Or will it be called a 1440GTX Ti?

        • ImSpartacus
        • 4 years ago

        Geforce GTX 4K – it’s worth waiting for.

        • End User
        • 4 years ago

        Which resolution are you running at now?

        I play at 2560×1440. I’m buying a 1080 at launch.

          • chuckula
          • 4 years ago

          You have a point: It’s not that Nvidia won’t release something faster in the future. Of course they will.

          It’s more along the lines of: At 2560×1440, will that faster solution really make enough of a difference to make waiting and still paying a high price worth it? I can see how the answer would be no at 2560×1440 (I’m in a similar boat myself).

          At 4K resolutions? The answer to the same question could be different.

            • End User
            • 4 years ago

            4K is too pricey for me at the moment. I think the sweet spot right now is FreeSync/G-Sync @ 2560×1440.
