Rounding up Nvidia GeForce RTX rumors

Gerbils, it's time to strap on your muck waders and sit down with the whole can of Morton's, because we're about to wade into the rumor mire. The color: Nvidia green. The company released its latest generation of graphics cards with the high-end RTX 2080 and 2080 Ti models back in September, followed by the lower-priced RTX 2070. Presumably, Nvidia will follow with RTX 2060 and RTX 2050 variants; nothing has been confirmed or released yet, but predictably, there are rumors and leaks aplenty.

Image: EEC

A Gigabyte filing with the Eurasian Economic Commission indicates no fewer than 39 GeForce RTX 2060 graphics cards are on the way. The filing, first spotted by Twitter user @KOMACHI_ENSAKA, lists RTX 2060 cards in various configurations with 3 GB, 4 GB, and 6 GB of GDDR5 or GDDR6 memory. It's possible that not every card listed in the filing will find its way to market, of course.

The GeForce RTX 2060 was the subject of a number of leaks earlier in the month. One of the folks responsible for those tidbits, TUM_APISAK, also spotted an OpenCL result in the Geekbench database for an Nvidia graphics card that conspicuously comes up as "Graphics Device." Despite having only 14 compute units—896 shader ALUs, or "CUDA cores"—the mystery chip put up an OpenCL score of 114206. That apparently puts it well ahead of the GeForce GTX 1050 Ti, its closest extant relative, in that benchmark.

There are a lot of things that could be going on there, to be sure. Falsified or "cheated" results are the most obvious explanation, but all signs point to these numbers being legitimate. The unnamed card's performance lines up neatly with the recently leaked RTX 2060 Geekbench result (which TUM_APISAK helpfully displays immediately alongside) once you account for the difference in compute resources.
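
If you want to run that sanity check yourself, it's back-of-the-envelope arithmetic. Below is a minimal Python sketch: the 64-cores-per-SM figure is standard for Turing, but the strictly linear scaling is a simplifying assumption (clock-speed differences will skew it), and you'll need to supply the leaked desktop score for the comparison yourself.

```python
# Back-of-the-envelope check on the "Graphics Device" Geekbench entry.
# Assumes 64 CUDA cores per Turing SM and strictly linear scaling with
# shader count, which ignores clock-speed differences.

mystery_cus = 14              # compute units reported by Geekbench
cores_per_sm = 64             # FP32 ALUs per Turing SM
mystery_score = 114_206       # OpenCL score from the database entry
rumored_2060_shaders = 1_920  # rumored desktop RTX 2060 shader count

# Confirm the CU count matches the reported 896 shader ALUs.
assert mystery_cus * cores_per_sm == 896

# Naive projection: score per shader, scaled to the rumored configuration.
per_shader = mystery_score / (mystery_cus * cores_per_sm)
projection = per_shader * rumored_2060_shaders
print(f"Score per shader: {per_shader:.1f}")
print(f"Linear projection at {rumored_2060_shaders} shaders: {projection:,.0f}")
# Compare the projection against the leaked RTX 2060 result; if they're
# in the same ballpark, the two database entries are self-consistent.
```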

Image: Cjscope

There's nothing specifically indicating that the card is a desktop part, though, which segues nicely into our next bit of maybe-news. The last time we told you about these RTX rumors, we noted that the aforementioned TUM_APISAK had spotted results for a number of mobile GeForce RTX GPUs. Now, Taiwanese laptop maker Cjscope has gone and done us the favor of spilling the specs on the full-fat mobile RTX 2060, RTX 2070, and RTX 2080 cards. This data comes from the product page of the company's range-topping HX-970 GX laptop. Said page has since been pulled from the Cjscope site, but the link above goes to Google's cache of the page. (Thanks to WCCFTech for the link.)

You can pore over the parts if you like, but here's the short version: Just as with the Pascal-powered mobile GeForces, the mobile GeForce RTX family seems to be nearly identical to its desktop counterparts. The sole exception is the GeForce RTX 2060, which appears to be slightly cut down compared to what we know about its AIB cousin—only 1536 shaders versus the rumored 1920 of the desktop model. That card isn't even released yet, though, so it's difficult to really draw any conclusions.

It's worth noting that the HX-970 GX is a laptop-in-name-only that uses socketed Intel CPUs and likely has a battery life measured in minutes, not hours. Thus, these versions of the mobile RTX cards will probably be found exclusively in similarly endowed gaming laptops. More conventional portables will no doubt end up using the Max-Q versions of these chips that we discussed previously. Thanks to VideoCardz and TUM_APISAK for the tips.

Comments closed
    • Johnny Rotten
    • 11 months ago

    All of these nvidia whisperings take a backseat to AMD at the moment. Everyone is waiting to see if the rumored specs/prices of the new 7nm radeons are true. If they are, these 2060s are dead.

      • jihadjoe
      • 11 months ago

      You know, as a fellow TSMC customer, Nvidia has access to the same 7nm process AMD will be using.

      Actually, I bet they have 7nm Turing GPUs lined up and are just not talking about them; otherwise there’d be even less reason to buy anything RTX today.

        • Voldenuit
        • 11 months ago

        It’s not like you can just flip a switch and jump to 7nm. Yes, nvidia has probably put in the chip design work and resources for 7nm, but they still have to recoup the development costs of 12 nm Turing, not to mention honor their existing supply contracts with board and OEM partners.

        Turing was a perfect storm of disasters for nvidia. I don’t think anyone at nvidia could have predicted:

        a) an overstock of 300k Pascal inventory they have to sell off
        b) underperformance of Turing in the market
        c) all the major games they pinned their hopes on to make RTX a must-have flopping. BF V massively undersold compared to BF 1, Shadow of the Tomb Raider has been the worst-selling entry in the reboot trilogy, and Hajime Tabata left Squeenix, leaving FF XV unfinished.

        If 7nm Navi performs as well as predicted, at the prices predicted, it will massively undercut nvidia in traditional rasterized games. And I, for one, welcome our new red-logo overlords (not that RTG hasn’t had its share of troubles this year). An 80/20 industry split has not been healthy for consumers, and we might finally see market share shift to a better balance, if AMD can pull this off.

      • psuedonymous
      • 11 months ago

      Anyone expecting 7nm to be a magical performance improvement (CPU or GPU) or price drop is going to be vastly disappointed. 7nm will merely continue the trend that started with 28nm: performance remains constrained or regresses, cost/transistor increases, and any performance increase comes from throwing more transistors at the problem (and thus increasing cost).

      • Krogoth
      • 11 months ago

      The 2060 will simply adjust its pricing to what it should be: ~$249-$299. Nobody wants the 2060 at its current pricing. It was intentionally done this way to make the 1070 Ti look good by comparison.

    • Rakhmaninov3
    • 11 months ago

    Good lord. I simplified my gaming life by getting an Xbox One X. Looks beautiful on a 4K TV. It’s a rebellion against my hardware-enthusiast core, but it’s a hell of a lot of fun, and the whole thing cost “only” $500. And the reflections in GTA V look better than the ray-traced ones coming out of the 2080.

    • DPete27
    • 11 months ago

    I’ve seen/heard:

    RTX2060 is as far down the chain as RT will go. Any lower than that and ray tracing isn't viable.

    There will be GTX1160(Ti?) and down. Turing without RT.
    The GTX1160 is presumably a counter to AMD's purported Navi chips, which are said to be insanely price-competitive, so Nvidia needs to simplify the die in order to match that price/perf.

    The RTX 2060 will launch at CES in January to get the jump on Navi and maximize revenue on those chips before they need to launch the GTX1160.

      • Chrispy_
      • 11 months ago

      Like most people who are witnessing the RTX 2070 struggle to justify its RTX features, I’m really not interested in Nvidia’s self-serving desire to push RTX into the mainstream. The RTX 2060 will be nearly pointless if the inadequacy of the 2070 for raytracing is any indicator.

      An 1160 in Q2 would probably be the best mainstream option – the Turing architecture and feature set, with the cost/power savings of culling the die-area-hungry RTX hardware, coupled with the benefits of TSMC’s 7nm process.

      I’m expecting performance/Watt and performance/$ increases of 30% or more, which means we’ll hopefully be seeing GTX 1060 performance and thermals in place of the 1050 – a huge boon for the laptop market – and a *desktop* GTX 1070 equivalent at the $200 price point without needing fancy dual-fan, multi-heatpipe cooling.

      * edit for clarity.

        • dragontamer5788
        • 11 months ago

        RTX 2060 will be useful as a developer GPU, for anyone who wants to emulate the #1 and #2 TOP500 supercomputers in the world (Summit and Sierra). You actually have *every* feature that's in the NVidia V100, but scaled down and cheaper. Seems like a good niche IMO.

          • Krogoth
          • 11 months ago

          or be used up in the next round of crypto-currency mining madness.

            • dragontamer5788
            • 11 months ago

            Nope. Cryptocurrency miners use whatever is most power-efficient and cost-efficient. I expect the Rx 590 to take the cake, maybe AMD Vega chips.

            All of those tensor-cores and raytracing cores are utterly worthless to cryptominers.

            • Krogoth
            • 11 months ago

            It doesn’t stop somebody from developing a new crypto-currency that takes advantage of the tensor cores on the Turing/Volta architecture. The Turing/Volta architecture in itself is much better at FP than its predecessors. There’s a reason why Volta is stomping GCN in raw performance at general compute-related stuff.

            Nvidia already took note of the impact of the crypto-currency madness and designed their future chips accordingly. They want to utterly crush AMD RTG in its own niche.

            • dragontamer5788
            • 11 months ago

            "It doesn't stop somebody from developing a new crypto-currency that takes advantage of the tensor cores on Turing/Volta architecture."

            And THAT goes against all of the principles of the cryptocommunity. The cryptocommunity is about decentralization and democratization of coins. They prefer GPUs that are widespread. Heck, a large community (Monero) is trying to keep things CPU-optimized, because CPUs are more widespread than GPUs.

            "The Turing/Volta architecture in itself is much better at FP than its predecessors. There's a reason why Volta is stomping GCN at general compute related stuff in raw performance."

            Floating-point performance is completely worthless in cryptography. Floating-point operations are non-associative, imprecise, and often have ill-defined error bounds. It's incredibly difficult to get two different floating-point computers to output the same bit-level values, even if they are IEEE-compliant. Absolute *precision* of every single bit is necessary for good cryptography; it's just fundamental to the subject matter. It's why cryptocoin advocates have historically preferred AMD GPUs like the RX 580, which for some insane reason are designed for high integer-operation throughput.

            FP16 operations will *never* be used for cryptography. They're too imprecise and are by definition irreversible operations. Now... maybe 8-bit and 4-bit operations in the tensor-core space *may* be used. But once again, we run into the "centralization" problem, which is bad politics for cryptocurrencies. No community will be built on top of special hardware (like NVidia's RTX 2070 series). It's too centralizing.
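
            The bit-level reproducibility point is easy to see for yourself. Here's a minimal sketch in Python (an illustration only, assuming nothing beyond stock IEEE 754 doubles): floating-point addition isn't associative, so the grouping of operations changes the exact bits you get back, while integer math stays exact.

            ```python
            # IEEE 754 doubles: addition is not associative, so different
            # evaluation orders produce different bit patterns.
            a, b, c = 0.1, 0.2, 0.3

            left = (a + b) + c    # evaluates to 0.6000000000000001
            right = a + (b + c)   # evaluates to 0.6

            print(left == right)    # False: same math, different bits
            print(f"{left:.17f}")   # 0.60000000000000009
            print(f"{right:.17f}")  # 0.59999999999999998

            # Integer arithmetic has no such ambiguity, which is one reason
            # high integer throughput matters for hashing workloads.
            assert (1 + 2) + 3 == 1 + (2 + 3)
            ```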

            • Krogoth
            • 11 months ago

            Nope, the cryptocurrency community at its core is all about money laundering and luring in people (miners) to do the “brute force” hashing while trying to make a quick buck out of the whole process.

            They need the “miners” to keep the whole thing going or it becomes completely illiquid. They don’t care where the processing power comes from as long as it keeps the whole thing afloat.

            • dragontamer5788
            • 11 months ago

            I mean, why not both? I don’t think your post really contradicts my discussion points.

    • DeadOfKnight
    • 11 months ago

    There are also a lot of rumors floating around about Turing-based GTX desktop cards coming down the pipe. Presumably, they will target a mainstream 1060/1050 Ti-class price point first to avoid cannibalizing current high-end offerings. No word on whether these would be new chips or cut-down versions of existing silicon.

    I’m pretty skeptical about it. I mean, it would make a lot of sense, and I know there are a lot of people who would be excited about cheap versions of Turing. However, this would significantly undermine their big push for raytracing in games. I’d expect them to use RTX as a selling point down the line.

    Any raytracing advantage could be enough for their marketing team to run with. This is a segment where they need to be competitive. It can help to increase the adoption of raytracing, even if it’s slow. It can also help to validate high-end products. I’m not hoping they do this, but it makes more sense for them.

      • Laykun
      • 11 months ago

      You wouldn’t want to use these cut-down cards to promote ray tracing, as the experience is more than likely just going to be negative. We may be witnessing another GeForce4 MX vs. GeForce4 Ti moment, where the capabilities of the two cards are significantly different but users really won’t notice for another 2-3 years.

        • DeadOfKnight
        • 11 months ago

        For sure, but I don’t know what that means from a business/marketing position. These RTX chips are big dies. They have to do some binning to maximize their profits. If an RTX 2060 is all they need for the bottom of the barrel, then great, but I have doubts about them wanting an RTX 2060 and a GTX 2060 to coexist. That would cause cannibalization, same as it would at the high end.

        Go down to a possible GTX 2050 Ti and now you’re talking about a segment with heavy competition, not only from AMD but from their own products and the secondhand market. They may want RTX just for differentiation. Obviously nobody is going to run RTX for long on such a card when it’s barely passable on an RTX 2080 Ti, but buyers may not know that until they’ve bought it.

        You have to remember that they don’t market to us enthusiasts. Any of us on that budget would sooner buy a used GTX 1060.

    • Chrispy_
    • 11 months ago

    I’m excited for better laptop graphics in 13″ or 14″ models.

    For most regions and budgets, these are only available with the MX150 or the inferior Ryzen APU graphics, and frankly, at the common native res of 1080p, the Ryzens don’t stand a chance, whilst the MX150 is ‘barely adequate’ for anything except undemanding eSports titles.

    TSMC’s 7nm node looks very promising, so hopefully either AMD or Nvidia can deliver a product in the RX560/GTX1050 performance ballpark at 25-35W. It’s not going to be 1080p60 at ultra details, but at least people will be able to get most games running acceptably at half-decent image quality.

      • Spunjji
      • 11 months ago

      I’m currently wondering if AMD will even attempt to compete in that area with discrete graphics. Given their limited resources, their poor market penetration, and the fact that the need for products with integrated GPUs isn’t going away any time soon, they might be better off sticking to APUs in the 15-35W TDP range and focusing on their driver/stability issues. They’re already at quad-core on the CPU side, so the 7nm shrink could go almost entirely to increasing the area available for graphics processing.

      All they’d need for truly competitive performance is some closely-connected memory for the GPU to make use of… they have the technology; the question is whether or not they have the will to build it.

      That last question applies to Nvidia, too. I wouldn’t put it past them to leave Pascal competing at the low end for a while yet; they do have a habit of making their smallest chips last 2 or 3 generations.

      For the record, I feel the same way you do. A 13-14″ sized device hitting 1050/Ti performance levels would be a peach!

        • Chrispy_
        • 11 months ago

        APUs are at the mercy of idiot vendors who will halve performance to save a dollar on RAM/heatpipes.

        So many of the Ryzen APU laptops have been hampered by single-channel RAM configurations, a crippling 15W TDP shared by CPU and GPU, or *both*. Not only are many of the 2500U and 2700U models single-channel, but some of the dual-channel options seem to be limited to 2400MHz as well. So you rule out half the market because it's single-channel, then you rule out half of the remaining dual-channel options because of TDP, and then you're left with a tiny selection that is only available in limited regions.

        The feces topping on this mud-flavoured disappointment cake is that vendors don't use AMD's drivers, choosing instead to bloat/tailor the driver with their own branding and configuration. This means the very few desirable Ryzen APU laptops can be up to 12 months behind on driver support – a fundamental requirement for a gaming GPU.

        At least with a dGPU, AMD isn't hampered by dumb vendor decisions: dGPUs are paired with the required amount of GDDR5, come with limited TDP configurability, and use standard (regularly-updated) drivers straight from AMD. Currently, with 14nm Polaris, the only sensible laptop option is the 60W RX560, because the next step down (RX550) is still a 50W product yet provides only the performance of the far more efficient MX150 (25W). Handing the competition a 100% power-efficiency advantage is a complete failure in a mobile product, as far as I'm concerned. At least the RX560 is within 10-15% of the performance/Watt of a 1050.

        Of course, we'd all pick the 1050 over the RX560 given the choice, but vendors never give us the choice. At least the 560 is an acceptable alternative – which, given the limited choices in laptop graphics, is a gift horse not to be looked in the mouth!

          • dragontamer5788
          • 11 months ago

          +1. This matches my experience with AMD’s R5 2500U laptop.

          • DoomGuy64
          • 11 months ago

          What AMD really needs to do with APUs is make a high-end APU with HBM that can match Xbox performance. I’m sure it would be expensive, but that would eliminate all the problems with APUs and gain market share, at least in gaming laptops.

          Intel did it already with Kaby Lake G, so there’s nothing stopping AMD from doing this except AMD themselves, and 7nm would be perfect for this monstrosity of an APU.

    • Neutronbeam
    • 11 months ago

    Zak, you know you can’t validate any of the above details without first throwing in at least one “purported” or “purportedly”. I will wait patiently as that critical issue is addressed. Good rumor story otherwise, though!

      • RAGEPRO
      • 11 months ago

      Hey man, it was there. I was exhausted yesterday and passed out before approving final edits. So I guess that’s still on me. Dangit!

        • drfish
        • 11 months ago

        I can vouch for this, don’t worry, we’ll sort it out with the new guy.

    • Krogoth
    • 11 months ago

    NGREEDIA STRIKES AGAIN! WHILE AMD RTG IS STILL EATING NAVI GLUE!

    On a more serious note, it looks like 2019 will be a repeat of 2017. Navi is going to be a Polaris successor while Nvidia continues to milk the higher portions of the market. The existing stock will be replaced by an upcoming 7nm refresh.

      • Voldenuit
      • 11 months ago

      It’s nice of Krogoth to fill in for Chuckula over the holidays.

        • chuckula
        • 11 months ago

        I’m on vaycay getting ready for a trolling new year!

          • Leader952
          • 11 months ago

          Thanks for the truth.

        • Leader952
        • 11 months ago

        They both act like Thing 1 and Thing 2 from Dr. Seuss or, more to the point, like Troll 1 and Troll 2. Their NGREEDIA line is pure trolling and is getting very old and tiresome.

      • rudimentary_lathe
      • 11 months ago

      Navi will almost certainly be a Polaris successor, but hopefully the performance jump is into GTX 1080 (not Ti) territory. A GTX 1080-class card at Polaris pricing with FreeSync support would be a compelling proposition. I would upgrade from my RX 480.
