PNY reveals RTX 2080 Ti specs and potential $1000 price tag

We just tried to predict the performance of Nvidia's rumored RTX 2080 and RTX 2080 Ti using a mix of public information and best guesses, and now we have the first official spec sheet for an RTX 2080 Ti thanks to a misfire from board partner PNY and a good catch from a TR staffer. The company posted a product page for its RTX 2080 Ti Overclocked XLR8 Edition early, revealing several critical specifications for the as-yet-unannounced and unreleased card.

Most critically, PNY lists a 1350-MHz base clock, a 1545-MHz boost clock, and a 285-W TDP for the card. While those clock speeds might sound low, they seem likely to continue Nvidia's Pascal-era practice of conservative rated clocks; GPU Boost 3.0 or its successor will likely push Turing cards' clocks much higher in real-world use under good cooling. The 285-W board power, on the other hand, likely includes a 30-W allowance for the VirtualLink connector coming to Turing cards, meaning the RTX 2080 Ti itself lands close to the GTX 1080 Ti's 250-W figure.
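
That back-of-the-envelope math is simple enough to sketch out (a rough estimate that assumes the full 30-W VirtualLink budget is baked into PNY's 285-W figure, something Nvidia hasn't confirmed):

    # Rough estimate of GPU-only power for the RTX 2080 Ti, in watts.
    board_power = 285        # PNY's listed TDP for the card
    virtuallink_budget = 30  # assumed allowance for the VirtualLink/USB-C port
    gpu_only = board_power - virtuallink_budget
    print(gpu_only)          # 255, in the neighborhood of the GTX 1080 Ti's 250-W rating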

PNY's product page also suggests that NVLink support is coming to Nvidia's consumer products with the Turing generation. The page lists support for two-way NVLink, suggesting builders may be able to join a pair of RTX 2080 Tis over that coherent interconnect to create dual-card setups with a single 22-GB pool of VRAM.

Past those tantalizing details, the product page is fairly straightforward. The card will require two eight-pin PCIe connectors and could list for $1000. Presuming our guess at Titan-beating performance holds, it looks like that speed and a lack of a comparable product from AMD will result in Turing cards costing a pretty penny. We'll presumably find out more Monday.

Comments closed
    • adamlongwalker
    • 1 year ago

    Well, let's just see….

    • leor
    • 1 year ago

    So I have a question: Reading some of the comments here, what is it that people care about? I am reading stuff about die size, whether Turing is more interesting than the 2990WX, GDDR6, what it costs to manufacture, etc. I know some people follow tech companies like sports teams and root for/against whichever, but all I can see here is spending a lot more money for the same (or less of a?) performance jump, and it bums me out.

    Even if this really is Titan-level performance, given that the current Titan is at $3k and the next cheaper card after the 2080 Ti is the vanilla 2080 (which is predicted to not be much better than the current 1080 Ti), I don't see how this isn't a net loss for the customer all around.

      • techguy
      • 1 year ago

      A few minutes ago Jen Hsun got up on the stage and said ‘The good news is, you’re gonna be surprised. … Everything on the web, every spec is wrong. You’re gonna be surprised’

      So there’s that…

        • Voldenuit
        • 1 year ago

        Has Jen Hsun been taking Elon pills?

        • leor
        • 1 year ago

        That’s cause he made up a new spec. 😛

    • barich
    • 1 year ago

    A GeForce DDR cost $300 at launch in 1999. That’s about $450 in today’s dollars. And people were complaining about that price then.

    Something has gone horribly wrong.

      • Eversor
      • 1 year ago

      Even if the inflation numbers were accurate, these are monster GPUs at over 750mm². The GeForce was a 111mm² chip.
      There is no way to compare yields on the two. This becomes one of those situations where, if there isn't demand for $1,000 cards, they won't make them and will revert to small-die designs for consumer markets.

        • barich
        • 1 year ago

        The inflation numbers are accurate, I promise.

        Those are all good points.

        Nvidia’s profit margin was about 10% in 1999 and it’s 35%+ now. I guess I should be mad at consumers for being willing to pay absurd prices.

      • techguy
      • 1 year ago

      While I'm inclined to agree that the price increases for this generation are disturbing, I'm not sure that the original Geforce DDR is an apt point of comparison, given how different the market situation was then versus now. Back then, NV was still trying to establish a name for their products (hence the switch to Geforce from Riva/TNT) and was in heavy competition against (primarily) 3dfx, with ATi thrown into the mix too. They had every reason to price their cards competitively.

      A more apt comparison perhaps would be the 7800 GTX 512, a ~$700 part in a time of $400 high-end cards. By the time the next generation rolled around (8800 GTX/8800 Ultra) that $700 figure had become the new standard. Sadly then, this move is not without precedent, and I could see the ~$1000-1200 price range being with us from here on out. It was one thing when the Titan cost this much, people could say “well, that’s clearly just a halo part and it’s priced accordingly” but now that it’s moved into the “mainstream” Geforce lineup, future graphics card pricing doesn’t look good for consumers.

    • psuedonymous
    • 1 year ago

    My bet: FE pricing will be in line with 10xx-series FE pricing. AIB launch pricing will be the anomaly, sitting above FE pricing rather than below it (matching current 10xx-series AIB pricing), possibly with Nvidia severely limiting FE stock to placate AIBs.

    • RtFusion
    • 1 year ago

    Just out: Reference RTX 2080 Ti leaked, it is a dual fan:

    https://videocardz.com/newz/nvidia-geforce-rtx-2080-ti-dual-fan-reference-design-leaked

    • ZGradt
    • 1 year ago

    These prices are BS. I got a 1070 on sale for $285 before the crypto gouging started, and I would have figured that the inventory crunch would be over by now. Why are prices still stupid?

      • K-L-Waster
      • 1 year ago

      You’re comparing a purported 2080TI price to a 1070TI price? That’s not apples to oranges, it’s apples to watermelon.

    • chuckula
    • 1 year ago

    2990WX: 860 mm^2 of silicon, $1800: $2.09 / mm^2, and it’s really 4 pieces of much smaller and supposedly cheaper silicon. [Incidentally, the $900 2950X has the exact same price / mm^2]
    Extra memory in that price: Zero.
    Is there a competitor? AMD says there isn’t but there is.
    Is it really brand new? Nope, using silicon that’s only slightly tweaked from last year and base chips that have been on sale for 6 months.

    Big Turing: 754 mm^2 of silicon in a single, more expensive die, $1000 (if this is correct): $1.32 / mm^2
    Extra memory in that price: 11 GB of GDDR6 memory… remember when we all gave AMD credit for being the first with GDDR5 memory?
    Is there a competitor? Nope.
    Is it really brand new? Looks like it.

    It’s interesting that there are plenty of people here who think exactly one of these products is literally the only innovative piece of technology to exist in the last 15 years while the other one is an evil ripoff by a greedy corporation.
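
    For anyone checking that math, the price-per-area figures work out as below (a throwaway sketch using the die sizes and prices quoted above, which are rumors and street prices rather than official numbers):

        # Quick check of the price-per-area comparison above (prices in USD, areas in mm^2).
        parts = {
            "Threadripper 2990WX": (1800, 860),   # four smaller dies totaling ~860 mm^2
            "Big Turing (rumored)": (1000, 754),  # one large die
        }
        for name, (price, area) in parts.items():
            print(f"{name}: ${price / area:.2f} per mm^2")
        # Threadripper 2990WX: $2.09 per mm^2
        # Big Turing (rumored): $1.33 per mm^2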

      • DancinJack
      • 1 year ago

      This is Turing.

      • NTMBK
      • 1 year ago

      No denying it, getting mass market yields on a die that huge (on a new process variant) is a big achievement. Nvidia has some fantastic engineers when it comes to physical implementation.

      • ptsant
      • 1 year ago

      An accurate comparison would have been with Titan V or even Quadro/Tesla. The Threadripper is practically a workstation class product. A 2080 is for playing games. I highly doubt that the special sauce for AI/compute is available on the 2080, even if the hardware is there.

        • smilingcrow
        • 1 year ago

        EPYC is the comparison with Quadro, since TR is gimped just as the NV gaming cards are.

      • Waco
      • 1 year ago

      Big Turing in Quadro form is 4-6X more expensive.
      EPYC is…double?

      • Sahrin
      • 1 year ago

      …mass market yields?

      At $1,000, yields are predicted to be about 50% for 12″ risk wafers.

      That’s atrocious.

        • NTMBK
        • 1 year ago

        How on earth do you get from RRP to yield estimation? Did Nvidia tell you how much they paid for GDDR6? Did they tell you what their profit margin is?

          • chuckula
          • 1 year ago

          What’s even funnier is that in his Pavlovian-impulse emotional response to attack Nvidia he’s inadvertently providing a legitimate rational justification for Nvidia to charge the “high” price seen here.

          • Sahrin
          • 1 year ago

          Lol, you've never worked in IC manufacturing, I see.

            • chuckula
            • 1 year ago

            Put up or shut up.

            Post your publicly verifiable resume, including your exact dates of employment at a leading-edge fabrication facility.

            I also expect to see a highly detailed description of your job responsibilities and specific products that you were personally responsible for getting out the door and onto the market.

            • Sahrin
            • 1 year ago

            Yes, because I’m eager to get fired over a neckbeard calling me out on the internet.

            • chuckula
            • 1 year ago

            1. You’re a liar.
            2. Funny how you called me a neckbeard here but also claimed to be older than me the next day.

    • Rakhmaninov3
    • 1 year ago

    It’s been 11 years since I got a new desktop. Med school and residency/work/other BS got in the way. That venerable Q6600 box bit the dust several months ago (actually I just think it’s overheating and needs the HSF reseated but it’s too much work to bother with). I’m itching to make an over-9000 Godbox with one of these Ti cards and some giant screen and huge memory and some liquid-cooled overclocked processor and 48 speakers and an attached Kegerator.

    It’s probably more fun to dream about than to actually have but I actually want to have it anyway!

      • Chrispy_
      • 1 year ago

      Reseating a heatsink is too much work to bother with but building an entire high-end rig from scratch isn’t?

      Oooohkay then.

        • rnalsation
        • 1 year ago

        Intel® Core™2 Quad Processor Q6600, Launch Date: Q1'07

        Everything is too much.

        • blitzy
        • 1 year ago

        yeah, but building from new parts is a lot more fun.. and even if he does re-seat the CPU, it’s still gonna be slow and old relatively…

          • Wirko
          • 1 year ago

          But you can overvolt a Q6600 to hell and back with little to lose … well, I’d hold a fire extinguisher in each hand if I was doing that.

        • derFunkenstein
        • 1 year ago

        A Core 2 Quad 6600 is so old, Apple isn’t even shipping it in current Macs anymore. 😆

      • snarfbot
      • 1 year ago

      dear diary..

      • NTMBK
      • 1 year ago

      Any modern midrange card will be a huge improvement over that old system, you don’t need to blow $1k on a GPU!

    • leor
    • 1 year ago

    Titan-level pricing for a non-Titan card. I had planned to grab one of these, but now it's feeling very meh to me.

      • Durante
      • 1 year ago

      The die is larger than any previous Titan card save for the $3000 one.

        • leor
        • 1 year ago

        Who cares about the die size? Bulldozer had a hefty die size; did anybody want that chip? $1,000 for a GPU that doesn't bring a performance increase any larger than previous generational steps is silly.

        GeForce GTX 980 September 18, 2014 $549
        GeForce GTX 980 Ti June 2, 2015 $649
        GeForce GTX Titan X March 17, 2015 $999

        GeForce GTX 1080 May 27, 2016 $549
        GeForce GTX 1080 Ti March 10, 2017 $699

        50 dollar bump from the 980 Ti to the 1080 Ti, 300 dollar bump from the 1080 Ti to the 2080 Ti? This makes sense to people?

      • NTMBK
      • 1 year ago

      They seem to be taking Titan branding back to “prosumer” cards like the Titan V, aimed at being a cheapish deep learning accelerator.

      My advice, ignore the branding and look at whether the device is worth it to you.

      • DoomGuy64
      • 1 year ago

      I have a feeling this will be the initial reaction among most gamers at launch, which will quickly turn to disdain after Nvidia strongly pushes GameWorks ray tracing, because every non-RTX-branded card on the market (everything sub-$500) will see extreme performance drops.

      This is going to be Batman Arkham Gameworks all over again, and while the naysayers may deny it, I know this is going to play out exactly as I predict in the future. Windows 8 had the same outcome as well. Hype, shills, then acceptance. It just takes a while for reality to kick in. The only people defending it long term will be the Ti users, who will be drowned out by everyone else that buys mid-range. Nvidia better not push this scam hard, or their reputation will be irrevocably damaged.

        • techguy
        • 1 year ago

        What kind of delusional world do you have to live in where new features are a scam? It's obvious that Turing requires a large die to pack in all this capability at 12 nm; do you expect them to start handing out 500-mm²+ GPU dies in the $100-300 price range now?

      • smilingcrow
      • 1 year ago

      Titan is just a marketing name.
      Don’t buy on marketing but on performance, features and value.

    • Krogoth
    • 1 year ago

    Somebody at PNY is going to get canned for violating NDA.

    Based on those leaked specs, the 2080 Ti should be at least 15-30% faster than the 1080 Ti at the same power consumption on current-generation stuff. If you are adventurous enough, it should scale better with overclocking than the 1080 Ti due to GDDR6 having more bandwidth.

    $999 MSRP sounds about right. People were still buying 1080ti when they were at around $1,000 during the crypto-currency craze. Nvidia marketing is confident enough that 2080Ti will still move at that price point.

      • rudimentary_lathe
      • 1 year ago

      A 15% performance improvement would be somewhat disappointing since the gap between new card releases has been growing.

        • Krogoth
        • 1 year ago

        Physics has caught up, and GPU engineers have already exhausted all of the low-hanging fruit.

        The days of massive performance bumps between generations of GPUs have been over ever since GCN 1.0/Kepler. Maxwell/GCN 1.2 are transitional parts into the new paradigm.

          • smilingcrow
          • 1 year ago

          Clueless as you often are.

            • Krogoth
            • 1 year ago

            No, it is cold, old reality. The halcyon days are long over.

    • btb
    • 1 year ago

    285W TDP on the regular 2080 version!? That's pushing it…

      • Krogoth
      • 1 year ago

      Most people don’t care as long as it has the performance to back it. I suspect that GDDR6 is the culprit for the noticeable increase.

        • DoomGuy64
        • 1 year ago

        That's true, with the sole exception of when fanboys troll AMD. But now that Nvidia's power use is obscene, the truth is admitted. Looking back, Fermi got a pass from most as well, including myself, although not from the reviewers, and the 480 was kind of a meme.

        That’s fine with me, but I don’t want to hear a single peep out of anyone ever again about power use. Reality is, power use doesn’t matter that much, unless you are SLIing these cards. The numbers also back that up, as there have been several videos and articles written debunking the argument that high performance cards will increase your power bill. You’d have to be gaming 24/7 for it to actually matter.

          • Kretschmer
          • 1 year ago

          The difference is that other manufacturers deliver world-class performance with their high TDPs.

          AMD pushes their clocks to the bleeding edges of manufacturing processes to make up for design or fab issues.

          Not every card that gobbles power is bad, but conversely, that doesn't mean that every card that gobbles power is acceptable.

          I mean, Vega 64 sucks down more power for worse performance than a 1080 Ti or the purported 2080. Bulldozer was greedy and flopped. Prescott was a little furnace with little to show for it.

          Power use DOES matter in smaller form factor builds with limited cooling potential.

            • DoomGuy64
            • 1 year ago

            Sure, Vega 64’s a reasonable argument, but this has been going on since Kepler vs GCN 1.0, and it’s a dead horse at this point.

            Vega 64 is the worst case scenario, and that chip is designed more for general purpose compute than any specific task. Jack of all, master of none, and the manufacturing process is probably responsible for a good chunk of the power consumption.

            Still, the Vega 56 isn’t anywhere near as bad as the 64, especially with the lower power options, and the workstation Vega 56 shows it is possible to make it faster at lower power consumption.

            Polaris is even better, but you’ll still see the power argument brought up with Polaris, and that’s pretty ridiculous for a chip that’s nowhere near Vega 64, 56, or Fermi. It’s a double standard hyperbolic argument that’s solely brought up to stir up drama, and people are getting desensitized to it.

            All I'm saying is that it would be nice if we could finally get over this ridiculous argument and save it for the real problems like Vega 64. Otherwise, you cry wolf too many times and eventually nobody takes it seriously. Quite frankly, I think we're already at that point, or at least at the tipping point for anyone to care in the future.

      • Waco
      • 1 year ago

      Reading is hard. It also supports a 30 watt VR plug, so the “real” TDP for the GPU is around 250 watts.

        • Duct Tape Dude
        • 1 year ago

        Physics is hard: do those 30 W really factor into the TDP? TDP is thermal design power, i.e. heat the card itself has to dissipate, and for once that does not equate to total power consumption. That 30 W is sent down a VR plug to be dissipated elsewhere. Maybe Jeff can clarify which is correct.

          • Waco
          • 1 year ago

          My guess is we’ll know Monday. 🙂

            • Shobai
            • 1 year ago

            IT’S MONDAY AND I DON’T KNOW

            (SSK, did I do it right?)

            • Shobai
            • 1 year ago

            Now it’s Tuesday, and I haven’t looked to see if that info is available.

            • Waco
            • 1 year ago

            What time zone are you in? 😛

            • Shobai
            • 1 year ago

            GMT+10

      • chuckula
      • 1 year ago

        This article isn't about the 2080; it's about the 2080 Ti.

        • btb
        • 1 year ago

        True. I thought there was a link to the original videocardz source, guess not. Anyway, PNY lists their 2080 as 285 W too: https://cdn.videocardz.com/1/2018/08/PNY-GeForce-RTX-2080-XLR8-1000x1151.jpg

    • smilingcrow
    • 1 year ago

    If they have improved non-RT performance and added the RT hardware while still on a 1x-nm process, then it will be a big die with a big price to match.
    So the real action will come next year at 7nm.

    • Rand
    • 1 year ago

    Stating a price of $1000.00 even, rather than $999.99, makes me think this is just a placeholder and not the actual expected MSRP. PNY doesn't price anything else like this; it's always x.99 for all their other products, so I'm disinclined to read anything into it.

      • jihadjoe
      • 1 year ago

      I’m kinda hoping that price is for a Founder’s or Launch Edition card and it’ll eventually come down a bit.

        • thecoldanddarkone
        • 1 year ago

        One of the screenshots I saw also had one for 900. Honestly though 900-1000 doesn’t really bother me. If anything I kind of expected it.

    • Robotics
    • 1 year ago

    PNY removed this info from their website.

    • christos_thski
    • 1 year ago

    “Presuming our guess at Titan-beating performance holds, it looks like that speed and a lack of a comparable product from AMD”

    but…but…but … it’s not that AMD isn’t competing in graphics. They’re simply “refocusing on low performance OEM where the real profits lie”. Like they did with CPUs before zen. 😉

    PS: sorry to beat a dead horse here, but I dislike how whenever AMD fucks up it's not a fuck-up, it's "strategy".

      • SHOES
      • 1 year ago

      Meh.

      • tacitust
      • 1 year ago

      Nvidia's R&D budget was nearly double AMD's in 2017, and AMD had to spread their budget across both CPUs and graphics.

      AMD doesn’t split out the numbers for the two divisions, but it’s not hard to conclude that their increasingly profitable CPU business earned the lion’s share of the R&D funding last year, meaning Nvidia likely spent around five times as much as AMD on new graphics tech.

      So while you might dislike it, AMD made the strategic decision to focus its efforts on their CPU business, which has made it hard for them to keep up with Nvidia.

      Is that f***ing up? I don’t think so.

        • christos_thski
        • 1 year ago

        For some reason I thought AMD was the larger company… and then I looked at the respective market capitalizations (it seems Nvidia has a $147 billion market cap, while AMD is worth about $19 billion?).

        Yeah I think you’re right. I stand -mostly- corrected.

        I would still insist that AMD's graphics group has been dragging its feet for quite a while now, since years before Zen, but in the end, Nvidia's much larger size must account for a substantially bigger R&D budget, as you wrote.

        Thanks for setting me straight.

        So do you think we can expect AMD to make enough profit off its CPU business success to fund a new competitive GPU design, or will we have to wait for Intel in 2020 with Arctic Sound?

      • Sahrin
      • 1 year ago

      Sorry, but every ‘mistake’ AMD makes comes with the cushion of nVidia and Intel’s anti-consumer anti-market practices.

      We don’t root for AMD because we love AMD, we root for AMD because we hate the way nVidia and Intel treat us.

      If you're the technology equivalent of a finsub, then sure, AMD is just as bad as the other two.

      If you like competition driving down prices, then fuck nVidia up its greedy ass.

        • chuckula
        • 1 year ago

        Yes, literally everything that’s ever happened wrong is always Ngreedia & InHell’s fault.

        Poor innocent little AMD just trying to single handedly cure cancer and bring about world peace! You don’t know how hard it is to be the only innovator only for InHell to rip off Bulldozer’s innovations and for Ngreedia to steal all that Ray Tracing stuff we put in Navi! [wipes away tear]

          • BIF
          • 1 year ago

          Points for creative naming.

      • Leader952
      • 1 year ago

      "They're simply 'refocusing on low performance OEM where the real profits lie.'"

      This old, tired, and false line keeps getting repeated over and over again. That, however, doesn't make it true. In fact, Nvidia has claimed to have 90% of the REVENUE share of the discrete market. That means Nvidia is selling a lot of high-end and professional graphics cards at very high margins, which outweighs selling a bunch of low-end cards at low margins. Also realize that Nvidia competes in that same low end as AMD.

        • Waco
        • 1 year ago

        Do we need sarcasm tags on here?

    • SoM
    • 1 year ago

    that like 15″ ?

    • End User
    • 1 year ago

    I can’t wait!!!!!! 🙂
