Nvidia’s GeForce RTX 2080 Ti graphics card reviewed

Earlier this year, a fellow editor and I did some pie-in-the-sky thinking about Nvidia’s plans for its next-generation GPUs. We wondered how the company would continue the impressive generation-to-generation performance improvements it had been delivering since Maxwell. We guessed that the AI-accelerating smarts in the Volta architecture might be one way the green team would set apart its next-generation products, but past that, we had nothing.

Turns out the company did us one or two better. With the Turing architecture’s improved tensor cores and unique RT cores, Nvidia is shipping a pair of intriguing new technologies in its next-generation chips while also bolstering traditional shader performance with parallel execution paths for floating-point and integer workloads. On top of that, the company introduced a whole new way of programming geometry-related shaders, called mesh shaders, that promises to break the draw-call bottleneck at the CPU for geometry-heavy scenes. There’s a lot going on in Turing, to put it mildly. Those interested should consult Nvidia’s white paper for more detail.

A logical representation of the TU102 GPU. Source: Nvidia

My speculation about the Turing architecture several weeks back turned out to be more correct than not, at least, even with the wildly incomplete info we had on hand. The GeForce RTX 2080 Ti that we’re testing this morning and the Quadro RTX 8000 that debuted at SIGGRAPH both use versions of one big honkin’ GPU called TU102. At a high level, this 754 mm² chip—754 mm²!—hosts six graphics processing clusters (GPCs) in Nvidia parlance, each with 12 Turing streaming multiprocessors (SMs) inside. The RTX 2080 Ti has four of its SMs disabled for a total of 4352 shader ALUs (or “CUDA cores,” if you like), of a potential 4608.

The full TU102 chip has 96 ROPs, but as a slightly cut-down part, the RTX 2080 Ti has 88 of those chiclets enabled. In turn, the highest-end Turing GeForce so far boasts a 352-bit bus to 11 GB of memory. TU102 gets to play with cutting-edge, 14-Gbps GDDR6 RAM, though, up from the 11 Gbps per-pin transfer rates of GDDR5X on the GTX 1080 Ti. That works out to 616 GB/s of raw memory bandwidth. Nvidia also claims to have improved the delta-color-compression routines it’s been employing since Fermi to eke out more effective bandwidth from the RTX 2080 Ti’s bus. Between GDDR6’s higher per-pin clocks and the improved color-compression smarts of Turing itself, Nvidia claims 50% more effective bandwidth from TU102 compared to the GP102 chip in the GTX 1080 Ti.
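
If you’d like to check that math, the raw-bandwidth figure falls straight out of the bus width and the per-pin data rate. Here’s a quick back-of-the-envelope sketch of the arithmetic using the numbers above (the little helper function is just for illustration, not anything from Nvidia’s tools):

```python
# Raw memory bandwidth: (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps
def raw_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Return raw memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(raw_bandwidth_gb_s(352, 14))  # RTX 2080 Ti, GDDR6:  616.0 GB/s
print(raw_bandwidth_gb_s(352, 11))  # GTX 1080 Ti, GDDR5X: 484.0 GB/s
```

The claimed 50% gain in effective bandwidth is a separate, larger figure, since it folds Turing’s improved color compression on top of that raw number.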

Despite its monstrous and monstrously complex die, the RTX 2080 Ti Founders Edition actually comes with a slightly higher boost-clock spec than the smaller GP102 die before it, at 1635 MHz, versus 1582 MHz for the GTX 1080 Ti. Nvidia calls that a factory overclock—if you believe an overclock is something that can come with a warranty, at least. In practice, the GPU Boost algorithm of Nvidia graphics cards will likely push Turing chips to similar real-world clock speeds, given adequate cooling. We’ll need to test that for ourselves soon.

Aside from the big and future-looking changes in Turing chips themselves, Nvidia’s new pricing strategy for the RTX 2070, RTX 2080, and RTX 2080 Ti is going to make for some tricky generation-on-generation comparisons. The $600 RTX 2070 is $150 more expensive than the $450 GTX 1070 Founders Edition. The $800 RTX 2080 Founders Edition sells for $100 more than the GTX 1080 Founders Edition did at launch—and as much as $300 more than that card’s final suggested-price drop to $500. In turn, the RTX 2080 Ti Founders Edition commands a whopping $500 more than the GTX 1080 Ti’s $700 sticker, at $1200.

In the past, then, the RTX 2070 might have been called an RTX 2080, the RTX 2080 a 2080 Ti, and the RTX 2080 Ti some kind of Titan. The reality of Turing naming and pricing seems meant to allow Nvidia to claim massive generation-to-generation performance increases versus Pascal cards by drawing parallels between model names and eliding those higher sticker prices.

Dollar-for-dollar, however, keep in mind that the RTX 2080’s $700 partner-card suggested price and the Founders Edition’s $800 price tag make the $699-and-up GeForce GTX 1080 Ti a better point of comparison for Turing’s middle child. The RTX 2080 Ti Founders Edition, meanwhile, matches the Titan Xp almost exactly on price. We don’t have a Titan Xp or Titan V handy to test our RTX 2080 Ti against, but our back-of-the-napkin math and theoretical measures of peak graphics performance put the RTX 2080 a lot closer to the GTX 1080 Ti than not. On a price-to-performance basis, then, the improvements in Turing for traditional rasterization workloads could be more modest than Nvidia’s claims suggest.
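
Those theoretical measures are simple products of shader-ALU count and boost clock. Here’s a rough sketch of that napkin math; the RTX 2080 Ti figures come from the specs above, while the RTX 2080 Founders Edition and GTX 1080 Ti figures (2944 and 3584 ALUs at 1800 MHz and 1582 MHz boost, respectively) are Nvidia’s published specs rather than anything we measured ourselves:

```python
# Peak FP32 rate: shader ALUs * 2 FLOPs per clock (fused multiply-add) * boost clock
def peak_fp32_tflops(shader_alus, boost_clock_mhz):
    return shader_alus * 2 * boost_clock_mhz / 1e6

print(f"RTX 2080 Ti FE: {peak_fp32_tflops(4352, 1635):.1f} TFLOPS")  # ~14.2
print(f"RTX 2080 FE:    {peak_fp32_tflops(2944, 1800):.1f} TFLOPS")  # ~10.6
print(f"GTX 1080 Ti:    {peak_fp32_tflops(3584, 1582):.1f} TFLOPS")  # ~11.3
```

Real-world GPU Boost clocks typically run higher than the rated boost figures, so treat these numbers as rough ceilings rather than delivered performance.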

On top of the naming confusion, the two suggested-price tiers for Turing cards—a cheaper one for partner cards and a more expensive one for Nvidia’s Founders Editions—seem guaranteed to cause double-takes. At least in the early days of Turing, I expect board partners will see no reason to leave a single dollar on the table by selling at those separate, lower prices when Founders Edition cards command more money for what is essentially the same product once the rubber hits the road. In the real world, the Founders Edition suggested price is the de facto suggested price, and retailer listings are already bearing that fact out.

 

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out interesting test scenarios that one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
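
For the curious, the basic number-crunching is straightforward once you have per-frame data in hand. Here’s a minimal sketch that assumes a PresentMon- or OCAT-style CSV with an MsBetweenPresents column; the file name is hypothetical, and exact column names can vary between tool versions:

```python
import csv
import statistics

def frame_times_ms(path):
    """Read per-frame render times (in ms) from a PresentMon/OCAT-style capture."""
    with open(path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def percentile_frame_time(times, pct=99):
    """Return the frame time that pct percent of frames come in at or under."""
    ordered = sorted(times)
    index = max(0, min(len(ordered) - 1, round(len(ordered) * pct / 100) - 1))
    return ordered[index]

times = frame_times_ms("ocat_run_1.csv")  # hypothetical capture file
print(f"Average FPS: {1000 / statistics.mean(times):.1f}")
print(f"99th-percentile frame time: {percentile_frame_time(times):.1f} ms")
```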

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor Intel Core i7-8086K
Motherboard Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z370
Memory size 16 GB (2x 8 GB)
Memory type G.Skill Flare X DDR4-3200
Memory timings 14-14-14-34 2T
Storage Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)

Power supply Corsair RM850x
OS Windows 10 Pro with April 2018 Update

Thanks to Corsair, G.Skill, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Behold our fine Gigabyte Z370 Aorus Gaming 7 motherboard before it got buried beneath a pile of graphics cards and a CPU cooler:

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 4K (3840×2160) and 60 Hz, unless otherwise noted. Where in-game options supported it, we used HDR modes, adjusted to taste for brightness. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Shadow of the Tomb Raider

The final chapter in Lara Croft’s most recent outing is one of Nvidia’s headliners for the GeForce RTX launch. It’ll be getting support for RTX ray-traced shadows in a future patch. For now, we’re testing at 4K with HDR enabled and most every non-GameWorks setting maxed.


The RTX 2080 Ti blasts out of the gate in Shadow of the Tomb Raider. Its impressive average frame rates are tempered by some concerning patches of frame-time spikes, an issue experienced to a lesser degree by the RTX 2080, as well. We retested the game several times in our location of choice and couldn’t make that weirdness go away, so perhaps some software polish is needed one way or another. Still, the performance potential demonstrated by the GeForce RTX cards is quite impressive. Remember that we’re gaming at 4K, in HDR, with almost all the eye candy turned up in a cutting-edge title.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms corresponds to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms corresponds to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

To best demonstrate the performance of graphics cards as powerful as these, it’s useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
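
To make that "time spent beyond X" bookkeeping concrete, here’s a simplified sketch of the accounting on a list of frame times, using the thresholds described above. The frame-time values are made up for illustration, and the snippet is a stand-in for, not a copy of, our actual analysis tools:

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds accumulated past the threshold over a test run.

    A 70-ms frame judged against a 50-ms threshold contributes 20 ms of
    "badness"; frames at or under the threshold contribute nothing.
    """
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Thresholds and their FPS equivalents (1000 ms / threshold): ~20, 30, 60, 120, and 144 FPS
thresholds_ms = [50, 33.3, 16.7, 8.3, 6.94]
example_run = [12.0, 14.5, 55.0, 16.0, 40.0]  # made-up frame times, in ms

for t in thresholds_ms:
    print(f"Time spent beyond {t} ms: {time_spent_beyond(example_run, t):.1f} ms")
```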

Despite its fuzziness in our frame-time plots, the RTX 2080 Ti doesn’t chalk up more than a handful of milliseconds past the 33.3-ms threshold. It also spends just under three seconds of our one-minute test run on tough frames that would drop frame rates below 60 FPS. Even the GTX 1080 Ti can’t hope to keep up with this seriously impressive performance. The RTX 2080 is the only thing that comes close.

 

Project Cars 2


Once again, the RTX 2080 Ti leaps ahead of the pack, even at 4K and with Project Cars 2’s settings maxed. Its 99th-percentile frame time suggests gamers will rarely, if ever, see frame rates dip below 60 FPS. Let’s see how that plays out in our time-spent-beyond-X graphs.


Not a single millisecond shows up in the RTX 2080 Ti’s bucket at our 16.7-ms threshold. This is the holy grail of 4K gaming performance: high average frame rates with worst-case performance that never dips below 60 FPS. The RTX 2080 isn’t far behind by this measure, though. To really show what the RTX 2080 Ti can do, it’s worth flipping over to our 11.1-ms graph. There, the Ti spends just over four seconds on tough frames that drop rates below 90 FPS. The RTX 2080 spends 10 seconds longer on those tough scenes.

 

Hellblade: Senua’s Sacrifice


Hellblade relies on Unreal Engine 4 to depict its Norse-inspired environs in great detail, and playing it at 4K really brings out the work its developers put in. As we’ve come to expect so far, the RTX 2080 Ti opens a wide lead over the rest of the pack, even if it can’t deliver the 16.7-ms 99th-percentile frame time we’d want for near-perfect smoothness.


Despite that slightly bumpy 99th-percentile frame time, the 2080 Ti spends just under two seconds of our one-minute test run on tough frames that drop the instantaneous frame rate below 60 FPS. The RTX 2080 spends a whopping 12 seconds on similarly difficult work, and the numbers only snowball from there.

 

Gears of War 4


Gears of War 4’s DirectX 12 flavor of Unreal Engine 4 takes well to the RTX 2080 Ti. Once again, we get impressively high average frame rates and a sterling 99th-percentile frame time.


To drive home just how well the RTX 2080 Ti plays Gears of War 4, the top-end Turing card so far spends just 23 ms of our one-minute test run on frames that spoil its 60-FPS-or-better performance. It’s hard to ask for more.

 

Far Cry 5


Despite running Far Cry 5 at 4K with HDR and maxed settings, the RTX 2080 Ti turns in another familiar performance. Its 99th-percentile frame time isn’t quite perfect, but as usual, we can turn to our time-spent-beyond-X metrics to see just how short it fell.


Not too short at all, as it happens. The RTX 2080 Ti spends just 216 ms of our test run on frames that take longer than 16.7 ms to render. That’s seriously impressive performance.

 

Assassin’s Creed Origins


Assassin’s Creed Origins is one of the most punishing titles of recent memory, and even the RTX 2080 Ti can’t push it much past 60 FPS at 4K with HDR, on average. The card’s 99th-percentile frame time is well-controlled, but it’s well over the 16.7 ms we’re looking for.


As usual, though, the RTX 2080 Ti spends just a blip of our test run on tough frames that take longer than 16.7 ms to render. Even compared to the RTX 2080, the 2080 Ti is entirely in a league of its own.

 

Deus Ex: Mankind Divided


Deus Ex: Mankind Divided might be a little more aged than some of the games we’re looking at today, but that doesn’t mean it isn’t still a major challenge for any graphics card at 4K and max settings. The RTX 2080 Ti delivers a commendably high average frame rate, as usual, but it can’t keep 99th-percentile frame times in check with the same aplomb.


Despite that high 99th-percentile frame time, our time-spent-beyond-33.3-ms measure suggests those tough frames make up only a small portion of the whole in our test run. With just about three seconds spent working on frames that take longer than 16.7 ms to render, the RTX 2080 Ti continues its impressive streak of smooth 4K gaming, too.

 

Watch Dogs 2


Like Deus Ex, Watch Dogs 2 is an absolute hog of a game if you start dialing up its settings. Add a 4K target resolution to the pile, and the game crushes most graphics cards to dust. Only the GeForce GTX 1080 Ti, RTX 2080, and RTX 2080 Ti even produce playable frame rates, on average, and their 99th-percentile frame times testify to the fact that there’s no putting a leash on this canine.


For all that, the RTX 2080 Ti does a commendable job of bringing Watch Dogs 2 to heel. It only spends a handful of milliseconds past our 33.3-ms threshold, and it only puts up about five seconds of our one-minute run on the 16.7-ms chalkboard. Gamers after a smoother experience might want to dial back a couple of the eye-candy settings in this title, but even with our demanding setup, the RTX 2080 Ti provides a smooth enough and enjoyable enough time.

 

Wolfenstein II


So, uh, that’s really something. Wolfenstein II uncorks an as-yet-unseen reserve of performance from our Turing cards. Both are so fast in this game, in fact, that I had to double-check and make sure that I was still testing at 4K. Interestingly enough, Nvidia uses Wolfenstein II to demonstrate another Turing feature that we haven’t gotten deep into yet—variable-rate shading—that could enhance performance even further, if you can believe it from these numbers.


The RTX 2080 Ti puts no time at all on the board at our 16.7-ms threshold, and it only spends a little under three seconds in total on tough frames that take longer than 8.3 ms to render. That’s some seriously impressive performance, and if Nvidia is to be believed, this game could still run faster on Turing.

 

DLSS performance with Epic’s Infiltrator demo



 

DLSS performance with the Final Fantasy XV benchmark



 

Conclusions

The GeForce RTX 2080 Ti provides that most satisfying of feelings when you fire it up in tandem with a 4K HDR display: a constant, low-level electricity that feels like the hairs on the back of your neck are about to stand up. That’s thanks to its eyebrow-raising performance in most of the titles that we could throw at it, all with levels of eye candy that make other graphics cards wither.


With the RTX 2080 Ti Founders Edition in our test rig, I found myself spontaneously savoring the smoothness and fluidity of Far Cry 5’s Montanan waterfalls, marveling at the wavering points of candlelight in the opening scenes of Shadow of the Tomb Raider, and squinting at the fire of the desert sun in Assassin’s Creed Origins just to feel more of that pleasant tingle. It’s the kind of feeling that makes it easy to forget that you dropped $1200 or more on a graphics card.

That tantalizing feeling comes even before we consider the potential of Deep Learning Super-Sampling, or DLSS, an AI model powered by Turing’s tensor cores. We saw enormous gains in performance at 4K in the two canned demos we were able to test with DLSS versus full-fat 4K rendering with temporal anti-aliasing, and even my hyper-critical graphics-reviewer eye couldn’t pick out any significant degradation in image quality from the switch to DLSS. Black magic, that, but it really seems to work.

Assuming its performance carries through to real-world gaming, DLSS is great news for folks who have so far had to compromise on image quality to get high-refresh-rate 4K experiences. We’ll want to reserve final judgment until fully-playable titles with DLSS support hit the market, but I’m optimistic the feature will do a lot to make 4K monitors useful to the enthusiast rather than a curiosity for the pixel-addicted.

Ray-traced effects are the other half of what might make Turing revolutionary, but we’re going to have to save testing them for a later date. We got to try the “Reflections” Star Wars demo that Nvidia used to introduce its RTX technology on our own Turing cards, and man, does it ever look cool to see Captain Phasma’s armor reflect every light in a First Order hallway in real time and with convincing detail. The problem is that said demo is meant to run at a non-interactive 24 FPS with the help of DLSS, and there’s no telling how ray-traced effects will perform in interactive gaming where demands on responsiveness and frame rates are much higher.

This is normally where we’d talk about the competition, but if you hunger for more performance from your ultra-high-end gaming PC than what we’ve enjoyed over the past couple of years, where else are you going to get it? Uncompetitive markets have never been a good thing for PC builders, but the graphics-card space seems poised to become one—and through no fault of Nvidia’s, to be clear. Until Intel or AMD have something to show that can challenge Turing, the high-end PC gaming crown is the green team’s to lose.

The flip side of that bittersweet situation is that TSMC and Nvidia have extended an incredibly consistent and enviably successful streak of execution that began with Maxwell, continued through Pascal, and seems poised to keep going with Turing. I have to admire the sheer ballsiness of the green team for pushing die sizes to the limit and continuing to advance performance even as it faces perhaps the least competition in high-end graphics that it ever has.

So, should you buy an RTX 2080 Ti? Even if we put a -tan sticker on the end of that Ti, this card’s sticker price is going to give all but the most well-heeled and pixel-crazy pause. Titan cards have always been about having the very best, price tags be damned, and Nvidia’s elevation of the Ti moniker to its Titan cards’ former price point doesn’t change that fact.

If you can’t tolerate anything but the highest-performance gameplay at 4K with most every setting cranked, the 2080 Ti is your card. Its potential second wind from DLSS feels almost like showing off, and that’s a switch that owners should be able to flip with more and more games in the near future. Even without DLSS, the RTX 2080 Ti is the fastest single-GPU card we’ve ever tested by a wide margin. If you want the best in this brave new world of graphics, this is it. Just be ready to pony up.

Comments closed
    • DeadOfKnight
    • 1 year ago

    One thing many aren’t considering is that, even if the whole chip were made up of CUDA cores to scale better on perf/$, it might just be too CPU-bound to get much more out of it these days. It’s bound to happen sooner or later with CPU progress slowing down and games not benefiting much from multithreading. For all we know, this is the direction they had to take, adding features instead of legacy-app performance.

    • hkuspc40
    • 1 year ago

    That thing costs almost as much as my whole machine, and my machine is crazy overkill for all but gaming (and it’s 3-4 years old now). If this is the way graphics card pricing is going to go then you can count me out. The next setup will be an Xbox and a laptop…

    • DPete27
    • 1 year ago

    Hey Jeff.
    October is coming up for TRFrankenbot. How do the RTX cards compare on F@H PPD vs the 10 series?

      • Krogoth
      • 1 year ago

      Potentially they should be 30-50% faster if the software can properly harness the tensor/RT cores.

      Titan V is a beast at folding and 2080Ti should follow closely to it.

      • BIF
      • 1 year ago

      This is what I want to know; what work can I do with it? I looked in the table of contents and I see neither F@H nor actual 3D rendering testing done here. I don’t play any of the games tested, so I have no idea if this card works well for the things I do, let alone works at all. There are tons of software that could be tested, ranging from F@H itself (free) to Blender (free), and even DAZ Studio’s iRay (free).

      I love TR and I certainly don’t expect you guys to pay for a $1500 rendering package just so I’ll know if I can make angle-brackets or wingnuts or something, but honestly, a lot is available free now. In this day and age, nobody should have to search other sites to get the COMPLETE story on graphic hardware.

      Please do some F@H and Blender testing in future evaluation articles. And thanks for the great job you do on “all that other” stuff. 😉

    • ColeLT1
    • 1 year ago

    Great article!

    I was worried that the 2080 would not be quite up to 1080ti level, glad to see it meets and/or beats it. I am really happy with the results, not so happy with the prices, but I’ll justify my 2080 order because my non-FE launch 1080ti paid for itself 3 times over on nicehash and nemos.

    What is everyone’s opinion on prices 6 months from now? Do you think they will hold or slowly fall like all cards of the past, sans-10series?

      • ColeLT1
      • 1 year ago

      I also went back and looked and had forgotten the 1080 was $699 at release. Then there was a 10-month window before the release of the 1080ti, also at $699. If Nvidia had waited 10 months to release the 2080ti I don’t think they could get away with the huge price. I think Jeff had it right: they should have called this one the 2080ti(tan).

      • psuedonymous
      • 1 year ago

      Likely fall, along the lines of past launches.

      Unlike last time around (2013), there has been no mass selloff of GPUs coinciding with a drop in Cryptocurrency prices (the decline was gradual and down only to above still-profitable levels, and hashrate has remained flat since around March). Any future ‘Crypto boom’ would need to hit peaks above the Dec 2017 peak to trigger mass purchases of more GPUs, and would need to compete with more efficient ASICs now available for most hashing algorithms.

      • Krogoth
      • 1 year ago

      Outside of AMD RTG pulling a “Hail Mary” with discrete Navi, I honestly don’t see Nvidia having any reason to reduce price points on any of their SKUs.

      What will likely happen is that existing Pascal stock will slowly sell out and be replaced by lesser Turing SKUs, which will have slightly better performance than the Pascal stock at the price points they’re replacing.

        • ColeLT1
        • 1 year ago

        As a stock holder of AMD and NVDA, I can only hope for either one of those scenarios as NVDA has been posting amazing profits and AMD’s stock has been booming.

        As a once cash-strapped student and young professional who gamed on second hand and handmedown parts years ago, these are out of reach for most people.

    • Unknown-Error
    • 1 year ago

    Again, thanx for the great review Jeff. Especially for adding the two DLSS tests. Are there any other reviews which included more DLSS-specific tests? There were a few with just Final Fantasy XV, but that was about it.

    Amazing to see nVidia keep pushing the envelope despite the utter lack of competition.

      • NoOne ButMe
      • 1 year ago

      Those are all the DLSS demos that exist. And due to being fixed/near-fixed content, they may have better IQ than real games. Although Digital Foundry’s look at the FFXV battle scene (dynamic) makes it seem like it is good.

      Digital Foundry’s dive
      https://www.youtube.com/watch?v=MMbgvXde-YA

    • puppetworx
    • 1 year ago

    No performance/dollar chart? With the recent price drop of the 10 series I was pretty curious to see it.

      • Jeff Kampman
      • 1 year ago

      Just added.

        • Goty
        • 1 year ago

        For as much as I want to complain about the price, at least it looks pretty linear with respect to performance. The 2080 Ti isn’t really priced like a halo product given it backs the cost up when it comes to framerates.

          • ptsant
          • 1 year ago

          It doesn’t look exactly linear but slightly curved upwards, as you’d expect. It would have been pretty good if 1080 [ti] belonged on the same generation. But knowing that we stay on the same price/perf curve with a NEW generation is a bit disappointing (lack of competition and all that).

          EDIT: the curve does not turn upwards, but the price does, which is what I wanted to say. The high end has a slight premium in $/fps.

            • Goty
            • 1 year ago

            Quickly plotting the data out on my own, it’s actually a slight downward curve, being well described by a second-order polynomial with a coefficient of -0.00003 on the quadratic term. 😉

            Still, nothing crazy other than what you mentioned about the prices simply proceeding from the last gen.

            • ptsant
            • 1 year ago

            You are right. In fact this was what I wanted to say but when I was writing the comment I was thinking of the price in the Y axis (which is how I would have plotted it, btw).

            Anyway, as one would expect for a premium product, there is a slight premium for the high end that is reasonable if we forget the fact that, in the past, new generations tended to shift the whole curve downwards.

          • jensend
          • 1 year ago

          Draw a line from the 1070 to the 1080, and you see that line intercept the 2080Ti’s performance level at about $800. Continue that line on to $1200 and you’d expect a card that’s about half again as fast as the 2080Ti.

          As is usual, marginal performance per dollar drops as you move up the product line; it’s not linear. The 2080Ti isn’t a terrible outlier, but some people had hoped a new generation would shift the price/performance curve upwards rather than just extend it sublinearly.

            • DoomGuy64
            • 1 year ago

            This is not a good trend, especially if it continues into the next generation. The 3080Ti would be priced like $3000.

            • ptsant
            • 1 year ago

            I don’t see why you were downvoted.

            Computing has been very exciting for a couple decades because every 2-3 years you could get a much better product for the same amount of money. The simple fact is that for the $300 I paid years ago for an RX480 I can’t get something much better. Vega is at $500 where I live (I check every day) and the 1060 is not much of an upgrade. So, if nVidia can’t get a 2060 at the price of a 1060 and AMD is busy launching consoles, I’ll just wait for my card to die or hope to win the lottery.

            • jensend
            • 1 year ago

            Rather than checking every single day for improved pricing on marginally improved gear, anyone who has either Pascal or GCN 4th-gen (the 16/14/”12″ nm process GPUs) should probably just enjoy what they’ve got until the appearance of Vega/”Ampere” and hope the price curve is better at that point.

            Heck, maybe by the time of “Ampere” nV will have finally caved in and added support for DisplayPort adaptive sync too.

            • Airmantharp
            • 1 year ago

            Expect prices to drop over time during this generation; perhaps 20×0-series parts dropping below where their previous namesakes launched, supposing there isn’t some equivalent of the mining craze that prevented that from happening for the 10×0-series and the Vega cards.

            [I base that on experience watching the market; obviously Nvidia could keep the prices high if they wish, but their sales volume, revenue, and perhaps worst their market share would suffer if they prevent their market prices from yielding to market forces too much]

        • puppetworx
        • 1 year ago

        Noice.

    • End User
    • 1 year ago

    Upgrade from a GTX 1080 to a GTX 2080 Ti looks good to me. I’m on the lookout for an EVGA GeForce RTX 2080 Ti FTW3 ULTRA GAMING before the end of the year.

      • ptsant
      • 1 year ago

      Is it really called like this? Haven’t their marketing departments learned anything?

        • Krogoth
        • 1 year ago

        EVGA just likes to make up several different SKUs based on how well they can factory-overclock their GPUs.

        “Yo dawg, I heard you like binning! We are going binned your SKUs like your GPUs!”

        It is marketing brilliance to be honest. If you think EVGA is bad? They got nothing on Intel.

    • Jigar
    • 1 year ago

    Were you asked to not include the TITAN V in the review? I don’t see any Titans in any of the reviews online.

      • NTMBK
      • 1 year ago

        I don’t think that TechReport has a Titan V.

        • DoomGuy64
        • 1 year ago

        Why not? Aren’t we all supposed to be buying Titans with 4k BFGDs for gaming now? That’s totally the average Joe pc gamer now. Or at least that’s what Nvidia would like us to believe, and is solely catering to.

          • Kretschmer
          • 1 year ago

          Yeah, except for the bazjillions of 1050Tis and 1060s that are way, way more common than their AMD counterparts.

            • DoomGuy64
            • 1 year ago

            More common doesn’t mean they’re better. They’re more common solely because of the mental bandwagon effect of having the top high end card, and they are ultimately subpar mid-range parts that don’t support freesync, and will obsolete quicker due to crippled specs and driver optimizations being dropped on older architectures. If the 960 beat a 780 in The Witcher 3, the 660 was then completely unplayable, and that’s how all those mid range parts end up after Nvidia releases a new architecture. More realistically, I think maxwell will be the first to go, and the 1060 will age a little better.

            Either way, none of nvidia’s mid-range cards can hold 60 fps perfectly, and do not compete with any AMD card paired with a freesync monitor. No appealing to the masses, bandwagon effect argument can invalidate that fact.

            Spending more for a crippled architecture and being forced into using outdated monitors or higher priced gsync is not my idea of value. There is no value here, only scams and a cult mentality.

            Jensen is the new Steve Jobs. He wears a leather jacket instead of a turtleneck, and has just as many brainwashed cult members. I don’t do cults. Anyone who’s ever dealt with a misbehaving cult IRL would know how toxic they are, and I don’t get along with their kind anywhere, real or virtual. Just call me the Hitchens of gaming, because I’m not a blind follower.

            • Airmantharp
            • 1 year ago

            Quoting for rank asshattery…

            DoomGuy64 “none of nvidia’s mid-range cards can hold 60 fps perfectly”

            You know quite well that this depends on the settings used, the game used, the system used, and the benchmark used, whatever the card. I can get ’60 fps perfectly’ on an ultrabook with integrated Intel graphics. Your dig here is unfounded.

            DoomGuy64 “They’re more common solely because of the mental bandwagon effect of having the top high end card, and they are ultimately subpar mid-range parts that don’t support freesync, and will obsolete quicker due to crippled specs and driver optimizations being dropped on older architectures. If the 960 beat a 780 in The Witcher 3, the 660 was then completely unplayable, and that’s how all those mid range parts end up after Nvidia releases a new architecture. More realistically, I think maxwell will be the first to go, and the 1060 will age a little better.”

            This is some pretty ridiculous fanboi-grade projection.

            I buy what works; actually have Intel, AMD, and Nvidia graphics all in my current workstation. Most everything else has Intel graphics and some have Nvidia dGPUs, most not because of their name as you claim the common motivation but rather because Nvidia had (and usually has) the best solution.

            DoomGuy64 “because I’m not a blind follower”

            Your monologue is clear evidence of the converse.

            • DoomGuy64
            • 1 year ago

            "You know quite well that this depends on the settings used, the game used, the system used, and the benchmark used, whatever the card. I can get '60 fps perfectly' on an ultrabook with integrated Intel graphics. Your dig here is unfounded."

            No it's not. If you're spending $300 on a graphics card, you'll be using settings that it should be capable of. Otherwise, it's a waste of money that could have been spent on a cheaper card. The Gsync lock is also a value problem that means you spend $200 above typical mid-range builds, or get stuck limiting your settings to maximize framerate or disable vsync. It's not a better choice than a 580. Anyone who bought a 1060 over a 580 did so under a fanboy mindset, and not from an unbiased perspective. I'm not saying you can't lower your settings to hit 60 fps, I'm saying you put yourself in that situation because of fanboyism towards nvidia that refused to even consider the alternative.

            "This is some pretty ridiculous fanboi-grade projection."

            No it's not. The 660 was instantly obsoleted by the 960 when games started using newer effects and higher resolution textures. Nvidia's going to start pushing a multitude of new stuff with RTX, things like async and hdr, and Maxwell can't handle that going forward. It's a dead architecture at this point. Pascal will take a small hit too, but Maxwell will definitely be done from this point forward. Something that is much less of an issue with AMD, aside from gameworks.

            "DoomGuy64 'because I'm not a blind follower' Your monologue is clear evidence of the converse."

            Here's the thing. People are dumping on AMD because of RTX. Where is AMD competing against RTX? It's not. That's blind fanboyism. "Vega is dead." Is it? Vega 56 has finally hit $400, and 580's have hit the $200 range. If you're buying a 1060 or 1070 over AMD at this point, you are clearly doing it from a position of bias. Which is a better deal? Not the Nvidia cards. You are all basing your opinions FROM THE RTX REVIEWS, and not from what price point these mid-range cards are competing at. You also have the whole gsync problem that everyone conveniently ignores. Vega 64 is the only bad card, because it doesn't hit the performance/$ segment that it is competing in. AMD's failure to compete in the high end doesn't invalidate its mid-range. That's the area where AMD is doing well, but the fanboys are ignoring it. Those RTX scores don't transfer over to a 1060 in reality, so nvidia buyers are not getting what they pay for. If you can't acknowledge this, you're the one who has the bias, because I can CLEARLY acknowledge that Nvidia has a lock on the high end. I just don't game with high end, and I honestly look at perf/$, therefore OOOH CONTROVERSY. No. Not so much.

            • Airmantharp
            • 1 year ago

            “should be capable of”
            “instantly obsoleted”
            “dumping on AMD because of RTX”

            Wild. Projection, assumptions, you’re fighting a (very silly!) war in your own mind against a foe that you have imagined.

            I considered humoring your reaching, but instead I’ll just let you continue arguing with yourself.

          • K-L-Waster
          • 1 year ago

          "Aren't we all supposed to be buying Titans with 4k BFGDs for gaming now?"

          No.

          "That's totally the average Joe pc gamer now."

          Also no.

          "Or at least that's what Nvidia would like us to believe, and is solely catering to."

          I think you're referring to the AMD Defence Force Straw Man Argument handbook...

          • NTMBK
          • 1 year ago

          Titan V was aimed at compute workloads in workstations, which is why they never sent review samples to enthusiast/gaming oriented websites.

            • Krogoth
            • 1 year ago

            Actually, post-Kepler Titans are really just for game developers/artists to get an early handle on next-generation hardware platforms for future content without having to get a Quadro, which is overkill for their purposes.

            The general compute/AI-learning stuff on them is still crippled through firmware/drivers. Nvidia wants its professional customers to get Quadro/Tesla SKUs for that.

            • NTMBK
            • 1 year ago

            Titan V had full-throughput FP64 and FP16. What did they cripple on the Titan V?

            • Krogoth
            • 1 year ago

            It has less memory, no NVLink support, and the firmware doesn’t have access to the Tesla/Quadro driver suite and the associated professional software.

            This has been the case with Titans ever since they were introduced. They have always been a marketing experiment in selling off excess/defective Tesla/Quadro silicon. You get the hardware but none of the professional-tier software/firmware to go with it.

            • NTMBK
            • 1 year ago

            That’s why it’s aimed at compute workloads. It doesn’t need fancy Quadro OpenGL drivers, and it has access to Tesla Compute Cluster mode. It’s a great CUDA accelerator.

            • Krogoth
            • 1 year ago

            You mean limited access. You need to get a Tesla SKU if you want the full Tesla Compute access (mostly stuff that is beyond the scope of a single Titan V).

            Nvidia isn’t going to make that easy and isn’t stupid enough to cannibalize their Tesla SKUs.

            Again, Titans are just Tesla/Quadro “rejects” that are crippled via software/firmware to prevent market cannibalization.

            • NTMBK
            • 1 year ago

            What do you mean by limited access? If it’s like any other Titan it will reboot into TCC mode and run headless, without the Windows driver overhead. NVLink is disabled so multi-GPU systems aren’t as potent, but again, CUDA acceleration should work just great.

            Yes, Titan V is a cut down Tesla V100 (less memory bandwidth, less memory, no NVLink), but it’s still aimed at accelerating compute workloads for workstations.

            • Krogoth
            • 1 year ago

            Titan V is aimed at hobbyists and small-time AI researchers (running one or two units at most) who don’t have the need and/or budget for Tesla SKUs. You don’t get any of the fancy management and other HPC-related stuff found in Tesla SKUs, which isn’t useful for the intended market of the Titan V anyway.

      • Krogoth
      • 1 year ago

      They don’t have a Titan V on hand, nor did they do any reviews with one (Nvidia barely gave any units out to reviewers). The last Titan that TR reviewed was the Titan X (Maxwell/GM200), which was marginally faster than the 980 Ti.

      • Jeff Kampman
      • 1 year ago

      As usual, Nvidia didn’t ask or tell us to do anything. The company doesn’t sample Titan cards for primarily gaming-focused reviews, and that’s what we do, so…

        • Jigar
        • 1 year ago

        Thank you for the clarification. I was looking forward to seeing how the Titan V holds up against the RTX 2080 Ti, and also whether DLSS works on the Titan V or not.

          • Krogoth
          • 1 year ago

          There are gaming reviews out in the wild, but it looks like 2080Ti outpaces the Titan V at gaming performance. DLSS and RTX could technically work on it, but I don’t think Nvidia will provide official support for it.

          The intended buyers of Titan V don’t care for gaming or graphical performance. They are far more interested in general compute performance.

    • synthtel2
    • 1 year ago

    tl;dr: I still must object to thinking of DLSS as a game-changer. The real breakthrough isn’t in the technology, it’s in Nvidia’s marketing department managing to convince us that a 1527p benchmark result belongs on a graph full of 4K results.

    Now that FFXV DLSS comparison shots are out there:

    * Objectively, it’s blurry in different places than TAA, but still blurry. It does do an excellent job of reconstructing subpixel geometry (hair and foliage) with a result that passes more directly for (blurred) photoreal, but there are also way too many cases where the lower original resolution is obvious. It looks at least pretty good at eliminating aliasing, though it’s tough to judge that without seeing it in motion.

    * As a gamer, I could hardly be less impressed. I freaking hate TAA, and would rather run 1440p on a 1440p monitor without any AA at all than DLSS 4K on a 4K monitor for similar performance. 1440p with SMAA (in a non-temporal variant) would be even better. Maybe I’m just too used to doing AA in my brain rather than on the screen. I see precisely one way I’d use DLSS and like it (same as things like DOOM’s TSSAA): if you’re so short on graphics power that you can’t run the game at a playable framerate at native res, something like this beats naive upsampling any day.

    * As a developer, I think the results are fairly awesome (most people seem to care much more about aliasing and less about blur than me), but the gap between it and state-of-the-art upsampling TAA algos doesn’t seem worth bringing in the neural nets and losing vendor agnosticism. Consoles could use something like this a lot more than PCs can. That said, a state-of-the-art upsampling TAA algo is a bit of a fiddly beast, and DLSS is probably much less so. If it’s very easy to make it an option, many developers will, and that looks like what we’re seeing.

    On a slightly different topic, a 2080 performs a bit better than a 1080 Ti, costs a bit more, and burns a bit more power. 20?0 is cool if last year’s halo products just weren’t halo-y enough for you, but other than that, meh.

    • albundy
    • 1 year ago

    i’ll probably pick one up in 10 years when it becomes affordable.

    • Bensam123
    • 1 year ago

    Going to go a weird direction with this. I believe cards are going to start diverging from one another in terms of what gamers are looking for. Hardcore gamers that are after the professional scene and absolute performance always turn graphics down; they drive high-refresh-rate monitors with low response times and high frame rates to absolutely limit the amount of variance (spiking) that is present in their gaming experience.

    Nvidia went for absolute fidelity where they believe the best gaming experience will come from picture perfect representations of an environment, such as with ray tracing. I see ray tracing as a gamer and I go ‘Welp that’s something I’ll turn off’. Hardware review websites are only looking at gaming from a cinematic standpoint, where best gaming will always have everything maxed out running at 8k. Cards do perform differently under different resolutions and especially with different amounts of eye candy turned on. I really think sites should look into testing cards at 1080p on lowest settings with everything uncapped – Not as the only datapoint, but as another one. Professional gamers or anyone who takes gaming seriously will be running that route.

    Which runs into another scenario, where card makers are going to diverge. AMD’s upcoming 7-nm version of Vega, for instance, may continue down Vega’s current path, which means they’ll be focusing on current-day performance (although they mentioned concentrating more on compute, we can assume the two will intersect). That means while a 2080ti might be faster running 4k@ultra, especially with rays if that ever takes off, it may lose out completely at 1080p@low (but not eyecancer, such as turning down resolution or textures) to Vega/1080ti/Intel/(insert other unsuspecting candidate here).

    For testing at absolute bleeding speeds, that 1% that is removed in 99th-percentile testing really starts to matter. Mainly because the spikes, the micro-stutters, the extra-long hiccups get you killed, and that pisses off gamers that aim for the pinnacle. Those might seem like outliers, but if they happen frequently-infrequently enough, they are part of a distribution and shouldn’t be removed. When aiming for bleeding speeds, they actually matter a lot more.

    So thus is born the esports gaming card and the cinematic gaming card. Please test both.

      • synthtel2
      • 1 year ago

      For that hardcore gaming crowd (which I’m arguably a part of), it’s more about CPU performance. My RX 480 can churn out frames in Planetside in under 9 millis in just about all cases (medium settings / no shadows), and the gain from going to Vega would be either maybe 4 millis of latency reduction or the ability to run on high settings. High would increase CPU load though, so it’s a no-go. Planetside may be an outlier in exactly how much CPU it takes (all of it you can scrounge up), but it’s normal in how an overkill GPU isn’t worth much.

      At extreme framerate targets, the correct choice of GPU may be more about which vendor’s drivers take less CPU time. At present, that seems to vary a lot depending on the game.

      The more hardcore I’m being about my gaming at any given moment, the less I tend to care about hitchiness and the more I tend to care about latency. If collision detection has a bad frame and it drags out to 30 millis on the CPU, oh well, the chance I was doing anything critical at that moment was fairly small anyway. If the graphics driver trips over itself and adds 15 millis of latency across the board, that’s a major problem.

        • Bensam123
        • 1 year ago

        With stronger processors, bottlenecks like that become less of a thing, especially with hex-core being the new baseline. There isn’t a single game I’ve played that has saturated all six cores by more than 80% (HTing is off). Some games will use quite a bit less than that.

        Based on my experience playing Planetside 2, that game is definitely an outlier as far as CPU utilization goes, and could definitely benefit from either optimization or going to DX12 to reduce overhead. But since the development is dead, that’s not going to happen. Most mainstream titles today that are competitive don’t have this issue.

        Increasing FPS isn’t just about draw times and getting pixels to your monitor, it also increases simulation speed as most game engines are tied to FPS. That means you have better hit registration, more up to date model locations, smoother aiming, and the games just feel better when playing them. Which is why you always want to leave your FPS uncapped unless whatever game you’re playing has issues with high FPS.

        I would highly disagree about hitching not ruining your experience. While it may not always be equivalent to a death, it can oftentimes be jarring and take you out of your flow, which inadvertently results in your death shortly thereafter. That’s why some people cap their FPS below their monitor refresh to prevent stuttering when g-sync flips on/off (which was supposed to be fixed) and also to keep g-sync always on.

          • synthtel2
          • 1 year ago

          Bottlenecks like what? I didn’t focus on PS2’s CPU use because it wasn’t relevant. My point was that a faster GPU is really not worth so much as all that, and that applies regardless of what speed the CPU side is managing to crank out.

          It’s DX9, but it doesn’t burn much CPU time on that. They’ve had a DX11 backend built for the engine and haven’t even bothered dropping it in (yet – it might be a thing in the works presently) because there isn’t that much to gain from it. It does use more CPU time than it might, but not by any ridiculous amount given what they’re accomplishing. Development isn’t dead. There have been plenty of great updates lately (though Reddit would have you believe everything is terrible), and among other things a new server and new continent are both in the works.

          Let’s use PS2 as an example again – if you do the obvious MOAR FRAMES thing and leave MaximumFPS at 250 (default) with smoothing off, you’re not only opening yourself up to all kinds of avoidable latency problems, but also your rate of fire is going to be worse than it might. Turns out, that simulation *really likes* knowing what framerate it’s targeting, and your gaming experience will almost certainly be better if you sacrifice a frame here and there to give it what it wants. (I run MaximumFPS=115.) Contrary to popular belief, Planetside is not unusual in this regard.

          Edit: you’ve got a point in that if you never cap your framerate, a GPU upgrade will on average be worth a much bigger latency reduction than I said. That’s why you should cap your framerate.

          Edit 2: edit 1 is bad writing, sorry. The right place for a latency-reducing framerate cap is just below whatever the GPU can consistently turn out, so you’re still getting all of the framerate gain as a percentage from the GPU upgrade. The difference is that, when capped, the latency added will resemble the time it takes the GPU to render one frame, while uncapped it may resemble several.

            • Bensam123
            • 1 year ago

            Bottlenecks like CPU matter less when you have more cores and they become less of a point of contention. If you set all your graphics at low that doesn’t mean your GPU doesn’t do any work nor does it mean it can’t be saturated. That also doesn’t mean that there is no longer any long frame times, such as hitching or stutters.

            Hence why you don’t get 1 bajillion FPS when you set your graphics to low in any game and why that same game run at low on different GPUs will result in different FPS. That’s my whole point, and while some people strive for absolute fidelity such as high resolution and ultra settings, other people strive for absolute consistency at high FPS and low variance. Cinematic vs esports experience.

            Each GPU handles workloads differently, that includes smaller workloads which don’t stress the cards in the same manner.

            I get that PS is your baby, but the game’s dead. 1.5k concurrent players, the dev team was shifted to H1Z1 years ago, and that’s dead now too. It’s not relevant at any point to modern titles. Once again you’re making a point of how broken that engine is by pointing out it basically explodes when it tries to produce more FPS.

            Try using more modern titles to make a point, especially competitive ones with an engine that’s actually maintained.

            • synthtel2
            • 1 year ago

            I’ve got 8C16T, and I assure you you can be CPU-bound without high utilization readings. For that matter, if you’re having stutter problems it’s pretty likely to be slowness on one thread, and your CPU utilization readings will drop rather than rise. The GPU still does work when it’s OP for a light load, but if it can turn out frames in a consistent 5 millis versus 6, does it really matter? Maybe if you’ve got a 240 Hz monitor and are semi-pro.

            1.5k concurrent players doesn’t put PS2 in the big leagues, but they’re players who really like it and it’s enough to keep some devs working on it. They actually just hired another recently.

            Saying PS2 isn’t relevant to modern titles is like saying it isn’t worth thinking about how car engines work because modern car engines are really good at handling whatever you throw at them. The new stuff is better at hiding it, but the moving parts under the hood are pretty much the same, and have the same pitfalls. PS2’s tech is pretty raw, yes. It doesn’t mean lessons learned from it don’t apply anywhere else.

            I didn’t say PS2’s engine explodes when it tries to produce more FPS. I said if you leave the cap at 250 when you’re not generating anywhere close to 250, you’re going to have problems. If you actually are somehow staying above 150 much of the time, then go for it, it’ll work great. You don’t have to explicitly adjust the cap, turning smoothing on does it for you dynamically. Most games use something that looks at least vaguely like that smoothing, and it gets a really bad rap with people like you because MOAR FRAMES. I’m just saying they’re not putting that stuff in because they hate your framerate.

            PS2 is the only game I care enough about (the competitive aspects of) to research things like that rate of fire problem. Most things like that aren’t immediately obvious (and neither is the rate of fire problem in PS2, for that matter). The latency effect of FPS caps is fairly universal, though. Take Destiny 2. It tends to be heavily GPU-bound and has a long and pileup-prone CPU pipeline, so it’s a particularly good demonstrator for the effect. The lowest-FPS scene in the game seems to be what you see when you first spawn in at the tower, so use that framerate. Subtract 5, set it as a cap in the game’s own settings, and then try to tell me with a straight face that the controls don’t feel massively better.

            • Bensam123
            • 1 year ago

            Except it doesn’t. You can literally play any modern game that’s maintained and figure out that you don’t just end up with constant FPS. Fortnite or Overwatch for instance. You can’t run a 1080ti and then run a 1060 for instance on all low settings and get the same FPS.

            This goes double when you start switching architectures, when how the cards are configured will change how fast certain elements are rendered. Say, for instance, going between the 1XXX series, 2XXX series, Vega, and 5XX series. Then throwing Intel in there will further differentiate how they operate on the low end (in addition to the high end).

            It’s not a difference of 5 ms vs 6 ms. If you spent any time looking at frame-time graphs or even look at the results here, on this very website, you would know spiking is vastly different than 1 ms. That’s why it’s a stutter or a spike.

            Now you go on to justify the existence of PS2 which I don’t care about, is still irrelevant to what we’re talking about, and I’m not going to bother reading or responding to it. If you want to use a game for an example, use a modern game with a maintained engine.

            Also no one says millis for MS/ms/millisecond.

            • synthtel2
            • 1 year ago

            It isn’t explicitly stated anywhere I see, but it looks like TR is using present times rather than display times for their measurements. There’s a lot of slop left between present times and when you actually see a frame on the screen. I don’t know if they’re still filtering with a 3-frame moving average, but even with that, the graphs can be pretty spiky while what’s happening at the monitor is rock solid (albeit latency is probably bad and inconsistent). That’s a match with TR’s graph and my experience of DX:MD (the only thing in this review I’ve played) – there was clearly something a little weird going on earlier in the pipeline, but consistent delta times (probably heavily smoothed) and consistent GPU times were enough to make frame delivery great.

            This works the other direction too in that there’s room for PCars, GoW4, and Wolfenstein to have stutter at the display while keeping the smooth graphs we see here, but we don’t worry about that one much because one GPU frame tends to take about as long as the next.

            Since when are Intel graphics factoring into anything here? My original point was about the lack of difference between a slightly overkill graphics card and a really overkill one and that it is actually possible to be well into overkill territory at this point. Of course there are going to be big differences if you’re not reaching overkill territory in the first place.

            If you want me to think PS2 is unsuitable for something like this, you’ll have to do a lot better than dissing its age and playerbase. Maybe bring some technical arguments next time? I would welcome it, if you could find one.

            Would you prefer that language not evolve? Millis is less ambiguous than MS with that capitalization, certainly, and I must have picked it up from somewhere.

            • Bensam123
            • 1 year ago

            What are present times vs. display times? Frame time is how long each frame takes to render; there is no other ‘frame time’.

            And my point was there IS a big difference between a low-end and a high-end GPU. Furthermore, architecture changes how GPUs deal with extremely high FPS; it’s not a linear relationship, nor does a full GPU workload with all the eye candy turned on fully represent the experience you will have when you turn it all off.

            Intel announced a dedicated GPU that’s going to come out in 2020ish; it’s been talked about a few times here, and I left it open as another architecture that will be quite different from either AMD’s or Nvidia’s.

            PS2 isn’t suitable because the engine isn’t maintained, it’s not designed for a competitive environment where every action counts (quite the opposite, to account for the number of players that have to be near one another), and the engine is ancient in and of itself, with the development team basically abandoning it as they were fired. Even you mentioned how the game blows chunks when you try to run higher FPS; normal, well-maintained game engines don’t do that. Go play Fortnite, Overwatch, or CS:GO (which I’ve stated for the nth time). There are other engines too, but those are the big ones and cover all the major bases. There are also MOBAs, and those don’t have issues with high FPS either, but high FPS also doesn’t matter nearly as much there as it does in an FPS.

            Why would you talk about a game that utilizes an engine that can’t benefit from high FPS in a discussion about how to benchmark performance at a level that esports or highly competitive players who seek absolute performance would like to see (i.e. extremely high FPS at consistent frame times)?

            • synthtel2
            • 1 year ago

            There are many frame times, because different components can be working on different things at the same time. CPU versus GPU should at least be obvious, and that’s basically what we’re looking at here: what we’re measuring is the time between frame completions on the CPU. There are then three frames of slop by default between that and the GPU (flip queue for AMD / maximum pre-rendered frames for Nvidia), and then the GPU has to actually execute its own work, and only then can the frame be sent to the screen. It’s still miles better than just looking at average framerates, but you’ve got to pay attention to where the data is coming from when trying to use it for something like this.
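            As a crude way to picture how much those stages can add up (every number here is an assumption for illustration, not a measurement of any real game or driver):

            [code<]
            # Back-of-the-envelope latency model for the pipeline described above:
            # the CPU finishes a frame, up to N frames sit queued (flip queue / max
            # pre-rendered frames), the GPU renders, then the frame waits for a refresh.
            # All inputs are assumptions, not measurements.

            def worst_case_latency_ms(cpu_ms, gpu_ms, queued_frames=3, refresh_hz=60):
                scanout_wait = 1000.0 / refresh_hz            # worst case: just missed a refresh
                return cpu_ms + queued_frames * gpu_ms + gpu_ms + scanout_wait

            print(worst_case_latency_ms(6.0, 10.0))                   # ~62.7 ms with 3 queued frames
            print(worst_case_latency_ms(6.0, 10.0, queued_frames=1))  # ~42.7 ms with a queue of 1
            [/code<]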

            Of course there’s a big difference if you don’t have enough GPU to become CPU-bound. Once you are CPU-bound, it’s about latency and drivers. Getting CPU-bound and latency improvements as you go past that are pretty much the same regardless of architecture. You just throw more power at it, and it’s the same variety of power Jeff’s throwing at 4K in this review. Drivers are pretty irrelevant when they’re working right, they just spend an unfortunate amount of time broken. (“Drivers” is really shorthand for “drivers and the game’s render backend”. That whole space is a twisted mess, and I don’t mean to put all the blame on the GPU vendors.)

            You should read the 4th paragraph of my 3rd post in this thread, more carefully this time. PS2 utilizes high framerates as well as anything else. The trouble is people misconfiguring it in pursuit of an extra frame here and there, and this is actually more of a problem at low framerates than high. When modern games are not subject to this misconfiguration, it’s generally because they’ve figured out (correctly in this case) that they know better than you do and won’t even allow you to misconfigure it.

            CS:GO was released three months before PS2 at a similar tech level and has seen a lot less engine evolution in that time than PS2, but if you like it, OK, that works. Ever notice how everyone who’s anyone in CS:GO really, *really* wants to be running on the 300 fps cap, even if they can’t articulate why? I don’t find it a very fun game and haven’t bothered looking into specific effects like the rate of fire thing, but have played it and other Source engine games enough to be confident it’s suffering from the same general problem PS2 does. It’s actually worse because it doesn’t even have an option for smoothing IIRC, but is at least better in that it’s easy to get to 300 fps. (I haven’t played Fortnite or Overwatch.)

            • Bensam123
            • 1 year ago

            Except GPU and CPU work in tandem when you’re talking about games, and that doesn’t change whether you run on ultra or low graphics settings.

            And no, no, that’s not what we’re looking at here. Once again, going back to how you can put a 1060 and a 1080 Ti in a system and run them on the lowest settings, and they don’t produce the same FPS regardless of the settings being turned down. Which is why this is very important: someone who is striving (for the nth time) for absolute performance will notice the stutters/microstutters.

            You don’t end up with a completely CPU bottlenecked scenario when you turn your graphics down. You keep saying you do and insinuating it, but even the most basic testing disproves that.

            You are also mixing up terminology. Frame time is not input lag and input lag is not frame time. They are two very different things; don’t intermingle the terms. You not only confuse yourself (as it seems you can’t differentiate between the two), but you confuse the other people you’re talking to. I didn’t even know you were talking about input lag and arguing about it till Airman pointed that out. And you’re still attempting to argue input lag with me when I’m talking specifically about frame time.

            Why would I be talking about input latency in a thread that’s literally discussing GPUs, and specifically the testing they’re doing here regarding GPUs? Input lag matters, yes; however, that’s not what I was talking about from the first post onward, nor is it what the article was talking about.

            I have no interest in talking about an antiquated game engine that can’t produce high FPS because it causes the engine to literally vomit all over itself. You could talk about pretty much any other modern engine that runs competitive games, especially for esports.

            CS:GO was released a few years ago. You’re talking about the Source engine, which has been well maintained and used over the years by a company called Valve; not sure you’ve heard of them. As far as it being the same as the Forgelight engine (lol?), at the most basic level you don’t need to cap your FPS or the game turns into a stuttery mess. They’re very, very different beasts, and the Forgelight engine was designed for lots of players, not competitive gameplay, which is also why H1Z1 started crapping all over itself when it came down to the wire.

            I’ve been attempting to hold off on finger pointing, but I swear you have absolutely no idea what you’re talking about, yet know enough buzzwords and google-fu to fill in any blanks and make it seem like you do. This reminds me of talking with a scammer who keeps saying stuff that doesn’t make sense but tries to get you to believe it by saying it more and more. I like to term that ‘scammer logic’. That makes this a pointless discussion, as I thought you were actually trying to discuss what I was talking about (as you know, all of these posts followed my original one), but you aren’t, nor were you. You don’t even know the difference between input lag and frame time.

            And no, no, I have not heard of a 300 fps cap in CSGO. You’re mistaking that for Overwatch. Like I said, google-fu.

            Also, if the upvote/downvote system is keeping you going and making you think you’re right: each gold member here gets +3/-3 votes per person. I’ve been around this site a long time, and people routinely go through and downvote anything I say. Notice how they didn’t actually take the time to respond to either you or me?

            • synthtel2
            • 1 year ago

            I’m not saying 2/3rds of the things you seem to think I’m saying. Seriously, take a step back and re-read the thread from the start before trying again. If I actually didn’t know the difference between frametimes and latency you’d be right to say I have no idea what I’m talking about, but there really are reasons I’m saying what I’m saying.

            “Tandem” isn’t clear. If you’re trying to say they are doing work at the same time because you thought I said otherwise, it’s true I didn’t write very clearly on that point. If you’re trying to say they’re not both doing work at the same time, you’re just plain wrong.

            You absolutely can end up in a completely CPU bottlenecked scenario if the GPU is going through work fast enough. If the GPU can always finish a frame before the CPU side has got any of the next frame’s work lined up for it, then getting a faster GPU will not affect your framerate, period. It’s sometimes very difficult to get to this point due to work submission not being instant and the CPU side delivering work to the GPU at very inconsistent intervals (as seen in TR’s graphs). This creates a diminishing-returns effect where faster GPUs will give you very slightly higher framerates well past the point you would expect if CPU/GPU-boundedness were truly binary. This does not automatically mean it helps stutter or is otherwise worthwhile.

            If it helps, [url=https://imgur.com/a/6VXFYKe<]here's[/url<] a thing I have lying around that happens to illustrate what looks to be going on in at least DX:MD (and probably more) pretty well. You can see CPU and GPU frametimes there. It's mostly unrealistic in that there are more pipeline stages than that (usually at least CPU sim -> CPU render -> GPU).

            I do know the difference between frametimes and latency, and keep bringing up latency because it's the thing that will continue to improve as you throw more GPU power at something no matter how CPU-bound you may be. If your GPU could previously turn out frames at a consistent 167 and you upgrade it to make that figure 250, you're probably not getting significantly higher average framerates or reduced stutter / 99% frametimes because your CPU is holding those back (assuming it isn't something very CPU-light like CS:GO), but you guaranteed just shaved 2 ms off your average latency figures.

            [quote<]can't produce high FPS because it causes the engine to literally vomit all over itself[/quote<]
            I've been saying the opposite of that, repeatedly. It'd handle 250 like a boss if you could get it there. Literally the only ways it's abnormal for these purposes are that (a) it takes more CPU power than you would expect to get to a given framerate when a lot is going on in-game, and (b) it has an exceptionally tight render backend that behaves a lot better than you would expect from a game like that on DX9. It is not at all stuttery, even if you leave the cap at 250 and don't use smoothing.

            CS:GO's version of Source already did what it needed to admirably at the game's inception and there hasn't been cause to change much. It is better maintained in that it's closer to delivering a perfect experience in the game to start with so there's been less maintenance to do, but CS:GO is fundamentally a much simpler game that demands a lot less from its engine. Forgelight has advanced since 2012 to become capable of things that Source can't even come close to (and naturally Source can do things Forgelight can't also).

            Where the eff did you even come up with the idea that I said Source and Forgelight were the same? I said "at a similar tech level". UE3 would also fit that bill. If you haven't heard of a 300 fps cap in CS:GO, you aren't paying attention. Don't take my word for it or even Google's word for it, just fire up the game and see for yourself.
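            For what it's worth, that 2 ms figure is just frame-time arithmetic:

            [code<]
            # 167 fps -> 250 fps is roughly a 2 ms reduction in per-frame GPU time.
            before = 1000.0 / 167   # ~5.99 ms per frame
            after = 1000.0 / 250    # 4.00 ms per frame
            print(f"saved per frame: {before - after:.2f} ms")   # ~1.99 ms
            [/code<]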

        • Airmantharp
        • 1 year ago

        Very interesting take on the input latency problem, +1!

        How do you test for this?

          • synthtel2
          • 1 year ago

          Test for what, exactly? Sorry, I see a couple of directions to go with that prompt. If it’s the “graphics driver trips over itself” thing, the fix was setting flip queue to 1 (takes a regedit on AMD drivers at this point unfortunately), and the symptom must unfortunately be pretty subjective unless you’ve got an appropriately high-speed camera.

          If you want to artificially induce the problem for comparison, getting GPU-bound at ~30 fps (maybe using DSR/VSR) and then trying the usual tweaks will usually make them pretty clear to feel. If pre-tweaks it feels like your mouse is moving through molasses, you know you’re in the right place.

    • Fonbu
    • 1 year ago

    Outstanding review for the Turing début!
    The analysis certainly brings a lot more questions about this new architecture:
    such as…

    SLI capability results?
    Any new VR tech in the gpu?
    Acoustics?
    Thermal characteristics?
    Coil whine possibility?
    Video decode engines?
    How long till the next architecture?

    And Tech Report’s famous 99th-percentile-FPS-per-dollar scatter plot!

    • aspect
    • 1 year ago

    The 2080 costs $150 more than the 1080ti for nearly the same performance.

      • ptsant
      • 1 year ago

      Yes, but can it cast rays?

        • Goty
        • 1 year ago

        Yes, actually, just not as many.

      • Krogoth
      • 1 year ago

      2080 runs cooler and it doesn’t choke on HDR mode. It is 1080 versus 980 Ti all over again.

    • anotherengineer
    • 1 year ago

    So what’s the performance per shader and per transistor and per $?

    edit: got the per-$ one.

    [url<]https://www.techpowerup.com/reviews/NVIDIA/GeForce_RTX_2080_Ti_Founders_Edition/35.html[/url<]

    • rutra80
    • 1 year ago

    so, how is it minin?

    • Forge
    • 1 year ago

    Hmm. If you properly parse all the names and consider the 2080 Ti as the Titan T, the 2080 as the 1080 Ti replacement, and the 2070 as the 1080 replacement, it’s not a really overwhelming launch, and that’s putting it as politely as I can. I’m not seeing any reason to move off a 1080 Ti. I just put mine under water, and now performance has shot way up. I wonder how long till that idea becomes more mainstream? Heat is clearly becoming a major performance constraint, even with dual-slot coolers being the mainstream and overheight, oversized three-slot coolers becoming a standard option.

      • Srsly_Bro
      • 1 year ago

      My ftw3 boosts to mid 1900s without an overclock and power limit set to 90%. How much faster is the card on water?

    • YukaKun
    • 1 year ago

    Would you be adding a picture-by-picture analysis of DLSS and other regular AA techniques?

    EDIT: Forgot to say that it was a very nice article as always. Thanks a lot for the hard effort behind all of it!

    Cheers!

    • derFunkenstein
    • 1 year ago

    Wow, faster than I expected. GG, Nvidia.

    edit: good to see another review brings out all the morons on TR. Don’t ever change, Internet.

    • agentbb007
    • 1 year ago

    Performance looks great, super excited I pre-ordered a 2080 ti!!!

    • Phaleron
    • 1 year ago

    I’m wondering about some other games – not many were done. Was this the result of the Nvidia NDA specifying what games you can review and cannot? (see the [H] article on it)

      • Jeff Kampman
      • 1 year ago

      It’s a pretty big stretch to go from “this review doesn’t test some games I was wondering about” to “Nvidia dictated the terms of the review,” don’t you think?

      Also, find me another site that did full frame-time data for nine games and two graphics demos and I’ll humor your complaint about “not many being done.”

        • benedict
        • 1 year ago

        His point still stands. Are you planning to redo the tests with retail hardware and public drivers when they are out?

          • kuraegomon
          • 1 year ago

          No, his point does _[i<]not[/i<]_ stand. He asked about the number of games tested, and Jeff responded by pointing out that TR's testing methods are more rigorous, and hence take more work to deliver. If you've spent more than a month or two on this site, you know that is nothing less than the (simple) truth. NDA lifted, Jeff busted his butt, and delivered what he could in a timely manner.

          • Jeff Kampman
          • 1 year ago

          I’m not planning to run down to Best Buy and drop $2000 on Founders Edition cards so I can double-check my numbers, no. That approach is no different than any other hardware review for any company we cover.

          As for the drivers in play, we had early access to the 411.63 release that went live today. I have no reason to expect that the pre-release build offers meaningfully different performance than the public release.

            • chuckula
            • 1 year ago

            [quote<]That approach is no different than any other hardware review for any company we cover. [/quote<] Remember Jeff, when accurate benchmark results from TR that are clearly high quality and that are consistent with the results of other reviews around the web don't make Nvidia look bad enough, then the only explanation that doesn't defy the laws of physics is that you must be part of a global anti-AMD conspiracy! #WhyDidAMDLetKampmanDriveTheFerrari???

            • jihadjoe
            • 1 year ago

            I think people just want these cards to be bad so they have another reason to hate on Nvidia for making them so expensive.

            • Srsly_Bro
            • 1 year ago

            Around 25-30% faster than 1080 Ti at 1440p and currently priced almost double? I think Nvidia is doing a fine job on its own.

          • derFunkenstein
          • 1 year ago

          That’s moronic.

        • Phaleron
        • 1 year ago

        Intent of question:

        Did Nvidia dictate game choice to put the 2080 and 2080Ti in the best possible light? Between what they pulled with branding of “gaming” add-in cards and what I’ve read of the NDA for this launch I was curious.

        Not an intent of the question:

        Complaining about the rigorous methods TR uses to review cards that benefit everyone. My apologies if it came across that way Jeff.

          • Voldenuit
          • 1 year ago

          [quote<]Did Nvidia dictate game choice to put the 2080 and 2080Ti in the best possible light?[/quote<]

          Games nvidia provided [url=http://videocardz.com/77983/nvidia-geforce-rtx-2080-ti-and-rtx-2080-official-performance-unveiled<]guidelines and recommended settings[/url<] to reviewers for:

          [list<]
          [*<] Battlefield 1 [/*<]
          [*<] F1 2018 [/*<]
          [*<] Call of Duty WWII [/*<]
          [*<] Hitman [/*<]
          [*<] Mass Effect: Andromeda [/*<]
          [*<] Middle Earth: Shadow of War [/*<]
          [*<] Star Wars: Battlefront II [/*<]
          [*<] Playerunknown's Battleground [/*<]
          [*<] Rainbow Six Siege [/*<]
          [*<] Shadow of the Tomb Raider [/*<]
          [*<] Witcher 3 [/*<]
          [*<] Wolfenstein II [/*<]
          [/list<]

          Games TR actually reviewed ('*' denotes game in nvidia recommended list):

          [list<]
          [*<] * Shadow of the Tomb Raider [/*<]
          [*<] Project Cars 2 [/*<]
          [*<] Hellblade: Senua's Sacrifice [/*<]
          [*<] Gears of War 4 [/*<]
          [*<] Far Cry 5 [/*<]
          [*<] Assassin's Creed Origins [/*<]
          [*<] Deus Ex: Mankind Divided [/*<]
          [*<] Watch Dogs 2 [/*<]
          [*<] * Wolfenstein II [/*<]
          [/list<]

          Conclusion: Short answer, no. Case closed.

            • Redocbew
            • 1 year ago

            Hah. For some reason I thought the list of games from Nvidia wouldn’t be made public. Clearly I didn’t follow the story too closely. I seem to be getting less and less interested in the drama of all this all the time.

            • Voldenuit
            • 1 year ago

            Technically, the list was under NDA and was not intended to be leaked to the public. It was a control set of benchmarks nvidia made so reviewers would have a reference frame for the expected performance of the cards.

            That way, if a game performed way worse than previously documented, they might be able to troubleshoot or pinpoint unexpected problems before the reviews went live.

            EDIT: You’ll notice that some of the benchmarks TR and other sites reported are actually /higher/ than nvidia’s own numbers, so I don’t think nvidia was trying to be underhanded or misleading at all.

            • Redocbew
            • 1 year ago

            I skimmed past the headlines when the story broke, because I saw everyone losing their minds over another megacorp doing something they probably shouldn’t be doing. That hardly ever happens, right?

            Sometimes when that happens the fallout from it can be genuinely dangerous, but this isn’t one of those times.

            • Redocbew
            • 1 year ago

            [quote<]You'll notice that some of the benchmarks TR and other sites reported are actually /higher/ than nvidia's own numbers, so I don't think nvidia was trying to be underhanded or misleading at all.[/quote<] Or they screwed up the test. 🙂 I'm only half kidding. Generally we view vendor supplied tests as unreliable because we often don't know the exact configuration of the testing environment. There's no way to rule out any shenanigans which may have been involved, but there's no way to rule out something boneheaded either. It works both ways.

            • auxy
            • 1 year ago

            Nvidia tested with a 7900X and Jeff tested with an 8086K so that’s why TR’s numbers are higher.

            • Redocbew
            • 1 year ago

            Yeah, that’s the other thing. Comparing test results over time doesn’t always work out so well. If the intention was for Nvidia to provide these results, and have them stand as some kind of guideline for time immemorial, then I’d categorize that under “screwing up the test”.

            I also don’t think it’s too much to ask to expect reviewers to know without being told when their test results are wiggy, but maybe that’s just me.

          • Jeff Kampman
          • 1 year ago

          No, this review went like any other. Nvidia sent me the cards, provided me access to the supporting materials (white papers, reviewer’s guides, drivers, etc.) and I did my thing and asked whatever questions I had and ultimately produced this piece.

          People need to stop taking the existence of reviewer’s guides as diktats from the manufacturer about what we can and can’t do with their products when writing about them. The point of those documents is to establish a frame of reference for what to expect (useful in the event of a hardware or software issue) and to prevent less technical/experienced outlets from testing in a pants-on-head fashion.

          I have never been told “you can’t do X” or “you can only do Y” by any company when it comes to product reviews. Companies are free to [i<]suggest[/i<] things we might test to establish points of interest for their products, but we're free to take or leave those suggestions.

            • K-L-Waster
            • 1 year ago

            Anyone wanna bet that whenever the next AMD card reviews occur we’ll get the *opposite* complaint?

            As in “AMD asked everyone to test using [x] list of games, you tested with [y] games, those games play bad on the new AMD card, you’re trying to sabotage them!!!11!1!”

            • Redocbew
            • 1 year ago

            [url<]https://www.youtube.com/watch?v=5mjUF2j9dgA[/url<]

        • Redocbew
        • 1 year ago

        Jeff, did Nvidia ask you not to include buffaloes in your review? How about potatoes?

        And cheese! We must not forget the cheese, but mostly it’s just the buffalo. The distinct lack of buffalo is leaving me very unsettled and a bit disturbed.

        • puppetworx
        • 1 year ago

        It’s not as if companies dictating how reviews are performed is unheard of these days. He asked a legitimate question which is easy to answer, I don’t understand the knee-jerk defensiveness, by you or others.

        Maybe you’re overtired from doing the review, but a simple “no” would suffice, be more helpful, and cost less energy. Just some unsolicited advice.

        • Chrispy_
        • 1 year ago

        Does DLSS add any perceptible lag?

        ‘Dumb’ AA usually applies a very basic, low-cost filter either on the current frame in the buffer, or on the current frame’s differences from the previous frame (temporal).

        If DLSS needs to analyse each frame and make decisions on it, I’m assuming there’s a little bit of a delay there – the real question is whether that delay is significant compared to the refresh rate and TFT pixel response.
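        For scale, the per-refresh budget any added DLSS delay would be competing against (plain arithmetic, nothing measured):

        [code<]
        # Per-refresh frame budget at common refresh rates; any post-process delay
        # has to fit comfortably inside these numbers to go unnoticed.
        for hz in (60, 120, 144, 240):
            print(f"{hz} Hz -> {1000.0 / hz:.2f} ms per refresh")
        # 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms, 240 Hz -> 4.17 ms
        [/code<]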

          • MathMan
          • 1 year ago

          It’s fine to look at DLSS as a filter, because that’s what almost all deep-learning networks are. It’s just that it has way more coefficients and non-linear elements. But there is usually no part where it needs to make decisions in an if/then/else kind of way.

          It’s also very unlikely that DLSS works at the frame level. There is no obvious benefit in doing so, since anti-aliasing is a very localized processing step anyway.

          If there is additional latency, I expect it to be simply due to the huge amount of calculations, not because of some kind of decision making.
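          As a toy illustration of the “it’s just a big local filter” framing, here’s a made-up 3x3 kernel applied tile by tile; it has nothing to do with DLSS’s real network, weights, or tiling:

          [code<]
          # Toy illustration of purely local, per-tile filtering. The 3x3 smoothing
          # kernel below is invented; it is not DLSS's network or weights.
          import numpy as np

          kernel = np.array([[0.0,   0.125, 0.0],
                             [0.125, 0.5,   0.125],
                             [0.0,   0.125, 0.0]])

          def filter_tile(tile):
              """Convolve one edge-padded tile with the kernel; purely local work."""
              padded = np.pad(tile, 1, mode="edge")
              out = np.zeros_like(tile, dtype=float)
              h, w = tile.shape
              for y in range(h):
                  for x in range(w):
                      out[y, x] = np.sum(padded[y:y+3, x:x+3] * kernel)
              return out

          frame = np.random.rand(64, 64)                   # stand-in for a rendered frame
          tiles = [frame[y:y+16, x:x+16] for y in range(0, 64, 16) for x in range(0, 64, 16)]
          filtered = [filter_tile(t) for t in tiles]       # each tile is processed independently
          [/code<]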

      • Redocbew
      • 1 year ago

      I wonder if it’s cool now to be critical of reviews. If it is, then you’re doing it wrong.

      Now get off my lawn!

      • chuckula
      • 1 year ago

      [quote<](see the [H] article on it)[/quote<] According to that article, the NDA for Turing won't expire for 5 more years. So literally every review on the Internet doesn't exist or violates the NDA. Or the conspiracy rants over there are just plain wrong... but that's far too rational a conclusion to come to.

      • derFunkenstein
      • 1 year ago

      Nvidia limited TR to just the games where the 2080Ti was faster than Vega, which is to say all games. Don’t be a doofus.

        • Srsly_Bro
        • 1 year ago

        Only read to the first comma. I’m outraged. What about primitive shaders?

        #secretsauce

        Vega fan ppl are already silly buying that card for anything other than mining.

          • DoomGuy64
          • 1 year ago

          Vega 56 is worth buying for 1440p FreeSync. That’s pretty much it, and features like primitive shaders require dev support, which AMD isn’t sponsoring developers to include the way Nvidia does. Nvidia has a lock on the high end, and they aren’t competing with mid-range at all, so Vega 56 is still a viable card. Even at that, Vega 56 is overpriced enough that it almost isn’t worth buying unless you get one on sale. I’m waiting for either a good sale or Navi.

          Comparing Vega to RTX is a joke. It doesn’t mean anything to mid-range gamers because we’re not gaming @ 4k, nor spending $1000+ on video cards or monitors. AMD has a lock on the mid-range, Nvidia has a lock on the high end. No competition, no comparison. Nvidia needs to first offer something in Vega’s price range that supports freesync.

          PS. This red team vs green team fanboy tribalism is juvenile and serves no purpose aside from distracting us from the realization that both sides are screwing the consumer by deliberately not competing with each other.

            • K-L-Waster
            • 1 year ago

            [quote<]PS. This red team vs green team fanboy tribalism is juvenile and serves no purpose....[/quote<] Agreed, so it would be really nice if you would stop.

            • DoomGuy64
            • 1 year ago

            Gaslight much? Because it seems like you have some sort of double-standard mental disorder that allows you to ignore all of the Nvidia tribalism posts, but when I call it out, OH NO, YOU CAN’T DO THAT. Shove it, hypocrite.

    • wizardz
    • 1 year ago

    while the performance numbers are really interesting, there is no chance in hell that i buy this tech today.

    i really really wanted to upgrade my GTX570 (yeah, no typo here) to a 2080 Ti, but at roughly 1.6k frozen dollars, nope. not happening.

    i’ll go with the best 1080ti i can get and call it a day.
    thanks Nvidia.

      • rnalsation
      • 1 year ago

      Is your CPU also of 2010 vintage? Because a $1000-$1200 MSRP GPU would be kind of a waste on that.

        • wizardz
        • 1 year ago

        i7-2600k @4.6ghz

        i was actually planning a full refresh. motherboard has lost a few usb ports and a lan card.

        i haven’t played anything other than factorio for a while.. but the kids are getting older, and i now have more free time.. RTX was looking great…

          • Redocbew
          • 1 year ago

          I wouldn’t get one of these if I was looking to upgrade now. If it were significantly faster than a 1080Ti right now, and my hardware was of the same age as yours then maybe. If ray tracing becomes a thing, then you’ll probably be looking to upgrade again anyway before it’s well supported.

            • jihadjoe
            • 1 year ago

            Yeah 7nm is where it’s gonna be at.

            Current 12nm tech doesn’t have enough transistor density to combine the necessary amount of traditional shaders, RT and Tensor cores without making the package very large and expensive like TU102.

          • Srsly_Bro
          • 1 year ago

          I play lots of factorio on my 2700k @ 4.5ghz. My current factory is using 11.5GW and my CPU is not well. I need to upgrade to play factorio at decent frame rates. I’m getting around 30fps and the game is sluggish with around 500k copper wires/min.

            • wizardz
            • 1 year ago

            my factory is at about 40-50GW? and yeah.. my CPU doesn’t like it either. 20UPS maybe? 30 if i zoom in enough.

      • DoomGuy64
      • 1 year ago

      Do you have one of the double memory models? Those are still somewhat capable of playing modern games, but the standard models are worthless and would be better off changed out for something like a used 960.

        • wizardz
        • 1 year ago

        humm. not sure about the double memory.
        [url<]https://www.evga.com/products/specs/gpu.aspx?pn=a3fca6c7-7b75-4c2d-a0d5-7a6d1228ea8a[/url<] this is the one i have.

          • rnalsation
          • 1 year ago

          1280MB
          You do not, he was talking about the 2560MB cards.

    • odizzido
    • 1 year ago

    When factoring in the price these cards don’t look so great :\

    • LocalCitizen
    • 1 year ago

    nvidia is using ati’s missteps to introduce the turing architecture. notice how there are no volta-based gaming cards? notice there was confusion mid-year on what was to follow pascal?

    nv has probably had the RT technology (hardware / software) ready for a bit now, but couldn’t include it in volta because of the big increase in die area, i.e. a big increase in cost. nv realized ati’s navi is going to be late, very late, which may give nvidia the time and price room to cover the extra cost. they took a gamble by making turing and skipping volta, and i think they won it big.

    by introducing rt now, they give a new technology to the game programmers, which further separates them from ati and whatever intel’s dgpu has in store. in a year’s time when 7nm is mature, their next chip will have normal-sized dies with normal costs. if ati / intel is competitive at that time, nv can cut prices to maintain market share. if ati / intel is not competitive, nv can keep higher prices but include more cuda / tensor / rt cores on slightly bigger dies to make high-performance cards.

    that’s just my speculation.

    it’s like nv and ati are playing a game of chess, and ati was so slow with navi that nv made 2 moves instead of 1. i think ati is at a severe disadvantage.

    we’ll see what navi can do. if it is competitive with 2170 / 3070, then at least they can grab a piece of the big mainstream market. but remember polaris (rx480) was only as good as 1060, in that case they are not going to make much money at all.

      • Srsly_Bro
      • 1 year ago

      [url<]https://www.techpowerup.com/reviews/ASUS/GeForce_RTX_2080_Ti_Strix_OC/5.html[/url<] I'm not trying to be rude, but the die size for the 2080 Ti is 754 mm². Your argument is founded upon ignorance. A few minutes of research invalidates your big-die theory when Turing also has a large die.

        • LocalCitizen
        • 1 year ago

        ??

        1080ti is only 470 mm^2

        turing has a big, expensive die. that’s my point. nvidia can do it now because ati is not competitive, which gives them the pricing room.

          • Srsly_Bro
          • 1 year ago

          But AMD wasn’t competitive before either, so big die then versus now is irrelevant. AMD wasn’t competing in the high end, and Nvidia has product cycles. With post-cryptocurrency high inventories, Nvidia waited and released Turing, which, like Volta, has a large die.

            • LocalCitizen
            • 1 year ago

            true that ati was not competitive before, but it was competitive enough that if nvidia had introduced a $1200 1080 ti, it would have flopped.

            but ati is in very bad shape now. polaris in 2016, vega in 2017, and completely blank in 2018. i’m saying that if ati were more competitive now, turing might not have happened now. nv could have just released volta gaming cards, lower priced but lower cost, and business as usual.

            but then what? navi, coming next year, should not be underestimated. and intel is also coming. it may not perform right away, but intel has a lot of resources to make it happen sooner or later.

            by making rt a feature for game developers, nvidia has an extra competitive edge for the future.

            • Srsly_Bro
            • 1 year ago

            A $1200 Volta with 2080ti performance wouldn’t have. The Titan V still sold.

      • Krogoth
      • 1 year ago

      Volta and Turing are nearly the same thing. Turing is really just Volta adapted to graphical needs instead of general compute, and it uses a different memory controller (GDDR6 instead of HBM2).

        • LocalCitizen
        • 1 year ago

        volta doesn’t have RT cores. if nv had done the volta gaming cards on 12nm instead of turing, then the cards would’ve been a lot cheaper. but nv might not get another chance to introduce RT. it would have to compete with intel and ati next year with one fewer feature.

          • NoOne ButMe
          • 1 year ago

          But, Volta does have:
          Split INT/FPU units
          64C SM
          Tensor cores

          There is more….
          Just look at the Whitepapers!

          (note: there are differences, but largely it is the same. Like Pascal v. Maxwell versus Kepler v. Maxwell, or Pascal v. Volta)

          [url<]http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf[/url<]
          [url<]https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf[/url<]

          SMs, for example. Volta (GV100):
          ● Six GPCs
          ● Each GPC has seven TPCs (each including two SMs), for 14 SMs per GPC
          ● 84 Volta SMs in total
          ● Each SM has 64 FP32 cores, 64 INT32 cores, 32 FP64 cores, 8 Tensor Cores, and four texture units

          Now Turing:
          [quote<]The TU102 GPU includes six Graphics Processing Clusters (GPCs), 36 Texture Processing Clusters (TPCs), and 72 Streaming Multiprocessors (SMs). (See Figure 2 for an illustration of the TU102 full GPU with 72 SM units.) Each GPC includes a dedicated raster engine and six TPCs, with each TPC including two SMs. Each SM contains 64 CUDA Cores, eight Tensor Cores, a 256 KB register file, four texture units, and 96 KB of L1/shared memory which can be configured for various capacities depending on the compute or graphics workloads.[/quote<]

          The Volta cache is also "together" like the Turing press slides advertised (Volta has more cache to split, 128 KB vs. 96 KB for Turing, which makes sense for an HPC part from my understanding):
          [quote<]The new combined L1 data cache and shared memory subsystem of the Volta SM significantly improves performance while also simplifying programming and reducing the tuning required to attain at or near-peak application performance.[/quote<]

            • LocalCitizen
            • 1 year ago

            yes. turing is just like volta + rt core

            oh and gddr6 on rtx 20xx rather than hbm2 on the titan v, but gddr6 on the 2080 ti gives 616 GB/s, only slightly less than 652 GB/s. pretty good.

            rt cores (hybrid ray tracing) are a feature meant for the gaming market (and hollywood rendering farms), where nvidia intends to keep its dominance.
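            for what it’s worth, both of those bandwidth figures fall straight out of bus width times per-pin rate:

            [code<]
            # Raw memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8.
            def bandwidth_gb_s(bus_width_bits, gbps_per_pin):
                return bus_width_bits * gbps_per_pin / 8

            print(bandwidth_gb_s(352, 14.0))   # RTX 2080 Ti: 352-bit GDDR6 at 14 Gbps -> 616 GB/s
            print(bandwidth_gb_s(3072, 1.7))   # Titan V: 3072-bit HBM2 at ~1.7 Gbps -> ~652.8 GB/s
            [/code<]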

            • NoOne ButMe
            • 1 year ago

            Maybe this will be more clear:
            Turing was planned like this from the start, to be the consumer side of Volta.

            We know that Nvidia had cards ready a while back, and the mining boom and other things set them back in launching.

            I would speculate that this was always planned, because the only other option would be that Nvidia canceled a whole lineup of Volta cards in early-to-mid 2017 at the latest.

            I find that unlikely. I believe Nvidia pulled the trigger 3-4 years ago on moving forward with RT in Turing-Volta.

            • Krogoth
            • 1 year ago

            RTX mode was simply Nvidia finding a use for the tensor cores for graphical needs.

            • NTMBK
            • 1 year ago

            Using the tensor cores for filtering and upsampling isn’t just restricted to raytracing.

      • NoOne ButMe
      • 1 year ago

      Eh, I have a sneaking suspicion that there is a reason why TU100 does not exist. Well, not officially. Because GV100 is TU100.

      They’re the same architecture when all is said and done.

      Just in order to fit in 1/2 rate FP64, Nvidia had to make compromises with GV100, due to, well…. die size.

      Cause for the big things outside of Raytracing… well, it is very nearly the same.

      Should also add that neither HPC nor ML/DL cares about ray tracing, and hence putting it on a card aimed at HPC and ML/DL would make little sense.

        • Krogoth
        • 1 year ago

        ^^^^^^

        Bingo. Turing is just a “tweaked” Volta design, because Volta originally wasn’t made with graphics in mind.

        Nvidia is starting to diverge their general-compute and graphics chip designs. Volta is the start of the dedicated general-compute family, while Turing’s descendants will continue the focus on graphics with general compute being secondary.

    • dragontamer5788
    • 1 year ago

    Ha ha! I found a typo.

    [quote<]The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.[/quote<] The "forums" link is weird. It is clearly copy/pasted from the 1070 Ti review and goes there instead.

      • derFunkenstein
      • 1 year ago

      It’s not a typo, it’s an easter egg.

      edit: typos. Is that irony or coincidence?

    • CScottG
    • 1 year ago

    Where is my fakakta VALUE METRIC GRAPH?

    *grins*

    • dragontamer5788
    • 1 year ago

    Since these cards have Tensor Cores… can you guys get a Tensorflow benchmark done?

    I’m sure that Tensorflow is a niche within a niche, but it’s one of the serious reasons IMO someone should buy this card. The previous card with tensor cores was $3000+ (the Titan V), so getting tensor cores for that sweet FP16 matrix multiplication under $1000 (in the case of the 2070 or 2080) seems like a big deal to me.

      • ptsant
      • 1 year ago

      I am curious about this. Would they leave the deep learning hardware uncrippled in a consumer card? That would be a first.

      I really think there could be some sort of driver obstacle (not physical, on the actual chip) to running these at full speed. Otherwise, these will sell like hotcakes for AI research…

    • techguy
    • 1 year ago

    Until I saw the frametime metrics my opinion of the 2080 Ti was that it was rather unimpressive in today’s titles, and that it wouldn’t really shine until next-gen games start to come out. As the mantra goes though, average FPS doesn’t tell the whole story. Time spent beyond 16.7ms with 2080 Ti vs. 1080 Ti is usually an order of magnitude difference!

    I’m still not sold on 1st-gen Turing SKUs, though; those prices are outrageous, and the lack of additional VRAM compared to Pascal means it’s not going to be a definite win in every workload for me. Guess I’ll wait to see if any partner cards hit the “MSRP”.

    • Techonomics
    • 1 year ago

    Hey Jeff,

    FWIW, it might be more useful to include a 98-FPS benchmark (perhaps in lieu of 11.1 ms/90 FPS) in your “Time Spent Beyond” graphs going forward, since 98 Hz is the maximum refresh rate HDMI 2.0b’s bandwidth allows for 4:4:4 chroma in HDR.

    Beyond 98 Hz at 4K, subpixel chroma needs to be reduced to 4:2:2 or 4:2:0 over HDMI 2.0b (an important caveat with the new “120 Hz” 4K monitors just released), which defeats the purpose of HDR gaming in my opinion, especially on a desktop where 4:4:4 chroma is king.

    • Voldenuit
    • 1 year ago

    Could we please have the tested resolution in the chart titles so we don’t have to hunt through* the accompanying article text to make sense of what we’re seeing?

    * I mean, sometimes the tested resolution is mentioned in the article text above the chart, sometimes it’s mentioned below the chart.

      • DancinJack
      • 1 year ago

      FWIW, this particular article did say 4K/60Hz for everything unless otherwise stated.

        • Voldenuit
        • 1 year ago

        Thanks for that.

        I have a feeling that when ray-tracing becomes available to test in actual games, people will be interested in RT performance at 1080p, 1440p and 2160p resolutions.

    • Krogoth
    • 1 year ago

    Ceiling Cat and Lime Cat find your lack of cat pictures [b<]cat[/b<]astrophic. 😉

    • drfish
    • 1 year ago

    So, three years later, I can replace my 980 Ti with something that unequivocally doubles its performance for double its original price. Smooth.

      • NarwhaleAu
      • 1 year ago

      Exactly. Both the 1080 release and the 2080 release have just scaled price vs. performance. They have moved performance forward, but given us nothing in the way of better value. My 960 at $210 still looks like a decent buy – the 1060 doubles its performance, but is almost double its price.

        • kuraegomon
        • 1 year ago

        And the blame for that goes straight to AMD’s door. They made the right choice by prioritizing Ryzen development over RTG, but that choice is still what got us to this point. Also, they botched their move to HBM (too early, poor/no return on technological investment).

          • odizzido
          • 1 year ago

          I understand why you say that, but the blame is 100% on Nvidia. They’re the ones setting the prices. AMD simply hasn’t forced them to lower prices, is all.

            • kuraegomon
            • 1 year ago

            That’s how markets work. And why effective anti-trust enforcement matters 😉 Nvidia is charging what they think the market will bear; if all those 2080 Ti units don’t get sold, the price will drop. My guess is that they’ll get sold, and that the price will only drop if yields get high enough that the supply-demand calculation suggests a price drop.

          • Krogoth
          • 1 year ago

          HBM2 has returned on its investment for professional-tier SKUs. The problem is that bandwidth isn’t a major concern with gaming-tier SKUs, especially in the mid-range/value segments.

            • kuraegomon
            • 1 year ago

            Fair enough – my implicit assumption was that we were only discussing the gaming segment. I believe my statement holds true with that caveat established.

      • gerryg
      • 1 year ago

      Maybe it’s time to upgrade from my old Voodoo card, these new Tomb Raider games look like they improved the graphics a little bit…

        • meerkt
        • 1 year ago

        You should check out the Verite V2000.

          • ptsant
          • 1 year ago

          Great product. I got one dirt cheap back then and overclocked it by 50% by slapping a heatsink on it. I believe I increased the frequency of the chip (weren’t called GPUs back then) from 40 to 60MHz. Beat that nVidia.

        • Klimax
        • 1 year ago

        Intel’s i740 looks good too…

      • DPete27
      • 1 year ago

      Why should they alter the current price/perf curve?
      From the looks of the value scatter plot, Nvidia placed these cards right where they should be.

      And for that matter, the crypto craze preserved the launch-day price/perf slope of the last-gen cards and didn’t require AMD/Nvidia to offer price cuts, so they’re free to just go up, up, up in price from where we were.

        • drfish
        • 1 year ago

        I’m not saying they should or that I expected them to. It still kind of sucks, though. This is coming from someone who is 99% sure he’ll be buying one as soon as hybrid models are out.

      • Mr Bill
      • 1 year ago

      No worries. If that flat cost/performance has corresponding profit/performance, then they merely doubled their profit, and that’s what counts. Any economies of scaling, well…

      • anotherengineer
      • 1 year ago

      Remember when you could double your GPU performance in a year for the same price?

      The Radeon HD4850 remembers…………..

        • Voldenuit
        • 1 year ago

        That’s partly because the HD 2900 and HD 3870 were so bad…

        The 4850 and 4870 were great cards, though; their perf/$ was so good that nvidia was forced to slash GTX 280 and GTX 260 prices by $100+ overnight.

          • Srsly_Bro
          • 1 year ago

          I think that was the 5850/5870, iirc. I had an HD 4850 and SLI GTX 260 C216s. They were both great cards.

            • Voldenuit
            • 1 year ago

            Ah, it was prices on the GeForce 9800 that got slashed when the 4850/4870 came out.

            Ironically, I skipped over the 5850/5870 because AMD, cocky on their previous success, raised the MSRP of the cards by $100-150 over the last gen.

    • NarwhaleAu
    • 1 year ago

    I would like to upgrade my 960, which cost me a little over $200, but these prices are just crazy for anyone on a budget.

    I can’t see how the average gamer can pay $800 to $1,200 for a graphics card. Even if they release a 2060 for $400 and a 2070 for $600, that is still crazy amounts of money. I realize that for many, they will be able to spend the money – and I’m happy for them. But for me, it isn’t even a consideration. Perhaps with 35% margins, someone like Intel will jump in and compete.

    Edit: a lot of comments below seem to be missing that if they are charging $800 for a 2080, they aren’t going to have any qualms about charging $600 for a 2070 and $400 for a 2060. Maybe the 2050 is the new entry-level card at $200, but I can’t imagine it’s going to be much of a performance jump over the 1060. Maybe 10%, like we are seeing for the 2080 over the 1080 Ti. In short, performance per dollar hasn’t changed and isn’t likely to move with these new cards. Very disappointing.

      • K-L-Waster
      • 1 year ago

      Wait and see if any 10xx cards get put on firesale prices.

        • rnalsation
        • 1 year ago

        Last month I saw a GTX 1080 going for $430 base price, no rebate. I’m sure if you set up a Slickdeals alert you could get one cheap.

      • DancinJack
      • 1 year ago

      The “average gamer” doesn’t buy a XX8X card though. The “average gamer” buys a XX5X or XX6X card from Nvidia.

        • NarwhaleAu
        • 1 year ago

        See my edited point above. I wasn’t saying the average gamer buys an 80 series card. I’m saying with the 2080 being so expensive, it is likely the 2070, 2060 and 2050 are going to fill in below it, at higher price points. That means no real movement in performance / $.

      • Laykun
      • 1 year ago

      These are high-end cards, the average gamer isn’t supposed to be buying these, they are for the enthusiast segment of the market. Admittedly a lot of tech companies seem to be taking the enthusiast segment of the market for a bit of a ride lately, $1200 GPUs, $1400 iPhones, ugh -_-.

      • Kretschmer
      • 1 year ago

      This is initial pricing for the absolute bleeding edge.

      • Chrispy_
      • 1 year ago

      Yep. $800 for a GPU alone, just buy a PS4 Pro, keep the change, and sleep with a clear conscience knowing that it’s the platform that game developers are putting (by far) the most effort into tuning their game engines for.

      Nvidia have priced themselves out of the mainstream.
      AMD failed to even show their face.
      Good job, PCMR, Good job.
      /golfclap

        • Voldenuit
        • 1 year ago

        With Sony’s shenanigans re: cross-play, I am not giving them a single red cent for any of their products until they change course to be less anti-consumer.

        EDIT: And this from someone with a Vita and Sony headphones, who used to have an Xperia phone.

          • Chrispy_
          • 1 year ago

          Oh, I know Sony are evil asshats, but don’t pretend for ONE MINUTE that Nvidia and Microsoft aren’t even bigger asshats.

        • NarwhaleAu
        • 1 year ago

        Yeah. Unless they cut prices in the upcoming months, I think I’ll be looking for a 1060 on sale. Or maybe just wait until a 3080 and a true die shrink. Ray tracing will be a lot more fully fledged at that point as well.

        • Kretschmer
        • 1 year ago

        I mean, there are plenty of affordable Pascal and Vega cards available. No one needs to buy the absolute bleeding edge.

      • Ifalna
      • 1 year ago

      The average gamer still plays at 1080p and everything past a 1070 is pretty useless at that resolution.

      I get your point, though. It sucks that the prices explode but that’s what you get in a market w/o competition and ever increasing R&D costs.

      We’re getting closer and closer to the limit of what we can do with silicon technology; don’t expect major savings in generations to come.

    • juampa_valve_rde
    • 1 year ago

    Nice one, Jen-Hsun (and Jeff, for the review in a frame-time window!). DLSS is the bread-and-butter feature. I wasn’t expecting that much performance from these chips. The price is also brutal; the RTX 2060 can’t come quickly enough.

    • Neutronbeam
    • 1 year ago

    Hmmm….tricky decision. US$1440+ for an iPhone OR US$1200 for an RTX 2080 Ti Founders Edition?

    Solution: Skip both, but still consider raiding my chief male heir’s college fund so I can play Mechwarrior 5 next year the way it’s meant to be played. He’s a gamer, so he understands priorities.

    • maroon1
    • 1 year ago

    If DLSS gives a 35-40% boost in performance for only a 2-5% loss in image quality, then that’s impressive, because you get better quality per FPS than at native resolution.

    I’d like to see a comparison between DLSS and a native-resolution setting that gives the same FPS (maybe something like 80% of 2160p with the resolution scaler), and then see which one has better image quality at the same FPS.

      • NoOne ButMe
      • 1 year ago

      I am eagerly awaiting DigitalFoundry to go over some DLSS games myself!

      • exilon
      • 1 year ago

      From what I’ve seen, DLSS is like a blurrier TAA, which is already lampooned for blurring.

      [url<]https://www.kitguru.net/components/graphic-cards/dominic-moass/nvidia-rtx-2080-ti-founders-edition-11gb-review/13/[/url<] After all, it really is just fancy upsampling...

      • psuedonymous
      • 1 year ago

      [url=https://bit-tech.net/reviews/tech/graphics/nvidia-geforce-rtx-2080-ti-and-rtx-2080-founders-edition-reviews/11/<]Bit-tech have some UHD comparison shots for TAA & DLSS[/url<].

    • tfp
    • 1 year ago

    Good read, though I did miss seeing the scatter graphs at the end.

      • thedosbox
      • 1 year ago

      Seconded – I’d like to see the price/performance graphs. If only to laugh at them 😀

    • USAFTW
    • 1 year ago

    It’s interesting that the price for the non-Ti variant has been raised above the previous gen Ti variant. Also interesting that for the $100 increase in price vs. a 1080 Ti, you get either exactly the same performance or just a smidgen faster.
    Thanks, Nvidia.
    Also, Thanks, AMD (for dropping the ball).

      • Krogoth
      • 1 year ago

      AMD RTG has given up since Tahiti failed to overtake Kepler and Hawaii did little to stop Maxwell from dominating the scene.

      Why bother wasting precious capital and time on a market that doesn’t care about your products even if they are technically superior and faster than the competition? Said market is about to become marginalized in the following decade.

    • Ruiner
    • 1 year ago

    Impressive, I guess.
    The picture of the Countach on my wall in HS was at least nice to look at.

    The 12 people who play Hellblade are grateful.

    • Anovoca
    • 1 year ago

    I think from now on I am going to call these cards by the following names:

    2080 ti[s<]tan[/s<]
    2080 [s<]ti[/s<]
    20[s<]80[/s<]70
    etc....

    • Nictron
    • 1 year ago

    So is the 2080 Ti the card for constant 144 FPS @1440p gaming on Ultra?

      • Anovoca
      • 1 year ago

      No, you are thinking of Volta Titan. The 2080 Ti is the card for suckers.

        • Krogoth
        • 1 year ago

        What are you smoking?

        Volta Titans are going for $3K while being bested at gaming by a 2080 Ti that runs $1.2K-$1.3K. The 2080 Ti is the fastest gaming SKU on the market at this time.

          • Anovoca
          • 1 year ago

          Sorry, Titan XP. I have my titans mixed up. I forgot to consult my flowchart to keep them straight.

            • Krogoth
            • 1 year ago

            2080Ti is faster at gaming than Titan XP for roughly the same price point.

            • Anovoca
            • 1 year ago

            Only that one of those cards’ prices is likely to fall while the other is likely to inflate. Assuming the market reacts to this SKU launch like all the others, which isn’t necessarily a given.

      • Krogoth
      • 1 year ago

      It will easily go beyond that threshold provided that your CPU isn’t holding it back.

      1080Ti and 2080 are already there.

      • synthtel2
      • 1 year ago

      Yes, but good luck finding a CPU that’ll match that 144 consistently. If you’re dropping $1200 on a graphics card, at least you can probably also afford a delidded 8700K and custom loop.

    • Kretschmer
    • 1 year ago

    Man, my launch 1080Ti really held up well.

      • K-L-Waster
      • 1 year ago

      RTX 2080 Ti — winner in the “Most Desirable Unattainable” category.

    • firewired
    • 1 year ago

    GeForce RTX is just GeForce 256 all over again. The older folks will get my meaning, you young’uns will have to look through internet archives and do some reading to get it.

    Nice article TR, good read.

      • Krogoth
      • 1 year ago

      Nah, it is much closer to the GeForce 3.

      They both spearheaded new rendering techniques that weren’t relevant until a few years later, but weren’t that much faster than their predecessors at the content of the time at launch.

        • NTMBK
        • 1 year ago

        [quote<]weren't that much faster than their predecessors[/quote<] Did you miss the part where the 2080ti was considerably faster than the 1080ti?

          • Krogoth
          • 1 year ago

          Outside of niche situations (HDR/4K and FP16-dependent workloads), it really isn’t that much faster than the 1080 Ti.

            • kuraegomon
            • 1 year ago

            4K is getting less niche by the day 😉

            • Redocbew
            • 1 year ago

            Yeah, just like OLEDs that don’t color shift and die. They’ve been getting less and less niche for at least the past 20 years.

            Granted, that’s not really the same thing. It’s just kind of silly to quantify the popularity of 4k beyond “not so much”.

            • VincentHanna
            • 1 year ago

            4k/HDR might be niche to somebody who spends $100 on a GPU.

            I don’t think you can call it niche when we are discussing an $800 gpu.

            • Krogoth
            • 1 year ago

            It is a niche since it makes up less than 1-2% of the total market share.

            4K/HDR-capable monitors cost way more than a single 2080 Ti.

            • Kretschmer
            • 1 year ago

            4K is still niche for gamers. Much better to run at 1440P and 120+ FPS.

          • YellaChicken
          • 1 year ago

          I think he noticed the part where the 1080ti is actually in competition with the 2080, not the 2080ti. And there’s barely a hair between them in many of these games, both in average FPS and frame times.

            • Andrew Lauritzen
            • 1 year ago

            Right. They clearly just renamed the lineup to make it seem like things were getting faster, but a comparison of pricing and die sizes shows you most of those fancy new transistors likely went to the ML and RT hardware and such.

            Nothing wrong with that, but don’t be misled.

          • Audacity
          • 1 year ago

          If the 2080 Ti had assumed the 1080 Ti’s launch price, then that would be a good argument. Don’t feel bad, NTMBK, some people just aren’t good with numbers.

        • mad_one
        • 1 year ago

        That’s mostly true for the GeForce 256 as well. T&L wasn’t used much until the GeForce 3 came out. It was an immensely important step for games (though it should have been programmable from the start; I doubt that would have added too many transistors), but adding massive amounts of polygons to a game that also has to run on old cards was hard, and the engines needed work.

        And the more common SDR model, at least, was only faster than the TNT2 in 16-bit rendering.

        The GeForce 3 was maybe even worse, since the 1.x shaders proved quite unimpressive and only made old effects a bit more efficient by doing them in fewer passes. It really didn’t take too long from the GF3 to the Radeon 9700 and that was a whole new world.

    • DPete27
    • 1 year ago

    Power consumption?
    Noise?

      • Krogoth
      • 1 year ago

      Jeff will most likely do a follow-up article on it. He was barely able to get the regular article out after the NDA lift.

      • DancinJack
      • 1 year ago

      Not that I don’t want you to read Jeff’s follow-ups, but PCPer had decent numbers if you want to take a look.

        • DPete27
        • 1 year ago

        Looks like little to no improvement in power efficiency for non-RT and non-DLSS testing between the RTX 2080 and the GTX 1080 Ti.

        DLSS has promise…IF Nvidia keeps up with its whitelist. Historically, features which require whitelisted game support (a.k.a. not universal) are doomed to fail unless they’re almost entirely automated…in which case they don’t really need a whitelist. There are MANY more games out there than a driver team cares to enable feature support for, and new games come out every day. Not to mention that this is another entry in the long line of proprietary AA techniques introduced by Nvidia.

        RT…time will tell. As many other discussions have concluded, until consoles have similar capabilities, RT is going to have a hard time gaining support. Similar to PhysX.

          • kuraegomon
          • 1 year ago

          I’m fairly confident that triple-A game developers are going to be clamoring for DLSS support (well, assuming real-world image quality assessments track fairly close to what the demos are showing). Run your game through Nvidia’s training program and see halo-product users get ~25% performance gains at 4K? As long as the cost is minimal or free, why wouldn’t you?

          Also, in the case of DLSS even a very high degree of automation will still require a whitelist. By definition, if AI training hasn’t been done for a specific game, said game _[i<]cannot[/i<]_ be supported by DLSS. If a dev can still submit their game with 3 mouse-clicks (exaggeration for effect here), they still have to click those three buttons...

    • DPete27
    • 1 year ago

    What’s so confusing about the naming convention? The RTX2080 = GTX1080Ti. That’s par for the course in all Nvidia launches I can remember.
    N = N-1(Ti)

    Sure, the pricing scale is messed up with the RTX 20xx, but they’re brand new, and IF ray tracing becomes more standard in future games, the RTX 20xx cards will pull away from the 10 series cards.
    Not to mention that there’s literally no competition in this performance tier except for Nvidia’s own 10 series cards…..

      • RandomGamer342
      • 1 year ago

      The pricing *is* the issue with the new naming schemes. x70 = (x-1)80 is normal, but the x70 would also usually be released at (x-1)70 prices. The naming scheme obscures that performance hasn’t changed this gen if you compare price-equivalent cards.

      By the time ray tracing becomes standard, these cards will also be obsolete. Most likely, these cards will never run ray tracing at playable framerates. See the “cinematic” framerates in the Star Wars benchmark mentioned in TR’s conclusion. If the games could run at playable framerates, Nvidia’s marketing department wouldn’t let you hear the end of it.

      There’s really no reason to get the new cards if you can get the equivalent older ones for cheaper.

      • NoOne ButMe
      • 1 year ago

      You have the 2080 FE (dual fan) cards sometimes losing out to the 1080ti FE (blower) card in some results.

      Barely beating it in overall results. Losing to AIB cards.

      Overclocking seems to give you less gain than OCing a 1080ti.

      The 2080 and 2080 Ti are good cards, but with bad (current, street) pricing.

    • Waco
    • 1 year ago

    The scaling of MSRP with the performance increase over the last generation is concerning. I can’t remember the last time a new generation didn’t improve the performance-per-dollar ratio at least a little.

      • chuckula
      • 1 year ago

      Assuming Navi’s worthwhile, I think there’s going to be a “miraculous” price drop sometime in 2019.

        • ptsant
        • 1 year ago

        Several rumors point to Navi being a mid-range chip, at least vs 20x0. The main reasoning is that the development of Navi has mostly been driven by the next console generation, not high-end PC graphics. So, many expect it to be relatively cheap and competitive with the 2060 (at best the 2070).

        We’ll see how it goes, but I really wouldn’t bet on AMD for the next round of GPUs.

          • benedict
          • 1 year ago

          Sounds perfect for all of us who actually care how much a product costs. I’ve always bought midrange cards, and nVidia doesn’t have anything interesting there.

      • DPete27
      • 1 year ago

      This also may be the first generation where Nvidia is certain AMD has zero competition for them at this level of performance, so Nvidia can literally do whatever they want.

      Say what you will about Intel vs AMD, but clearly Nvidia is worse in this regard.

        • kuraegomon
        • 1 year ago

        I’m not so sure that’s true in this case. Nvidia is clearly pushing the limits of their current process to ship this generation, and is likely taking a bath on yields to do so. Note the 2080Ti availability (or rather, lack thereof). Make no mistake, they’re charging what they think the market will bear in the absence of any competition, but I also think that they’re more supply-constrained (and hence making lower margins) than most people are taking into account.

          • ptsant
          • 1 year ago

          They are probably making a load of money but are, at the same time, definitely supply-constrained. There is no benefit to NOT flooding the market with 2080 Tis. Still, I don’t think cost is the main driver of the price. They most likely widened their margins, not the opposite.

    • Bomber
    • 1 year ago

    I’m moderately impressed with the performance, but until RT is supported next month I’ll reserve final judgement. Running on a 980ti the upgrade path is reasonable…but I’m still on the “A 1080ti is the bargain here” stance as prices drop.

    • Krogoth
    • 1 year ago

    I give it -2 Krogoths. They are technically impressive pieces of silicon that will spearhead the transition from rasterization to actual real-time ray tracing.

    It is really no surprise why Nvidia delayed them for so long. They want to clear out their existing Pascal stock before lesser Turing SKUs come out (likely on TSMC’s upcoming 7-nm process).

    If you are on Pascal, I would suggest waiting for the 7-nm refresh to come out (a repeat of the GeForce 4 launch).

    The whole thing is eerily similar to the GeForce 3 launch.

      • guardianl
      • 1 year ago

      As someone who had an original GeForce 3, I think this is a pretty good comparison. The GeForce 3’s shader capabilities were really only useful for developers and for validating the shader model. Likewise, the ray-tracing performance of Turing is enough for some demos, but it’s not fast enough for even consistent 1080p usage. Useful to help get developers ready, but not a good deal for gamers.

      The ray-tracing transition is happening, but it’s going to be a very slow process. It also doesn’t align well with the mobile world’s priorities (power consumption), so we’ll probably have both methods employed for another decade plus.

      So that leaves us with an interesting card for developers and enthusiasts, but a pretty poor value for today’s games.

      • psuedonymous
      • 1 year ago

      ::EDIT:: On reflection, that should probably not be public, sorry.

      • Voldenuit
      • 1 year ago

        [quote<]I give it -2 Krogoths. [/quote<] Wait, is that like a double negative? Waiting for 30xx/21xx as well, but it’s always been a good idea to skip every other generation with GPUs anyway. People with 7xx/9xx cards might be looking at Turing - I’d still advise waiting a little bit, as the FEs are a $100-200 markup over the ‘MSRP’. AIBs will probably hew closer to FE pricing if they can get away with it for a while, though, until supply eases up or Vega+ on 7nm comes out (assuming it’s not limited to workstation-only cards).

        • derFunkenstein
        • 1 year ago

        I think he’s saying he’s impressed, and then goes on to explain why he’s not. Pretty typical for this AI algorithm.

          • Redocbew
          • 1 year ago

          Indeed. We call it artificial intelligence for a reason.

    • NTMBK
    • 1 year ago

    Damn, those are some nice improvements. The rejigged shader cores must have made a big difference. Can’t wait to see this thing crunching CUDA workloads.

      • Krogoth
      • 1 year ago

      It is because Nvidia brought FP16 performance back instead of neglecting it, and this is most apparent in titles like Doom 2016 and Wolfenstein II: The New Colossus.

      I suspect it is because they are trying to make their chips more attractive to miners though.

    • lycium
    • 1 year ago

    Would love to know what score this GPU gets on IndigoBench 😀

    Thanks for the great review!

    • jihadjoe
    • 1 year ago

    Are you gonna do image quality comparisons between DLSS and TAA?

      • Jeff Kampman
      • 1 year ago

      This is a monstrously hairy issue that deserves an article on its own but the short take is that if you are looking at moving pictures, the two are 99.9% indistinguishable, IMO, at least in the two demos we have to work with. It’s really amazing stuff.

        • NoOne ButMe
        • 1 year ago

        With my understanding of how DLSS is trained, I think that demos kinda break it:
        1. DLSS is trained on Nvidia’s servers
        2. DLSS is trained on sections of a game to work well for that game
        3. DLSS on card uses the training done on the network

        So, for a demo, it can be trained on a situation where almost everything happens in exactly the same way. There is little or no variance, which gives a best case for image quality.

        The performance improvement is probably in the ballpark of what you’d see in real games, but the image quality may not be.

        This is all my speculation.
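
        For the curious, here is a minimal sketch of that per-game train-then-infer split, using a generic PyTorch super-resolution toy model. This illustrates the workflow described above, not Nvidia’s actual DLSS pipeline; the tiny network, tensor shapes, and training loop are assumptions made up for the example.

        # Minimal sketch (NOT Nvidia's DLSS): fit a tiny per-game upscaler offline,
        # then reuse only the frozen weights at "runtime". Shapes and data are made up.
        import torch
        import torch.nn as nn

        class TinyUpscaler(nn.Module):
            def __init__(self, scale=2):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
                    nn.PixelShuffle(scale),  # rearranges channels into a 2x larger image
                )

            def forward(self, x):
                return self.body(x)

        def train_per_game(lo_frames, hi_frames, epochs=10):
            """Offline step: fit the upscaler to captures from one specific game."""
            model = TinyUpscaler()
            opt = torch.optim.Adam(model.parameters(), lr=1e-3)
            loss_fn = nn.L1Loss()
            for _ in range(epochs):
                for lo, hi in zip(lo_frames, hi_frames):
                    opt.zero_grad()
                    loss_fn(model(lo), hi).backward()
                    opt.step()
            return model.state_dict()  # roughly what a driver/profile update would ship

        def upscale_at_runtime(weights, lo_frame):
            """Runtime step: no training on the card, just inference with shipped weights."""
            model = TinyUpscaler()
            model.load_state_dict(weights)
            model.eval()
            with torch.no_grad():
                return model(lo_frame)

        # Random tensors stand in for captured low-res/high-res frame pairs (N, C, H, W).
        lo = [torch.rand(1, 3, 64, 64) for _ in range(4)]
        hi = [torch.rand(1, 3, 128, 128) for _ in range(4)]
        weights = train_per_game(lo, hi, epochs=1)
        print(upscale_at_runtime(weights, lo[0]).shape)  # torch.Size([1, 3, 128, 128])

        A demo that replays the same camera path over and over is about the friendliest possible input for a model trained this way, which is the commenter’s point.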

        • jensend
        • 1 year ago

        Well, but people may not be able to tell the difference in moving pictures between 4K and something that was actually rendered at 1440p and intelligently upscaled. Yet we wouldn’t want to be comparing cards that did the latter to cards that did the former.

        Image quality questions become problematic as we approach the limits of human visual acuity.

        • Andrew Lauritzen
        • 1 year ago

        I’m actually mostly concerned about aliasing in motion TBH. From the 4K YouTube videos it definitely looks pretty impressive and usually a better use of performance than “native 4K” rendering. That said, there are certainly areas – particularly in Infiltrator – where you see additional swimming/aliasing in the DLSS version that isn’t present in the 4K TAA version, which is not altogether unexpected, but worth noting.

        I agree with the notion that native 4k rendering often looks nearly indistinguishable from lower resolutions + smart upsampling in the first place, but DLSS certainly looks impressively sharper than even TAA @ native 4k, which I think is worth noting! Of course not all TAA implementations are created equal and things can be tuned for different output sharpness, but still.

        Note that I still disagree with how this is being marketed by NVIDIA, and even the name “supersampling” really, but the tech itself is interesting.
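
        As a rough reference for why TAA trades some softness and “swimming” against raw aliasing, here is a minimal numpy sketch of the exponential history accumulation at the core of most TAA implementations. It is a toy under stated assumptions: real implementations add motion-vector reprojection and neighborhood clamping, which are omitted here.

        # Toy TAA-style temporal accumulation (no reprojection or history clamping,
        # which real implementations need to avoid ghosting on moving content).
        import numpy as np

        def taa_accumulate(frames, alpha=0.1):
            """Blend each new frame into an exponentially weighted history buffer."""
            history = frames[0].astype(np.float32)
            for frame in frames[1:]:
                # Lower alpha = more history = smoother edges, but softer output
                # and more visible "swimming" when the history goes stale.
                history = alpha * frame.astype(np.float32) + (1.0 - alpha) * history
            return history

        # Noisy renders of the same scene stand in for successive jittered frames.
        rng = np.random.default_rng(0)
        scene = rng.random((4, 4, 3))
        frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(16)]
        resolved = taa_accumulate(frames)
        print(np.abs(resolved - scene).mean())  # accumulation averages the per-frame noise down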

          • ptsant
          • 1 year ago

          Another issue that must not be forgotten is the fact that DLSS depends on a continuous investment on nVidia’s part, as they need to train the AI for each game. You can definitely fall back on TAA but I cannot be sure that 3 years from now they will continue vigorously optimizing games for DLSS if they are concentrated on another new hot technology.

          Whether DLSS ages well (or disappears) cannot be predicted, but buyers are probably safe for this generation and the major AAA titles.

        • Disinclined
        • 1 year ago

        IMHO TAA is the most monstrously hairy-looking of all the AA methods. If DLSS is anything like that, I’ll be disabling it. The only time I found TAA to be good was on displays measured in feet, viewed 15 feet back.

          • jihadjoe
          • 1 year ago

          I believe the use of the term TAA in the article means Traditional Anti Aliasing, not temporal.

            • NoOne ButMe
            • 1 year ago

            Nope, Nvidia is comparing to temporal AA.

    • chuckula
    • 1 year ago

    I just have to say that this review really shows how future-proof the GTX-1080 from 2016 is compared to the larger Vega 64 from 2017. For an old card it really stands up well.

    Truly some Fine Wine there Nvidia.

    #NotWhatWeIntendedWhenWeMadeUpFineWine

      • Krogoth
      • 1 year ago

      GP104 struggles with HDR and 4K, though, unlike Vega 10, due to its lack of memory bandwidth.

      But it is unlikely GP104 users are going to be using either option for a while until mid-tier SKUs get similar performance to the current 2080 and 2080Ti.

        • synthtel2
        • 1 year ago

        HDR isn’t slow and isn’t going to substantially change the position of one card versus another. Yes, it is bandwidth-heavy, but we’re talking about sub-millisecond run times on GP104 anyway.

          • Krogoth
          • 1 year ago

          The Maxwell/Pascal architectures do take a noticeable hit with HDR because their color/texture compression isn’t as good at handling it. They end up eating more precious bandwidth, and it shows on SKUs where bandwidth is at a premium (GP104/GP106).

            • synthtel2
            • 1 year ago

            It shows, yes, but “struggles with” implies it’s a big enough problem it should affect purchasing decisions, which it isn’t.

            • Krogoth
            • 1 year ago

            In light of the Turing release, the 1080 and 1070 are now sub-optimal choices for HDR gaming, if that’s your thing.

            • synthtel2
            • 1 year ago

            And AMD GPU tech is suboptimal if you like to turn shadow settings up to the moon. That’s a similar effect magnitude.

            • Krogoth
            • 1 year ago

            It hasn’t been a problem since GCN 1.2 and later drivers. Shadowing is demanding in itself; Nvidia hardware is also heavily taxed by it.

            Pascal/Maxwell’s lack of async-shading support and extra difficulty with HDR will end up being used by Nvidia to phase them out in favor of Turing and future chips as future content arrives.

            That’s precisely why Nvidia’s “sneak peek” marketing comparison chart between Turing and Pascal intentionally ran stuff under HDR and/or used async shading.

            • synthtel2
            • 1 year ago

            It hasn’t been a problem in general in the same way that HDR isn’t a problem for Pascal. You’re talking about 5% differences as if they’re 25% differences.

            If they try to use HDR like that, I’m gonna need more popcorn. Making raster-heavy operations heavier to make AMD look bad is nothing new for them; making operations that *AMD hardware is superior at* heavier in an effort to compete with their own old cards better would be a sight to see.

            If it’s just marketing, you gotta fully expect them to choose settings to squeeze out every 1% difference they can, and to reverse their settings choices next week if next week’s comparison calls for something else.

            • Krogoth
            • 1 year ago

            Turing uses Nvidia’s own implementation of async shading, which is partly why it outpaces Pascal by a large margin in applications that take advantage of it and beats the GCN family at it.

            It will be no surprise if Nvidia uses its marketing clout to force async shading and HDR down developers’ throats to ensure that older stuff becomes “obsolete”. They already did this with Fermi/Kepler SKUs once Maxwell took center stage.

            • synthtel2
            • 1 year ago

            I see “async behaves as I’ve been saying HDR does, therefore HDR does also”. What?

            I did not nor did I intend to make any claims about async capabilities.

    • Anovoca
    • 1 year ago

    [quote<] The reality of Turing naming and pricing seems meant to allow Nvidia to claim massive generation-to-generation performance increases versus Pascal cards by drawing parallels between model names and eliding those higher sticker prices.[/quote<] So basically these are 2090 Ti cards. Got it. Marketing gimmick or not, it can’t fool the old TR value/performance scatter plot. I look forward to seeing how these Turing cards fill in the charts in that regard. The saddest part here is how many consumers will fall for this trap and buy a 2080 over a 2070, or a 2070 over a 2060 (etc.), all because of the weight behind those branding numbers. Edit: JK, calling this a 2090 would assume a generational leap in performance worthy of the new nomenclature.

    • VinnyC
    • 1 year ago

    Am I the only person curious about 1080p performance?

      • rnalsation
      • 1 year ago

      I would be more interested in 2560 x 1440, seeing as that is where most of the premium gaming monitors lie right now. A $1000-$1200 GPU (depending on how long you wait) paired with a 1920 x 1080 monitor? Just... no. Definitely no.

      Edit: I guess the one standout example would be the Alienware AW2518H 240-Hz 1920 x 1080 screen.

        • blitzy
        • 1 year ago

        Yeah, we know this thing will really stretch its legs at 4K, but it would be interesting to see just how bottlenecked it is at 1440p. I expect the majority of people are not going to be playing at 4K, so 4K performance largely isn’t that important (right now, that is, until 4K monitors with fast refresh rates are more mainstream).

      • Krogoth
      • 1 year ago

      The 2080 and 2080Ti will be considerably CPU-bound at that resolution even with generous AA/AF on top.

        • NTMBK
        • 1 year ago

        Just wait until you turn on raytracing!

          • Krogoth
          • 1 year ago

          It would surprisingly still be CPU-bound, though to a far lesser degree.

      • Anovoca
      • 1 year ago

      If you are paying $1200 for a GPU and you are still using a 1080p monitor/TV, you are doing it wrong. Especially if you consider you could get 2-3 1440p monitors and a 1070 Ti to run them for about the same cost.

        • Krogoth
        • 1 year ago

        It doesn’t stop some CS:GO junkie who is convinced that 300FPS is actually meaningful from trying that combo.

          • NoOne ButMe
          • 1 year ago

          Someone should tell them 900p monitors exist! More FPS!

            • Anovoca
            • 1 year ago

            Do they make CRT monitors with DisplayPort?

            • Krogoth
            • 1 year ago

            Not possible without built-in DACs on the CRT monitor.

            CRTs are already dead (killed by RoHS compliance). That’s why getting rid of old units is such a royal PITA.

            • Anovoca
            • 1 year ago

            This is the part of the conversation where I laugh, agree with you, then raise the point about async. Only this is an nVidia product article, and well….

            • synthtel2
            • 1 year ago

            Many of them run 1024×768. 😉

            300 FPS is actually worthwhile in a game that competitively twitchy, if only for the latency improvement.

            • Krogoth
            • 1 year ago

            300 FPS does nothing at all. It is pure placebo effect. Anything beyond 100 FPS for multiplayer is totally dependent on netcode, or rather it just makes netcode issues more obvious.

            • synthtel2
            • 1 year ago

            Rendering in 3.33 ms instead of 10 ms is a small but not irrelevant latency improvement, especially if you’ve got a 240 Hz monitor to go with it.
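
            For anyone checking the arithmetic, those figures fall straight out of frame time = 1000 / fps; a quick sketch:

            # Frame time in milliseconds at a given frame rate, and the delta being argued about.
            def frame_time_ms(fps):
                return 1000.0 / fps

            print(frame_time_ms(100))                       # 10.0 ms
            print(frame_time_ms(300))                       # ~3.33 ms
            print(frame_time_ms(100) - frame_time_ms(300))  # ~6.67 ms shaved off per frame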

            • Krogoth
            • 1 year ago

            It is irrelevant even on a LAN setup. It simply makes netcode issues more glaring, if anything.

            • synthtel2
            • 1 year ago

            If nothing else, getting images to your eyeballs 6.7 ms sooner is 6.7 ms longer you have available to decide what to do next (more reflexively than in such conscious terms of course, but it does make a difference).

            • Krogoth
            • 1 year ago

            Sorry, the human visual cortex doesn’t process imagery that fast, and the musculoskeletal system is painstakingly slow by comparison. All of those “superhuman” reflexes you see in high-level FPS matches come from reading players ahead of time and anticipating them.

            • synthtel2
            • 1 year ago

            All those effects do is offset the time of response from the time you see something to respond to. If you see it 6.7 ms sooner, you’re still responding 6.7 ms sooner in the end. This applies even if you know an enemy is just about to walk around that corner, because you still don’t get to fire until you see them on your side of the corner. If it’s CS:GO and we’re talking about two very good players, one who can respond 6.7 ms quicker has a very significant competitive advantage.

            The way I phrased it in the previous post assumes you’ve got a deadline, which in CS:GO you probably do.

            • Krogoth
            • 1 year ago

            Super-high framerates and super-low latency don’t decide high-level matches. It is entirely game sense and predicting your opponent(s), with dashes of map/fight control.

            The real reason why pro gamers disable eye candy in competitive shooters is to make target recognition easier (it’s also the same reason why competitive skins are in bright, striking colors).

            Needing 300 FPS is pure 100% placebo, meme-tier nonsense that needs to die. The whole reason the “more FPS = better!” meme came into existence is the id Tech 2 engine (a.k.a. Quake 1-3). Higher framerates made your character hop further and accelerate faster because the engine’s primitive physics and movement were tied to the framerate. It made bunny hopping and strafe-running more effective. This gave players with super-high framerates a massive competitive advantage over those who operated at lower framerates.

            • synthtel2
            • 1 year ago

            Game sense beats aiming skill if you’ve got to choose only one, but realistically if you’ve only got one you’re going to be pretty bad at CS:GO. I know from PS2 that I don’t suck at aiming, but decent CS:GO players are on some whole other level I’m nowhere close to, because that’s just what the game demands.

            It doesn’t decide high-level matches because everyone’s already on the 300 fps cap. If you stuck a high-level team with 100 Hz rigs, they’d still demolish casuals like me without even thinking about it, but they’d be noticeably behind the curve when playing against others with similar skills.

            • Redocbew
            • 1 year ago

            I doubt it’s safe to assume that waiting on service calls is a problem specific only to systems using high-refresh-rate displays. I don’t know the details, but there almost certainly needs to be some kind of buffering in place. Making an external request and just assuming that everything will go exactly according to plan is a good way to blow up your application regardless of what it’s doing.

          • Laykun
          • 1 year ago

          What a huge audience that totally makes it worth spending the time and money doing 1080p benchmarks for.

            • auxy
            • 1 year ago

            [url<]https://store.steampowered.com/hwsurvey/?platform=pc[/url<] 62% using 1080p! People will probably have to play RTX games in 1080p unless they have a 2080 Ti, where they might be able to put up with 40 FPS in 2560x1440. I play on a 240Hz 1080p monitor. 1080p benchmarks are relevant. ('ω')

            • Redocbew
            • 1 year ago

            You probably already know this, but it’s the refresh rate that makes it relevant [i<]for you[/i<], and not so much the resolution. I'm actually kind of surprised Valve doesn't include refresh rates in their hardware survey now, but high refresh rate displays would almost certainly be the minority if they did.

            • Srsly_Bro
            • 1 year ago

            Some dork downvoted you for that, SMH.

            I run 144 Hz 1440p, but the CS:GO competitive servers I play on are limited to 64 tick, which I assume means server updates per second.

            240hz must be nice in the specific use scenarios that benefit from the high refresh rate.

            • Laykun
            • 1 year ago

            240 Hz, again, what a huge audience. You can’t equate 62% at 1080p to 62% of people that are going to buy a 2080 Ti. 1080p gaming is predominantly in the low-to-mid-tier price range, and people in that price range generally buy low-to-mid-tier hardware. A quick glance at a site that does do 1080p benchmarks should show you that these cards scale poorly, price-to-performance-wise, at 1080p.

      • ptsant
      • 1 year ago

      Only if it is 3840x1080p 🙂

      • DancinJack
      • 1 year ago

      If you’re referring to 1920×1080, then yes. Why on earth would you use either of these cards for that resolution? Even with everything turned up you’re gonna be pushing it to find a CPU that can keep up. It might be fun to think about, but in practice it just doesn’t make sense.

      • jensend
      • 1 year ago

      TRANSLATION: “I am happy to spend $1200 on a GPU for bragging rights but I can’t be bothered to buy a monitor that costs more than a hundred bucks.”

      Testing extreme unbalanced setups turns careful real-world tests into something that is at best a synthetic benchmark. What’s the point? It’d be a poor use of limited reviewer time and effort.

      For 1080p it doesn’t make sense to spend even a quarter this much money. A 1060 or 480 will be fine. Note that TR didn’t even test that tier of cards at 1080p; Kampman [url=https://techreport.com/discussion/30812/nvidia-geforce-gtx-1060-graphics-card-reviewed?post=1006333<]explained that he didn't think that would be a very good indicator of GPU performance[/url<]. That reasoning would certainly apply to testing the 2080 at 1440p.

        • Ninjitsu
        • 1 year ago

        While I disagree that the 1060 tier didn’t need a test at 1080p, I do agree that *these* cards don’t.

        • DragonDaddyBear
        • 1 year ago

        I agree with most of your comment. However, 1440p is, I think, a good test. The reason is these cards could offer very high refresh rates that might appeal to FPS gamers. Additionally, many may only have a 1440p monitor because, until now, cards really struggled to sustain frame times that push at least 60 FPS at 4K.

        • derFunkenstein
        • 1 year ago

        Not entirely. I mean, a 1080p G Sync display with a high refresh rate is more than a hundred bucks. Lots of folks value super-high frame rates over high pixel densities. It’s one of the joys of PC gaming: you can use the hardware however you want.

      • NoOne ButMe
      • 1 year ago

      I was looking for 768×1378 myself…

      • Freon
      • 1 year ago

      Yes, you’re the only one.

        • Redocbew
        • 1 year ago

        Beat me to it.

      • K-L-Waster
      • 1 year ago

      Basically whatever your CPU and monitor can deliver.

      • Forge
      • 1 year ago

      That plateaued around the original GTX1080. It doesn’t get a lot better unless you’re running a very high framerate monitor, and even those are starting to be WQHD or UHD now.

      1080p is dead.

        • Ninjitsu
        • 1 year ago

        “1080p is dead”

        lol, I suppose that explains why only 60% of Steam users use 1080p monitors…

          • Krogoth
          • 1 year ago

          I think what Forge meant is that 1080p is dead as a metric, as far as pushing hardware to its limits is concerned.

      • puppetworx
      • 1 year ago

      There are a ton of 240Hz 1080p monitors out there, they must be making them for someone.

        • Krogoth
        • 1 year ago

        In the overwhelming majority of those cases, the CPU will be holding the GPU back, and it doesn’t really yield any tangible benefit over 100-120 FPS rendering.

        Aren’t the laws of diminishing returns such a snag? 😉

    • chuckula
    • 1 year ago

    I know TR will have its own power consumption numbers up soon, but from other reviews around the web it looks like Nvidia kept the power draw reasonable considering the performance levels we are seeing.

    The RTX 2080Ti comes in with a lower power draw vs. a standard Vega 64 across the board, which is certainly nothing to sneeze at when you look at the performance delta.

    [Nice downthumbs for factually accurate statements. Any AMD shills want to argue with facts instead of emotional outrage? Sahrin, Kuttan, DoomGuy… why don’t you post your objectively collected factual test results that show Vega destroying Turing at power efficiency?]

      • Krogoth
      • 1 year ago

      Vega 10 barely matches Pascal at power efficiency if you play around with undervolting. There’s simply no way it will be able to match Turing. Turing silicon is very energy-efficient, and most of the excess draw is likely from power-hungry GDDR6.

      Just imagine the power efficiency of a Turing + HBM2 setup (the closest approximation ATM is GV100).

      • Concupiscence
      • 1 year ago

      It looks like a nice piece of hardware, and it’s hard to argue that Nvidia hasn’t made some meaningful progress on new fronts. They continue to cheerfully lead on power consumption too. But the 2080 resembles a 1080 Ti outside of those new niches, and the price is the kind of scandal you get when you’ve meaningfully slammed the door on your competition and don’t feel any shame capitalizing on it. So overall it’s sort of a Geforce2 to Geforce3 paradigm shift, but with a greater likelihood of running into the limitations of the new class of calculations that’s been opened up.

        • kuraegomon
        • 1 year ago

        Honestly, Nvidia would probably get sued by at least some of their shareholders if they _[i<]didn't[/i<]_ capitalize on their dominant position by raising margins. And - as I said in one of my other posts - I don't think those margins are _quite_ as high as you may think. I'm pretty sure that if Nvidia could put more 2080Tis in the channel they would have.

          • NoOne ButMe
          • 1 year ago

          I’m pretty sure Nvidia’s big FE push is actually lowering margins.

          Which is probably part of why they have higher prices: to make the hit to margins less damaging.

    • RtFusion
    • 1 year ago

    Thank you Jeff for the hard work as always.

    • marvelous
    • 1 year ago

    If only they could lower the price.

      • renz496
      • 1 year ago

      You guys really want AMD to totally quit the discrete GPU market, don’t you?

        • Forge
        • 1 year ago

        Oh, we should all buy AMD GPUs because we feel bad? Lisa Su made a call, she built RX480/RX580 to Apple’s design specs, and they’re building the mainstream Navi to match what Sony wants for the PS5. We only got Vega as a placeholder so that AMD is still in the game, at least in name. We will get a similar boost to show they’re still playing in a year or so, but it’ll rival the 2080/2080 Ti a year late and at a similar price. AMD won’t start making seriously competitive top GPUs for at least 3-4 years at this rate, unless something changes.

        I can’t blame her, it was the smart call from a money POV. It just sucks for people who like competition and top performance.

          • renz496
          • 1 year ago

          Nah, what I’m saying is that Nvidia’s high prices are good for AMD.

      • Krogoth
      • 1 year ago

      Not going to happen unless AMD RTG throws a Hail Mary x10 (something bigger than the R300 launch).

      • kvndoom
      • 1 year ago

      They have no reason to, since AMD can’t compete. They are in the Phenom stage for graphics right now and need some Ryzen sauce on the GPU side.

        • willyolioleo
        • 1 year ago

        Really wish they could figure out Infinity Fabric for GPUs; then they could scale things up like EPYC.

        • Krogoth
        • 1 year ago

        AMD RTG is more like in the Bulldozer stage. The hardware has potential, but it is locked behind software implementation. It’s also less power efficient than the competition. AMD RTG simply doesn’t have the capital or marketing clout to convince developers to optimize for its hardware platforms.

        It is likely AMD RTG will continue to pursue integrated GPUs and semi-integrated solutions for its mainstream graphics, while discrete solutions end up being professional-only, with rejects being sold as “performance gaming” SKUs.

          • Kretschmer
          • 1 year ago

          Bulldozer wasn’t a good chip that software didn’t properly optimize for, it was a bad chip that performed OK in certain workloads. It would be completely unreasonable to expect everyone to code for 8 threads in 2011.

            • Krogoth
            • 1 year ago

            Bulldozer was a good chip, but it was limited to server workloads and virtualization. The problem is that, at the time it was released, the majority of consumer-tier software and OSes didn’t know how to handle it properly, which threw it in an even worse light. Bulldozer was somewhat vindicated by the time software started to go beyond two threads and Microsoft patched its subpar scheduler for NT 6.x (Windows 8/8.1 or newer). Nobody paid much attention to it because it was overshadowed by newer silicon at the time.

            The funny part is that Nvidia’s clout in throwing more support behind async shading might end up helping out Vega 10 while at the same time throwing Pascal and older silicon under the bus.

            • NTMBK
            • 1 year ago

            Bulldozer Opteron chips blew chunks, especially compared to their Intel counterparts. Sandy Bridge Xeons were significantly faster and more power efficient.

            • Krogoth
            • 1 year ago

            Not for virtualization and embarrassingly parallel stuff, although those are still niches in the SMB/enterprise world, and were even more so back when Bulldozer made its debut in that world.

            • Klimax
            • 1 year ago

            Sorry, but Bulldozer was a bad idea from beginning to end, relying on something that could NEVER work. The OS was irrelevant. All decisions about Bulldozer were wrong. They made proper scheduling impossible. (Too many contradictory requirements.)

            • Krogoth
            • 1 year ago

            Most of the ideas behind Bulldozer were ahead of its time. Zen incorporates a lot of the good elements from Bulldozer. Like how Conroe and its descendants incorporated a lot of the good ideas from the Netburst dynasty.

            • Klimax
            • 1 year ago

            Like what? The shared decoder is long gone, the shared FP unit is gone. Instead of a speed demon there is now a classic Core-style architecture.

            • Krogoth
            • 1 year ago

            I said the good elements, not the dysfunctional ones. Intel did the same thing with Conroe, where it added the good elements of Netburst onto it.

        • benedict
        • 1 year ago

        The only segment in which AMD can’t compete is in the “Don’t care about sticker price” segment. NVidia has a tight hold on that one.

          • techguy
          • 1 year ago

          That and the “build a capable 4k gaming GPU” segment. Or the “release a GPU on-time” segment. Or everyone’s favorite “GPU that doesn’t eat power like it’s going out of style” segment.

          • Kretschmer
          • 1 year ago

          Really? Because the GTX 1060 is cheaper than the RX 580 right now, despite offering equivalent performance. I don’t think $250 is “Don’t care about sticker price” territory.

          Edit: I’m referencing the 6GB 1060, to clarify.

            • Krogoth
            • 1 year ago

            Only the crippled 1060 3GiB cards are cheaper, but they are slower than the 480/580. The 1060 3GiB is meant to compete with the 470/570.

            The 1060 6GiB has a higher price point than the 480/580 and only rivals them in performance.

            • Kretschmer
            • 1 year ago

            Have you…have you checked Newegg today? The 1060 6GB is cheaper than the 580 8GB.

            • Krogoth
            • 1 year ago

            Actually, 1060 6GiB SKUs tend to be $20-50 more than 8GiB 580 equivalents.

            In before cherry-picking a barebones, reference-cooler 1060 6GiB SKU versus a factory-overclocked 8GiB 580 SKU with a super-fancy HSF and bundled games, or vice versa.

            • synthtel2
            • 1 year ago

            Have you yourself checked Newegg today?

        • jihadjoe
        • 1 year ago

        Problem is, I don’t think there’s a Jim Keller-like free agent who can carry their graphics group to the promised land.

    • NoOne ButMe
    • 1 year ago

    Looks like it came in as expected by the community…
    Great increase in max performance, absolute garbage (if not negative) improvement in performance/dollar.

    FE 2080 (dual fan) barely outpacing the FE 1080ti (blower)

    At least I’m assuming you’re using the 1080ti FE.

      • Krogoth
      • 1 year ago

      This is the beginning of the new pricing paradigm, where performance discrete GPUs start to get marginalized by lesser SKUs and integrated GPUs eating up the mid-range/value market.

      Nvidia has to recoup the massive R&D costs involved in designing and making Volta/Turing.

        • NoOne ButMe
        • 1 year ago

        Eh. I don’t think that’s going to happen quite yet. Maybe in 2019, depending on how far AMD pushes APUs.

        Or on whether they can get them to work with 1-2 chips of GDDR6/HBM and still access main memory.

        • Waco
        • 1 year ago

        Nvidia recoups the R&D via the Tesla/Quadro lines (take a look at the margins there). The only reason they’re getting away with this here is because AMD likely won’t have any sort of response for another year or two at best.

        If AMD had a competent answer we’d be seeing MSRPs more in line with the 10X0 and 9X0 launches adjusted for inflation instead of adjusted for [i<]performance[/i<]. The raw material costs haven't gone up much.

          • Krogoth
          • 1 year ago

          They also recoup the costs from the high-end gaming market. (These are really just “rejected” Quadro/Tesla parts.)

          The shareholders simply want to retain the insane profit margins they’ve had since Maxwell.

            • Waco
            • 1 year ago

            If they really want to squeeze consumers, sure, but I’m all too familiar with the costs of the “equivalent” silicon in the enterprise sector.

    • chuckula
    • 1 year ago

    What Nvidia wanted to talk about most: Ray Tracing.

    What’s actually showing up as the major feature that people care about: DLSS.

      • Krogoth
      • 1 year ago

      Pretty much; the only things that are exciting about Turing are DLSS and FP16 performance. I suspect these cards will be insanely potent at mining and folding. RTX mode is just a massive gimmick that will not become relevant until it starts being implemented on consoles.

      It is lucky that we are in the middle of a crypto-currency bust; otherwise these SKUs would be sold out for months on end with even more insane price tags (2080 ~$1.1-$1.3K, 2080 Ti going near $2K).

        • kuraegomon
        • 1 year ago

        One suspects that we’ll see larger-than-usual performance improvements with driver updates (and, potentially, game patches) over the lifespan of Turing, even discounting what games that get DLSS support will see. A new architecture with entirely new potential sources of performance improvements to play with == good times.

          • Krogoth
          • 1 year ago

          Yep, Nvidia pretty much did its own implementation of async shading with Turing. Unlike AMD RTG, it has the capital and clout to get developer support for it.

          Turing will probably endure for a while, until real-time ray-traced rendering begins to take off outside of tech demos.

      • derFunkenstein
      • 1 year ago

      At least DLSS uses the new silicon. If it didn’t, this would be a pretty rousing failure.

      • auxy
      • 1 year ago

      I’m really excited for RTX, but, like, maybe the second generation? Or third? (;´・ω・)

      These cards are too expensive and there’s no software yet. RTX is definitely the way forward but we’re still looking at it on the map; it hasn’t even come over the horizon yet.

      DLSS is stupid crap. Requires GFE login, only works on specific games, and is ultimately fakery. Jeff says it looks good and I admittedly haven’t seen it but I am super super sensitive to aliasing so I am very dubious about the IQ of DLSS. Much less the fact that it probably won’t be in ANY game I will EVER play. (・へ・)

      • Klimax
      • 1 year ago

      Actually, when Iray gets support for RTX cards, then there will be fun.

    • ronch
    • 1 year ago

    TL;DR: Poor Vega.

      • watzupken
      • 1 year ago

      This is not unexpected, since AMD released Vega more than a year late with performance largely matching that of a GTX 1080.

      • morphine
      • 1 year ago

      Who?

      (that’s a joke, folks.)

      • ptsant
      • 1 year ago

      There is no bad product, only a bad price. Vega 56 should be at $350, at least until the 2060 launch…

        • ronch
        • 1 year ago

        A bad product is one that sucks, forcing the company that poured a lot of money and effort into making it to sell it at bargain basement prices with little to no profit.

          • ptsant
          • 1 year ago

            Yeah, but AMD shares are through the roof, so the choice of putting out Ryzen instead of a 680 or whatever seems to be working for them. At the same time, for me as a consumer, a Vega at $350 would be very nice for 1440p.

            • ronch
            • 1 year ago

            Well, Ryzen is another topic. Radeon is far behind. It’s hot, it’s slow (or rather, just about half as fast). So AMD needs to price it aggressively (or else!!). Kinda reminds me of construction equipment, I dunno why.

      • Krogoth
      • 1 year ago

      Raja: “Don’t blame me!”

      #PrimitiveShadersMatter
      #AsyncShading

    • Jeff Kampman
    • 1 year ago

    Folks, as has sadly become usual my appetite for testing outran my ability to comment on it all. We’re gradually adding to the article but for now the numbers are there for those who want to parse them on their own. Thanks for your patience.

      • psuedonymous
      • 1 year ago

      TR always falls on the right side of data vs. jabber for early reviews.

      • Anovoca
      • 1 year ago

      Just curious if you would be willing to tweet/ send out a notification when more content is added to the article. 🙂 Additional thought, have you ever considered sending out a newsletter to TR subs when these bigger articles are published?

      • derFunkenstein
      • 1 year ago

      More graphs and fewer words is preferable in this case. It’s better to at least have the data presented, and let us draw our own conclusions. The reality is just that this thing is freakin’ fast. Not sure how many different ways you can phrase it, anyhow. All that to say, don’t worry about that.

      • Laykun
      • 1 year ago

      Just put in half the comments and let DLSS fill in the rest.

        • Redocbew
        • 1 year ago

        That sounds like fuzzy logic.

      • Jeff Kampman
      • 1 year ago

      All games have some commentary now, at least.

      • Jeff Kampman
      • 1 year ago

      And I just added value scatters for those interested.

    • tsk
    • 1 year ago

    RTX on!
