Nvidia’s GeForce RTX 2080 graphics card reviewed

Nvidia’s GeForce RTX 2080 Ti has already proven itself the fastest single graphics card around by far for 4K gaming, but the $1200 price tag on the Founders Edition card we tested—and even higher prices for partner cards at this juncture—mean all but the one percent of the one percent are going to be looking at cheaper Turing options.

So far, that mission falls to the GeForce RTX 2080. At a suggested price of $700 for partner cards or $800 for the Founders Edition we’re testing today, the RTX 2080 is hardly cheap. To be fair, Nvidia introduced the GTX 1080—which this card ostensibly replaces—at $600 for partner cards and $700 for its Founders Edition trim, but that card’s price fell to $500 after the GTX 1080 Ti elbowed its way onto the scene. Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That’s not a comfortable place to be, given that software support for Turing’s unique features is in its earliest stages. Our back-of-the-napkin math puts the RTX 2080’s rasterization capabilities about on par with those of the 1080 Ti, and rasterization resources are the dukes the middle-child Turing card has to put up today.

On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080. That’s because recent generational advances in graphics cards have delivered new levels of graphics performance to the same price points we’ve grown used to. For example, AMD was able to press Nvidia hard on this point as recently as the Kepler-Hawaii product cycle, most notably with the $400 R9 290. Once Maxwell arrived, the $330 GeForce GTX 970 thoroughly trounced the Kepler GTX 770 on performance and the R9 290 on value, and the $550 GTX 980 outclassed the GTX 780 Ti for less cash. The arrival of the $650 GTX 980 Ti some months later didn’t push lesser GeForce cards’ prices down much, but it did prove an exceptionally appealing almost-Titan. AMD delivered price- and performance-competitive high-end products shortly after the 980 Ti’s release in the form of the R9 Fury X and R9 Fury.

Overall, life for PC gamers in the Maxwell-Hawaii-Fiji era was good. Back then, competition from the red and green camps was vigorous, and that competition provided plenty of reason for Nvidia and AMD to deliver more performance at the same price points—or at least to cut prices on existing products when new cards weren’t in the offing.

Pascal’s release in mid-2016 echoed this cycle. At the high end, the GTX 1080 handily outperformed the GTX 980 Ti, while the GTX 1070 brought the Maxwell Ti card’s performance to a much lower price point. AMD focused its contemporaneous efforts on bringing higher performance to more affordable price points with new chips on a more efficient fabrication process, and Nvidia responded with the GTX 1060, GTX 1050 Ti, and GTX 1050. Some months later, we got a Titan X Pascal at $1200, then a GTX 1080 Ti at $699. The arrival of the 1080 Ti pushed GTX 1080 prices down to $500. Life was, again, good.

The problem today is that AMD has lost its ability to keep up with Nvidia’s high-end product cycle. The RX Vega 56 and RX Vega 64 arrived over a year after the GTX 1070 and GTX 1080, and they only achieved performance parity with those cards while proving much less power-efficient. Worse, Vega cards proved frustratingly hard to find for their suggested prices. Around the same time, a whole bunch of people got the notion to do a bunch of cryptographic hashing with graphics cards, and we got the cryptocurrency boom. Life was definitely not good for gamers from late summer 2017 to the present, but it wasn’t entirely graphics-card makers’ fault.

Cryptocurrency miners’ interest in graphics cards has waned of late, so graphics cards are at least easier to buy for gamers of every stripe. The problem for AMD is that Vega 56 and Vega 64 cards are still difficult to get for anything approaching their suggested prices, even as Pascal performance parity has remained an appealing prospect for gamers without 4K displays. On top of that, AMD has practically nothing new on its Radeon roadmap for gamers at any price point for a long while yet. Sure, AMD is fabricating a Vega compute chip at TSMC on 7-nm FinFET technology, but that part doesn’t seem likely to descend from the data center any time soon.

No two ways about it, then: the competitive landscape for high-end graphics cards right now is dismal. As any PC enthusiast knows, a lack of competition in a given market leads to stagnation, higher prices, or both. In the case of Turing, Nvidia is still taking the commendable step of pushing performance forward, but it almost certainly doesn’t feel threatened by AMD’s Radeon strategy at the moment. Hence, we’re getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap.  Nvidia is a business, after all, and businesses’ first order of business is to make money. The green team’s management can’t credibly ignore simple economics.

A block diagram of the TU104 GPU. Source: Nvidia

On that note, the RTX 2080 draws its pixel-pushing power from a smaller GPU than the 754-mm² TU102 monster under the RTX 2080 Ti’s heatsink. The still-beefy 545-mm² TU104 maintains the six-graphics-processing-cluster (GPC) organization of TU102, but each GPC only contains eight Turing streaming multiprocessors, or SMs, versus 12 per GPC in TU102. Those 48 SMs offer a total of 3072 FP32 shader ALUs (or CUDA cores, if you prefer). Thanks to Turing’s concurrent integer execution path, those SMs also offer a total of 3072 INT32 ALUs. Nvidia has disabled two SMs on TU104 to make an RTX 2080. Fully operational versions of this chip are reserved for the Quadro RTX 5000.
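
As a quick sanity check, the shader counts above fall straight out of the SM arithmetic. The sketch below assumes the per-SM figures implied by those totals (64 FP32 and 64 INT32 ALUs per Turing SM):

```python
# Back-of-the-envelope check of TU104's shader resources, using the figures
# quoted above: 6 GPCs x 8 SMs, 64 FP32 and 64 INT32 ALUs per SM (implied by
# the 3072-ALU total), and two SMs disabled on the RTX 2080.
GPCS = 6
SMS_PER_GPC = 8
FP32_PER_SM = 64
INT32_PER_SM = 64

total_sms = GPCS * SMS_PER_GPC        # 48 SMs on a fully enabled TU104
rtx_2080_sms = total_sms - 2          # 46 SMs enabled on the RTX 2080

print(total_sms * FP32_PER_SM)        # 3072 FP32 ALUs (CUDA cores) on the full chip
print(rtx_2080_sms * FP32_PER_SM)     # 2944 CUDA cores on the RTX 2080
print(rtx_2080_sms * INT32_PER_SM)    # 2944 concurrent INT32 ALUs alongside them
```

That 2944-core figure is the one you’ll find in the spec table below.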

| | Boost clock (MHz) | ROP pixels/clock | INT8/FP16 textures/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2070 FE | 1710 | 64 | 144/144 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224? | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |

The massive TU104 die only invites further comparisons between the RTX 2080 and the GTX 1080 Ti. The GP102 chip in the 1080 Ti measures 471 mm² in area, even though it’s given over entirely to rasterization resources. That means GP102 has more ROPs than TU104 has in its entirety—88 of which are enabled on the GTX 1080 Ti—and a wider memory bus, at 352 bits versus 256 bits. Coupled with GDDR5X RAM running at 11 Gbps per pin, the GTX 1080 Ti boasts 484.4 GB/s of memory bandwidth.

Like the RTX 2080 Ti, the 2080 relies on the latest-and-greatest GDDR6 RAM to shuffle bits around. On this card, Nvidia taps 8 GB of GDDR6 running at 14 Gbps per pin on a 256-bit bus for a total of 448 GB/s of memory bandwidth. Not far off the 1080 Ti, eh? While the GTX 1080 Ti has a raw-bandwidth edge on the 2080, we know that the Turing architecture boasts further improvements to Nvidia’s delta-color-compression technology that promise higher effective bandwidth than the raw figures for GeForce 20-series cards would suggest. The TU104 die has eight memory controllers capable of handling eight ROP pixels per clock apiece, for a total of 64. All of TU104’s ROPs are enabled on the RTX 2080.
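
The raw-bandwidth math here is simple enough to check on a napkin: multiply the per-pin data rate by the bus width and divide by eight to get bytes. A minimal sketch:

```python
# Raw memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def raw_bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(raw_bandwidth_gb_s(14, 256))  # RTX 2080, GDDR6: 448.0 GB/s
print(raw_bandwidth_gb_s(11, 352))  # GTX 1080 Ti, GDDR5X: ~484 GB/s
print(raw_bandwidth_gb_s(14, 352))  # RTX 2080 Ti, GDDR6: 616.0 GB/s
```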

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering INT8/FP16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic rate (TFLOPS) |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| RTX 2080 | 115 | 331/331 | 10.8 | 10.6 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 473/473 | 9.8 | 14.2 |
| Titan Xp | 152 | 380/380 | 9.5 | 12.1 |
| Titan V | 140 | 466/466 | 8.7 | 16.0 |
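
These peak figures come from multiplying the per-clock resources in the earlier table by each card’s boost clock. Here’s a minimal sketch using the RTX 2080 FE’s numbers, assuming one rasterized triangle per GPC per clock and two FLOPS per ALU per clock for fused multiply-adds:

```python
# Peak theoretical rates for the RTX 2080 FE, derived from the spec table above.
boost_ghz = 1.8          # 1800-MHz boost clock
rops = 64                # ROP pixels per clock
texels_per_clock = 184   # bilinear INT8/FP16 texels per clock
fp32_alus = 2944         # CUDA cores
gpcs = 6                 # assume one triangle rasterized per GPC per clock

print(rops * boost_ghz)                   # ~115 Gpixels/s pixel fill rate
print(texels_per_clock * boost_ghz)       # ~331 Gtexels/s bilinear filtering
print(gpcs * boost_ghz)                   # ~10.8 Gtris/s rasterization rate
print(fp32_alus * 2 * boost_ghz / 1000)   # ~10.6 TFLOPS FP32 (an FMA counts as two FLOPS)
```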

As a Turing chip, TU104 boasts execution resources new to Nvidia gaming graphics cards. First up, TU104 has 384 total tensor cores for running deep-learning inference workloads, of which 368 are active on the RTX 2080. Compare that to 576 total and 544 active tensor cores on the RTX 2080 Ti. For accelerating bounding-volume hierarchy traversal and triangle intersection testing during ray-tracing operations, TU104 has 48 RT cores, 46 of which are active on the RTX 2080. TU102 boasts 72 RT cores in total, and 68 of those are active on the RTX 2080 Ti.
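
Those totals work out to eight tensor cores and one RT core per Turing SM, so the active counts track the enabled SMs directly. A quick check:

```python
# Tensor and RT core counts implied by the totals above
# (8 tensor cores and 1 RT core per Turing SM).
TENSOR_PER_SM = 384 // 48   # 8
RT_PER_SM = 48 // 48        # 1

print(46 * TENSOR_PER_SM, 46 * RT_PER_SM)  # RTX 2080: 368 tensor cores, 46 RT cores
print(68 * TENSOR_PER_SM, 68 * RT_PER_SM)  # RTX 2080 Ti: 544 tensor cores, 68 RT cores
```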

The RTX 2080 Founders Edition we’re testing today has the same swanky cooler as the RTX 2080 Ti FE on top of its TU104 GPU. Underneath that cooler’s fins, however, Nvidia has provided only an eight-phase VRM versus 13 phases on the 2080 Ti, and the card draws power through one six-pin and one eight-pin connector rather than the dual eight-pin plugs on the RTX 2080 Ti. Nvidia puts the stock board power of the 2080 FE at 225 W, down slightly from the GTX 1080 Ti’s 250-W spec but way up from the GTX 1080’s 180-W figure. Given the RTX 2080’s much larger die and the extra execution resources it packs versus the GTX 1080 Founders Edition, however, the 45-W increase isn’t that surprising.

 

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out interesting test scenarios that one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
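
For readers curious about what that data reduction looks like in practice, here’s a minimal sketch (not our actual tooling) of boiling PresentMon-style frame-time logs down to an average-FPS figure and a 99th-percentile frame time. It assumes CSVs with PresentMon’s MsBetweenPresents column; the file names are hypothetical:

```python
# Minimal sketch of reducing frame-time logs to the metrics used in this review.
# Assumes PresentMon-style CSVs with an MsBetweenPresents column; file names are
# hypothetical stand-ins for three captured test runs.
import csv
import statistics

def frame_times_ms(path):
    with open(path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def percentile_99_ms(times):
    # 99th-percentile frame time: 99% of frames finished at least this quickly
    ordered = sorted(times)
    return ordered[int(round(0.99 * (len(ordered) - 1)))]

runs = [frame_times_ms(f"run{i}.csv") for i in (1, 2, 3)]
print(statistics.median(1000 * len(t) / sum(t) for t in runs))   # median average FPS
print(statistics.median(percentile_99_ms(t) for t in runs))      # median 99th-percentile frame time (ms)
```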

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor Intel Core i7-8086K
Motherboard Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z370
Memory size 16 GB (2x 8 GB)
Memory type G.Skill Flare X DDR4-3200
Memory timings 14-14-14-34 2T
Storage Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply Corsair RM850x
OS Windows 10 Pro with April 2018 Update

Thanks to Corsair, G.Skill, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Behold our fine Gigabyte Z370 Aorus Gaming 7 motherboard before it got buried beneath a pile of graphics cards and a CPU cooler:

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 4K (3840×2160) and 60 Hz, unless otherwise noted. Where in-game options supported it, we used HDR, adjusted to taste for brightness. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Shadow of the Tomb Raider

The final chapter in Lara Croft’s most recent outing is one of Nvidia’s headliners for the GeForce RTX launch. It’ll be getting support for RTX ray-traced shadows in a future patch. For now, we’re testing at 4K with HDR enabled and most every non-GameWorks setting maxed.


The RTX 2080 comes second only to its Turing cousin out of the gate, although its 99th-percentile frame time suffers a bit from an early patch of fuzziness. We retested the game several times in our location of choice and couldn’t make that weirdness go away, so perhaps some software polish is needed one way or another. Still, the performance potential demonstrated by the GeForce RTX cards is quite impressive. Remember that we’re gaming at 4K, in HDR, with almost all the eye candy turned up in a cutting-edge title.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms corresponds to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms corresponds to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

In less-demanding or better-optimized titles, it’s useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
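
In code, the idea is straightforward: for every frame that takes longer than a given threshold, add the time spent past that threshold to a running total. A minimal sketch, with hypothetical frame-time data:

```python
# "Time spent beyond X": for each frame over the threshold, accumulate only the
# portion of its render time that exceeds the threshold.
THRESHOLDS_MS = [50.0, 1000 / 30, 1000 / 60, 1000 / 120, 1000 / 144]  # 20/30/60/120/144 FPS

def time_spent_beyond_ms(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frame_times = [14.2, 15.8, 35.0, 16.1, 52.7, 15.5]   # hypothetical excerpt of one test run
for threshold in THRESHOLDS_MS:
    print(f"{threshold:.2f} ms: {time_spent_beyond_ms(frame_times, threshold):.1f} ms")
```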

The RTX 2080 makes a strong statement for itself by these metrics. It spends less than half the time past 16.7 ms on tough frames versus the GTX 1080 Ti. Even though their performance may look similar at a glance, metrics like these really let us tease out the difference between the old and the new.

 

Project Cars 2


The RTX 2080 and GTX 1080 Ti end up dead-even in our one-minute romp through a section of the Spa-Francorchamps circuit. Let’s see whether our time-spent-beyond-X graphs can put any light between their bumpers.


A look at the 16.7-ms mark shows that the RTX 2080 and GTX 1080 Ti remain as closely matched as ever here, and even our more-demanding 11.1-ms and 8.3-ms thresholds let nary a ray pass between the two cards.

 

Hellblade: Senua’s Sacrifice


Hellblade relies on Unreal Engine 4 to depict its Norse-inspired environs in great detail, and playing it at 4K really brings out the work its developers put in. The RTX 2080 opens a small lead over the GTX 1080 Ti in our measure of performance potential, but the cards are within a hair’s breadth in our 99th-percentile measure of delivered smoothness. Let’s see if our time-spent-beyond-X measure can tell them apart.


Turns out it can. The RTX 2080 spends about five fewer seconds past 16.7 ms than the GTX 1080 Ti. Chalk up another win for the Turing card.

 

Gears of War 4


Gears of War 4 puts the lesser Turing and the greater Pascal cards nose-to-nose with one another once again. To the time-spent-beyond-X graphs we go.


By a nose, the GTX 1080 Ti holds an advantage in our time-spent-beyond graphs at 16.7 ms and 11.1 ms.

 

Far Cry 5



Far Cry 5 proves another title where our time-spent-beyond-X graphs make all the difference. The RTX 2080 spends a little over half the time that the GTX 1080 Ti does at the critical 16.7-ms threshold, although its lead narrows at the 11.1-ms mark. Still, the Turing card proves a smoother way to romp through the Montana landscape.

 

Assassin’s Creed Origins



Once again, despite their superficially similar performances in our highest-level measurements, the time-spent-beyond-16.7-ms mark hands the win to the RTX 2080, and by no small margin. Traveling to Egypt on the RTX 2080 is simply a smoother and more enjoyable experience.

 

Deus Ex: Mankind Divided


Deus Ex: Mankind Divided might be a little more aged than some of the games we’re looking at today, but that doesn’t mean it isn’t still a major challenge for any graphics card at 4K and max settings. The RTX 2080 pushes closer to a 60-FPS average than the GTX 1080 Ti, for sure, but its 99th-percentile frame time is just as troubled as the Pascal card’s.


Looking at our time-spent-past-33.3-ms graph puts the 1080 Ti and 2080 on even footing with regard to some of the rougher frames in our test run, although both cards put up more time than we’d like to see here. At 16.7 ms, however, the RTX 2080 spends a little less than two seconds on tough frames, and it holds onto that lead at the 11.1-ms mark.

 

Watch Dogs 2


Like Deus Ex, Watch Dogs 2 is an absolute hog of a game if you start dialing up its settings. Add a 4K resolution to the pile, and the game crushes most graphics cards to dust. Only the GeForce GTX 1080 Ti, RTX 2080, and RTX 2080 Ti even produce playable frame rates, on average, and their 99th-percentile frame times testify to the fact that there’s no putting a leash on this canine.


At 33.3 ms, the RTX 2080 fares a little better than the GTX 1080 Ti beneath Watch Dogs 2‘s heel, and that trend continues at the 16.7-ms mark. Neither card comes anywhere close to the RTX 2080 Ti’s performance, however, putting a point on just how demanding this game can be.

 

Wolfenstein II


This may not be Doom, but Wolfenstein II‘s Vulkan renderer still unleashes some form of unholy processing power from our Turing cards. The RTX 2080 clobbers the GTX 1080 Ti in average FPS and puts up lower 99th-percentile frame times while doing it.


Wolfenstein II puts up perhaps the most dramatic difference in our time-spent-beyond-X graphs in this entire review. While neither the 1080 Ti nor the 2080 puts meaningful amounts of time on the board at the 16.7-ms mark, the Turing card slices over nine seconds of trouble off the GTX 1080 Ti’s toils at 8.3 ms.

 

Conclusions

Put it up against the GTX 1080, and the GeForce RTX 2080 crushes its Pascal predecessor. We never expected anything less. Despite its name, though, the RTX 2080 is priced in the same bracket that the GeForce GTX 1080 Ti presently occupies, and that means there is no world in which the GTX 1080 is a reasonable point of comparison for the Turing middle child.


Putting the 1080 Ti and 2080 head-to-head with our 99th-percentile-FPS-per-dollar and average-FPS-per-dollar value metrics, the RTX 2080 Founders Edition offers only small improvements over the GTX 1080 Ti FE in today’s games. Our geometric means of all our results work out to about 9% higher average FPS and about 8% higher 99th-percentile FPS for the RTX 2080 FE. Those improvements will run about 14% more money than the GTX 1080 Ti Founders Edition. Not a value proposition that’s going to make anybody spit out their coffee, to be certain, but it’s not bad.

You’re getting a bit more polish on top of what was already peerless fluidity and smoothness in most titles today, and considering the uncompetitiveness of today’s high-end graphics market in general, a 5% vigorish for the green team above linear price-performance gains seems positively restrained. In that light, TU104’s tensor cores and RT cores really aren’t that expensive to get into at all.
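
For the curious, that vigorish figure is just the Founders Edition price premium set against the performance premium from our geometric means:

```python
# Rough math behind the value comparison above: Founders Edition sticker prices
# against the ~9% average-FPS and ~8% 99th-percentile-FPS gains from our results.
price_ratio = 800 / 700          # RTX 2080 FE vs. GTX 1080 Ti FE, ~1.14
perf_avg_ratio = 1.09            # average-FPS advantage
perf_99th_ratio = 1.08           # 99th-percentile-FPS advantage

print(round((price_ratio / perf_avg_ratio - 1) * 100, 1))    # ~4.9% premium over linear price-performance
print(round((price_ratio / perf_99th_ratio - 1) * 100, 1))   # ~5.8% by the 99th-percentile metric
```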

If you’re focused on getting the most bang-for-your-buck right this second, it might be tempting to get a GTX 1080 Ti on discount as stocks of those cards dwindle, but I’m not entirely sure that’s the best use of your cash for the long term. Pascal performance is as good as it’s ever going to be, while Turing opens new avenues of performance and image quality improvements for tomorrow’s games.

We’ve already been intrigued by what’s possible from the demos we’ve seen of DLSS, and we expect developers will find all sorts of ways to play with even the sparse ray-tracing possible with Turing. Even if you discount the possibilities of tensor cores and RT cores entirely, titles that support half-precision math for some operations, like Wolfenstein II, perform startlingly better on Turing. That’s yet another avenue that developers might run down more and more often in the future.

Yes, gamers are going to be waiting on those features to bear fruit, but the upside could be considerable, and it’s not as though Nvidia isn’t courting developers to use its features. There are plenty of games in the pipe with DLSS support at a minimum, and a handful of developers have already run up a flag for ray-traced effects in their games. Those are just the capabilities that Nvidia has put a bow on, too—Turing mesh shaders could change the way developers envision highly detailed scenes with complex geometry, and that stuff isn’t ever coming to Pascal, either. With so many potential routes to better performance from this architecture, it seems unreasonably pessimistic to say that none of Nvidia’s bets will pay off.

On the basis of a $100 difference, it could be smarter to get on the Turing curve and risk a bit of a wait than to tie your horse to an architecture that will never benefit from those future developments, especially given the lengthening useful life of computing hardware these days. That’s doubly true if you’re a pixel freak, and if you’re shopping for an $800 graphics card, how can you not be? If GTX 1080 Ti prices fall well below Nvidia’s $700 sticker, we might be telling another story, but a look at e-tailer shelves right now doesn’t suggest that’s happening en masse yet.

Speaking of pixel freaking, if you’re reading The Tech Report, you should already be keenly aware of differences in delivered smoothness among graphics cards. As useful as our 99th-percentile frame times are for determining those differences at a glance, our time-spent-beyond-X measurements help tell that tale in even more depth.

Here’s a little thought experiment with today’s games that might put a point on just how much smoother the RTX 2080 can be versus the GTX 1080 Ti. Across all of our titles at 4K, the 2080 spends 31% less time in aggregate than the GTX 1080 Ti does on frames that take longer than 16.7 ms to render. We think that’s a difference in performance that you’ll notice.

To be fair, we don’t experience games in aggregate, but it’s still worth noting that the 2080 spends less time—often significantly less time—on tough frames by this measure than the 1080 Ti does in the majority of our titles. The performance of the RTX 2080 and GTX 1080 Ti may appear similar at a high level, but where the rubber meets the road in our advanced metrics, the 2080 easily matches the 1080 Ti and often delivers superior performance. I think that’s a difference worth the 2080’s extra cost.

In any case, two things are true as we wrap up our first week with Turing. One is that we’ll frequently be revisiting these cards down the line as games that support their capabilities emerge. The other is that the RTX 2080 is an exceptionally fine graphics card today, and if you have the dosh to spend, it can be a better performer in noticeable ways versus the superficially similar GTX 1080 Ti, even as prices for the Pascal card fall. You really can’t lose either way. Whether Turing fully comes into its own with time remains to be seen, but I’m optimistic the wait will be worth it. Now, that wait begins.

Comments closed
    • DoomGuy64
    • 1 year ago

    For the people complaining about 4k, it’s a benchmark.
    For the people promoting 4k to actually game on, watch this video.
    [url<]https://www.youtube.com/watch?v=VxNBiAV4UnM[/url<] This video is why I think DLSS will do really well for 4k gaming, and yes I've seen the screenshots showing loss of detail when zoomed in.

    • chrcoluk
    • 1 year ago

    Perhaps the 99th percentile fps per price graph needs updating as here in the UK the cards are not in the same price bracket.

    1080ti for £620
    2080 for £840

    and 2080ti’s are up to a whopping £1400 LOL

    the 2080 is 25% more expensive, but not 25% faster.

    Also really the 2080 should be the same price as a 1080 was at launch, and it looks like 2070 and lower are barely going to be faster than their predecessors.

    • evilpaul
    • 1 year ago

    Any overclocking attempts/results?

    • cozzicon
    • 1 year ago

    Boy this is somewhat lost on me… Another poster mentioned the lack of good new games- and that’s true. Everything is rehashed with little or no innovation.

    Who do these cards sell to? Even for heavy compute applications that leverage CUDA (non coin related) they usually do not tax far older cards. I use my GTX 980 for vector analysis, antenna modeling, and RF modulation.

    If the games were better- I might jump. Maybe I’m just getting old 🙂

      • Firestarter
      • 1 year ago

      To .. gamers? People still love to play games you know. Your opinion of games these days does not necessarily reflect that of the people buying these cards, myself included. I am looking forward to buying a new high end GPU and playing some of the new upcoming AAA and not-so-AAA games in the near future

        • Gastec
        • 1 year ago

        Perhaps those games should all be called simply just “AAA”.
        ‘ Hey man, they’ve released the AAA Beta’
        ‘ Yeah dude, I saw, but didn’t had time to check it out, I was playing AAA all night long’

      • End User
      • 1 year ago

      iRacing kills my Vega 64 when using VR.

        • DoomGuy64
        • 1 year ago

        AMD doesn’t do VR optimization like Nvidia. Their answer is use crossfire with one card for each eye, and even at that support is questionable. AMD doesn’t optimize for any scenario beyond basic use. It’s like how the memory clockspeed efficiency options disappeared from the 390 cards after the new UI, and dual monitors spike memory frequency to 100% for no reason. Nvidia does all these edge case scenarios far better.

    • PopcornMachine
    • 1 year ago

    I’ve already been waiting, and at this point I am very disappointed. People will spin this how they want due to loyalty or because of what’s good for the industry. But this is not what anyone hoped for. And I do not feel optimistic.

      • DancinJack
      • 1 year ago

      k

    • Fonbu
    • 1 year ago

    Thank you for the review!

    As mentioned, there are high price factors. The research and validation of three SKUs (TU102, TU104, and TU106) has got to be expensive. Not to mention the size of each chip to manufacture (as an example, the smallest 2070 is TU106 @ 454 mm² when the 1080 Ti is GP102 @ 471 mm²), the use of GDDR6 for the first time, and the payouts to developers to implement ray tracing, advanced Turing shading, and DLSS within software! So initially the sale price will be high, to recoup the aforementioned.

    Hang in there fellow PC enthusiast 😉

      • ptsant
      • 1 year ago

      You are right, this is an important point. These are indeed distinct SKUs, not just rejects of a big chip…

      • Gastec
      • 1 year ago

      Yeah, research cost, feeding the starving children of the World and cleaning the oceans of plastic is what’s driving up the prices of computer components. 🙂

    • leor
    • 1 year ago

    I picked up a Titan X Pascal in late 2016 shortly before they introduced the XP. I never would have thought at that time that this card would have a 3 year tenure as the GPU of my main rig, but almost 2 years down the road, spending 800 bucks on a tiny boost, or 1200 on a somewhat larger boost makes no sense to me.

    My guess is sometime next year we will see a Navi card that performs pretty close to these and nvidia will respond with an optimized Turing, call it 30XX, and it will be 15-20% faster than these. I know people like to say “wake me up when…,” usually followed by something ridiculous, but that’s pretty much how I feel about these.

    Wake me up sometime in 2019 when moving from 2016 tech looks compelling and doesn’t cost the price of a decent gaming laptop.

    • paultherope
    • 1 year ago

    The block diagram on the first page should read TU104, not TU102. Confused me for a while there.

    Thanks for the great articles as usual.

    • maxxcool
    • 1 year ago

    underwhelmed by a wide margin again.

    • Srsly_Bro
    • 1 year ago

    When I want to read reviews that I can personally use, I go to other sites because 4K results are only meaningful to a very small percentage. Certainly 4k isn’t the most popular resolution on TR. I like TR reviews for comparisons but they are not useful to the large majority of people who do not use 4k.

    I assume Jeff was forced to use 4k or doesn’t understand the gaming display market.

    I don’t think Jeff is a dumb dumb so why would Nvidia force the resolution and disallow 1080P and 1440P?

      • Welch
      • 1 year ago

      Because the main claim from Nvidia was that the new 2080 cards were going to make 4k a truly possible experience. What better way to test that assertion than to test 4k……………………..

      If you are spending that kind of money on a graphics card in the first place, you already know its going to tear through 1440p anyhow, you give no damns about the middle ground.

      • jensend
      • 1 year ago

      Wow, great maturity, characterizing anyone who doesn’t agree with your specious reasoning as “a dumb dumb.” Showing your sophistication there.

      Kampman has repeatedly stated his reasoning ([url=https://techreport.com/discussion/30812/nvidia-geforce-gtx-1060-graphics-card-reviewed?post=1006333#1006333<]example[/url<]). TR chooses test parameters based on two priorities: 1) Clearly showing the capabilities of the hardware 2) Realistic actual consumer use scenarios.

      1440p testing won't do as good a job of differentiating between these cards' capabilities. CPU bottlenecks etc will cloud the comparisons. And the 1070/Ti are very capable 1440p cards at around half the cost.

      I do sometimes wish priorities 1) and 2) were swapped. For instance, CPU gaming tests have become slightly divorced from reality because TR tends to use unbalanced setups to "highlight" differences in gaming performance that won't appear in balanced setups because the games will be more GPU-bound. But 1080p testing for an $800 card does not better represent real-world consumer use. Very few people are spending $800 for a card to go with their 1080p monitors, and those who do shouldn't be pandered to or enabled by wasting scarce reviewer time giving them CPU-bottlenecked data. Instead, such people should be referred to qualified help.

      Yes, "the large majority of people" do not use 4K. The large majority of people do not use $1000 GPUs either. By your specious reasoning we should just omit all reviews of any gear outside the mainstream budget ranges.

        • RAGEPRO
        • 1 year ago

        I feel you chief, but he did actually say he DOESN’T think Jeff is a dumb-dumb.

          • jensend
          • 1 year ago

          He says it fairly clearly: “either Jeff would not voluntarily test at 4K or he is a dumb dumb, I don’t think he’s a dumb dumb, therefore he was forced.” (Apparently nV hires thugs to break reviewers’ thumbs.) So yes, he’s saying anyone who disagrees with him is a dumb dumb.

        • chrcoluk
        • 1 year ago

        Just to point out, I just bought a 1080ti card for 1440p gaming and it’s close to a 2080, so to assume no one will buy a 2080 for 1440p is naive.

        As an example, a 1070 is destroyed at 1440p in FF15 and cannot even handle 1080p at 60fps sustained in that game. The problem is sites tend to test well-optimised games and not badly optimised ones. So then readers assume cards can destroy resolutions in “every” game based on these limited tests.

      • blitzy
      • 1 year ago

      I don’t think the reviews should focus on 1440p, but it should at least be covered so that we can see for ourselves the difference in performance from the previous gen cards at what is likely the most common gaming resolution. From what I have seen in other reviews it’s about 10-15% difference, so it is fairly bottlenecked at lower resolutions. It’s a point worth clearly noting for anyone who is gaming at those resolutions and not sure if an upgrade is worth it.

      Many TR visitors are familiar with the graphics card market and safely / automatically make this inference based on their knowledge and experience, but for others who aren’t knowledgeable, having the 1440p performance delta noted is helpful. Seeing the raw numbers is more definitive than just assuming the cards will be bottlenecked.

      Personally I don’t see myself gaming above 1440p for a very long time, so largely the 4k performance of these cards is irrelevant to me.

      • Jeff Kampman
      • 1 year ago

      We’re reviewing graphics cards, not system-level performance. 4K resolution ensures that performance deltas are isolated to the GPU under test.

        • Voldenuit
        • 1 year ago

        But resolution tells users and potential buyers a lot about how cards perform and what (if any) specific bottlenecks are applicable to their own setups. I feel that neglecting 1440p tests on Turing leaves out crucial information on these cards, especially when it comes time to assess their performance in actual ray-traced applications.

          • Redocbew
          • 1 year ago

          Good grief. Yes, of course, leaving out test X means there’s less information to be had. Only by using your exact machine will you get results that match exactly with your own setup. There’s never going to be any review which does that, and the thing being tested here is a GPU! If you want to know how fast a not-GPU is, then go find a review of that. If you want to know how some specific combination of components behave together, then I wish you good luck, because you may be up a creek if you don’t have the ability to do your own testing. Either way, you’re not going to find it here in a review about one particular component.

          The point of my little tirade here is that there is no amount of analysis, no matter how in-depth, professional, and thoughtful it may be, that will remove the need for you to think before buying something. Thinking (especially before buying something) isn’t what Nvidia or anyone else in the industry wants you to do, but it’s a good idea. Really.

            • blitzy
            • 1 year ago

            this isn’t talking about nit picky out of left field stuff though, it’s demonstrating how the card performs in what is probably the most common display resolution. There is a large market of gaming monitors with high refresh rates that are 1440p (or near to). Right now there are only about 3 or 4 bleeding edge monitors that are 4k with high refresh rates (HDR, and $2k+ price tags to match), so 4k is the niche scenario, not the other way around.

            We aren’t suggesting the main focus of the review shouldn’t be to isolate performance to the GPU as much as possible, we are saying that for most people, there’s a key factor which is not detailed here – 1440p. It is worth covering briefly. It’s an important factor that people want to understand, and seeing numbers provides that clarity.

            • Redocbew
            • 1 year ago

            What difference does it make what’s most common? Just because a particular use case is most common doesn’t mean it’ll give you the most information about the hardware. In fact, it might just do the opposite. I have no idea what the specific design targets are for a modern GPU, but I doubt it’s so simple as saying “this GPU is built for 1080p” or “this GPU is built for 4k”. Most likely there’s a range of performance for which they’re aiming, and it’d be ridiculous if what’s “most common” wasn’t somewhere in the middle of that range. “most common” is average and boring, and there’s no point in running tests at all that don’t show you the full capabilities of the hardware. The only way you’re going to do that is by going past whatever those targets are.

            I know this is an unpopular sentiment for some people, but I still have yet to hear a reasonable argument against it. This isn’t a popularity contest. It’s an investigation of a deterministic system.

            • blitzy
            • 1 year ago

            What relevance are performance metrics at 4K if practically nobody is actually going to be playing games at 4K? It shows the potential, but not the actual realistic use case scenario. People read reviews to make buying decisions, and 1440p is the most common use case. Should there not at least be a basic metric given to show performance on the resolution people will actually be playing at?

            It doesn’t make sense to deliver a high frame rate to a low refresh rate 4K monitor, how many people are actually gaming on high refresh rate 4K monitors? I’d argue very few. People are far more likely to be running 1440p displays for gaming, I don’t expect 4K to be mainstream for gaming displays for 3 or more years (e.g. when a high refresh rate monitor is at mainstream price of say $500). Right now they exist, but practically nobody is gaming on them due to price. 4K displays are generally used for productivity, not gaming at this point in time.

            Running a couple of benchmarks at 1440p will not take too long and it adds a valuable metric to the information.

            • Ifalna
            • 1 year ago

            Problem is: it won’t be helpful.
            If you are bottlenecked by other components, the GPU sits there, twiddling its thumbs.
            Then you won’t see much of a difference and think “these cards aren’t so good after all”.

            Well they AREN’T. These are not designed for 1440p or even hilariously boring (for the GPU) 1080p.
            My 1070 is rarely taxed out at 1080p, because my poor old 3570K just can’t keep up and that is in an overclocked state to 4.6 GHz.

            4K is in the minority. Aye. But so are 800€+ GFX cards. People that buy a beast like that and play on a bog standard 1440p/1080p display are, simply put, wasting their money on E-Peen overkill.

            • blitzy
            • 1 year ago

            I have no problem if there is a bottleneck, I want to see the data that shows the bottleneck, and indeed quantifies it. If the user requires a 4K monitor in order to see any benefit from moving from a 1080ti to a 2080ti, that is worthwhile information to demonstrate, with data, not just assumption.

            • Redocbew
            • 1 year ago

            Dude, I know you care about what’s popular, but the machine does not, and the machine is what we’re testing. The insides of a computer are weird that way. Strange things happen in there, and they happen quickly. It’s not the kind of place you can figure out using common sense unless you’re weird also. All this testing isn’t just for fun you know.

            The question I’m really asking here is what you want out of all this. Do you want to be spoon-fed and catered to where the only thing you get is what someone else thinks you might want? If you do, then choose your sources carefully. I hope you can see how that may become a problem.

            The alternative is to find information collected for a general case and then figure out for yourself how to best make use of it. If you choose not to do that and refuse to leave the herd, then don’t be complaining if the what you get fed to you isn’t to your liking.

            • blitzy
            • 1 year ago

            I’m having trouble following your logic, but can see you disagree so that’s fine. That’s your opinion, and I have mine.

            Effectively what you’re saying is that 1440p performance is irrelevant, yet I would argue the opposite. 4K performance is largely irrelevant. 4K performance is only interesting theoretically, and to those who will be gaming at 4K – a tiny minority of gamers. While it’s interesting to see potential performance, it is also important to show with actual data what real performance is likely to be like.

            Does it really matter if a 2080 is X% faster at 4K if nobody is actually playing games at 4K? Why not benchmark at 8K if we really want to isolate the GPU performance delta.

            If you don’t believe me that 4K monitors are not prevalent, here is a list of high refresh rate monitors. [url<]https://www.144hzmonitors.com/gaming-monitor-list/[/url<] Gamers who take performance seriously enough to look at benchmarks are probably going to be using a monitor from that list. High refresh rate, modest resolution. Note the distinct absence of high quality 4K displays, there are only a small handfull, and the ones that are actually good are either not yet available, or cost $2000+ Clearly most gamers with an existing monitor are more likely to be gaming at around 1440p, making it an obvious data point to include in the benchmarks. 1440p performance is only one metric to cover, I am not suggesting this should be the primary focus of the review. But it is important to at least touch on briefly. We can make an assumption that the cards will be bottlenecked, but that is not as good as seeing actual data that shows the performance delta. It would not be overly time consuming to add this additional data to the review, it's not like we expect all benchmarks reproduced 4K and 1440p. A couple of data points would be enough to demonstrate the delta. Also, I am only making a suggestion and sharing my opinion. Taking it as a complaint is to put unnecessary negative perspective on it.

            • Redocbew
            • 1 year ago

            [quote<]Why not benchmark at 8K if we really want to isolate the GPU performance delta[/quote<]

            That would squash the curve flat instead of inflating it. It's not a bad idea in theory; it'd just be difficult to see the relative differences in performance between the parts being tested. However, I'd still call that a better GPU test than using a resolution that's too low and leaves you with test results that may not tell you much about the GPU at all.

            How important is this supposed bottleneck at 1440p anyway? Everyone always seems to be so afraid of these mysterious macro-scale bottlenecks between components that are threatening to completely screw up their carefully laid plans. If you buy one of these cards and run it at 1440p, then yeah you're probably leaving some performance on the table, but who cares? Keep it for a few years and then maybe you won't need to upgrade again as soon as you would otherwise, but none of that has anything to do with testing.

            That testing is different from everyday usage shouldn't be so weird, but it seems like for a lot of people it is. I mean, aside from those crazy people who do nothing but run 3dmark all day, nobody really runs benchmarks when they aren't testing things, and yet people lose their minds when a testing procedure doesn't fit their own usage pattern exactly. Weird.

            • kuraegomon
            • 1 year ago

            Oh dear. I came late to this thread, but I’ll make it really simple. If you don’t want to game at 4K, and aren’t otherwise interested in high-end GPUs purely for the sake of the new technologies introduced, then you should completely ignore the 2080Ti, and likely the 2080 as well.

            On the other hand, I own two 40+-inch 4K monitors and a 55-inch 4K TV. I do game at 4K, and bought the 1080Ti because it was the only card that made it feasible for some games at the time of its release. I’m part of the (yes, really small) realistic target segment for these cards. I’m also interested in both RTX and DLSS from a technology standpoint. Again, if neither one of these things is true for you, then just ignore the reviews for these cards, and wait for the 2070/2060 etc reviews. When those cards are released, I promise you that Jeff and co _[i<]will[/i<]_ review them running at 1440p - because that's the target resolution that said cards are intended to run at. Benchmarking is about exposing the maximum capabilities of the device under test, and running CPU-limited benchmarks does not do that for a GPU.

            • blitzy
            • 1 year ago

            I agree with some of what you’ve said, though I also made it pretty simple too. Why not include a data point which would take minimal effort to demonstrate and that covers a key question for gamers: what is realistic performance likely to be? Realistic meaning at a resolution people will [i<]actually be using[/i<], rather than what for the majority is only theoretical (4K).

            Essentially what you are saying is to accept the assumption that a 2080ti is not for you, that we don't need to demonstrate any data to show where this product shines best and where it is actually bottlenecked. The reader should just know, without seeing any numbers, the relative performance of the product. Providing a data point at 1440p would not be arduous to include and provides key insight into what realistic performance is likely to be for a majority of users.

            That's great that you are gaming at 4K, though it's realistically not the resolution that most serious gamers would be playing at. e.g. See the list of currently available high refresh rate gaming displays above.

            The main focus of the review being on 4K performance is the correct approach, I do not question that at all, but 1440p should have been at least briefly covered to demonstrate with actual numbers what if any bottleneck there is. It's somewhat strange to see other readers argue against what is quite frankly a simple and small piece of data to provide, that does not detract from the overall review in any way whatsoever, and would not be onerous to provide. And it would answer the crucial question: what is real world performance going to be at the most common resolution for serious gamers?

            • Redocbew
            • 1 year ago

            What exactly do you mean by “briefly covered”? If there’s this much fuss about testing at 4k I can only imagine what would happen if some games were tested at 1440p but others were not. There would have to be another complete data set for all the applications used in testing. I am not a hardware reviewer, but that doesn’t sound like something trivial to me.

            You also keep talking about “realistic” and “theoretical” performance as if one were completely different from another when they’re really just different points on the same curve. Some tests are inherently more theoretical than others, and the results given by those tests don’t always give us realistic expectations. However, those are usually the synthetic tests, or tests that measure maximum memory bandwidth, fill rate, and so on. I don’t think that’s going to be a problem here.

            For this hardware, testing at 4k gives me a better idea of the relative differences in performance. If I were a game developer, or I had specific reasons other than just being a gamer that required me to know more exactly what kind of performance I could expect, then yeah I’d be looking for more data. As it is, I’m comfortable doing a little guesswork. If I hadn’t built the machine myself, then I think the chances of me being able to tell the difference between an RTX 2080 and a 1080 would be pretty low, so I’m not worried so much about getting it wrong.

            • blitzy
            • 1 year ago

            Briefly covered means literally run a couple of the most common / popular games at 1440p and show the numbers, and make a brief comment. It would not be difficult or time consuming.

            Yes 4K does show valuable insight into the potential capabilities of the card, and likely future performance as 4K becomes more widely adopted. Why not also provide a data point to cover the resolution that gamers will most commonly be using? It’s that simple.

            • blitzy
            • 1 year ago

            It’s really this simple.

            Who is gaming at 4K? Right now, generally people on TVs or other low refresh rate displays (in which case high FPS is largely irrelevant). VR is also a candidate, as usually this is close to 1440p per eye, so 4K performance is relevant there also. High refresh rate displays with 4K resolution are simply not readily available (they exist, but are extremely expensive, and hence are a niche market segment. 1440p gaming monitors are already fairly expensive as it is).

            Who is gaming at 1440p? Nearly all current gaming monitors are 1440p or near to that resolution. Don’t believe me? See for yourself: [url<]https://www.144hzmonitors.com/gaming-monitor-list/[/url<]

            There are very few good quality 4K displays that are suited to gaming, and those that are, are extremely expensive. And if your 4K display isn't capable of high refresh rate, then a high performance gfx card isn't very relevant, is it?

            OK, so why benchmark at 4K then? It shows the potential performance of the cards, and 4K will only become increasingly relevant in future as technology advances and becomes more affordable. It should be the main focus of the review, and it is. Great, makes sense.

            Though if most gamers won't actually be gaming at 4K any time soon, how can we make this review more relevant to them? Show benchmarks at the resolution people will likely be gaming at now, and in the near term future. 1440p! Do we need a barrage of benchmarks? No, just a few to demonstrate with numbers what the real performance is. Wow! It's possible! For a minute I was worried I'd have to sell a kidney so I could play at 4K like all the big kids.

            • Redocbew
            • 1 year ago

            Disclaimer: car metaphor ahead. Furthermore, I am not a car person so this one may be even worse than most.

            If you want to know how fast your car can go you don’t go puttering around town or sit in stop-and-go traffic. That’s not going to tell you much of anything. Why would you want to know how fast your car can go if you’re never going to go that fast? No idea. Like I said, I’m not a car person. You’d have to ask someone else.

            I get it dude, really, and if some random dude on the Internet told me “you only think you need to know this, but you really don’t” I probably wouldn’t listen to them either. For a general audience I just don’t think the tolerances on these tests need to be quite as tight as you do, and following the herd without question is clearly a pet peeve of mine. That’s all.

            • Froz
            • 1 year ago

            Your assumption that all high end cards perform the same way in resolution lower than 4k and there is no gain from upgrading 1080 to 2080 is simply wrong. Look for example here:

            [url<]https://www.anandtech.com/show/13346/the-nvidia-geforce-rtx-2080-ti-and-2080-founders-edition-review/14[/url<]

            At 2560x1440 the 1080ti has 74.5 FPS compared to 96.8 for the 2080ti. That is a substantial difference. And even at full HD there is a difference of about 10% (105 to 118). Comparison between the 1080 and 2080 shows an even bigger gain.

            Or check the Wolfenstein 2 test in the same article. FFS, full-HD FPS is almost doubled with the new cards (160 vs 260). For people playing with high refresh rates it's going to matter. Of course it's going to vary a lot from game to game, but at least there is some info on what to expect.

            • Redocbew
            • 1 year ago

            I’m not assuming they all perform the same. I’m saying higher resolutions are better for GPU testing. It’s true though that all this talk of resolution is missing the point in a way, because the behavior of the hardware stays the same regardless.

            For the 2080Ti and the 1080Ti the test you’re referencing shows a 37% increase at 4k, a 22% increase at 1440p, and only a 12% increase at 1080p. So, which one is it? How much faster is the 2080Ti? Isn’t that what you want to know?

        • ptsant
        • 1 year ago

        Ideally what everyone would want is to send their system to Jeff for testing with the exact games they play. And this is obviously not doable.

        Still, I think for a couple of games there would be some use in putting out 1440p results. With the exception of the very recent 4k 144Hz panels I don’t think there are many gamers playing at 4k (how about a poll?) and extrapolating 1440p performance from 4k is not a trivial exercise.

        For example, if the 2080 runs 4k at 60 fps, theoretical scaling would predict 135 fps at 1440p (4k has 2.25x the pixels), but I have no idea if this is true. Would we expect something closer to 100 fps? Or maybe even more than 135 fps (if some bottlenecks are removed when using the lower res)?

        • chrcoluk
        • 1 year ago

        you should never aim to isolate, gpu’s are only part of a PC, and how that component interacts with others is a fair test.

        The 4k test smacks of following the nvidia guidelines which by the way do request 4k testing and also requests that a 2080 is reviewed vs a 1080ti.

        I got no issue with 4k being tested, just with 4k being the “only” resolution tested.

        • BurntMyBacon
        • 1 year ago

        Agreed. This is a video card review not a system review. Your review choices are plenty defensible in that regard.

        That said, a system level gaming review with TR’s frame time focus and targeted at common setups would be a desirable article not currently delivered elsewhere. It would answer questions like “How much of a CPU or GPU do I really need to deliver smooth gameplay at (1080/1440/4K@60Hz/100Hz/144Hz)?” and “Can a better CPU improve my experience despite providing similar average frame rates?” that many other sources can’t fully answer due to heavy focus on average frame rates. I’m sure you’d need to limit the number of GPUs and CPU tested to a small subset, but given a sensible number of data points, the results can be reasonably extrapolated for the rest. I’m not sure if or when you could find time to write such an article, but it would be appreciated and useful. If it is a no go, I’ll just have to “content myself” with the already appreciated and useful articles you have and continue to produce.

      • Redocbew
      • 1 year ago

      Not helpful, bro.

      • Durante
      • 1 year ago

      Unlike you, Jeff understands how to benchmark a GPU (rather than other system components).
      A metric by which the performance relation of GPUs towards each other changes significantly if you use a faster CPU is a “dumb dumb” metric to use for inter-GPU performance comparisons.

      If I can’t contextualize the data presented and need someone to (probably fallaciously) interpret it for me I could go to a random youtuber.
      If I want to understand the performance of a GPU, I go to TR.

      • travbrad
      • 1 year ago

      I can’t imagine very many people with 1080p monitors are buying $800 graphics cards. I’m still running 1080p@144hz for the foreseeable future specifically because I know I can’t afford cards like this but I still want high framerates/low frametimes.

      1440p testing would be helpful for a lot of people though considering how common those monitors are, and up until now were the highest resolution you could still get high refresh rates. I do understand at the end of the day they have to prioritize how they run tests and have limited time, and 4K does push cards harder.

    • gerryg
    • 1 year ago

    Jeff, since I’m not a pixel freak with money to burn, the price/performance graphs are usually the ones I study the most, especially when there’s not much competition in the review. It would be awesome if you guys put out a monthly updated price/performance graph for the mid and high -tier cards using average price at the start of each month. You could then point back to the original reviews for the cards listed. More value = more page views = more ad views = more dollars in the TR pockets. Seems like it would be an inexpensive way to add content. Think about it?

    • dragontamer5788
    • 1 year ago

    Good article, good analysis, and good technicals!

    My only comment is that maybe a bit more discussion should have gone into GPU pricing [b<]volatility[/b<]. The 1080 Ti has been anywhere from $530 to $650 the past few weeks. Supply/demand is shifting significantly, so it's really difficult to make a proper price comparison.

    True, for $100 more the 2080 seems reasonable. But what if the 1080 Ti was $550?? (which it was just a few days ago??). Then +$200 for the 2080 doesn't seem so good anymore. The market prices are shifting dramatically. What is true one day will not be true the next.

    I think this article did a great job at putting price points (so that future readers will know the conditions that this conclusion was written under). But maybe one or two paragraphs letting people know how much crazy price-shifting is going on may help as well. Turbulent times ahead. I dunno when prices will stop bouncing around, and that just makes pricing difficult.

    • CScottG
    • 1 year ago

    Nice. ...the "VALUE" graph I was looking for.

    Did this particular graph feel like this to you?

    [url<]https://www.youtube.com/watch?v=YHv5jgXz9I8[/url<]

    • derFunkenstein
    • 1 year ago

    If y’all don’t like the prices on these, just you guys wait for Navi! I mean, seriously. You’re going to have to wait. It’s gonna be a while.

      • chuckula
      • 1 year ago

      Great [url=https://www.youtube.com/watch?v=4bgnBduAd4s<]Googly Moogly[/url<].

        • Redocbew
        • 1 year ago

        We so totally need an emoticon for that.

        • derFunkenstein
        • 1 year ago

        Nice

          • chuckula
          • 1 year ago

          True story: That guy was a college intern when Intel started 10nm development.

      • LostCat
      • 1 year ago

      I’m waiting for HDMI 2.1 kit anyway, so. Yes.

      All machines>TV>new receiver with eARC.

      • Prestige Worldwide
      • 1 year ago

      lol

      • anotherengineer
      • 1 year ago

      Not as bad as these prices.

      [url<]https://www.canadacomputers.com/product_info.php?cPath=43_1200_557_559&item_id=124175[/url<]

      $1100 + 13% tax = $1243. I remember when, not too long ago, that would get you a pretty decent PC.

        • Firestarter
        • 1 year ago

        that’s still enough for a pretty decent PC, just not one with a high end GPU

          • anotherengineer
          • 1 year ago

          Well, I was talking CAD, and the sad thing is that $300-$350 used to get you a high-end video card not too many years ago.

      • nanoflower
      • 1 year ago

      For me, it seems like this isn't quite ready for the general public. The prices are high, and the new features are really going to take a while for developers to take advantage of. As with most new technology, I think most people would be better off waiting for the next round of RTX cards in 2019, when there will hopefully be a number of games out, or soon to be released, that take advantage of the new features.

    • moose17145
    • 1 year ago

    I might have been interested. But not at those prices.

      • derFunkenstein
      • 1 year ago

      The good news is it’s just a toy. That’s all any graphics card is unless you’re a crazed miner. So value is in the eye of the beholder. It’s not there for you, and it’s not there for me, but it’s not the end of the world.

    • djayjp
    • 1 year ago

    Great review, though I was hoping to see a DLSS preview of sorts. Also, I think it might be more helpful to have the *percentage* of time spent beyond x instead of the absolute time.
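
    To illustrate the suggestion, here's a minimal Python sketch; the frame times are made-up numbers, not data from the review:

        # Express "time spent beyond X ms" as a share of the whole run
        # instead of an absolute number of milliseconds.
        frame_times_ms = [14.2, 15.1, 33.4, 16.0, 18.9, 41.7, 15.5, 16.2]  # hypothetical samples
        threshold_ms = 16.7  # roughly the 60-Hz frame budget

        total_ms = sum(frame_times_ms)
        beyond_ms = sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

        print(f"time beyond {threshold_ms} ms: {beyond_ms:.1f} ms "
              f"({100 * beyond_ms / total_ms:.1f}% of the run)")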

    • DeadOfKnight
    • 1 year ago

    Thank you TR for helping me with a purchasing decision yet again. I was considering getting a discounted 1080 Ti, but this review convinced me to go ahead and get Turing.

      • Gastec
      • 1 year ago

      Looool! This reminded me of those positive comments on scam sites, like a recent one about a cryptocurrency "investment" site, where a guy posts and replies to his own posts using multiple accounts to impersonate different people, but using the same name 🙂

        • DeadOfKnight
        • 1 year ago

        Well, these guys work for basically nothing these days. The money they make pays the bills, but you can tell they really can't increase the budget for the site or hire a bunch of new writers. The least I can do is make them feel appreciated, especially when this site is one of the few doing it right, and in fact pioneered measuring gaming performance by the frequency and length of the bottom outliers. Other sites try to copy it and are still essentially measuring minimum fps.

    • hiki
    • 1 year ago

    I would buy one if there were at least three new appealing games. But I can't count even one.

    Alien: Isolation? Stalker? Crysis 1? The old, adventure-style Tomb Raiders? Borderlands 3? BioShock 1 or 2? Dead Space 1 or 2?

      • ClickClick5
      • 1 year ago

      Heh. In a lull of modern gaming, I dug out the old P4 670 with a GeForce 210 and booted up Deus Ex again. Well, I'm playing them backwards this time: almost done with Invisible War, then I'll start the original.

    • USAFTW
    • 1 year ago

    Stop whining and pay up, people. Do you want to waste your life without ray tracing? /s
    It's kinda astonishing to me, as a frequent reader of TR, the lengths to which this article goes to promote these things. Whatever happened to judging a card on its merits?
    1. The new Founders Edition cooler is a dual-axial, open-air design, in contrast with the previous Founders Edition 1080 Ti, which uses a blower, runs at higher temperatures, and reduces clock speeds more often. A fairer comparison would be against a non-Founders Edition card with a dual-fan aftermarket cooler.
    2. All those shiny new bells and whistles, RTX and DLSS included, are unavailable to consumers right now. Instead of speculating about how Pascal owners might feel when these features get released, how about withholding judgment until the true visual benefits of RTX and DLSS, as well as their performance costs, are independently verified and tested?
    3. What if RTX really is the performance hog that the playable demos (Shadow of the Tomb Raider, Battlefield V) showed? Buying an $800 RTX 2080 or a $1200 RTX 2080 Ti to play games at 1080p, albeit with RTX enabled, seems like an unjustifiable sacrifice of performance for users who have probably invested in 4K HDR monitors.
    4. The only applications that support DLSS at the moment are two demos, neither of which is available to consumers.
    I'm kinda disappointed with how uncritically TR has approached this release. Not only are other publications more nuanced about pricing and performance, but publications held in much less regard than TR provided a more comprehensive and representative sample of games to present an accurate picture of where things stand (e.g. Hardware Unboxed, including frame-time metrics). All we got from the TR 2080 Ti review were two pages, which read like a press release, dedicated to the non-playable demos provided by Nvidia, plus games specifically highlighted by Nvidia, like Wolfenstein and FF XV.

      • chuckula
      • 1 year ago

      1. TR uses the word “price” 23 times in the article. Doesn’t sound uncritical to me.

      2. Do you see a TR Editor’s Choice logo at the bottom of the review? I sure don’t.

        • USAFTW
        • 1 year ago

        1. Counting a particular word within the article is not as informative as studying the context in which those words are used. For example, I could say: "The "price" is $800, but don't worry that you're not getting as big of a performance jump as you got from the previous generation in the games you actually play; you "could" see some shiny new effects in the future that'll be totally worth it, trust us." 23 times. That's not critical.
        2. Giving it an Editor's Choice when others have given it a flat "don't buy" would be kinda out there.

          • chuckula
          • 1 year ago

          At the risk of having Jeff edit my comment for being over-long again… here’s a quote taken from the actual article instead of the fascinating non-factual construct you invented for your comment:

          [quote<]Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That's not a comfortable place to be, given that software support for Turing's unique features is in its earliest stages. Our back-of-the-napkin math puts the RTX 2080's rasterization capabilities about on par with those of the 1080 Ti, and rasterization resources are the dukes the middle-child Turing card has to put up today. On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080. That's because recent generational advances in graphics cards have delivered new levels of graphics performance to the same price points we've grown used to. For example, AMD was able to press Nvidia hard on this point as recently as the Kepler-Hawaii product cycle, most notably with the $400 R9 290. Once Maxwell arrived, the $330 GeForce GTX 970 thoroughly trounced the Kepler GTX 770 on performance and the R9 290 on value, and the $550 GTX 980 outclassed the GTX 780 Ti for less cash. The arrival of the $650 GTX 980 Ti some months later didn't push lesser GeForce cards' prices down much, but it did prove an exceptionally appealing almost-Titan. AMD delivered price- and performance-competitive high-end products shortly after the 980 Ti's release in the form of the R9 Fury X and R9 Fury. Overall, life for PC gamers in the Maxwell-Hawaii-Fiji era was good. Back then, competition from the red and green camps was vigorous, and that competition provided plenty of reason for Nvidia and AMD to deliver more performance at the same price points—or at least to cut prices on existing products when new cards weren't in the offing. Pascal's release in mid-2016 echoed this cycle. At the high end, the GTX 1080 handily outperformed the GTX 980 Ti, while the GTX 1070 brought the Maxwell Ti card's performance to a much lower price point. AMD focused its contemporaneous efforts on bringing higher performance to more affordable price points with new chips on a more efficient fabrication process, and Nvidia responded with the GTX 1060, GTX 1050 Ti, and GTX 1050. Some months later, we got a Titan X Pascal at $1200, then a GTX 1080 Ti at $699. The arrival of the 1080 Ti pushed GTX 1080 prices down to $500. Life was, again, good. The problem today is that AMD has lost its ability to keep up with Nvidia's high-end product cycle. The RX Vega 56 and RX Vega 64 arrived over a year after the GTX 1070 and GTX 1080, and they only achieved performance parity with those cards while proving much less power-efficient. Worse, Vega cards proved frustratingly hard to find for their suggested prices. Around the same time, a whole bunch of people got the notion to do a bunch of cryptographic hashing with graphics cards, and we got the cryptocurrency boom. Life was definitely not good for gamers from late summer 2017 to the present, but it wasn't entirely graphics-card makers' fault. Cryptocurrency miners' interest in graphics cards has waned of late, so graphics cards are at least easier to buy for gamers of every stripe. The problem for AMD is that Vega 56 and Vega 64 cards are still difficult to get for anything approaching their suggested prices, even as Pascal performance parity has remained an appealing prospect for gamers without 4K displays. On top of that, AMD has practically nothing new on its Radeon roadmap for gamers at any price point for a long while yet. 
Sure, AMD is fabricating a Vega compute chip at TSMC on 7-nm FinFET technology, but that part doesn't seem likely to descend from the data center any time soon. No two ways about it, then: the competitive landscape for high-end graphics cards right now is dismal. As any PC enthusiast knows, a lack of competition in a given market leads to stagnation, higher prices, or both. In the case of Turing, Nvidia is still taking the commendable step of pushing performance forward, but it almost certainly doesn't feel threatened by AMD's Radeon strategy at the moment. Hence, we're getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap. Nvidia is a business, after all, and businesses' first order of business is to make money. The green team's management can't credibly ignore simple economics.[/quote<] Oh yeah, that sound's like TR just mailed it in and copy-n-pasted an Ngreedia press release right there!

            • USAFTW
            • 1 year ago

            So, in other words: times have changed, competition is asleep at the wheel, and we should expect to pay exorbitant prices for hardware. Nothing stated in that comment is incorrect, and I appreciate the analysis. What's missing is a reasonable analysis of what exactly the RTX 2080 brings to the table at $100 more than the 1080 Ti that the consumer can enjoy on day one. Instead, what we get is guesswork about what might be available at some point in the future, while the merit of that thing is taken at face value without any data to back it up.
            I stand by what I said earlier. I have been critical of AMD in the past, when Vega turned out to be a very late flop. When Nvidia does this, it deserves a stern rebuke, and that's not what this review is.

            • dragontamer5788
            • 1 year ago

            [quote<]What's missing is a reasonable analysis of what exactly the RTX 2080 brings to the table at $100 more than the 1080 Ti that the consumer can enjoy on day one[/quote<]

            The [b<]bulk[/b<] of the review focuses on what is available today: the RTX 2080's performance compared to the 1080 Ti in actual games. There are a couple of suppositions about potential features in the future, but the article is quite clear that none of those features are ready yet.

            [quote<]Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That's not a comfortable place to be, given that software support for Turing's unique features is in its earliest stages.[/quote<]

            • USAFTW
            • 1 year ago

            We could do without those speculations and "suppositions" about features that are not quite ready yet, especially when they're being treated as features worthy enough to justify a huge price hike.
            The statements you have quoted are not all that is given on the last page. How about:

            [quote<]Yes, gamers are going to be waiting on those features to bear fruit, but the upside [b<]could[/b<] be considerable, and it's not as though Nvidia isn't courting developers to use its features. There are plenty of games in the pipe with DLSS support at a minimum, and a handful of developers have already run up a flag for ray-traced effects in their games. Those are just the capabilities that Nvidia has put a bow on, too—Turing mesh shaders [b<]could[/b<] change the way developers envision highly detailed scenes with complex geometry, and that stuff isn't ever coming to Pascal, either. With so many [b<]potential[/b<] routes to better performance from this architecture, it [b<]seems[/b<] unreasonably pessimistic to say that none of Nvidia's bets will pay off.[/quote<]

            or:

            [quote<]On the basis of a $100 difference, it [b<]could[/b<] be smarter to get on the Turing curve and [b<]risk[/b<] a bit of a wait than it is to tie your horse to an architecture that will never benefit from those future developments, especially given the lengthening useful life of computing hardware these days. That's especially true if you're a pixel freak, and if you're shopping for an $800 graphics card, how can you not be?[/quote<]

            or:

            [quote<][b<]Whether[/b<] Turing fully comes into its own with time [b<]remains to be seen[/b<], but I'm [b<]optimistic[/b<] the wait will be worth it. Now, that wait begins.[/quote<]

            • dragontamer5788
            • 1 year ago

            [quote<]We could do without those speculations and "suppositions" about features that are not quite ready yet[/quote<]

            Why? It's information. We all know from the article that they aren't ready yet and that everything is about potential. If spending $100 on "potential" is worth it to someone, then that's their decision. But there's no recommendation here, and the article as a whole seems to lean towards Pascal's superior price/performance.

            I mean, what would you rather have here? An immature Linus rant on YouTube for multiple minutes? There's YouTube for that. I'm here for facts, not personality or mob mentality. YouTube and Reddit have plenty of mob-mentality hate already; no need to bring it here.

            • Jeff Kampman
            • 1 year ago

            I outlined what the RTX 2080 brings to the table on day one for traditional rasterization performance and set that against its price increase in the conclusion. There are concrete performance improvements to be had right now, and we outlined exactly what they are.

            I also noted that ray-tracing, DLSS, etc. are bets. Having watched a bunch of SIGGRAPH presentations and read a few papers, I’m reasonably confident that those bets will pay off with developer support. I follow a few graphics devs via Twitter, and there is considerable enthusiasm for Turing features. As a result, I expressed my comfort wagering some portion of the $100 price increase on that risk. Nvidia is a large company with plenty of cash to throw at developer relationships and software support, and I’m reasonably confident it’ll use that muscle to ensure RTX buyers aren’t left holding the bag.

            If you’re an acutely price-sensitive buyer shopping for a $700 or $800 video card, sure, go for a discounted 1080 Ti. If $100 is going to make or break you, then don’t take the risk. I think enthusiasts who have $100 extra to play with and want to know whether Turing is worth it today should be willing to take what is a relatively low-stakes bet in the context of an entire system build for what could be considerable upside.

            Finally, I noted that much testing lies ahead to really get a handle on these cards, and that we’re going to be waiting a while to really render a verdict. At the same time, there’s literally nothing wrong with buying an RTX 2080 (or a GTX 1080 Ti) today if you need its performance.

            • VincentHanna
            • 1 year ago

            Your opinion, I think, is worth about as much as mine is (not much.) But let me say that my opinion is the POLAR OPPOSITE of yours.

            I think that comparing apples to oranges, GTX to RTX, is a waste of time… It leaves performance on the table and gives the "dumb dumbs," as you call them, something to complain about, because in their myopic worldview the only things that matter are high FPS and high resolution… Well, those things are important, and these reviews show that the RTX can more than handle them as well… but I prefer visual fidelity. Given the choice between water that looks like water and water that has a really, really high triangle count, I'll pick the former.

            So maybe it’s a good thing that the article lands somewhere in the middle.

        • drfish
        • 1 year ago

        Not even Recommended, actually. Many bets were hedged. No denying that they're beasts with untapped potential, though.

      • auxy
      • 1 year ago

      Can you explain to me how TR was “uncritical” in these two reviews?

        • USAFTW
        • 1 year ago

        How is dedicating two pages to two unplayable tech demos provided by Nvidia, while the selection of games is very small, critical?
        Furthermore, basically all the games tested here were the ones Nvidia showed in their presentation labelled "RTX 2080: 2x GTX 1080".
        As a reviewer, it is one's obligation to report on what is necessary, not what is asked of you by the manufacturer. Also, Jeff goes on and on in the conclusion about ray tracing and DLSS, when practical, real-world implementations of these technologies are virtually non-existent to the end user.
        In the RTX 2080 review, Jeff runs a benchmark of DLSS to showcase its noticeable uplift, while providing no image-quality comparison except "my hyper-critical graphics-reviewer eye couldn't pick out any significant degradation in image quality from the switch to DLSS." If you're that confident in your conclusion, why not provide some comparison screenshots for good measure?

          • auxy
          • 1 year ago

          [quote<]How is dedicating two pages to two unplayable tech demos provided by Nvidia, while the selection of games is very small, critical?[/quote<]

          What do you expect him to do, ignore DLSS? Nvidia claims it's a big deal, TR analyzed those claims. Jeff came away impressed.

          [quote<]Furthermore, basically all the games tested here were the ones Nvidia showed in their presentation labelled "RTX 2080: 2x GTX 1080".[/quote<]

          Even assuming it's true, and I'm not saying it's not (I don't know), what does this have to do with anything?

          [quote<]As a reviewer, it is one's obligation to report on what is necessary, not what is asked of you by the manufacturer. Also, Jeff goes on and on in the conclusion about ray tracing and DLSS, when practical, real-world implementations of these technologies are virtually non-existent to the end user.[/quote<]

          As Jeff has said elsewhere in the comments, Nvidia didn't "ask him" to do anything or say anything or report on anything. He is "going on about" these things because they could affect the value proposition of the card in the future. You seem to think he should have considered the card as if it were identical to the 10 series? You are monstrously stupid if that is the case. HUGE amounts of die area on these chips were dedicated to hardware that simply isn't used yet, but Jeff doesn't have the software available to test those parts yet. If you actually read the review, including the sections you quoted to dragontamer5788 earlier, you would notice he left the question of whether these features present any real value more or less up in the air.

          [quote<]In the RTX 2080 review, Jeff runs a benchmark of DLSS to showcase its noticeable uplift, while providing no image-quality comparison except "my hyper-critical graphics-reviewer eye couldn't pick out any significant degradation in image quality from the switch to DLSS." If you're that confident in your conclusion, why not provide some comparison screenshots for good measure?[/quote<]

          Screenshots are not a good comparison tool for anti-aliasing, particularly not multi-frame anti-aliasing techniques such as DLSS. You would need to use video with lossless compression, and I doubt TR wants to host gigs of raw 4K video for everyone to pore over and say "yep, that looks just like Jeff said it does." If you're reading this site as much as I know you do, then you should be confident in his analysis, so why are you being so judgemental? What did you want him to say? "Yep, it's garbage and AMD is the best, woo!" Idiot.

            • USAFTW
            • 1 year ago

            I don’t think I’m being unreasonable about asking for some concrete examples for technologically innovative features that are, at best, in their infancy.
            There's a way of addressing this, and that is to wait for the games that implement DLSS and RTX to be released and do a brief analysis of how they implement each feature and what impact they have on visuals and performance.
            As to why the selection of these games matters: it's the same selection Nvidia chose for its slides, since those games represent a best-case scenario for Turing.

            • dragontamer5788
            • 1 year ago

            You are setting up a strawman. You clearly have beef with what someone said, somewhere else, and you’re projecting it into this article.

            [quote<]I don't think I'm being unreasonable about asking for some concrete examples for technologically innovative features that are, at best, in their infancy. [/quote<] Indeed. And the article seems to agree with this sentiment. The article not only tells you that these features are unavailable, but also spends 10-pages talking about [b<]rasterization[/b<] instead. And furthermore, the article very clearly states that these increased speeds come at a dollar cost. [quote<]That's not a comfortable place to be, given that software support for Turing's unique features is in its earliest stages.[/quote<] [quote<]On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080.[/quote<] [quote<]Hence, we're getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap. Nvidia is a business, after all, and businesses' first order of business is to make money. The green team's management can't credibly ignore simple economics.[/quote<] What more do you want? The entirety of the conclusion is based on [b<]rasterization[/b<] argument, the thing that TechReport was able to measure today. And that's completely reasonable. ------------- You are unreasonably focusing on an imaginary argument that exists for no more than 4-paragraphs of a [b<]12-page[/b<] document. Raytracing / DLSS [b<]has[/b<] to be mentioned, its a feature of the card. But it is hardly something that TR has focused on at all. And Raytracin g / DLSS it [b<]is[/b<] mentioned, it always has the "but not quite ready yet" qualifiers. (IE: lots of "could", or "early state", etc. etc. kinds of words in those paragraphs.)

            • derFunkenstein
            • 1 year ago

            I stopped reading his reply at “supplied by Nvidia” because those demos belong to Epic and Square Enix. You’re giving this guy way more respect than he deserves just by replying to him.

    • liquidsquid
    • 1 year ago

    I didn’t see it mentioned:

    Power consumption for all of this ungodly processing power?

    It's getting to the point where we can likely run a reasonable weather sim on our own computers and get an answer within a few days. A supercomputer on a card.

      • NoOne ButMe
      • 1 year ago

      Results from various other places put it at about 40-50 W less draw from the PSU compared to a 1080 Ti FE.

        • auxy
        • 1 year ago

        [super<]* when running code that doesn't use the R/T or tensor hardware[/super<]

    • Kretschmer
    • 1 year ago

    People building from scratch have insane performance available, and I get to stay happy with my 1080Ti. Everyone wins!

      • Redocbew
      • 1 year ago

      That’s far too reasonable of an opinion.

    • puppetworx
    • 1 year ago

    On the one hand, it's great to see Nvidia achieve significantly higher performance at a time when AMD's offerings have decidedly stagnated. On the other, I want to know what the rate of change of (inflation-adjusted) price/performance is in the GPU market, because I'd wager it's near 20-year lows.

      • moose17145
      • 1 year ago

      Not quite.

      The GeForce3 Ti 500, released in 2001, had an MSRP of about $350-400.

      Adjusting for inflation, those same cards would cost $504-576 today.

      Inflation calculated using the calculator below:
      [url<]https://data.bls.gov/cgi-bin/cpicalc.pl[/url<]

      Edit: In fact, even if you go all the way back 20 years to 1998 and take the $300 initial release price of the 12MB Voodoo 2, that would only equate to $468 in 2018 dollars. Even the Voodoo3 3500TV released for only $250 ($384 in 2018 dollars).

      So, in short, the prices of these cards are hardly just keeping up with inflation. Even factoring in inflation, these are probably some of, if not the, most expensive graphics cards ever to hit the general market.
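
      For anyone who wants to reproduce the math without the BLS calculator, here's a rough Python sketch. The CPI-U figures are approximate annual averages (my assumption), so the results land close to, but not exactly on, the month-specific numbers above:

          # Approximate CPI-U annual averages; the BLS calculator uses
          # month-specific values, hence the small differences.
          CPI = {1998: 163.0, 2001: 177.1, 2018: 251.1}

          def adjust(price, from_year, to_year=2018):
              # Scale a historical price by the ratio of the index values.
              return price * CPI[to_year] / CPI[from_year]

          print(f"GeForce3 Ti 500: ${adjust(350, 2001):.0f}-${adjust(400, 2001):.0f} in 2018 dollars")
          print(f"12MB Voodoo 2:   ${adjust(300, 1998):.0f} in 2018 dollars")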

        • puppetworx
        • 1 year ago

        Interesting. I am talking about the rate of improvement of the price/performance ratio over prior generations, though.

          • moose17145
          • 1 year ago

          I would argue these are still a terrible value even in that case. Back when the 4870 and the 280 were trading blows, or when the GeForce 9x0 and R9 2x0 series were trading blows, things were quite a bit better, because actual competition kept prices in the various tiers lower. This is nothing but a prime example of what happens when no one else can compete: we all get screwed.

          Unfortunately, that likely means I am probably going to be holding onto my R9 290 for another 2ish years.

    • Demetri
    • 1 year ago

    The 1080 Ti tested is the FE, correct? I imagine one of the partner cards with a dual-axial or triple-fan setup would close the gap on performance. It might be a better comparison considering they switched away from the blower and factory-overclocked the FE this time around.

      • jihadjoe
      • 1 year ago

      The 1080 Ti FE's performance isn't bad at all; the sacrifice you make for having the blower fan is mostly in the noise department. The EVGA SC2 is only ~3% faster.

      [url<]https://www.techpowerup.com/reviews/EVGA/GTX_1080_Ti_SC2/31.html[/url<]

      • BurntMyBacon
      • 1 year ago

      Does seem a bit disingenuous to pit a hot clocked (+5.26% over stock) 2080 against a stock clocked 1080Ti with a potentially inferior cooler. Most AIB partners offer 1080Ti boards with a nicer cooler and a similar ~5% overclock for not a lot more scratch than stock.

      On the other hand, the memory disparity still holds, and the 8% higher 99th-percentile FPS in the current charts makes it likely that the 2080 will still win, albeit with an even less impressive margin of victory. Given the current price disparity (especially in the UK) and the early stage of ray-tracing and DLSS support, I don't believe purchasers of appropriately priced 1080 Ti cards will regret their decision. For those interested in getting in on RT and DLSS early, I would still wait a bit for AIB partner cards and see where prices settle. It will be a little while before game developers can make good on their RT and DLSS support promises anyway.

      [i<]Edit: My distaste with nVidia changing their definition of FE cards from stock cards with blower coolers to overclocked cards with multi-fan open coolers and calling them equivalent should in no way be interpreted as a slight against TR in their card selection.[/i<]

    • firewired
    • 1 year ago

    Anyone buying this generation of Nvidia cards (RTX) is a fool or poser.

    The only tangible benefit you get in real-world performance is new image-quality technology that cannot be leveraged yet and therefore cannot even be properly benchmarked for real-world performance yet.

    If you think buying one future-proofs your build, you are also mistaken. History in this marketplace has shown that the first generation of any new technology that enhances image quality typically does not perform at a high enough level to make that technology worth enabling.

    If you do not believe it then you need a history lesson and should read up on all of the major graphics chip releases since 1997.

    If you absolutely want this technology, wait for the die-shrunk refresh that will inevitably arrive and bring higher performance and greater power efficiency.

    Unless you like throwing money away, in which case go ahead, knock yourself out.

      • Krogoth
      • 1 year ago

      Actually, the only reason to get the current Turing SKUs is if you want to game at 4K or with HDR on top (said monitors with both 4K and HDR are way more expensive than any of the Turing SKUs), or if you are upgrading from Kepler/GCN 1.0-era hardware or older.

      Their performance does scale with their MSRPs. The 2080 Ti may not be cheap, but it does have the performance to back it up.

      The RTX stuff is a pure marketing gimmick, like pixel/vertex shading was back when the GeForce 3 was introduced.

        • kuraegomon
        • 1 year ago

        Bingo. If you really want to _[i<]start[/i<]_ gaming at 4K and/or HDR today, then Turing is the only purchase that makes sense. If you were interested in that a year ago, then you already have a 1080 Ti (guilty) and were (almost) never going to upgrade to this generation anyway. I await the 7-nm successor to Turing with great interest 🙂

      • Kretschmer
      • 1 year ago

      The raw performance scales with the price, and DLSS will allow for unprecedented performance in supported titles.

      The raytracing is more for people who are already doing raytracing professionally and to let devs test the feature.

      • Action.de.Parsnip
      • 1 year ago

      Don't downvote the guy; he's got history on his side.

      When has betting on future goodies ever, EVER been a good play? Seriously.

      It's a marginal improvement over the 1080 Ti, and it's 12 nm with 7 nm more or less around the corner, so it'll come into its own riiiiiiight about the time it's made obsolete. In the meantime you've paid far, faaar over the odds for it. Jumping on a cheap 1080 Ti, or waiting till Christmas-ish for a price cut, is surely the way forward.

        • tanker27
        • 1 year ago

        I agree. Look at how widespread the adoption of Vulkan/Mantle has been.

        /snark

          • Action.de.Parsnip
          • 1 year ago

          It's true, though; Mantle went nowhere. Don't buy into future maybes.

      • auxy
      • 1 year ago

      I downvoted you even though I agree because you write like a jerk and frankly you sound like a self-important cockbite.

    • Pville_Piper
    • 1 year ago

    I've read/watched other reviewers, and I have to say this is the most intelligent, thought-out review.

    Most reviewers look at the raw data and scream about how pricey it is... This review strikes an excellent balance between what Turing/Pascal is today and what Turing has the potential to become. If I were going to buy a 1080 Ti, I would certainly consider getting the RTX 2080 instead, for two reasons:
    1. I play Battlefield and will most likely get Battlefield V, so I will benefit from Turing.
    2. I play VR in Elite Dangerous, and the smoothness of the RTX 2080 could really help push framerates in very demanding situations.

    Very nicely summed up Jeff.

    • chuckula
    • 1 year ago

    Kampman! You got some splainin’ to do!

    We never put [url=https://www.youtube.com/watch?v=QkM70XFhs10<]Far Cry 5[/url<] on our Nvidia-approved game list! [Looks at the results.] But we won't push the issue.

      • Krogoth
      • 1 year ago

      C’mon man, don’t tell AMD fans that AMD Evolved titles run exceptionally well on Turing SKUs. ;D

      #PoorGCN
      #PrimitiveShadersMatter
      #AsyncShadingIsTheFuture

      • thedosbox
      • 1 year ago

      For laughs, I would like to see someone run the old Ruby demos on this thing.

    • Krogoth
    • 1 year ago

    It is basically a repeat of 1080 versus 980Ti back in the day.

      • K-L-Waster
      • 1 year ago

      With the exception that the 1080 didn’t introduce entirely new render techniques compared to the 980TI — it was basically more of the same, but newer.

      Also IIRC the 1080 was *cheaper* than the 980 TI, not more expensive.

      (Edit to fix card number dyslexia…)

        • drfish
        • 1 year ago

        [quote<]With the exception that the 1080 didn't introduce entirely new render techniques compared to the 980TI -- it was basically more of the same, but newer.[/quote<] Don't forget about all those awesome VR-specific enhancements. Buwahahahaha!

        • ColeLT1
        • 1 year ago

        Krogoth is right; TR's 1080 review had the 1080 at roughly +$150 over the 980 Ti.

        [url<]https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/14[/url<]

      • cosminmcm
      • 1 year ago

      And the 1080 was around 20% faster.

        • ColeLT1
        • 1 year ago

        At the 1080's release:
        980 Ti -> 1080 = +20% performance for +27.5% price
        1080 Ti -> 2080 = +10% average-FPS performance and +17% 99th-percentile performance for +15% price (really close to a 20% price difference due to MIR/sales on the 1080 Ti).

        [url<]https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/14[/url<]
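
        For anyone who wants to see how those percentages fall out, here's a small Python sketch; the prices and performance indices are my assumptions (roughly $549 street for the 980 Ti versus the $699 1080 FE, and $699 versus $799 for the 1080 Ti versus the 2080 FE, with the relative performance figures quoted above), not numbers lifted from TR's charts:

            def delta(new, old):
                # Percentage change from old to new.
                return 100 * (new - old) / old

            # 980 Ti (street ~$549) -> 1080 FE ($699), ~20% faster
            print(f"980 Ti -> 1080:  +{delta(1.20, 1.00):.0f}% perf for +{delta(699, 549):.1f}% price")

            # 1080 Ti ($699) -> 2080 FE ($799), ~10% avg FPS / ~17% 99th percentile
            print(f"1080 Ti -> 2080: +{delta(1.10, 1.00):.0f}% FPS, +{delta(1.17, 1.00):.0f}% 99th pct "
                  f"for +{delta(799, 699):.1f}% price")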

      • psuedonymous
      • 1 year ago

      Take a look at those [url=https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/14<]vintage perf/$ charts from the 1080 review conclusion[/url<], and at the launch pricing vs. today.

    • Zizy
    • 1 year ago

    From the data provided, the card seems too slow for ray tracing, while DLSS could perhaps offer better performance compared to more accurate AA methods... but we have to see the effects on image quality in independent testing. So, it has to distinguish itself as a good graphics card for today's games. Every sucker wanting to play tomorrow's games bought a Vega 64 instead 😀

    That said, the 2080 isn't a BAD card (unlike said Vega 64). It just isn't offering anything relevant, in my opinion. If you wanted that level of performance at that price, the 1080 Ti offered it for a year. When you also consider the die-size increase and power-draw increase this generation, this product is a dud from an engineering point of view. Not as bad as AMD's, but still shitty compared to the incredible gains of the 9xx and 10xx series.

      • Chrispy_
      • 1 year ago

      I don’t see data provided about raytracing here. Or anywhere else, for that matter.

      I'm very curious to know about the RTX raytracing performance, though, since the GTX 2060 is likely to be the affordable mainstream card that most people end up buying, and, [i<]if it even has raytracing at all[/i<], the rumours are that it will be a TU116 die with 24 SMP units, which means a best-case, maximum-possible (fully-active die variant) figure of less than 35% of the raytracing power that TU102 has.

      I mean, nobody's really benchmarked the RTX 2080 Ti in raytraced titles yet, because the developers haven't released those games yet, but EA/DICE have already said that they've needed to dial back the raytracing from the BF5 demo videos they released for the RTX launch, because at those settings it tanks the framerate. In context, they'd be talking about the 2080 and 2080 Ti. And that's not even a fully raytraced game; it's a traditionally pre-baked, shadow-mapped/reflection-mapped game with a few objects given the pseudo-raytracing treatment.

      The only thing we've actually seen is pre-baked, 24-fps, non-interactive demos, and when I say [i<]we[/i<] I actually mean independent reviewers like Jeff, not the general public.
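
      As a quick sanity check on that figure, here's a sketch assuming ray-tracing throughput scales roughly with SM count (a simplification on my part; the 24-SM figure is the rumour mentioned above, while a fully enabled TU102 has 72 SMs, of which the 2080 Ti ships with 68):

          rumoured_small_die_sms = 24   # rumoured SM count for the smaller die
          tu102_full_sms = 72           # fully enabled TU102

          ratio = rumoured_small_die_sms / tu102_full_sms
          print(f"best case: {100 * ratio:.0f}% of TU102's RT resources")  # ~33%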

    • Blytz
    • 1 year ago

    Allow me to ask a question that has plagued me since the 1080-series cards:

    What's with 11 GB of RAM (not 8, 12, 16, or even 10)?

      • Jeff Kampman
      • 1 year ago

      32-bit interface to each GDDR package * 11 1-GB packages = fully connecting a 352-bit bus
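
      Spelled out as a quick Python sketch (the 14-Gbps figure is the GDDR6 per-pin data rate on the 2080 Ti; the 1080 Ti's 11-Gbps GDDR5X works the same way):

          packages = 11            # one GDDR package per 32-bit channel
          bits_per_package = 32
          gb_per_package = 1

          bus_width = packages * bits_per_package     # 352-bit bus
          capacity = packages * gb_per_package        # 11 GB

          data_rate_gbps = 14                         # per-pin data rate (2080 Ti GDDR6)
          bandwidth_gb_s = bus_width * data_rate_gbps / 8   # ~616 GB/s

          print(bus_width, capacity, round(bandwidth_gb_s))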

      • NTMBK
      • 1 year ago

      One of the 12 memory channels on the chip is disabled (and the corresponding chip is not present). [url=https://img.purch.com/msi-geforce-gtx-1080-ti-gaming-x-11g/o/aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9aLzEvNjY5OTk3L29yaWdpbmFsL01TSS1HVFgtMTA4MC1UaS1HYW1pbmctWC0xMUctUENCQS1OYWtlZC1Gcm9udC5qcGc=<]You can actually see the pads for the missing memory die on the circuit board.[/url<]

      • Blytz
      • 1 year ago

      I consider myself further educated. Thanks 🙂
