
Nvidia’s GeForce RTX 2080 graphics card reviewed

Renee Johnson

Nvidia’s GeForce RTX 2080 Ti has already proven itself by far the fastest single graphics card around for 4K gaming, but the $1200 price tag on the Founders Edition card we tested—and even higher prices for partner cards at this juncture—means all but the one percent of the one percent will be looking at cheaper Turing options.

So far, that mission falls to the GeForce RTX 2080. At a suggested price of $700 for partner cards or $800 for the Founders Edition we’re testing today, the RTX 2080 is hardly cheap. To be fair, Nvidia introduced the GTX 1080—which this card ostensibly replaces—at $600 for partner cards and $700 for its Founders Edition trim, but that card’s price fell to $500 after the GTX 1080 Ti elbowed its way onto the scene. Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That’s not a comfortable place to be, given that software support for Turing’s unique features is in its earliest stages. Our back-of-the-napkin math puts the RTX 2080’s rasterization capabilities about on par with those of the 1080 Ti, and rasterization resources are the dukes the middle-child Turing card has to put up today.

On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080. That’s because recent generational advances in graphics cards have delivered new levels of graphics performance to the same price points we’ve grown used to. For example, AMD was able to press Nvidia hard on this point as recently as the Kepler-Hawaii product cycle, most notably with the $400 R9 290. Once Maxwell arrived, the $330 GeForce GTX 970 thoroughly trounced the Kepler GTX 770 on performance and the R9 290 on value, and the $550 GTX 980 outclassed the GTX 780 Ti for less cash. The arrival of the $650 GTX 980 Ti some months later didn’t push lesser GeForce cards’ prices down much, but it did prove an exceptionally appealing almost-Titan. AMD delivered price- and performance-competitive high-end products shortly after the 980 Ti’s release in the form of the R9 Fury X and R9 Fury.

Overall, life for PC gamers in the Maxwell-Hawaii-Fiji era was good. Back then, competition from the red and green camps was vigorous, and that competition provided plenty of reason for Nvidia and AMD to deliver more performance at the same price points—or at least to cut prices on existing products when new cards weren’t in the offing.

Pascal’s release in mid-2016 echoed this cycle. At the high end, the GTX 1080 handily outperformed the GTX 980 Ti, while the GTX 1070 brought the Maxwell Ti card’s performance to a much lower price point. AMD focused its contemporaneous efforts on bringing higher performance to more affordable price points with new chips on a more efficient fabrication process, and Nvidia responded with the GTX 1060, GTX 1050 Ti, and GTX 1050. Some months later, we got a Titan X Pascal at $1200, then a GTX 1080 Ti at $699. The arrival of the 1080 Ti pushed GTX 1080 prices down to $500. Life was, again, good.

The problem today is that AMD has lost its ability to keep up with Nvidia’s high-end product cycle. The RX Vega 56 and RX Vega 64 arrived over a year after the GTX 1070 and GTX 1080, and they only achieved performance parity with those cards while proving much less power-efficient. Worse, Vega cards proved frustratingly hard to find for their suggested prices. Around the same time, a whole lot of people got the notion to do some cryptographic hashing with graphics cards, and we got the cryptocurrency boom. Life was definitely not good for gamers from late summer 2017 to the present, but it wasn’t entirely graphics-card makers’ fault.

Cryptocurrency miners’ interest in graphics cards has waned of late, so graphics cards are at least easier to buy for gamers of every stripe. The problem for AMD is that Vega 56 and Vega 64 cards are still difficult to get for anything approaching their suggested prices, even as Pascal performance parity has remained an appealing prospect for gamers without 4K displays. On top of that, AMD has practically nothing new on its Radeon roadmap for gamers at any price point for a long while yet. Sure, AMD is fabricating a Vega compute chip at TSMC on 7-nm FinFET technology, but that part doesn’t seem likely to descend from the data center any time soon.

No two ways about it, then: the competitive landscape for high-end graphics cards right now is dismal. As any PC enthusiast knows, a lack of competition in a given market leads to stagnation, higher prices, or both. In the case of Turing, Nvidia is still taking the commendable step of pushing performance forward, but it almost certainly doesn’t feel threatened by AMD’s Radeon strategy at the moment. Hence, we’re getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap. Nvidia is a business, after all, and businesses’ first order of business is to make money. The green team’s management can’t credibly ignore simple economics.


A block diagram of the TU104 GPU. Source: Nvidia

On that note, the RTX 2080 draws its pixel-pushing power from a smaller GPU than the 754-mm² TU102 monster under the RTX 2080 Ti’s heatsink. The still-beefy 545-mm² TU104 maintains the six-graphics-processing-cluster (GPC) organization of TU102, but each GPC only contains eight Turing streaming multiprocessors, or SMs, versus 12 per GPC in TU102. Those 48 SMs offer a total of 3072 FP32 shader ALUs (or CUDA cores, if you prefer). Thanks to Turing’s concurrent integer execution path, those SMs also offer a total of 3072 INT32 ALUs. Nvidia has disabled two SMs on TU104 to make an RTX 2080. Fully operational versions of this chip are reserved for the Quadro RTX 5000.
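For those keeping score at home, the SM arithmetic works out like the quick sketch below. The 64-FP32-ALUs-per-SM figure is Turing's published SM layout; everything else follows from the GPC and SM counts above.

```python
# Back-of-the-napkin check on the shader counts above. Each Turing SM carries
# 64 FP32 ALUs (plus a matching 64 INT32 ALUs for concurrent integer work).
gpcs = 6
sms_per_gpc = 8
fp32_per_sm = 64

full_tu104 = gpcs * sms_per_gpc * fp32_per_sm        # 48 SMs -> 3072 ALUs
rtx_2080 = (gpcs * sms_per_gpc - 2) * fp32_per_sm    # 46 SMs -> 2944 ALUs
print(full_tu104, rtx_2080)
```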

| Card | Boost clock (MHz) | ROP pixels/clock | INT8/FP16 textures/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2070 FE | 1710 | 64 | 144/144 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224? | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |

The massive TU104 die only invites further comparisons between the RTX 2080 and the GTX 1080 Ti. The GP102 chip in the 1080 Ti measures 471 mm² in area, and that area is given over entirely to rasterization resources. As a result, GP102 has more ROPs than TU104 has in its entirety—88 of them are enabled on the GTX 1080 Ti—and a wider memory bus, at 352 bits versus 256 bits. Coupled with GDDR5X RAM running at 11 Gbps per pin, the GTX 1080 Ti boasts 484.4 GB/s of memory bandwidth.

Like the RTX 2080 Ti, the 2080 relies on the latest-and-greatest GDDR6 RAM to shuffle bits around. On this card, Nvidia taps 8 GB of GDDR6 running at 14 Gbps per pin on a 256-bit bus for a total of 448 GB/s of memory bandwidth. Not far off the 1080 Ti, eh? While the GTX 1080 Ti has a raw-bandwidth edge on the 2080, we know that the Turing architecture boasts further improvements to Nvidia’s delta-color-compression technology that promise higher effective bandwidth than the raw figures for GeForce 20-series cards would suggest. The TU104 die has eight memory controllers capable of handling eight ROP pixels per clock apiece, for a total of 64. All of TU104’s ROPs are enabled on the RTX 2080.
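If you want to check those bandwidth figures yourself, the arithmetic is as simple as the sketch below: per-pin data rate times bus width, divided by eight to get from bits to bytes.

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
def memory_bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(memory_bandwidth_gb_s(14, 256))  # RTX 2080:    448.0 GB/s
print(memory_bandwidth_gb_s(11, 352))  # GTX 1080 Ti: 484.0 GB/s
print(memory_bandwidth_gb_s(14, 352))  # RTX 2080 Ti: 616.0 GB/s
```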

| Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering INT8/FP16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic rate (TFLOPS) |
| --- | --- | --- | --- | --- |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| RTX 2080 | 115 | 331/331 | 10.8 | 10.6 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 473/473 | 9.8 | 14.2 |
| Titan Xp | 152 | 380/380 | 9.5 | 12.1 |
| Titan V | 140 | 466/466 | 8.7 | 16.0 |
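As ever, the figures in this table fall out of the boost clocks and unit counts in the spec table above. Here's a minimal sketch of that arithmetic using the RTX 2080 FE's numbers; a few rows in the table reflect slightly different clock assumptions, so expect small differences here and there.

```python
# Peak theoretical rates from boost clock and unit counts.
def peak_rates(boost_mhz: float, rops: int, texture_units: int, shaders: int):
    ghz = boost_mhz / 1000
    return {
        "pixel_fill_gpixels_s": rops * ghz,
        "filtering_gtexels_s": texture_units * ghz,
        "fp32_tflops": 2 * shaders * ghz / 1000,  # a fused multiply-add counts as two FLOPs
    }

print(peak_rates(1800, 64, 184, 2944))
# {'pixel_fill_gpixels_s': 115.2, 'filtering_gtexels_s': 331.2, 'fp32_tflops': 10.5984}
```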

As a Turing chip, TU104 boasts execution resources new to Nvidia gaming graphics cards. First up, TU104 has 384 total tensor cores for running deep-learning inference workloads, of which 368 are active on the RTX 2080. Compare that to 576 total and 544 active tensor cores on the RTX 2080 Ti. For accelerating bounding-volume hierarchy traversal and triangle intersection testing during ray-tracing operations, TU104 has 48 RT cores, 46 of which are active on the RTX 2080. TU102 boasts 72 RT cores in total, and 68 of those are active on the RTX 2080 Ti.

The RTX 2080 Founders Edition we’re testing today has the same swanky cooler as the RTX 2080 Ti FE on top of its TU104 GPU. Underneath that cooler’s fins, however, Nvidia has provided only an eight-phase VRM versus 13 on the 2080 Ti, and the card draws power through a six-pin and an eight-pin connector rather than the dual eight-pin plugs on the RTX 2080 Ti. Nvidia puts the stock board power of the 2080 FE at 225 W, down slightly from the GTX 1080 Ti’s 250-W spec but way up from the GTX 1080’s 180-W figure. Given the RTX 2080’s much larger die and the extra execution resources it brings to bear versus the GTX 1080 Founders Edition, however, the 45-W increase isn’t that surprising.

 

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out interesting test scenarios that one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
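For the curious, reducing a captured run to the numbers in our charts looks roughly like the sketch below. It assumes a PresentMon-style CSV with an MsBetweenPresents column of per-frame render times in milliseconds; treat the column name and the helpers as illustrative of the approach rather than our exact tooling.

```python
import csv
import statistics

def load_frame_times(path: str) -> list[float]:
    """Pull per-frame times (ms) out of an OCAT/PresentMon-style CSV capture."""
    with open(path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def run_summary(frame_times_ms: list[float]) -> dict:
    """Average FPS and 99th-percentile frame time for a single test run."""
    ordered = sorted(frame_times_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    return {
        "average_fps": 1000 / statistics.mean(frame_times_ms),
        "99th_percentile_ms": p99,
    }
```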

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor Intel Core i7-8086K
Motherboard Gigabyte Z370 Aorus Gaming 7
Chipset Intel Z370
Memory size 16 GB (2x 8 GB)
Memory type G.Skill Flare X DDR4-3200
Memory timings 14-14-14-34 2T
Storage Samsung 960 Pro 512 GB NVMe SSD (OS)
Corsair Force LE 960 GB SATA SSD (games)
Power supply Corsair RM850x
OS Windows 10 Pro with April 2018 Update

Thanks to Corsair, G.Skill, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Behold our fine Gigabyte Z370 Aorus Gaming 7 motherboard before it got buried beneath a pile of graphics cards and a CPU cooler:

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 4K (3840×2160) and 60 Hz, unless otherwise noted. Where in-game options supported it, we used HDR, adjusted to taste for brightness. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Shadow of the Tomb Raider

The final chapter in Lara Croft’s most recent outing is one of Nvidia’s headliners for the GeForce RTX launch. It’ll be getting support for RTX ray-traced shadows in a future patch. For now, we’re testing at 4K with HDR enabled and most every non-GameWorks setting maxed.


The RTX 2080 comes second only to its Turing cousin out of the gate, although its 99th-percentile frame time suffers a bit from an early patch of fuzziness. We retested the game several times in our location of choice and couldn’t make that weirdness go away, so perhaps some software polish is needed one way or another. Still, the performance potential demonstrated by the GeForce RTX cards is quite impressive. Remember that we’re gaming at 4K, in HDR, with almost all the eye candy turned up in a cutting-edge title.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. To fully appreciate this data, recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go past that mark with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

In less-demanding or better-optimized titles, it’s useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
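In code, the calculation behind these graphs amounts to something like the minimal sketch below: for every frame that misses a threshold, add up only the time spent past that threshold over the whole run.

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total milliseconds spent past a frame-time threshold over a test run."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# A toy three-frame run: only the 40-ms frame contributes at 16.7 ms,
# and it adds 40 - 16.7 = 23.3 ms of "badness" to the tally.
example_run = [12.0, 40.0, 15.5]
for threshold in (50.0, 33.3, 16.7, 8.3, 6.94):
    print(threshold, round(time_spent_beyond(example_run, threshold), 2))
```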

The RTX 2080 makes a strong statement for itself by these metrics. It spends less than half the time past 16.7 ms on tough frames versus the GTX 1080 Ti. Even though their performance may look similar at a glance, metrics like these really let us tease out the difference between the old and the new.

 

Project Cars 2


The RTX 2080 and GTX 1080 Ti end up dead-even in our one-minute romp through a section of the Spa-Francorchamps circuit. Let’s see whether our time-spent-beyond-X graphs can put any light between their bumpers.


A look at the 16.7-ms mark shows that the RTX 2080 and GTX 1080 Ti remain as closely matched as ever here, and even our more-demanding 11.1-ms and 8.3-ms thresholds let nary a ray pass between the two cards.

 

Hellblade: Senua’s Sacrifice


Hellblade relies on Unreal Engine 4 to depict its Norse-inspired environs in great detail, and playing it at 4K really brings out the work its developers put in. The RTX 2080 opens a small lead over the GTX 1080 Ti in our measure of performance potential, but the cards are within a hair’s breadth in our 99th-percentile measure of delivered smoothness. Let’s see if our time-spent-beyond-X measure can separate them.


Turns out it can. The RTX 2080 spends about five fewer seconds past 16.7 ms than the GTX 1080 Ti. Chalk up another win for the Turing card.

 

Gears of War 4


Gears of War 4 puts the lesser Turing and the greater Pascal cards nose-to-nose with one another once again. To the time-spent-beyond-X graphs we go.


By a nose, the GTX 1080 Ti holds an advantage in our time-spent-beyond graphs at 16.7 ms and 11.1 ms.

 

Far Cry 5



Far Cry 5 proves another title where our time-spent-beyond-X graphs make all the difference. The RTX 2080 spends a little over half the time that the GTX 1080 Ti does at the critical 16.7-ms threshold, although its lead narrows at the 11.1-ms mark. Still, the Turing card proves a smoother way to romp through the Montana landscape.

 

Assassin’s Creed Origins



Once again, despite their superficially similar performances in our highest-level measurements, the time-spent-beyond-16.7-ms mark hands the win to the RTX 2080, and by no small margin. Traveling to Egypt on the RTX 2080 is simply a smoother and more enjoyable experience.

 

Deus Ex: Mankind Divided


Deus Ex: Mankind Divided might be a little more aged than some of the games we’re looking at today, but that doesn’t mean it isn’t still a major challenge for any graphics card at 4K and max settings. The RTX 2080 pushes closer to a 60-FPS average than the GTX 1080 Ti, for sure, but its 99th-percentile frame time is just as troubled as the Pascal card’s.


Looking at our time-spent-past-33.3-ms graph puts the 1080 Ti and 2080 on even footing with regard to some of the rougher frames in our test run, although both cards put up more time than we’d like to see here. At 16.7 ms, however, the RTX 2080 spends a little less than two seconds on tough frames, and it holds onto that lead at the 11.1-ms mark.

 

Watch Dogs 2


Like Deus Ex, Watch Dogs 2 is an absolute hog of a game if you start dialing up its settings. Add a 4K resolution to the pile, and the game crushes most graphics cards to dust. Only the GeForce GTX 1080 Ti, RTX 2080, and RTX 2080 Ti even produce playable frame rates, on average, and their 99th-percentile frame times testify to the fact that there’s no putting a leash on this canine.


At 33.3 ms, the RTX 2080 fares a little better than the GTX 1080 Ti beneath Watch Dogs 2‘s heel, and that trend continues at the 16.7-ms mark. Neither card comes anywhere close to the RTX 2080 Ti’s performance, however, putting a point on just how demanding this game can be.

 

Wolfenstein II


This may not be Doom, but Wolfenstein II‘s Vulkan renderer still unleashes some form of unholy processing power from our Turing cards. The RTX 2080 clobbers the GTX 1080 Ti in average FPS and puts up lower 99th-percentile frame times while doing it.


Wolfenstein II puts up perhaps the most dramatic difference in our time-spent-beyond-X graphs in this entire review. While neither the 1080 Ti nor the 2080 puts meaningful amounts of time on the board at the 16.7-ms mark, the Turing card slices over nine seconds of trouble off the GTX 1080 Ti’s toils at 8.3 ms.

 

Conclusions

Put it up against the GTX 1080, and the GeForce RTX 2080 crushes its Pascal predecessor. We never expected anything less. Despite its name, though, the RTX 2080 is priced in the same bracket that the GeForce GTX 1080 Ti presently occupies, and that means there is no world in which the GTX 1080 is a reasonable point of comparison for the Turing middle child.


Putting the 1080 Ti and 2080 head-to-head with our 99th-percentile-FPS-per-dollar and average-FPS-per-dollar value metrics, the RTX 2080 Founders Edition offers only small improvements over the GTX 1080 Ti FE in today’s games. Our geometric means of all our results spit out about 9% better average FPS and about 8% higher 99th-percentile FPS for the RTX 2080 FE. Those improvements will cost about 14% more than the GTX 1080 Ti Founders Edition. Not a value proposition that’s going to make anybody spit out their coffee, to be sure, but it’s not bad.
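For reference, the value math behind those percentages looks roughly like the sketch below. The per-game numbers are hypothetical rather than our measured results, and the geometric mean keeps any single runaway title from dominating the summary.

```python
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

def fps_per_dollar(per_game_fps, price_usd):
    """Geometric mean of per-game FPS results, normalized by card price."""
    return geomean(per_game_fps) / price_usd

# Hypothetical average-FPS results across a test suite, not our actual data.
rtx_2080_fe = fps_per_dollar([62, 71, 55, 80, 67], 800)
gtx_1080_ti = fps_per_dollar([57, 66, 50, 74, 61], 700)
print(rtx_2080_fe, gtx_1080_ti)
```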

You’re getting a bit more polish on top of what was already peerless fluidity and smoothness in most titles today, and considering the uncompetitiveness of today’s high-end graphics market in general, a 5% vigorish for the green team above linear price-performance gains seems positively restrained. In that light, TU104’s tensor cores and RT cores really aren’t that expensive to get into at all.

If you’re focused on getting the most bang-for-your-buck right this second, it might be tempting to get a GTX 1080 Ti on discount as stocks of those cards dwindle, but I’m not entirely sure that’s the best use of your cash for the long term. Pascal performance is as good as it’s ever going to be, while Turing opens new avenues of performance and image quality improvements for tomorrow’s games.

We’ve already been intrigued by what’s possible from the demos we’ve seen of DLSS, and we expect developers will find all sorts of ways to play with even the sparse ray-tracing possible with Turing. Even if you discount the possibilities of tensor cores and RT cores entirely, titles that support half-precision math for some operations, like Wolfenstein II, perform startlingly better on Turing. That’s yet another avenue that developers might run down more and more often in the future.

Yes, gamers are going to be waiting on those features to bear fruit, but the upside could be considerable, and it’s not as though Nvidia isn’t courting developers to use its features. There are plenty of games in the pipe with DLSS support at a minimum, and a handful of developers have already run up a flag for ray-traced effects in their games. Those are just the capabilities that Nvidia has put a bow on, too—Turing mesh shaders could change the way developers envision highly detailed scenes with complex geometry, and that stuff isn’t ever coming to Pascal, either. With so many potential routes to better performance from this architecture, it seems unreasonably pessimistic to say that none of Nvidia’s bets will pay off.

On the basis of a $100 difference, it could be smarter to get on the Turing curve and risk a bit of a wait than it is to tie your horse to an architecture that will never benefit from those future developments, especially given the lengthening useful life of computing hardware these days. That’s especially true if you’re a pixel freak, and if you’re shopping for an $800 graphics card, how can you not be? If GTX 1080 Ti prices fall well below Nvidia’s $700 sticker, we might be telling another story, but a look at e-tailer shelves right now doesn’t suggest that’s happening en masse yet.

Speaking of pixel freaking, if you’re reading The Tech Report, you should already be keenly aware of differences in delivered smoothness among graphics cards. As useful as our 99th-percentile frame times are for determining those differences at a glance, our time-spent-beyond-X measurements help tell that tale in even more depth.

Here’s a little thought experiment with today’s games that might put a point on just how much smoother the RTX 2080 can be versus the GTX 1080 Ti. The 2080 spends far less time in aggregate on frames that take longer than 16.7 ms to render—31% less than the GTX 1080 Ti at 4K across all of our titles. We think that’s a difference in performance that you’ll notice.

To be fair, we don’t experience games in aggregate, but it’s still worth noting that the 2080 spends less time—often significantly less time—on tough frames by this measure than the 1080 Ti does in the majority of our titles. The performance of the RTX 2080 and GTX 1080 Ti may appear similar at a high level, but where the rubber meets the road in our advanced metrics, the 2080 easily matches the 1080 Ti and often delivers superior performance. I think that’s a difference worth the 2080’s extra cost.

In any case, two things are true as we wrap up our first week with Turing. One is that we’ll frequently be revisiting these cards down the line as games that support their capabilities emerge. The other is that the RTX 2080 is an exceptionally fine graphics card today, and if you have the dosh to spend, it can be a better performer in noticeable ways versus the superficially-similar GTX 1080 Ti, even as prices for the Pascal card fall. You really can’t lose either way. Whether Turing fully comes into its own with time remains to be seen, but I’m optimistic the wait will be worth it. Now, that wait begins.
