Nvidia's GeForce RTX 2080 graphics card reviewed


Smaller Turing takes on bigger Pascal

Nvidia's GeForce RTX 2080 Ti has already proven itself the fastest single graphics card around by far for 4K gaming, but the $1200 price tag on the Founders Edition card we tested—and even higher prices for partner cards at this juncture—mean all but the one percent of the one percent are going to be looking at cheaper Turing options.

So far, that mission falls to the GeForce RTX 2080. At a suggested price of $700 for partner cards or $800 for the Founders Edition we're testing today, the RTX 2080 is hardly cheap. To be fair, Nvidia introduced the GTX 1080—which this card ostensibly replaces—at $600 for partner cards and $700 for its Founders Edition trim, but that card's price fell to $500 after the GTX 1080 Ti elbowed its way onto the scene. Right now, pricing for the RTX 2080 puts it in contention with the GeForce GTX 1080 Ti. That's not a comfortable place to be, given that software support for Turing's unique features is in its earliest stages. Our back-of-the-napkin math puts the RTX 2080's rasterization capabilities about on par with those of the 1080 Ti, and rasterization resources are the dukes the middle-child Turing card has to put up today.

On top of that, plenty of gamers are just plain uncomfortable with any generational price increase from the GTX 1080 to the RTX 2080. That's because recent generational advances in graphics cards have delivered new levels of graphics performance to the same price points we've grown used to. For example, AMD was able to press Nvidia hard on this point as recently as the Kepler-Hawaii product cycle, most notably with the $400 R9 290. Once Maxwell arrived, the $330 GeForce GTX 970 thoroughly trounced the Kepler GTX 770 on performance and the R9 290 on value, and the $550 GTX 980 outclassed the GTX 780 Ti for less cash. The arrival of the $650 GTX 980 Ti some months later didn't push lesser GeForce cards' prices down much, but it did prove an exceptionally appealing almost-Titan. AMD delivered price- and performance-competitive high-end products shortly after the 980 Ti's release in the form of the R9 Fury X and R9 Fury.

Overall, life for PC gamers in the Maxwell-Hawaii-Fiji era was good. Back then, competition from the red and green camps was vigorous, and that competition provided plenty of reason for Nvidia and AMD to deliver more performance at the same price points—or at least to cut prices on existing products when new cards weren't in the offing.

Pascal's release in mid-2016 echoed this cycle. At the high end, the GTX 1080 handily outperformed the GTX 980 Ti, while the GTX 1070 brought the Maxwell Ti card's performance to a much lower price point. AMD focused its contemporaneous efforts on bringing higher performance to more affordable price points with new chips on a more efficient fabrication process, and Nvidia responded with the GTX 1060, GTX 1050 Ti, and GTX 1050. Some months later, we got a Titan X Pascal at $1200, then a GTX 1080 Ti at $699. The arrival of the 1080 Ti pushed GTX 1080 prices down to $500. Life was, again, good.

The problem today is that AMD has lost its ability to keep up with Nvidia's high-end product cycle. The RX Vega 56 and RX Vega 64 arrived over a year after the GTX 1070 and GTX 1080, and they only achieved performance parity with those cards while proving much less power-efficient. Worse, Vega cards proved frustratingly hard to find for their suggested prices. Around the same time, a whole lot of people got the notion to do cryptographic hashing with graphics cards, and we got the cryptocurrency boom. Life was definitely not good for gamers from late summer 2017 to the present, but it wasn't entirely graphics-card makers' fault.

Cryptocurrency miners' interest in graphics cards has waned of late, so graphics cards are at least easier to buy for gamers of every stripe. The problem for AMD is that Vega 56 and Vega 64 cards are still difficult to get for anything approaching their suggested prices, even as Pascal performance parity has remained an appealing prospect for gamers without 4K displays. On top of that, AMD has practically nothing new on its Radeon roadmap for gamers at any price point for a long while yet. Sure, AMD is fabricating a Vega compute chip at TSMC on 7-nm FinFET technology, but that part doesn't seem likely to descend from the data center any time soon.

No two ways about it, then: the competitive landscape for high-end graphics cards right now is dismal. As any PC enthusiast knows, a lack of competition in a given market leads to stagnation, higher prices, or both. In the case of Turing, Nvidia is still taking the commendable step of pushing performance forward, but it almost certainly doesn't feel threatened by AMD's Radeon strategy at the moment. Hence, we're getting high-end cards with huge, costly dies and price increases to match whatever fresh performance potential is on tap. Nvidia is a business, after all, and businesses exist to make money. The green team's management can't credibly ignore simple economics.


A block diagram of the TU104 GPU. Source: Nvidia

On that note, the RTX 2080 draws its pixel-pushing power from a smaller GPU than the 754-mm² TU102 monster under the RTX 2080 Ti's heatsink. The still-beefy 545-mm² TU104 maintains the six-graphics-processing-cluster (GPC) organization of TU102, but each GPC only contains eight Turing streaming multiprocessors, or SMs, versus 12 per GPC in TU102. Those 48 SMs offer a total of 3072 FP32 shader ALUs (or CUDA cores, if you prefer). Thanks to Turing's concurrent integer execution path, those SMs also offer a total of 3072 INT32 ALUs. Nvidia has disabled two SMs on TU104 to make an RTX 2080. Fully operational versions of this chip are reserved for the Quadro RTX 5000.
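
If you want to sanity-check those totals, the arithmetic is simple: the per-SM figures (64 FP32 and 64 INT32 ALUs each) are just TU104's totals divided across its 48 SMs. A quick back-of-the-envelope sketch in Python:

    # Turing shader-ALU totals from SM counts. The per-SM figures follow
    # from the numbers above: 3072 ALUs across TU104's 48 SMs.
    FP32_PER_SM = 64
    INT32_PER_SM = 64

    def shader_alus(sms):
        return sms * FP32_PER_SM, sms * INT32_PER_SM

    print(shader_alus(48))  # full TU104: (3072, 3072)
    print(shader_alus(46))  # RTX 2080, two SMs disabled: (2944, 2944)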

                 Boost clock  ROP pixels/  INT8/FP16       Shader      Memory path  Memory     Memory
                 (MHz)        clock        textures/clock  processors  (bits)       bandwidth  size
RX Vega 56       1471         64           224/112         3584        2048         410 GB/s   8 GB
GTX 1070         1683         64           120/120         1920        256          259 GB/s   8 GB
RTX 2070 FE      1710         64           144/144         2304        256          448 GB/s   8 GB
GTX 1080         1733         64           160/160         2560        256          320 GB/s   8 GB
RX Vega 64       1546         64           256/128         4096        2048         484 GB/s   8 GB
RTX 2080 FE      1800         64           184/184         2944        256          448 GB/s   8 GB
GTX 1080 Ti      1582         88           224/224         3584        352          484 GB/s   11 GB
RTX 2080 Ti FE   1635         88           272/272         4352        352          616 GB/s   11 GB
Titan Xp         1582         96           240/240         3840        384          547 GB/s   12 GB
Titan V          1455         96           320/320         5120        3072         653 GB/s   12 GB

The massive TU104 die only invites further comparisons between the RTX 2080 and the GTX 1080 Ti. The GP102 chip in the 1080 Ti measures 471 mm² in area, although that area is given over entirely to rasterization resources. That means GP102 has more ROPs than TU104 has in its entirety—88 of which are enabled on the GTX 1080 Ti—and a wider memory bus, at 352 bits versus 256 bits. Coupled with GDDR5X RAM running at 11 Gbps per pin, the GTX 1080 Ti boasts 484 GB/s of memory bandwidth.

Like the RTX 2080 Ti, the 2080 relies on the latest-and-greatest GDDR6 RAM to shuffle bits around. On this card, Nvidia taps 8 GB of GDDR6 running at 14 Gbps per pin on a 256-bit bus for a total of 448 GB/s of memory bandwidth. Not far off the 1080 Ti, eh? While the GTX 1080 Ti has a raw-bandwidth edge on the 2080, we know that the Turing architecture boasts further improvements to Nvidia's delta-color-compression technology that promise higher effective bandwidth than the raw figures for GeForce 20-series cards would suggest. The TU104 die has eight memory controllers capable of handling eight ROP pixels per clock apiece, for a total of 64. All of TU104's ROPs are enabled on the RTX 2080.
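
The bandwidth math is easy enough to verify yourself: per-pin data rate times bus width, divided by eight bits per byte. A minimal sketch:

    # Peak memory bandwidth = data rate per pin (Gbps) * bus width (bits) / 8.
    def peak_bandwidth_gbs(gbps_per_pin, bus_bits):
        return gbps_per_pin * bus_bits / 8

    print(peak_bandwidth_gbs(14, 256))  # RTX 2080, GDDR6: 448.0 GB/s
    print(peak_bandwidth_gbs(11, 352))  # GTX 1080 Ti, GDDR5X: 484.0 GB/s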

                Peak pixel   Peak bilinear  Peak           Peak FP32
                fill rate    filtering      rasterization  shader
                (Gpixels/s)  INT8/FP16      rate           arithmetic
                             (Gtexels/s)    (Gtris/s)      rate (TFLOPS)
RX Vega 56      94           330/165        5.9            10.5
GTX 1070        108          202/202        5.0            6.5
RTX 2070 FE     109          246/246        5.1            7.9
GTX 1080        111          277/277        6.9            8.9
RX Vega 64      99           396/198        6.2            12.7
RTX 2080        115          331/331        10.8           10.6
GTX 1080 Ti     139          354/354        9.5            11.3
RTX 2080 Ti     144          445/445        9.8            14.2
Titan Xp        152          380/380        9.5            12.1
Titan V         140          466/466        8.7            14.9
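
All of these peak figures fall out of the unit counts and boost clocks in the first table. Here's the RTX 2080's row recomputed as an illustrative sketch, assuming one triangle rasterized per GPC per clock and two FLOPS per fused multiply-add:

    # Recompute the RTX 2080 FE's peak rates from its unit counts.
    boost_ghz = 1.800          # boost clock, in GHz
    rops, tmus, shaders, gpcs = 64, 184, 2944, 6

    print(rops * boost_ghz)                # 115.2 Gpixels/s pixel fill
    print(tmus * boost_ghz)                # 331.2 Gtexels/s bilinear filtering
    print(gpcs * boost_ghz)                # 10.8 Gtris/s rasterization
    print(2 * shaders * boost_ghz / 1000)  # ~10.6 TFLOPS FP32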

As a Turing chip, TU104 boasts execution resources new to Nvidia gaming graphics cards. First up, TU104 has 384 total tensor cores for running deep-learning inference workloads, of which 368 are active on the RTX 2080. Compare that to 576 total and 544 active tensor cores on the RTX 2080 Ti. For accelerating bounding-volume hierarchy traversal and triangle intersection testing during ray-tracing operations, TU104 has 48 RT cores, 46 of which are active on the RTX 2080. TU102 boasts 72 RT cores in total, and 68 of those are active on the RTX 2080 Ti.
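
Those totals imply a tidy per-SM allotment: eight tensor cores and one RT core for every Turing SM, whether the chip is TU104 or TU102. The same sort of sketch bears that out:

    # Tensor- and RT-core totals from SM counts (8 tensor cores and
    # 1 RT core per SM, inferred from the figures above).
    def turing_extras(sms):
        return {"tensor": sms * 8, "rt": sms}

    print(turing_extras(46))  # RTX 2080:    {'tensor': 368, 'rt': 46}
    print(turing_extras(68))  # RTX 2080 Ti: {'tensor': 544, 'rt': 68}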

The RTX 2080 Founders Edition we're testing today has the same swanky cooler as the RTX 2080 Ti FE on top of its TU104 GPU. Underneath that cooler's fins, however, Nvidia has provided only an eight-phase VRM versus 13 phases on the 2080 Ti, and the card draws power through one six-pin and one eight-pin connector rather than the dual eight-pin plugs on the RTX 2080 Ti. Nvidia puts the stock board power of the 2080 FE at 225 W, down slightly from the GTX 1080 Ti's 250-W spec but way up from the GTX 1080's 180-W figure. Given the RTX 2080's much larger die and the extra hardware it has to feed versus the GTX 1080 Founders Edition, however, the 45-W increase isn't that surprising.