
Nvidia's GeForce RTX 2080 Ti graphics card reviewed


Turing, tested

Earlier this year, a fellow editor and I did some pie-in-the-sky thinking about Nvidia's plans for its next-generation GPUs. We wondered how the company would continue the impressive generation-to-generation performance improvements it had been delivering since Maxwell. We guessed that the AI-accelerating smarts in the Volta architecture might be one way the green team would set apart its next-generation products, but past that, we had nothing.

Turns out the company did us one or two better. With the Turing architecture's improved tensor cores and unique RT cores, Nvidia is shipping a pair of intriguing new technologies in its next-generation chips while also bolstering traditional shader performance with parallel execution paths for floating-point and integer workloads. On top of that, the company introduced a whole new way of programming geometry-related shaders, called mesh shaders, which promises to break the draw-call bottleneck at the CPU for geometry-heavy scenes. There's a lot going on in Turing, to put it mildly. Those interested should consult Nvidia's white paper for more detail.


A logical representation of the TU102 GPU. Source: Nvidia

My speculation about the Turing architecture several weeks back turned out to be more correct than not, at least, even with the wildly incomplete info we had on hand. The GeForce RTX 2080 Ti that we're testing this morning and the Quadro RTX 8000 that debuted at SIGGRAPH both use versions of one big honkin' GPU called TU102. At a high level, this 754 mm² chip—754 mm²!—hosts six graphics processing clusters (GPCs) in Nvidia parlance, each with 12 Turing streaming multiprocessors (SMs) inside. The RTX 2080 Ti has four of its SMs disabled for a total of 4352 shader ALUs (or "CUDA cores," if you like) out of a potential 4608.
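For those who like to check the math, the shader counts fall straight out of those figures. Here's a quick sketch in Python; the 64-FP32-ALU-per-SM figure is Turing's published SM layout rather than something stated above:

    # Quick sanity check on the TU102 shader counts quoted above.
    # The 64-ALU-per-SM figure comes from Nvidia's published Turing SM
    # layout, not from this article.
    gpcs = 6
    sms_per_gpc = 12
    alus_per_sm = 64
    disabled_sms = 4                                   # fused off on the RTX 2080 Ti

    full_sms = gpcs * sms_per_gpc                      # 72 SMs on a full TU102
    full_alus = full_sms * alus_per_sm                 # 4608 shader ALUs
    rtx_2080_ti_alus = (full_sms - disabled_sms) * alus_per_sm

    print(full_alus, rtx_2080_ti_alus)                 # 4608 4352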

The full TU102 chip has 96 ROPs, but as a slightly cut-down part, the RTX 2080 Ti has 88 of those chiclets enabled. In turn, the highest-end Turing GeForce so far boasts a 352-bit bus to 11 GB of memory. TU102 gets to play with cutting-edge, 14-Gbps GDDR6 RAM, though, up from the 11 Gbps per-pin transfer rates of GDDR5X on the GTX 1080 Ti. That works out to 616 GB/s of raw memory bandwidth. Nvidia also claims to have improved the delta-color-compression routines it's been employing since Fermi to eke out more effective bandwidth from the RTX 2080 Ti's bus. Between GDDR6's higher per-pin clocks and the improved color-compression smarts of Turing itself, Nvidia claims 50% more effective bandwidth from TU102 compared to the GP102 chip in the GTX 1080 Ti.
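That 616 GB/s figure is simply the bus width times the per-pin data rate. A rough sketch of the arithmetic follows; the GTX 1080 Ti's 352-bit bus is that card's published spec rather than a number from this article, and the comparison shows how much of Nvidia's claimed 50% effective-bandwidth gain has to come from better compression rather than raw throughput:

    # Raw bandwidth = bus width x per-pin data rate.
    bus_width_bits = 352
    gddr6_gbps_per_pin = 14
    rtx_2080_ti_gbs = bus_width_bits * gddr6_gbps_per_pin / 8     # 616.0 GB/s

    # GTX 1080 Ti for reference: 352-bit bus (published spec) at 11 Gbps.
    gtx_1080_ti_gbs = 352 * 11 / 8                                # 484.0 GB/s

    print(rtx_2080_ti_gbs / gtx_1080_ti_gbs)    # ~1.27x raw, before compression gains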

Despite its monstrous and monstrously complex die, the RTX 2080 Ti Founders Edition actually comes with a slightly higher boost clock spec than the smaller GP102 die before it, at 1635 MHz, versus 1582 MHz for the GTX 1080 Ti. Nvidia calls that a factory overclock—if you believe overclocks are something that can come with a warranty, at least. In practice, the GPU Boost algorithm of Nvidia graphics cards will likely push Turing chips to similar real-world clock speeds, given adequate cooling. We'll need to test that for ourselves soon.

Aside from the big and future-looking changes in Turing chips themselves, Nvidia's new pricing strategy for the RTX 2070, RTX 2080, and RTX 2080 Ti is going to make for some tricky generation-on-generation comparisons. The $600 RTX 2070 is $150 more expensive than the $450 GTX 1070 Founders Edition. The $800 RTX 2080 Founders Edition sells for $100 more than the GTX 1080 Founders Edition did at launch—and as much as $300 more than that card's final suggested-price drop to $500. In turn, the RTX 2080 Ti Founders Edition commands a whopping $500 more than the GTX 1080 Ti's $700 sticker, at $1200.

In the past, then, the RTX 2070 might have been called an RTX 2080, the RTX 2080 a 2080 Ti, and the RTX 2080 Ti some kind of Titan. Turing's naming and pricing seem meant to let Nvidia claim massive generation-to-generation performance increases versus Pascal cards by drawing parallels between model names while glossing over those higher sticker prices.

Dollar-for-dollar, however, keep in mind that the RTX 2080's $700 partner-card suggested price and the Founders Edition's $800 price tag make the $699-and-up GeForce GTX 1080 Ti a better point of comparison for Turing's middle child. The RTX 2080 Ti Founders Edition, for its part, is priced nearly identically to the Titan Xp. We don't have a Titan Xp or Titan V handy to test our RTX 2080 Ti against, but our back-of-the-napkin math for those cards and our theoretical measures of peak graphics performance put the RTX 2080 a lot closer to the GTX 1080 Ti than not. On a price-to-performance basis, then, the improvements in Turing for traditional rasterization workloads could be more modest than Nvidia's claims suggest.
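For the curious, here's roughly what that napkin math looks like: peak FP32 throughput as shader ALUs times two FLOPs per clock (for a fused multiply-add) times boost clock. The shader counts and boost clocks below are Nvidia's published Founders Edition specs, not figures drawn from this article:

    # Back-of-the-napkin peak FP32 throughput. Shader counts and boost
    # clocks are Nvidia's published Founders Edition specs.
    def peak_fp32_tflops(alus, boost_mhz):
        return alus * 2 * boost_mhz * 1e6 / 1e12

    cards = {
        "RTX 2080 FE": (2944, 1800),
        "GTX 1080 Ti FE": (3584, 1582),
        "RTX 2080 Ti FE": (4352, 1635),
    }

    for name, (alus, mhz) in cards.items():
        print(f"{name}: {peak_fp32_tflops(alus, mhz):.1f} TFLOPS")
    # RTX 2080 FE: 10.6 TFLOPS
    # GTX 1080 Ti FE: 11.3 TFLOPS
    # RTX 2080 Ti FE: 14.2 TFLOPS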

On top of the naming confusion, the two suggested-price tiers for Turing cards—a cheaper one for partner cards and a more expensive one for Nvidia's Founders Editions—seem guaranteed to cause double-takes. At least in the early days of Turing, I expect Nvidia's board partners will see no reason to leave a single dollar on the table with those separate, lower prices when Founders Edition cards command more money for what is essentially the same product once the rubber hits the road. In the real world, the Founders Edition suggested price is the de facto suggested price, and retailer listings are already bearing that out.