Nvidia’s RTX Super and AMD’s Radeon RX 5700 series graphics cards reviewed

Nvidia’s RTX real-time ray-tracing

I’d like to do some additional experimentation with Nvidia’s RTX ray-tracing, but, even more than with Radeon Anti-Lag, I don’t really have a satisfactory setup for RTX testing. The only RTX game I have right now is Quake II RTX, and while I’m rather taken with Nvidia’s modification of the classic id Software title, it operates in a radically different fashion from other RTX games.

Quake II RTX is difficult to appreciate from screenshots alone.

Where Battlefield V uses RTX to simulate realistic reflections and Shadow of the Tomb Raider uses it for lifelike lighting and shadows, Quake II RTX relies on RTX entirely to light and shade its scenes. Never mind that it mostly uses assets from 1997. I do own the Chinese indie title Bright Memory (which was recently announced to be getting RTX support in an upcoming patch) as well as Remedy’s Control (releasing August 27), so perhaps I can investigate RTX performance some other time.

Even so, I have a fair bit to say on the topic of RTX. Regardless of what screeching fanboys insist, ray-tracing in some form is almost certainly the next big step in real-time computer graphics. Rendering professionals have always known that ray-tracing offers superior quality compared to rasterization; it simply wasn’t computationally feasible for games—at least, until now.

Of course, some folks say it still isn’t computationally feasible. Contemporary RTX implementations cast rather meager numbers of rays that bounce relatively few times compared to offline ray-tracers. RTX games also lean heavily on an AI-powered de-noising filter to smooth out the grainy results. Despite all those concessions, enabling RTX still exacts an enormous performance penalty in every game so far.
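To make the sampling problem concrete, here’s a toy Monte Carlo sketch in Python. It isn’t how any shipping RTX title is written, and the scene (a uniform sky over a white diffuse surface) is invented for illustration, but it shows why per-pixel estimates built from a handful of rays come out grainy, and why a de-noising pass is needed to hide the variance:

```python
import random
import statistics

def pixel_estimate(rays):
    """Monte Carlo estimate of the light reflected at one surface point.
    Toy scene: a uniform sky of brightness 1.0 over a white Lambertian
    surface, so the exact answer is 1.0. With uniform hemisphere
    sampling, cos(theta) is uniformly distributed on [0, 1], and the
    per-ray estimator reduces to 2 * cos(theta)."""
    total = 0.0
    for _ in range(rays):
        cos_theta = random.random()
        total += 2.0 * cos_theta
    return total / rays

# Noise (standard deviation across many "pixels") vs. rays per pixel:
for rpp in (1, 4, 64, 1024):
    estimates = [pixel_estimate(rpp) for _ in range(2000)]
    print(f"{rpp:>4} rays/pixel -> stddev {statistics.stdev(estimates):.3f}")

# At one ray per pixel, the noise is roughly 0.58 on a signal of 1.0.
# That's the grain the AI de-noiser has to clean up.
```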

Personally, I think the end product looks great, more often than not. The question of whether those end results are worth kneecapping your game’s performance is a matter of taste, but judging from the buzz around the web, it seems like a significant portion of gamers aren’t exactly enthusiastic about the idea. The concept of lopping off half your framerate for improved image quality does seem to run contrary to the trend toward high-refresh monitors.

Hardware-savvy folks I respect have drawn parallels between GeForce RTX and the technologies introduced by the old GeForce FX and GeForce 3 GPUs. The idea behind the comparison is that current GeForce cards’ relatively poor RTX performance (compared to their non-RTX performance) is essentially a matter of “growing pains” as we move into a new, hybrid rendering paradigm. The implication is that the next RTX-capable architecture will offer dramatic gains in ray-tracing performance.

I hope that’s the case. However, RT cores don’t perform AI inferencing; that job falls to the tensor cores that handle de-noising. If Nvidia is to remain a “one-architecture company,” it’s difficult to imagine the next generation of GeForce cards offering the kind of leap in performance we saw between the GeForce 3 and the GeForce 4 Ti 4600, or between the GeForce FX 5800 and the GeForce 6800 Ultra. Instead, it seems more likely that we’ll see another modest gain in RTX performance while significant portions of the chips’ die area go toward more tensor cores.

There’s also AMD’s position to consider. Currently, Nvidia’s only real competitor in the PC graphics card space has no ray-tracing-specific hardware, despite the existence of a vendor-agnostic accelerated ray-tracing standard in Microsoft’s DXR. I’ve talked about this before in front-page news posts, but both Sony and Microsoft have mentioned ray-tracing in hype sessions for their next-generation consoles. AMD is, of course, supplying the hardware for both machines.

That doesn’t necessarily mean anything; as Crytek’s impressive Neon Noir demo shows, ray-tracing can be performed (after a fashion) on modern graphics hardware without specialized acceleration. Given that both Sony and Microsoft have specifically mentioned “Navi” in reference to their upcoming hardware, and considering that Navi as we know it has no dedicated ray-tracing hardware, that may be the route the console makers are taking—or it may simply be buzzword bluster.
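The underlying math really is vendor- and hardware-agnostic; dedicated RT cores only accelerate it. As a rough illustration (a toy Python sketch, not Crytek’s compute-shader code), here’s the classic ray-sphere intersection test at the heart of any ray-tracer:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Solve the quadratic for where a ray meets a sphere. Plain
    arithmetic like this can run on any GPU's shader cores or even a
    CPU; specialized hardware just does it (much) faster. Returns the
    distance to the nearest hit in front of the origin, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# A ray fired down -z from the origin hits a unit sphere centered at
# z = -5 at a distance of 4 units.
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```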

A brief look at power consumption and efficiency

Modern mid-range and high-end graphics cards are so much more power-thirsty than the CPUs they support that the classic PC expansion slot form factor has become inconvenient and even a bit ridiculous. An in-depth discussion of that topic will have to wait for another day, though, because I’ve got some graphics cards here that need their juice consumption examined.

To test each card’s idle power draw, I rebooted the machine after installing the card’s drivers, and then waited for 5 minutes to let Windows finish its various startup tasks. You’ll notice there’s no chart of idle power numbers here. I’m not completely confident in my numbers—I had some problems with Windows 10 background tasks—but by my measure, every card was within 15 watts of my 85-W baseline. It seems idle power draw is more or less a solved problem for both AMD and Nvidia.

Afterward, I loaded up Monster Hunter World at 3840×2160—so as to generate the maximum possible GPU load—and loaded into my test spot in the game’s Research Base. I watched the numbers flip by on my Kill-a-Watt for 30 seconds, recording the minimum and maximum values I saw, then averaging the two to get the numbers above.
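In other words, the load figure for each card is simply the midpoint of the range the meter displayed. A trivial sketch of that arithmetic, with made-up readings:

```python
def estimate_load_power(min_watts, max_watts):
    """Midpoint of the swing observed on the power meter over 30 s."""
    return (min_watts + max_watts) / 2.0

# Hypothetical example: the meter bounced between 305 W and 335 W.
print(estimate_load_power(305, 335))  # 320.0
```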

There’s nothing particularly mind-blowing in the results; the Radeon RX 5700 XT is ever-so-slightly faster than the GeForce RTX 2060 Super, and it draws a bit more power. The GeForce RTX 2070 Super and RTX 2080 Super are based on the same chip, so their similarity is simply explained, and the big Pascal part with its 352-bit memory bus draws the most power.

As I mentioned way back on the testing methods page, the Radeon RX 580 and GeForce GTX 1660 Ti cards were loaners from friends, and I unfortunately didn’t have the tools on hand to measure their power consumption at the time.

I went ahead and charted the power consumption of each card against its aggregate 99th-percentile frame time. It’s a somewhat awkward measure of efficiency, but it gives you a reasonable impression of where each card falls compared to the others.
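If you’d rather condense the scatter into a single figure of merit, one option is to convert each card’s 99th-percentile frame time into its FPS equivalent and divide by load power. The chart doesn’t do this (it plots the two values against each other), and the numbers below are hypothetical, but the arithmetic looks like this:

```python
def efficiency_fps_per_watt(frametime_99th_ms, load_watts):
    """Convert a 99th-percentile frame time (in ms) into the equivalent
    frame rate, then normalize by measured load power."""
    fps_equivalent = 1000.0 / frametime_99th_ms
    return fps_equivalent / load_watts

# Hypothetical card: a 16.7-ms 99th-percentile frame time at 220 W.
print(round(efficiency_fps_per_watt(16.7, 220), 3))  # ~0.272 FPS/W
```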

The latest GeForce cards are quite efficient overall, as expected, but AMD’s showing here isn’t awful. For a while now, enthusiasts have been shouting from the rooftops that Radeon cards become remarkably efficient when undervolted. Perhaps the slightly lower clocks of the base-model RX 5700 help reduce its power draw beyond the cuts made to its cores. Whatever the reason, it’s a fair bit more efficient than its faster sibling.

Meanwhile, the RTX 2070 Super and RTX 2080 Super in comparison to the GeForce GTX 1080 Ti give a nice little demonstration of the advancements made in the refreshed Turing: you can stick with the same power budget and get improved performance, or keep the same performance and save a little power. Impressive improvements, considering not only all the extra capabilities of the Turing GPUs, but also that Nvidia is still using a previous-generation fabrication process.

Comments

Mr Bill:

Zak, excellent review. You always have interesting observations for each game and how the cards deal with the game. That in-the-second commentary is what brings the nerds to the yard.

willmore:

Did you crush that 5700? Or is that really how the shroud looks?

willmore:

Sorry, it was the XT model that looks crushed.

K-L-Waster:

That is in fact what the reference shroud looks like.

(As if peeps didn’t have enough reasons to hold out for 3rd party coolers…)

willmore:

I wouldn’t call it ugly, but I really don’t like the look.

Kretschmer:

What this review tells me is that my 1080Ti held up really, really well. I got 2070 Super performance for two extra years at a $150 premium.

Sure, no ray tracing, but I’d rather buy into that tech when it becomes better developed.

plonk420:

thank you SOOOO much for this review! Time Spent Beyond X ms and average frame time are now the first things I look at in reviews

LoneWolf15:

One note I don’t think I recall being brought up (the lack of an index drop-down in the pages listed makes me loath to go back and check): everything I’ve seen says that if Navi is your card, do not buy a reference design. Some reference blowers are quite good (Nvidia’s Pascal blower was; I was very happy with the noise level on two 1070 Founders Edition cards at full load, and a single one was whisper quiet). AMD’s blower is not. It is both loud and a poor cooler; Sapphire’s own dual-fan Pulse 5700 XT runs twenty degrees cooler. Twenty degrees Celsius…

LoneWolf15:

I should add, this is not meant to denigrate AMD’s Navi, just AMD’s reference cooler design.

I think the Sapphire cards (specifically, the Pulse 5700 XT or Pulse 5700), at ten bucks more than the reference design, are competitive and worth looking into. But they may be hard to find at the moment.

https://www.newegg.com/sapphire-radeon-rx-5700-xt-100416p8gl/p/N82E16814202349?Description=sapphire%20pulse%205700xt&cm_re=sapphire_pulse_5700xt-_-14-202-349-_-Product

https://www.newegg.com/sapphire-radeon-rx-5700-100417p8gl/p/N82E16814202350?Description=sapphire%20pulse%205700&cm_re=sapphire_pulse_5700-_-14-202-350-_-Product

anotherengineer:

“For gamers like myself who use myriad monitors—I’m currently using five—”

got a matrox card? 😉

when are the $235 cards coming out?!?!

Mr Bill:

I have a Matrox G650. It’s a fabulous office card for multi-monitor work but far too slow for gaming.

StuG:

I feel like a lot more cards should have been included in the conclusion graphs (given that they were already charted in previous reviews) so we could see where a larger range of cards would fall (even if those dots were marked as coming from previous reviews or something).

Captain Ned:

If those cards were not benchmarked on the same rigs as used for this review, the comparison would not be univariate. Multivariate is what proper reviews try to avoid.

Yomom:

So sad that great content like this has to get fakked by this horrible horrible generic template.

DPete27:

Nvidia’s new Ultra Low Latency setting description may shed some light on AMD’s implementation:
https://www.nvidia.com/en-us/geforce/news/gamescom-2019-game-ready-driver/

By reducing maximum pre-rendered frames to 1, you move to just-in-time frame scheduling. Sure, that would improve latency, but it would also leave you susceptible to frame-time spikes if a frame takes a little longer than expected. I suspect that using VRR can reduce this effect, but it would still be interesting to test. How many pre-rendered frames is optimal?

DPete27:

Also of note:
“in DX12 and Vulkan titles, the game decides when to queue the frame”
would be nice to include in your review write-up.

Jesse:

I usually set mine to 2 globally in the Nvidia control panel – enough for double-buffered vsync if I want it, and much less latency in game engines that pre-render like 5 frames by default.

Jason Deford:

So… Are the Radeon Vega cards an evolutionary dead end?

I was disappointed to see the Nvidia 1080 Ti card in testing but not the Radeon RX Vega 64. If you’re making a generational comparison including the Nvidia 10-series and its follow-ons, I’d think you should include the Radeon Vega series in comparison to the RX 5700 series. The RX 580 card shown in the comparison isn’t in the same price/performance range as the newer cards being benched.

Krogoth:

Vega will continue on as a general-compute solution while RDNA focuses more on graphical prowess.

StuG:

This was exactly what I thought as well. No VII/64/56?

tfp:

I was wondering the same; that said, a check on Newegg shows that VII availability is very limited. Is AMD running into production issues with the VII?

Krogoth:

Radeon VII was a stopgap solution until Navi was ready. It was a way to clear out excess Vega 20 stock that ate too much power for ISV customers.

Navi already bests Vega 20 at gaming performance when memory bandwidth isn’t a factor.

LoneWolf15:

Yup. Radeon VII owners are sadly being left high and dry.

It was a lousy buy even for the most die-hard AMD fans, and its short market time is pretty disappointing for anyone who bought one.

jihadjoe:

My guess is AMD doesn’t really want to make any more of the VII than is necessary. It’s relatively cheap for something that uses such expensive components and is built on an expensive process.

AMD would rather 90% of those chips go into Mi50 accelerators.

Krogoth:

Yep, the Radeon VII is a much better general-compute/content-creation card than a gaming card. There’s nothing close to it at its price point; you have to spend a lot more to get its performance in either market.

It was a steal for general-compute hobbyists, like the Kepler-based Titans were back in the day.

Krogoth:

It is likely that Zak simply doesn’t have any Vega hardware on hand, and his test system is different from the ones used in previous Vega benches, making an apples-to-apples comparison difficult at best.

If you want a ballpark figure, just take the 5700 XT results and reduce them by ~2-10% to get Vega 64 stock/tuned Vega 56 results.

Colton Westrate (Editor):

^ This. Zak was working with what he had, or in some cases, what he could borrow for a couple days.

Jason DeFord:

“If you want a ballpark figure, just take the 5700 XT results and reduce them by ~2-10% to get Vega 64 stock/tuned Vega 56 results.” I think you’re over-simplifying the comparison. I still think seeing the Vega GPUs on the scatter charts would provide valuable data points. In addition, there is the price dimension that needs to be accounted for. Right now, an ASRock Phantom Gaming X Radeon RX Vega 56 can be found for US$270, while an ASRock Radeon RX 5700 goes for US$330 @ TheEgg. Saving ~20% of GPU cost for a difference of “~2-10%” in performance is worth considering.…

Krogoth:

Vega 56 is a bit of a wildcard because it is highly dependent on how well you can undervolt the unit, unlike the 5700 and 5700 XT, which operate at their performance levels without too much hassle. Vega 64 only pulls ahead if you are brave enough to overclock/undervolt it to its limits and are willing to tolerate the power consumption.

Vega units are decisively better if you care more about general-compute performance.

Oliv:

Completely agree, especially since the model used was the 4GB version. One of these things is not like the other.

juzz86:

Oh Zak, you’ve done it again, mate. I know any of our staff who were tasked with carrying the site’s major drawcard articles would give it every bit of the justice you have, as you all do with your own posts. But to see what we all crave seeing on the site hold the same format, same prose, same detail as it always has means an awful lot to a sentimental fella like me. [Site] formatting and [staff] introductory niggles around the ownership change aside, I’m heartened to see the stalwart content keep coming (Fish, Bruno, Ben, Josh, Nath)…

unknown-error:

The “all-white” background is going to take a lot more getting used to. On my desktop, sorry to say, it looks really amateurish. Since there is no drop-down menu with the relevant page titles, it would help us a lot if you put the page listing at the bottom of each page so we can skip to pages that interest us. What I do now is open the page with the page listing in one tab and open the pages I like in another tab.

The review itself is great as usual, and thanks a lot for that.

John:

On some pages you mention “Jeff’s write-up” and link to another article, but that article has a different author name, without “Jeff” anywhere. You should correct that to avoid confusing readers (yes, I know Jeff wanted his name removed from TR). Also, it is disappointing that you did not use some other DX12 games like Metro Exodus. It is a perfect game for testing performance with RTX enabled and is also useful for comparing GPUs without RTX effects. It is also disappointing that you did not select some popular MMORPGs to benchmark, something like Black Desert Online or FFXIV.…

Fonbu:

Thank you, Tech Report staff, for making this possible.
Many of us, I am sure, have waited for this review, and it was smart of The Tech Report to wait for all the GeForce Super cards and the new Radeons to arrive and showcase them against each other; that was the most productive choice.
I like how all the new driver features of the products were showcased.
Was this Zak’s first major video card review? It was well done.

Sam:

Adding a page title listing to the content of the first page would help a lot, since we lost the convenient dropdown box after the refresh. Some people just want to see benchmarks for a certain game and don’t want to click through 14 pages to find that.

chuckula:

Thanks for the review!

As for the product, well, let’s say that 7nm has allowed AMD to avoid some of the worst issues with previous cards, which were essentially overclocked out of the box. But given how much guff Nvidia has gotten for dedicating silicon to RTX, it’s also pretty telling that their 16nm parts (the 2060 Super in particular) are still competitive even when you never turn RTX on, and even when you look at power consumption.

I think this calls for some market disruption by a third party (and of course I mean S3!)

Krogoth:

No, a dark shadow from the distant past will emerge from its slumber…

Bitboys

chuckula:

Bitboys??!!

Oy!!

LoneWolf15:

I upvoted a Chuckula post. Demons must be shivering in hell as we speak.

Waco:

It makes me wonder just how much of Nvidia is currently propped up by datacenter sales; their die sizes compared to AMD’s are monstrous.

Also, Nvidia is at 12 nm, not 16.

chuckula:

Die sizes are irrelevant, since AMD is clearly paying a fortune for 7nm silicon; otherwise, they would have launched these chips at a small fraction of the price of the RTX parts to grab market share. Furthermore, TSMC’s “12nm” is the 16nm process with a couple of tweaks and a marketing name change.

As for the data center, you should have paid attention to Nvidia’s most recent earnings beat, in which data-center revenue was actually down a good bit but overall results beat the Street and, unlike AMD, Nvidia just raised its outlook for the rest of 2019.

Neutronbeam:

That is one hell of a review, Zak! Excellent work; well done!

Krogoth:

It was worth the wait.

Waco:

It’s good to see essentially price parity on the XT / 2060 Super. That’s good news for anyone in the $400-and-below market. The standard 5700 looks to stand on its own pretty handily between the 1660 Ti and the 2060 Super; it’s a slightly better value than either if you look at 99th-percentile FPS per dollar.

Frenchy2k1:

TR left out the RTX 2060, which will continue at $350 and is in direct competition with AMD’s 5700.
The other “SUPER” cards supersede the previous models (the RTX 2070 Super completely replaces the original 2070), but the original 2060 will continue.
It seems TR wasn’t able to source one and hence did not include it in the graph.
