Asus’ ROG Strix GeForce RTX 2070 graphics card reviewed

Back in October, Nvidia launched the GeForce RTX 2070 to bring the power of the Turing architecture to more gamers. The RTX 2070 was the first card to use TU106, Nvidia’s smallest Turing GPU so far. In its fully fledged form on the RTX 2070, TU106 offers 2304 shader ALUs spread across 36 Turing streaming multiprocessors, or SMs. Those SMs are further organized into three graphics processing clusters, or “GPCs” in Nvidia parlance.

A block diagram of the TU106 GPU. Source: Nvidia

Turing chips are some of the largest GPUs around, and TU106 is no different despite its place at the base of the Turing lineup. At 445 mm², this chip’s die size closes in on the Vega 10 GPU that powers the Radeon RX Vega 56 and RX Vega 64. Area aside, however, the two teams’ most modern graphics processors couldn’t be more different.

For gamers, Turing introduced Nvidia’s RT cores for acceleration of certain ray-tracing operations, along with tensor cores for neural-network processing. Even as the smallest of the Turing siblings, TU106 boasts a diverse bench of compute resources. The RTX 2070 has 36 SMs; each SM has eight tensor cores (288 total) and one RT core. Here’s how TU106 stacks up with some well-known cards from AMD and Nvidia alike:

| | Boost clock (MHz) | ROP pixels/clock | INT8/FP16 textures/clock | Shader processors | Memory path (bits) | Memory bandwidth | Memory size |
|---|---|---|---|---|---|---|---|
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2060 FE | 1680 | 48 | 120/120 | 1920 | 192 | 336 GB/s | 6 GB |
| RTX 2070 FE | 1710 | 64 | 144/144 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224 | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |
| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering INT8/FP16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (TFLOPS) |
|---|---|---|---|---|
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| GTX 1070 | 108 | 202/202 | 5.0 | 7.0 |
| RTX 2060 FE | 81 | 202/202 | 5.0 | 6.5 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| GTX 1080 | 111 | 277/277 | 6.9 | 8.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| RTX 2080 | 115 | 331/331 | 10.8 | 10.6 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 473/473 | 9.8 | 14.2 |
| Titan Xp | 152 | 380/380 | 9.5 | 12.1 |
| Titan V | 140 | 466/466 | 8.7 | 16.0 |
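The peak rates in the second table fall straight out of the per-clock resources and boost clocks in the first. Here's a minimal sketch of that arithmetic for the RTX 2070 FE, using the figures from the tables above:

```python
# Peak-rate arithmetic for the RTX 2070 FE, using the figures in the tables above.
boost_ghz = 1.710        # boost clock in GHz (billions of cycles per second)
rops = 64                # ROP pixels per clock
texels_per_clock = 144   # bilinear INT8 texels per clock
fp32_shaders = 2304      # shader ALUs, each good for two FLOPs (one FMA) per clock

pixel_fill_gpixels = rops * boost_ghz              # ~109 Gpixels/s
texel_rate_gtexels = texels_per_clock * boost_ghz  # ~246 Gtexels/s
fp32_tflops = fp32_shaders * 2 * boost_ghz / 1000  # ~7.9 TFLOPS

print(f"{pixel_fill_gpixels:.0f} Gpixels/s, "
      f"{texel_rate_gtexels:.0f} Gtexels/s, {fp32_tflops:.1f} TFLOPS")
```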

We’re looking at TU106 today as implemented on Asus’ ROG Strix RTX 2070 O8G Gaming. This is the company’s halo factory-overclocked card, sitting atop a stack comprising the slightly less hot-clocked Strix A8G, the RTX 2070 Dual, and the blower-style RTX 2070 Turbo. Behold:

In its reference form, the RTX 2070 rings in at a 1410-MHz base clock and a 1620-MHz boost speed. The “factory-overclocked” RTX 2070 Founders Edition puts another 90 MHz on top of that reference boost spec. Asus has warmed up this Strix even further in its default “gaming mode” clock profile. This card has a nameplate 1815-MHz boost clock, and Turing’s GPU Boost 4.0 dynamic voltage and frequency management will no doubt bring that figure even higher in practice. Folks who want to grab Asus’ GPU Tweak II software can enable an “OC Mode” profile that raises the boost clock another 30 MHz, for what that’s worth. Asus doesn’t overclock the card’s memory, but the RTX 2070 already has a rather impressive 448 GB/s of peak memory bandwidth to play with anyway, thanks to its 8 GB of GDDR6 memory running at 14 Gb/s per pin.

Asus’ ROG Strix cards for Turing don’t all tap the same cooler to keep their chips in check. The Strix RTX 2080 uses a brand-new cooler, fan design, and shroud, while the Strix RTX 2070 gets capped off with the same cooler and fans as those on the company’s Strix GTX 1080 Ti. That’s fine, however, since a 2.5-slot cooler on an ostensibly 175-W graphics card is still ridiculous overkill. Incidentally, you can find this same massive cooler on the company’s ROG Strix RTX 2060, too.

As on past Strix cards, the RTX 2070 offers subtle, well-integrated RGB LED lighting for those who want to bedazzle their builds. The main illuminated features of the Strix come courtesy of a pair of light pipes flanking this card’s trio of fans. These light pipes will go unseen in traditional builds with horizontally mounted graphics cards, but folks who go to the trouble of using a vertical mount will be rewarded by the Strix.

Flipping the card over reveals a backplate that will be familiar to observers of Strix cards as far back as the company’s GTX 1080. The ROG logo lights up in RGB LED glory. If you don’t like flash, a button at the rear of the card disables all of the Strix’s RGB LEDs with one touch—no software required.

Around back, the Strix switches out the RTX 2070’s standard complement of three DisplayPort outputs and one HDMI output for a pair each of DisplayPorts and HDMI outs. The card keeps the VirtualLink USB Type-C connector for as-yet-unannounced VR headsets, though.

The front edge of the Strix card boasts a pair of PWM fan connectors to link any connected spinners to changes in GPU temperature. These headers remain unique to Asus’ Strix cards, as far as I can tell, and they’re quite useful even as motherboards have begun to offer more and more temperature sources for connected fans. A system’s GPU temperature tends to remain frustratingly hidden from motherboard makers’ fan-control utilities, and Asus’ software-free fan control approach here makes for the simplest method of ensuring the graphics card gets the fresh air it needs to stay cool.

The Strix’s own fans operate using two separate profiles that you can control with a switch near the LED on/off button. The default “performance” BIOS lacks the zero-RPM-at-idle fan mode we’ve come to expect from modern high-end graphics cards, and its fan curve is more aggressive under load to maximize heat removal. The quiet BIOS, on the other hand, enables both a zero-RPM mode and a less aggressive fan curve than its performance counterpart in order to minimize noise under load. We’ll examine the effects of these firmwares in greater depth during noise and thermal testing.

Stripping the Strix back to its bones gives us a better look at its massive cooler and impressive power-delivery circuitry, as well as a sturdy frame that both mates to the backplate to prevent card sag and helps cool the eight GDDR6 packages that ring the TU106 GPU.

Asus’ cooler uses seven heat pipes running into a large fin stack. The power delivery circuitry gets coupled to this stack using a thermal pad of its own.

The GPU itself comes in contact with a polished, solid baseplate.

To power the Strix, Asus taps the common uPI uP9512 PWM controller. This chip talks to 10 TI NexFET CSD95481RWJ integrated power stages (possibly via some PWM doublers). These highly efficient power stages are rated for 60 A of maximum output each, and even eight of them dedicated to powering the GPU would rival some high-end motherboards for current-delivery potential. On the Strix, I expect these ICs will deliver smooth power while generating minimal heat.

Overall, Asus has built an undeniably high-end and well-considered take on the GeForce RTX 2070 with this ROG Strix card. For the privilege, the company asks $630 for its halo card at e-tail, making it one of the most expensive RTX 2070s around. Let’s have an in-depth look at the performance this card delivers now.

 

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t typically rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows (ETW) API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
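Our own custom-built tools do more than this, but the core of the math is no more complicated than the minimal sketch below. The file name and the MsBetweenPresents column are assumptions about the capture tool's CSV output, not a description of our exact pipeline:

```python
# Minimal sketch of the frame-time crunching: load a capture, then report the
# run's average FPS and 99th-percentile frame time. The file name and the
# MsBetweenPresents column are assumptions about the capture tool's CSV output.
import csv
import statistics

def load_frame_times(path):
    """Return per-frame times in milliseconds from an OCAT/PresentMon-style CSV."""
    with open(path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def summarize(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]  # 99th percentile
    return avg_fps, p99_ms

avg_fps, p99_ms = summarize(load_frame_times("capture.csv"))
print(f"average FPS: {avg_fps:.1f}, 99th-percentile frame time: {p99_ms:.1f} ms")
```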

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor Intel Core i9-9900K
Motherboard MSI Z370 Gaming Pro Carbon
Chipset Intel Z370
Memory size 16 GB (2x 8 GB)
Memory type G.Skill Flare X DDR4-3200
Memory timings 14-14-14-34 2T
Storage Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply Seasonic Prime Platinum 1000 W
OS Windows 10 Pro version 1809

Thanks to Intel, Corsair, G.Skill, and MSI for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, Asus, Gigabyte, and EVGA supplied the graphics cards for testing, as well.

To serve as a foil for the Strix, we’ve fired up Gigabyte’s RTX 2070 Gaming OC 8G card. Unlike the Strix, the Gaming OC 8G doesn’t break out of the two-slot mold, so it can fit into smaller systems where both space and delivered performance are paramount. Gigabyte’s 1725-MHz specified boost clock might seem significantly lower than the Strix’s 1815 MHz in both cards’ default “Gaming Mode” clock profiles, but we know from experience that Nvidia’s GPU Boost dynamic voltage and frequency scaling logic tends to push clocks higher than nameplate specs. If there’s a meaningful difference in delivered performance from these cards, our frame-time-focused benchmarking methods ought to tease it out. Further, this card sells for $530, giving us a look at what an RTX 2070 closer to its suggested price can deliver.

| Graphics card | Boost clock (specified) | Graphics driver version |
|---|---|---|
| EVGA GeForce GTX 1070 SC2 Gaming | 1784 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1070 Ti Founders Edition | 1683 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1080 Founders Edition | 1733 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1080 Ti Founders Edition | 1582 MHz | GeForce Game Ready 417.35 |
| Gigabyte GeForce RTX 2070 Gaming OC 8G | 1725 MHz | GeForce Game Ready 417.35 |
| Asus ROG Strix GeForce RTX 2070 O8G Gaming | 1815 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce RTX 2080 Founders Edition | 1800 MHz | GeForce Game Ready 417.35 |
| AMD Radeon RX Vega 56 | 1471 MHz | Radeon Software Adrenalin 2019 Edition 19.1.1 |
| AMD Radeon RX Vega 64 | 1546 MHz | Radeon Software Adrenalin 2019 Edition 19.1.1 |

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 2560×1440 and 144 Hz, unless otherwise noted.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Monster Hunter: World


Monster Hunter: World has a reputation as a hog of a game, and our results bear that out. Only the RTX 2080 can crack 60 FPS on average, and none of our cards turn in a 99th-percentile frame time that would suggest an entirely smooth ride. Even so, our Turing cards open a wide lead over their Pascal predecessors. Perhaps that’s attributable in part to the RTX 2070’s copious 448 GB/s of memory bandwidth versus the GTX 1080’s 320 GB/s peak figure. We also have to keep Turing’s claimed superiority in memory color compression in mind. Either way, the RTX 2070 starts off with a nice lead over its dollar-for-dollar match in the Pascal lineup.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. To fully appreciate this data, recall that our graphics card tests all consist of one-minute test runs and that 1000 ms equals one second.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then you’re likely to perceive a slowdown. Note that 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. Also, 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.

In less demanding or better-optimized titles, it’s useful to look at our strictest graphs: 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
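For clarity, here's a minimal sketch of how such a "time spent beyond X" figure can be computed from a run's frame times, counting only the portion of each frame that lands past a given threshold. The frame times in the example are made up purely for illustration:

```python
# Sketch of the "time spent beyond X" accounting: for each threshold, sum only
# the portion of each frame's render time that lands past that mark.
THRESHOLDS_MS = {
    "50 ms (20 FPS)":   50.0,
    "33.3 ms (30 FPS)": 33.3,
    "16.7 ms (60 FPS)": 16.7,
    "8.3 ms (120 FPS)":  8.3,
    "6.94 ms (144 Hz)": 6.94,
}

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Milliseconds accumulated past the threshold over a test run."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Illustrative frame times only; a real run holds roughly a minute of frames.
frame_times_ms = [14.2, 16.9, 35.1, 12.8, 55.0, 15.5, 18.3]

for label, limit in THRESHOLDS_MS.items():
    print(f"beyond {label}: {time_spent_beyond(frame_times_ms, limit):.1f} ms")
```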

Our time-spent-beyond analysis shows the superiority of Turing cards in this title. Where the GTX 1080 spends a whopping 16 seconds of our one-minute run on frames that take longer than 16.7 ms to render, the RTX 2070 duo cuts that time roughly in half. The RTX 2080, for its part, takes the GTX 1080 Ti’s already impressive 2.8 seconds or so beyond that threshold and spends nearly one-fifth as much time huffing and puffing.

 

Hitman 2

Hitman 2 may not have the DirectX 12 rendering path of its predecessor, but it can still put the hurt on any modern graphics card. We cranked image quality settings to the max to fully flesh out the game’s world of assassination.


Our RTX 2070 duo turns in another fine performance in Hitman 2. Both cards shadow the GTX 1080 Ti and RTX 2080, but the GTX 1080 and RX Vega 64 aren’t far behind. The 99th-percentile frame times from this group impress, too: Even the GTX 1070 comes in right at the 16.7-ms mark, and results improve from there.


Our 50-ms and 33.3-ms thresholds catch a couple of tough frames that the RTX 2070s had trouble with, but that’s hardly a disqualifying result when all of our cards spend about a tenth of a second or less working on frames that take longer than 16.7 ms to render. The magic really happens in this game at 11.1 ms, and the RTX 2070s shave nearly a second off the GTX 1080’s time spent working on tough frames. It’s a small improvement, but we’ll take it.

 

Far Cry 5


The RTX 2070s turn in about a 10% performance boost over the GTX 1080 Founders Edition, although the RX Vega 64 tails them closely. Each of our contenders delivers a smooth gaming experience to match its average frame rate, too. Only the GTX 1070’s 99th-percentile frame time misses the 16.7-ms mark, and not by much, to boot.


At 16.7 ms, all of our cards put a bit of time on the board thanks to some infrequent frame-time spikes in this title. The really interesting point, as with Hitman 2, comes at the 11.1-ms mark: once again, the RTX 2070s spend about half as much time as their similarly priced Pascal predecessor does on frames that take longer than 11.1 ms to render.

 

Forza Horizon 4

Forza Horizon 4 drops drivers into a lovingly rendered rendition of the English countryside. Consequently, it can be rather demanding on graphics hardware if you put the quality-settings pedal to the metal. That’s just what we did to see how our graphics cards perform in this title.


At launch, AMD touted Forza Horizon 4 as a major performance win for its graphics cards, but somewhere between then and now, Radeons’ performance in this title seems to have regressed to parity with the green team’s players for whatever reason. Our Turing contenders also can’t open much of a lead over Pascal cards in this title, whether we consider average FPS or the delivered smoothness.


Not even a trip into our advanced metrics can put much light between Pascal and Turing, or indeed between red and green. We’ll call this one a push and move on.

 

Assassin’s Creed Odyssey


Like Monster Hunter: World, Assassin’s Creed Odyssey has a deserved reputation as a difficult game to run well with the eye candy cranked. To approach or exceed 60 FPS with maximum settings in this game, Turing cards prove to be the best bet, but 99th-percentile frame times suggest even those cards’ performance is tempered by some stormy sailing.


A look at our time-spent-beyond-16.7-ms threshold shows only mild chop for our Turing trio. The RTX 2070s spend about seven seconds less than the GTX 1080 does on frames that take longer than 16.7 ms to render during our one-minute test run, and that’s an improvement in delivered smoothness that should prove easily felt.

 

Gears of War 4


Gears of War 4 really seems to favor Turing cards with its most crushing settings enabled. The RTX 2070s turn in a massive performance boost against the GTX 1080, and the RTX 2080 similarly speeds past the GTX 1080 Ti. We wouldn’t normally recommend cranking screen-space reflections or depth of field to their “Insane” maximums in this title, but if you’ve been waiting to try it for whatever reason, Turing cards make it practical even at 2560×1440.


Our time-spent-beyond-16.7-ms graphs suggest that one truly would need to be insane to push Gears of War 4’s settings to the max on Pascal and Vega cards, but Turing pixel-pushers just aren’t much bothered by these conditions. The RTX 2070s’ time spent beyond 16.7 ms on tough frames is nearly one-tenth that of the GTX 1080’s, while a click over to the 11.1-ms threshold shows that the RTX 2080 enjoys similar superiority over the GTX 1080 Ti. To be fair, one can enable Gears’ Ultra preset and gain a ton of performance on any of these cards with minimal visual impact, but our benchmarking is trying to push the limits.

 

Battlefield V

On top of being the marquee title for Nvidia’s RTX effects so far, Battlefield V boasts a cutting-edge DirectX 12 renderer as part of EA’s Frostbite engine.


Battlefield V was plagued by reports of stuttering in its DirectX 12 mode at launch, and it seems that with DirectX Raytracing effects off, Nvidia cards still suffer from that problem to some degree. While the RTX 2070 duo turns in a high average frame rate, their delivered smoothness can’t match AMD’s Vega duo. The RX Vega 56 puts up a 99th-percentile frame time that trails only the GTX 1080 Ti, while the RX Vega 64’s delivered smoothness rivals the RTX 2080.


Our frame-time graphs and 99th-percentile frame-time measurements clue us in to potential unsmoothness, and our time-spent-beyond-X graphs let us see just how bad it is in practice. At the 16.7-ms mark, however, there really isn’t much to worry about from either RTX 2070. A look at the 11.1-ms threshold actually shows that the RTX 2070s spend less time than the Vega duo does on tough frames that take longer than that to render. Despite their higher 99th-percentile frame times overall, then, the RTX 2070s should actually deliver performance similar to, if not smoother than, the RX Vega 64.

 

DirectX Raytracing performance with Battlefield V


Nvidia and EA weren’t kidding about the performance improvements promised in Battlefield V’s Tides of War Chapter 1: Overture update. The RTX 2080 Ti used to be the only card that could hit 60 FPS on average at 2560×1440 with Battlefield V’s initial DirectX Raytracing deployment, but the RTX 2070 is now plenty capable of hitting 60 FPS on average at low or medium DXR presets. We didn’t see a huge difference in visual quality with DXR set to high or ultra, but the game certainly doesn’t become unplayable at those settings on an RTX 2070. For its part, the RTX 2080 can handle all DXR settings with fine average frame rates nowadays.

As 99th-percentile frame times go, the RTX 2070 still doesn’t quite manage the golden 16.7-ms mark that we’d hope for at its optimal low and medium DXR presets, but delivered smoothness is far, far better than it was back in November. Enabling Battlefield V’s low DXR preset on the RTX 2070 at that time resulted in a 27.7-ms 99th-percentile frame time, and kicking things up to medium made the game unplayable. Now, you just have to decide whether you want 60-FPS fluidity or not when dialing in DXR on this card. Either way, the tradeoffs between low, medium, high, and ultra are no longer catastrophic for performance.


Despite putting up 99th-percentile frame times higher than we’d like to see for a perfect 60-FPS-or-better gameplay experience, the RTX 2070 duo comes close to delivering when we look at the time they spend past 16.7 ms. At the optimal low or medium DXR presets, both cards spend under two seconds of our one-minute test run rendering frames that take longer than 16.7 ms to finish. Cranking DXR to high or ultra will result in a less than ideal experience, but it won’t cause any time to show up on the board at the least-desirable 50-ms or 33.3-ms thresholds. That’s significantly better news for Nvidia’s nascent ray-tracing architecture than our initial round of tests. We hope future games with DirectX Raytracing support perform as well out of the gate.

 

Software and overclocking

Most every major graphics card manufacturer has their own spin on an overclocking utility these days, and Asus’ is called GPU Tweak II. Like the popular MSI Afterburner, GPU Tweak II offers the usual array of monitoring and overclocking tools we’ve come to expect for tweaking any graphics card, including sliders for power and temperature limits, core and memory clocks, and fan speeds. For Asus’ cards, GPU Tweak II also offers access to any prebaked clock profiles on a given card.

I started my TU106 overclocking efforts with these profiles. Asus’ prebaked “OC Mode” profile bumps delivered clocks to 1935 MHz from the 1905 MHz that GPU Boost 4.0 delivers in the card’s default “Gaming Mode.” That’s about a 1.5% increase. Pardon me if this result doesn’t exactly blow my hair back. For comparison, the Gigabyte Gaming OC 8G card turned in 1890-MHz boost clocks in its default “Gaming Mode” profile, while its “OC Mode” caused clocks to bounce between 1905 MHz and 1920 MHz. Asus has a slight edge for delivered clocks on paper, but the preceding pages suggest it doesn’t translate into any real performance advantage.

GPU Tweak II also includes an implementation of Nvidia’s new-with-Turing Scanner SDK. Scanner automatically performs the arduous process of adjusting every point on the voltage-and-frequency curve that Nvidia makes available for adjustment. For all those smarts, however, Scanner turned in only a 1965-MHz observed boost clock, or about a 3.1% increase over out-of-the-box speeds. That’s better than Asus’ own overclocking profiles, to be sure, but still nothing to get excited about.

To extract something resembling an actual overclock from the Strix card, I had to turn to the usual process of maxing out power and temperature targets before manually increasing the clock speed slider in GPU Tweak II. To test overclocked stability, I turned to the 2016 reboot of Doom, whose lightning-fast code tends to quickly reveal any instability from an overclock. I pushed up clocks until the game began to freeze or crash, ultimately arriving at a software setting of 1950 MHz. That yielded a boost clock of 2010-2040 MHz in games, depending on the workload—a 6.3-7% increase over stock. Next, I turned to memory overclocking. Again, I simply pushed the memory slider in GPU Tweak II up until I observed artifacts or instability in Doom. Once all was said and done, our card’s memory was running at 15.75 GT/s per pin, a 12.5% boost from stock. Simple.

To get some perspective on just how good the Strix card’s overclocking chops were, I also ran through the same manual overclocking regime on the Gigabyte RTX 2070 Gaming OC 8G. After my tweaking was done, the Gigabyte card yielded a boost clock in the range of 1980-1995 MHz and a memory overclock of 15.71 Gb/s per pin. There was just one wrinkle, though: I had to crank the Gigabyte card’s fans to 80% of their maximum duty cycle to keep temperatures in the same 70° C range that seemed to be optimal for stability. Given that the Gigabyte card is a strictly two-slot affair and seems designed more for silence and space-saving than for peak overclocking performance, I’m not surprised that I had to push up the card’s fan speed to help it keep its cool.

For all that, the Gigabyte RTX 2070 Gaming OC 8G and the Strix stayed neck and neck after my manual overclocking efforts. Both cards turned in 6% higher frame rates post-overclocking, on average, and both shaved a millisecond off their 99th-percentile frame times, which in both cases is a roughly 5% reduction. That isn’t nothing, but to me it hardly seems worth the higher power draw and potential instability of an overclocked graphics card. Let’s set aside performance differences for a moment and see whether noise levels and power consumption help distinguish these two cards a bit.

Noise levels

To measure noise levels, we used the Faber Acoustical SoundMeter application running on an iPhone 6S. We measured 18″ from each card’s fans on an open test bench.

At stock speeds, the Asus card’s quiet BIOS really does work. By a hair, its massive cooler and low fan speeds make it the quietest card on the bench. The Gigabyte RTX 2070 isn’t far behind, though, and we doubt we could hear its 1.3 dBA of extra noise. Even better, the quiet BIOS doesn’t result in a large performance tradeoff. With the quiet BIOS enabled, the boost clocks we observed from the card fell to 1875 MHz from 1905 MHz. That’s good, because the card’s out-of-the-box performance BIOS makes it much louder than its Gigabyte competitor for little performance payoff. We ran all of our gaming tests with the performance BIOS enabled, and the extra fan noise didn’t translate into noticeably higher frame rates or lower 99th-percentile frame times. Unless you want to overclock or live in an environment with high ambient temperatures, clicking over to the Strix’s quiet BIOS seems like a no-brainer to us, since it enables the fan-stop-at-idle mode we’ve come to know and love on most modern graphics cards. That feature is lacking in the performance BIOS.

For overclocking, however, the performance BIOS adds only 2.5 dBA to the Strix’s baseline load noise levels, whereas the Gigabyte card trails only the hair-dryer cooler of the RX Vega 64. Presuming you want to extract the last few percent of performance potential from your graphics card, then, the Strix will let you do it quietly.

Power consumption

For an idea of how efficient each of these graphics cards is at pushing pixels, we turned to the 2016 release of Doom and monitored our test system’s power draw at the wall with our trusty Watts Up power meter.

We don’t have a reference RTX 2070 to compare our custom contenders against, but blow-for-blow, the Asus card lets our test system pull slightly less power at the wall than its competitor. For all intents and purposes, though, the Asus and Gigabyte cards perform about identically at stock speeds when it comes to performance per watt. The Asus card does extend a slight lead over the Gigabyte when we overclock both cards to their limits, though.

 

Summing up the Asus ROG Strix RTX 2070

Asus has created an exceedingly fine example of the GeForce RTX 2070 with its top-end ROG Strix RTX 2070 O8G. There’s just one problem: All of its fins and finery don’t let the Strix deliver meaningfully different performance at stock clocks or when overclocked versus Gigabyte’s much smaller and much cheaper RTX 2070 Gaming OC 8G.

If we set the performance horse race aside and focus on noise, vibration, and harshness, the Strix stands out in other ways. The second most massive version of Asus’ ROG Strix cooler is more than up to the job of cooling the ostensibly 175-W RTX 2070. As one would expect from its formidable fin stack, the Strix cooler runs politely for the performance it delivers out of the box. Asus kept coil whine to a pleasant minimum, too.

For folks who demand the absolute minimum of fan noise from their systems, Asus also includes a switchable quiet BIOS on the Strix RTX 2070 that enables a zero-RPM-at-idle mode and a quieter-under-load fan curve. In fact, the Asus’ quiet BIOS turned it into the quietest card on our bench by a hair. I saw only the slightest loss of clock speed from this quiet BIOS with the card in stock trim, too.

Those who want to void warranties will find that both Asus’ prebaked OC modes and its implementation of Nvidia’s Scanner tech don’t add more than a couple percentage points’ worth of clock speed to the Strix’s stock capabilities. I had to manually tune the card to get more than a paltry clock boost, and even then, my 2010-MHz final result was just a few percent higher than this card’s stock speeds. Memory overclocking proved more fruitful, as I took the Strix’s GDDR6 from 14 GT/s per pin to 15.75 GT/s per pin.

All told, that overclocking work nabbed about 6% higher performance from the card—nothing to sniff at, but a sign one isn’t missing much by enjoying out-of-the-box speeds, either. Your overclocking mileage will vary, of course, but I’d say a given Strix card is more than ready to squeeze out whatever latent performance potential remains in its GPU and memory chips. Should you push your Strix to the limit, its massive cooler will remain quiet under load—something we can’t say for the Gigabyte card we tested.

The ROG Strix O8G Gaming we tested normally runs $630 at e-tail, making it among the most expensive RTX 2070s on the market. Recall that the suggested price for reference-class RTX 2070s is supposed to be $500, and indeed, there are many RTX 2070s selling for that price at retail right now. Among them, the Gigabyte RTX 2070 Gaming OC 8G we pitted the Strix against is available for $530.

If it can’t win on performance, does the Strix make its $130 premium worth it in other ways?  Asus puts a pair of GPU temperature-controlled fan headers on this card that are unique in the industry, to my knowledge, and the hardware switches for its RGB LEDs and dual BIOSes are also nice extras to have for those allergic to software. On top of its fine stock performance and the easy overclocking allowed by its cooler, the Strix both looks and feels like a top-of-the-line product. Implementation details matter as custom graphics cards get harder and harder to distinguish by performance, and it’s clear to me that Asus sweated every inch when it put together the Strix RTX 2070.

I honestly can’t find a single fault with this Strix card beyond its sky-high price tag, and that’s a rare accolade from a perfectionist like yours truly. At the same time, I think all but the most noise-abhorring or overclocking-inclined builders will find greater satisfaction in putting anywhere from $100 to $130 over the RTX 2070’s suggested price into other parts of their systems. It’s worth noting that recent sales have cut as much as $100 off its typical price, and the closer the Strix gets to Nvidia’s suggested price, the more highly I would recommend it. At the Strix’s full asking price, though, buyers will need to weigh whether having one of the quietest, most feature-packed RTX 2070s around is worth Asus’ massive premium.

 

Conclusions

Let’s sum up the RTX 2070’s position in today’s graphics-card market more generally. We’ve created plots of price versus performance by taking the geometric mean of each card’s average FPS and 99th-percentile frame times in all of the games we tested and plotting that value against each card’s suggested price (if stock is no longer available) or against its retail price on Newegg (if stock is available). To make our higher-is-better approach work, we’ve converted the geometric mean of 99th-percentile frame times into frames per second.
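A minimal sketch of that aggregation follows, assuming per-game average-FPS and 99th-percentile frame-time figures are already in hand. The numbers below are placeholders for illustration, not our measured data:

```python
# Sketch of the price-versus-performance aggregation described above.
# The per-game numbers here are placeholders, not measured results.
from math import prod

def geomean(values):
    return prod(values) ** (1 / len(values))

# Hypothetical per-game results for one card across the test suite.
avg_fps = [58.0, 95.0, 88.0, 102.0, 71.0, 84.0, 97.0]
p99_frame_times_ms = [22.1, 13.5, 14.0, 12.2, 18.9, 14.8, 12.9]

card_price = 630  # retail price in dollars

perf_avg = geomean(avg_fps)
# Convert the 99th-percentile geomean into FPS terms so higher is better.
perf_99th = 1000 / geomean(p99_frame_times_ms)

print(f"avg-FPS geomean: {perf_avg:.1f}")
print(f"99th-percentile FPS geomean: {perf_99th:.1f}")
print(f"avg FPS per dollar: {perf_avg / card_price:.3f}")
```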


Contrary to popular opinion, the RTX 2070 isn’t just a GeForce GTX 1080 with ray-tracing acceleration tacked on when it comes to performance. In our final reckoning, the RTX 2070 delivers a nice boost in both 99th-percentile frame times and average FPS compared to its Pascal predecessor, dollar for dollar, and its ray-tracing and tensor-processing hardware open up new rendering capabilities that the GTX 1080 simply doesn’t have. Sorry, but we’re not humoring the idea that this is a GTX 1070 successor on the basis of its name alone. Performance per dollar is what matters most when considering a graphics card, not performance per model number. At a $499.99 suggested cost, the RTX 2070 is priced like a GTX 1080 successor, so that’s how we’re analyzing it.

Although we haven’t gotten our hands on the TU106-powered RTX 2060 to see how it compares to the RTX 2070, we don’t think the lowest-priced Turing card so far obviates its fully fledged sibling. On top of its naturally higher performance, the RTX 2070 may have greater longevity than the RTX 2060, thanks to its 8 GB of GDDR6 RAM. Texture sizes aren’t getting any smaller, and other sites have already uncovered suggestions that the 2060’s complement of GDDR6 can stand in the way of achieving the best performance in Battlefield V with RTX effects enabled. If you play at 2560×1440 or above and can’t stand anything but the highest texture quality in the games of today and tomorrow, the step up to the RTX 2070 might make sense. It also helps that nothing else competes in the $500-or-so price bracket right now. High-end Pascal stock is drying up across the board, and AMD is deeply discounting the RX Vega 64 to hold the line against the apparently superior RTX 2060 in the $350-400 range. If you want to spend $500-ish on a graphics card at the moment, the RTX 2070 is your oyster.

The RTX 2070 isn’t just a winner by default, though. Some might scoff at the fact that this card delivers only 12% higher frame rates on average in our tests compared to the GTX 1080, but that doesn’t fully account for the gaming experience on these cards, as any long-time reader of The Tech Report will know. Indeed, aggregating our time-spent-beyond-16.7-ms data shows that the RTX 2070 provides an unparalleled experience at its price point for gaming smoothness at 2560×1440, and that’s something that comparing geomeans of average FPS or even 99th-percentile frame times can’t fully capture. On top of that smoothness, our re-examination of RTX performance in Battlefield V suggests that even with their performance hit versus purely rasterized rendering, RTX effects are no longer an obstacle to a pleasant gaming experience. No matter how you slice it, the RTX 2070 is a capable and enjoyable graphics card for its price.

We still think Nvidia’s Turing name games are a silly way to make it seem as though these cards are delivering greater generation-on-generation performance improvements than they are, but if we ignore this card’s name and focus on its performance per dollar, it’s easy to see that the RTX 2070 still gives you more for your money than the GTX 1080 did at its final suggested price, be it in fidelity, fluidity, or smoothness. Unless AMD detunes its upcoming Vega 20 GPU to compete with the RTX 2070, Nvidia seems poised to enjoy unchallenged domination of this price point for the foreseeable future. If you have $500 or so to spend on a graphics card right now, the RTX 2070 is the one to get.

Comments closed
    • MOSFET
    • 9 months ago

    Jeff “the Perfectionist” Kampman, the Summary and Conclusions pages of this review are joys to read. I believe these pages are the best examples one could find of tech journalism at its finest. You may have aged a few years since we met you, but you haven’t lost a keystroke. We will miss you, if we are not to read your writing again.

    • DPete27
    • 9 months ago

    Excellent write-up Jeff. Top notch.

    • sweatshopking
    • 9 months ago

    HOW ABOUT ITUNES SPEEDS?

      • K-L-Waster
      • 9 months ago

      With this card iTunes can do the Kessel run in 12 parsecs.

        • MOSFET
        • 9 months ago

        [url=https://www.youtube.com/watch?v=g-Q8aZoWqF0&t=0s&list=WL&index=56]Just rewatched this last night.[/url] 🙂 ~3:20 Carl Sagan talking to Johnny Carson about Star Wars and the parsec comes up. "It's like saying that from here to San Diego is 30 miles per hour; it just doesn't mean anything!"

    • albundy
    • 9 months ago

    nice to see my 1070 is still on review benchmarks, but no way am i paying 3 times more for a 2070.

    • chµck
    • 9 months ago

    the performance/watt of the 2080 is just insane

      • Chrispy_
      • 9 months ago

      It genuinely is. People are justifiably hating on Nvidia’s pricing of the 2080 Ti/2080/2070, but the power consumption is actually only high if you use raytracing.

      As a traditional raster-based card, a large chunk of die area is going unused and as such we get the benefits of a tweaked architecture from Pascal to Turing, as well as the efficiency benefits of moving Turing to 12nm from Pascal’s 16nm.

      I replaced a heavily undervolted Vega56 that seemed to draw about 150W more from the wall than the same machine using the IGP instead. With the 2060 I’m drawing just 115W more from the wall, and I’ve already overclocked it. Let’s just say that I was expecting it to be comparable to the Vega56 because of the power-hungry Tensor cores and RTX logic, but since almost nothing supports those features yet, I’m just pleasantly surprised with the efficiency of Turing. RTX effects in games aren’t just going to hurt my framerates, they’re going to drive up heat and noise levels of the hardware too, and as you can probably tell already, those things matter to me more than some eye-candy in a small handful of AAA titles.

        • Jeff Kampman
        • 9 months ago

        Turning on RTX in the games where it is available does nothing to the overall power consumption of Turing cards; it just redistributes some of the board’s power budget to those resources. That’s one of the reasons ultimate performance goes down.

          • Chrispy_
          • 9 months ago

          Ew, that’s even worse – so you’re saying that power limits are one of the reasons that raytracing performance drops so hard?

          I’m just surprised that the 2060 uses so little power in my machine – I was using a 970 for a week as a placeholder and the RTX 2060 uses 35W less from the wall despite both cards having around a 160W TDP.

            • K-L-Waster
            • 9 months ago

            Shouldn’t be that surprising. Pascal was more efficient than Maxwell, and Turing is more efficient again than Pascal. Two generations of improvement.

            • Chrispy_
            • 9 months ago

            Totally different classes of product though. The 970 in question was “big Maxwell” GM204, the equivalent of which was the 1080 (GP104) and the 2080 (TU104).

            The 2060 uses the smaller TU106, which was the 1060 (GP106) and 960 (GM206) in prior generations.

            Even ignoring generation increases, the thing that actually surprised me is that Nvidia have historically been more accurate (than AMD) with their TDP ratings, and yet here I was using two cards with identical TDP ratings yet 35W difference at the wall.

            • K-L-Waster
            • 9 months ago

            Memory and fan efficiency differences?

            Or maybe you’re not maxing out the 2060?

            • Chrispy_
            • 9 months ago

            I thought TDP was the total board power, including fan headers and memory. I could be wrong though.

            The 2060 was maxed out though, I was using Unigine Superposition, vsync/gsync off and GPU-Z to check voltages/speeds/utilisation.

            I suspect silicon lottery still applies too – perhaps I have an awful 970 and an epic 2060 too 😉

            • K-L-Waster
            • 9 months ago

            I assumed TDP was chip power (similar to how it is measured for CPUs) — could be wrong though.

            I would expect the total board power usage would vary based on the specifics of board design: for example, a 3 fan board would presumably need more power to the fans than a 2 or 1 fan board. I haven’t noticed them having different TDP ratings though. (Although admittedly I haven’t been looking for them either, so may have missed them).

            • Chrispy_
            • 9 months ago

            I looked it up; TDP is board power, since it’s a thermal specification rating, and everything on the board needs cooling. It isn’t clear if the fans are included in that limit, but a single 4000RPM 100mm fan has a peak power draw of ~3Watts, and is more likely to be significantly less than 1W at 2000RPM (it’s not a linear relationship).

            • MOSFET
            • 9 months ago

            I was going to write that as a 960 and 1060 owner, but not yet a 2060….

            Better idea (I think, and please allow some leeway)

            Intel to Nvidia equivalencies:

            i3 – 960, 1060, 2060
            i5 – 970, 1070, 2070
            i7 – 980, 1080, 2080
            i9 – 980Ti, 1080Ti, 2080Ti

            i3’s never hit their ~54W TDPs in my experience. Gets the job done just with fewer resources onboard, err enabled.

            • derFunkenstein
            • 9 months ago

            I would hope that it’s a little more nuanced than that and that it balances the load as well as it possibly can. And in fact it could still very well be that the ray-tracing hardware is running all-out and the reduction in work for the traditional pipeline to do makes up for the power draw.

      • gerryg
      • 9 months ago

      …as is the price/performance. Ouch! That’s a lot to pay for that perf/watt…

        • derFunkenstein
        • 9 months ago

        name something faster for less money.

    • XTF
    • 9 months ago

    Why can’t you have both performance and low noise on idle?

      • Redocbew
      • 9 months ago

      Physics can be so annoying sometimes.

      The odd part is that using the “performance” BIOS on this card doesn’t seem to do much except make the thing a lot louder.

        • XTF
        • 9 months ago

        What’s annoying about physics?
        And why does this need a physical switch, can’t these choices be made in software?

    • chuckula
    • 9 months ago

    Hey Jeff,
    You noted the lack of DX12 in Hitman 2 even though its predecessor had DX12.

    Given that both AMD (async) and Nvidia (ray tracing) have a vested interest in pushing DX12, why do you think its adoption has been less than stellar with even a major franchise like Hitman dropping DX12 even after using it in an earlier title?

      • RAGEPRO
      • 9 months ago

        [i](auxy posted this from my machine, which is why it's on my account. This is her post, I don't necessarily agree with her.)[/i] It's because DX12 is inexorably tied to modern apps. ('ω') CAN you make a DX12 game that isn't UWP? Yes, demonstrably, of course, you can. Microsoft doesn't really support it, though. It's "political," in the sense of inter-company politics. Microsoft wants all software to move to UWP. Making a DX12 app that isn't UWP is a pain in the ass over and beyond DX12 simply being a pain in the ass. This is an opinion but any graphics developer who has actually tried to do it will assuredly admit that it is at the very least a lot more complicated than making a DX11 Win32 app, which is downright simple in comparison. Nevermind that DX12 is just harder to implement anyway. And has a smaller install base. And is a lot less consistent across GPU vendors. And doesn't really present large benefits over DX11 for games already developed for current console hardware. Most developers exploring new graphics APIs are looking at Vulkan now and for good reason. It's cross-platform -- even if you're still only talking about PCs -- and it simply works better on both AMD and Nvidia. DX12 is a mess.

        • auxy
        • 9 months ago

        Stop stealing my posts, nerd! ( ;∀;)

        (Sorry for posting on his account, hehe.)

          • chuckula
          • 9 months ago

          Thanks to both(?) of you!!

            • RAGEPRO
            • 9 months ago

            Just her, I had nothing to do with it. I was making a deposit in the porcelain bank.

            • Redocbew
            • 9 months ago

            Either auxy types fast, or that was a big deposit.

            • RAGEPRO
            • 9 months ago

            Both of those things are true.

            (Also she was still working on it after I came back from the bathroom. I’m working on a review myself.)

            • Platedslicer
            • 9 months ago

            T.M.I.

            • RAGEPRO
            • 9 months ago

            [url=https://i.imgur.com/63Hy4D5.png][A link to my response, since we can't embed images in comments][/url]

            • shaq_mobile
            • 9 months ago

            Pictures or it didn’t happen

            • NTMBK
            • 9 months ago

            You need more fibre

        • sweatshopking
        • 9 months ago

        CITATION NEEDED.

          • chuckula
          • 9 months ago

          Citation: iTunes.

          QED

        • jihadjoe
        • 9 months ago

        I think the “hard to implement” part plays a big part in the slow adoption of all these close-to-the-metal APIs like Mantle, DX12, and Vulkan. Game developers are already working hard just to get their stuff out the door, and now they have to fine-tune code to optimize for each GPU architecture? Just do DX11 and leave that optimization shit to AMD and Nvidia.

        • XTF
        • 9 months ago

        Got a reference for the link between DX12 and UWP?

        DX12 *is* tied to Windows 10. Windows 7 still has significant market share AFAIK, so devs have to support DX11. Supporting DX12 in *addition* apparently isn’t worth it.

          • sweatshopking
          • 9 months ago

          no, there is no reference because it’s 100% speculation without evidence.

      • Klimax
      • 9 months ago

      Simple. Too much work for no benefit. You have to optimize for every single GPU chip variation and in quite a few cases go beyond that. (Just basic resource allocation like VRAM width versus ROPs versus shader organization; for bonus points anomalous VRAM partitioning and variations in disabled portions of shader blocks) And as soon as new variants or a new line comes up, new work.

      Tying DXR version of raytracing to DX 12 is stupid IMO and will only hurt adoption of it. And as soon as game won’t get that update, at bare minimum midrange and lower suffers. High-end usually has so much resources that problem gets brute-forced.

      What works on consoles is idiotic idea on PCs.

    • Chrispy_
    • 9 months ago

    When you add the $130 “Asus Tax” to the fact that the 2060 makes the 2070 kinda pointless, you do have to feel kind of sorry for this card.

    The 2060FE is a TU106-200A (binned) chip, and you can overclock it with Nvidia’s OC scanner software to something that is faster than the 2070FE whilst still drawing less power.

    I guess raytracing performance is where the 2070 has a 15% advantage, but neither it nor the 2060 have enough raytracing horsepower to make the feature worth enabling anyway – and the pathetic number of game(s) that even support it 5 months after launch is embarrassing.

    At $280 more than a 2060FE, this card is a desperate joke. For just another $40-50 there are RTX 2080 cards ($699 + MIR), and those are in [i]another performance class altogether[/i]. As scathing as my opinion piece on this card may be so far, I [b]haven't even mentioned[/b] that [url=https://www.techpowerup.com/252196/asus-gpu-tweak-ii-smears-ads-over-your-games]Asus' tuning software injects ad banners over your games[/url].

    [i]Edit - Apparently Asus have claimed since yesterday that their advert is only a placeholder. Yeah, sure... It's still unsolicited, and there's still an advert [b]by default[/b]. Draw your own conclusions![/i]

      • chuckula
      • 9 months ago

      If you had bothered to read the article you linked that’s not an ad but a placeholder for an OSD that lets players put up personalized logos. It’s not an ad, just a default Asus logo that can be changed.

      Now I don’t think that’s a useful feature personally, but given the number of gamers who stream their gameplay that type of logo in an OSD might be useful for their purposes.

        • Chrispy_
        • 9 months ago

        Yeah, I read that article yesterday before the update, and I still think that’s damage control on a big mistake they made.

        Asus could have used ANY profile-based placeholder but chose to put an advert there instead? I’m not buying it.

      • PixelArmy
      • 9 months ago

      RTX 2060FE MSRP = $350
      Cheapest RTX 2060 (Newegg) = $350
      ASUS O8G RTX 2070 MSRP (quoted on page 1) = $630
      Current ASUS O8G RTX 2070 (Newegg, linked on page 1, minus $100 as alluded to on page 12) = $530
      Price on scatter plot (conclusion) = $600
      Price according to you, “$330 more than 2060FE” = $350 + $330 = [b]$680[/b]

      Math is hard. Or you're fudging numbers intentionally.

      "neither it nor the 2060 have enough raytracing horsepower to make the feature worth enabling anyway"

      If you had bothered to read this article, you'd know it states otherwise.

      Edit: Had "2060" where it should have said "2070"

        • chuckula
        • 9 months ago

        He’s gotten all kinds of upvotes by carpet bombing any and every Nvidia related article with attacks. My favorite was the multi-paragraph screed demonizing ray tracing in an article about DLSS that isn’t really related to ray tracing at all.

        I get that there are valid criticisms of Nvidia, but the pre-packaged attack screeds are wearing thin. Something tells me that tomorrow we’re going to find out that AMD isn’t going to be able to take advantage of Nvidia’s failing with Turing, but I don’t expect to see the level of vitriol pushed on AMD. I mean, just imagine if it was AMD that had pushed ray tracing before big bad Ngreedia? We’d have literally the same people who to this day in 2019 will troll post about how great Bulldozer was and how AMD was “innovating” running around telling us how wonderful it is that AMD invented ray tracing single-handedly.

          • Chrispy_
          • 9 months ago

          Yes that’s correct. I hate Nvidia so much I preordered an RTX 2060FE and now have no AMD desktop GPUs at home at all, and it’s why I’m recommending people on the forums here to buy an RTX 2060 because it’s the best performance/$ card in its own category and the categories both above and below it. Clearly I’m directly attacking Nvidia when I criticise Asus.

          /facepalm.

            I’m hating on the $630 MSRP of this Asus when it’s only ~9% faster than cards that have a [b]MSRP[/b] of $349 before any kind of street-price/MIR/coupon-discount shenanigans. The fact that it is occasionally discounted is irrelevant - because it's still $530 at best, and there are $500 2070 cards out there [i]before[/i] discount. Give it a month or two and you'll likely see similar discounts on 2060 products as the GTX 1660 family comes in. If you're happy to pay the Asus tax for a product that is no superior to similar offerings from Gigabyte/EVGA etc then be my guest, but don't pretend it's good value; it's simple brand-snobbery and blind loyalty.

            • tay
            • 9 months ago

            More fans = faster. What kind of idi0t doesn’t know that?

            • Chrispy_
            • 9 months ago

            Is a third fan worth more/equal/less than an extra RGBLED vendor logo, and how much faster does the card perform if the RGBLED reacts to sound/screen events?

            • K-L-Waster
            • 9 months ago

            Depends: is the fan itself also RGBLED?

            • Chrispy_
            • 9 months ago

            Yes, made from tempered glass and then blocked by another sheet of tempered glass, of course.

            • chuckula
            • 9 months ago

            I really don’t give a hoot whether you bought an Nvidia card or not. My point about you dumping on everything in Turing stands… Especially when you got caught recycling the same posts in completely technically incorrect manners and then explaining it away as “well it doesn’t matter if this story is about DLSS and I just copy-n-pasted 4 paragraphs dumping on totally unrelated ray tracing BECAUSE I HATE RAY TRACING SO IT’S FINE!”

            • Chrispy_
            • 9 months ago

            Wow man, take your pills or something.

            I’m not sure how I’m dumping on Turing by saying that Turing 2070 is awful because Turing 2060 is great. That’s literally the worst way to argue against Turing!

            In case your vision is impaired by the froth coming from your mouth, my post is actually about Asus’s price hikes and ridiculous MSRPs compared to other vendors, specifically in this case the FE. But don’t let me stop you on your crusade against my reasonable dislike of insane overpricing and features that are so far no more than marketing stunts.

            Perhaps RTX features will prove themselves to be worthwhile during this product cycle, but in their current form most reviewers on the web aren’t that hot on raytracing, and most of the promised RTX titles have been delayed or scaled back. I don’t have a crystal ball and I’m not known for my optimism, so I’m just saying how it is right now, 5 months after launch, waiting for that “must have” proof of value for RTX.

      • gerryg
      • 9 months ago

      I think the Asus box art looks cool. (gets out wallet…)

    • NTMBK
    • 9 months ago

    HI JEFF

      • Chrispy_
      • 9 months ago

      I’m also sorry to see you didn’t get the Intel CEO job but I am selfishly glad that you now have time to write articles for TR again.

    • chuckula
    • 9 months ago

    Look guize! TR is reviewing a graphics card with three fans and is even publishing the review before the Radeon VII embargo lifts!!

    #punked

    Thanks for the review Jeff. Good to see you are still around.
