Back in October, Nvidia launched the GeForce RTX 2070 to bring the power of the Turing architecture to more gamers. The RTX 2070 was the first card to use TU106, Nvidia’s smallest Turing GPU so far. In its fully fledged form on the RTX 2070, TU106 offers 2304 shader ALUs spread across 36 Turing streaming multiprocessors, or SMs. Those SMs are further organized into three graphics processing clusters, or “GPCs” in Nvidia parlance.
A block diagram of the TU106 GPU. Source: Nvidia
Turing chips are some of the largest GPUs around, and TU106 is no different despite its place at the base of the Turing lineup. At 445 mm², this chip’s die size closes in on the Vega 10 GPU that powers the Radeon RX Vega 56 and RX Vega 64. Area aside, however, the two teams’ most modern graphics processors couldn’t be more different.
For gamers, Turing introduced Nvidia’s RT cores for acceleration of certain ray-tracing operations, along with tensor cores for neural-network processing. Even as the smallest of the Turing siblings, TU106 boasts a diverse bench of compute resources. The RTX 2070 has 36 SMs; each SM has eight tensor cores (288 total) and one RT core. Here’s how TU106 stacks up with some well-known cards from AMD and Nvidia alike:
| Graphics card | Boost clock (MHz) | ROPs/clock | Texels filtered/clock (int8/fp16) | Shader ALUs | Memory path (bits) | Memory bandwidth | Memory size |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RX Vega 56 | 1471 | 64 | 224/112 | 3584 | 2048 | 410 GB/s | 8 GB |
| GTX 1070 | 1683 | 64 | 120/120 | 1920 | 256 | 259 GB/s | 8 GB |
| RTX 2060 FE | 1680 | 48 | 120/120 | 1920 | 192 | 336 GB/s | 6 GB |
| RTX 2070 FE | 1710 | 64 | 144/144 | 2304 | 256 | 448 GB/s | 8 GB |
| GTX 1080 | 1733 | 64 | 160/160 | 2560 | 256 | 320 GB/s | 8 GB |
| RX Vega 64 | 1546 | 64 | 256/128 | 4096 | 2048 | 484 GB/s | 8 GB |
| RTX 2080 FE | 1800 | 64 | 184/184 | 2944 | 256 | 448 GB/s | 8 GB |
| GTX 1080 Ti | 1582 | 88 | 224/224 | 3584 | 352 | 484 GB/s | 11 GB |
| RTX 2080 Ti FE | 1635 | 88 | 272/272 | 4352 | 352 | 616 GB/s | 11 GB |
| Titan Xp | 1582 | 96 | 240/240 | 3840 | 384 | 547 GB/s | 12 GB |
| Titan V | 1455 | 96 | 320/320 | 5120 | 3072 | 653 GB/s | 12 GB |
| Graphics card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering (int8/fp16, Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak FP32 shader arithmetic (TFLOPS) |
| --- | --- | --- | --- | --- |
| RX Vega 56 | 94 | 330/165 | 5.9 | 10.5 |
| RTX 2060 FE | 81 | 202/202 | 5.0 | 6.5 |
| RTX 2070 FE | 109 | 246/246 | 5.1 | 7.9 |
| RX Vega 64 | 99 | 396/198 | 6.2 | 12.7 |
| GTX 1080 Ti | 139 | 354/354 | 9.5 | 11.3 |
| RTX 2080 Ti | 144 | 473/473 | 9.8 | 14.2 |
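The peak-rate figures in the second table follow directly from the unit counts and boost clocks in the first. Here's a minimal sketch of that arithmetic using the RX Vega 56's numbers (the rasterization rate additionally depends on triangles-per-clock, which isn't listed in the table):

```python
# Peak theoretical rates derive from unit counts and boost clock.
# Figures below are the RX Vega 56's, from the tables above.
boost_clock_ghz = 1.471
rops = 64                # pixels per clock
texture_units = 224      # int8 texels filtered per clock
shader_alus = 3584

pixel_fill = rops * boost_clock_ghz                 # Gpixels/s
texel_rate = texture_units * boost_clock_ghz        # int8 Gtexels/s
tflops = 2 * shader_alus * boost_clock_ghz / 1000   # 2 FLOPs/ALU/clock (FMA)

print(f"{pixel_fill:.0f} Gpixels/s, {texel_rate:.0f} Gtexels/s, {tflops:.1f} TFLOPS")
```

Those results line up with the Vega 56's row in the second table.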
We’re looking at TU106 today as implemented on Asus’ ROG Strix RTX 2070 O8G Gaming. This is the company’s halo factory-overclocked card, sitting atop a stack comprising the slightly less hot-clocked Strix A8G, the RTX 2070 Dual, and the blower-style RTX 2070 Turbo. Behold:
In its reference form, the RTX 2070 rings in at a 1410-MHz base clock and a 1620-MHz boost speed. The “factory-overclocked” RTX 2070 Founders Edition puts another 90 MHz on top of that reference boost spec. Asus has warmed up this Strix even further in its default “gaming mode” clock profile. This card has a nameplate 1815-MHz boost clock, and Turing’s GPU Boost 4.0 dynamic voltage and frequency management will no doubt bring that figure even higher in practice. Folks who want to grab Asus’ GPU Tweak II software can enable an “OC Mode” profile that raises the boost clock another 30 MHz, for what that’s worth. Asus doesn’t overclock the card’s memory, but the RTX 2070 already has a rather impressive 448 GB/s of peak memory bandwidth to play with anyway, thanks to its 8 GB of GDDR6 memory running at 14 Gb/s per pin.
Asus’ ROG Strix cards for Turing don’t all tap the same cooler to keep their chips in check. The Strix RTX 2080 uses a brand-new cooler, fan design, and shroud, while the Strix RTX 2070 gets capped off with the same cooler and fans as those on the company’s Strix GTX 1080 Ti. That’s fine, however, since a 2.5-slot cooler on an ostensibly 175-W graphics card is still ridiculous overkill. Incidentally, you can find this same massive cooler on the company’s ROG Strix RTX 2060, too.
As on past Strix cards, the RTX 2070 offers subtle, well-integrated RGB LED lighting for those who want to bedazzle their builds. The main illuminated features of the Strix come courtesy of a pair of light pipes flanking this card’s trio of fans. These light pipes will go unseen in traditional builds with horizontally mounted graphics cards, but folks who go to the trouble of using a vertical mount will be rewarded by the Strix.
Flipping the card over reveals a backplate that will be familiar to observers of Strix cards as far back as the company’s GTX 1080. The ROG logo lights up in RGB LED glory. If you don’t like flash, a button at the rear of the card disables all of the Strix’s RGB LEDs with one touch—no software required.
Around back, the Strix switches out the RTX 2070’s standard complement of three DisplayPort outputs and one HDMI output for a pair each of DisplayPorts and HDMI outs. The card keeps the VirtualLink USB Type-C connector for as-yet-unannounced VR headsets, though.
The front edge of the Strix card boasts a pair of PWM fan connectors to link any connected spinners to changes in GPU temperature. These headers remain unique to Asus’ Strix cards, as far as I can tell, and they’re quite useful even as motherboards have begun to offer more and more temperature sources for connected fans. A system’s GPU temperature tends to remain frustratingly hidden from motherboard makers’ fan-control utilities, and Asus’ software-free fan control approach here makes for the simplest method of ensuring the graphics card gets the fresh air it needs to stay cool.
The Strix’s own fans operate using two separate profiles that you can control with a switch near the LED on/off button. The default “performance” BIOS lacks the zero-RPM-at-idle fan mode we’ve come to expect from modern high-end graphics cards, and its fan curve is more aggressive under load to maximize heat removal. The quiet BIOS, on the other hand, enables both a zero-RPM mode and a less aggressive fan curve than its performance counterpart in order to minimize noise under load. We’ll examine the effects of these firmwares in greater depth during noise and thermal testing.
Stripping the Strix back to its bones gives us a better look at its massive cooler and impressive power-delivery circuitry, as well as a sturdy frame that both mates to the backplate to prevent card sag and helps cool the eight GDDR6 packages that ring the TU106 GPU.
Asus’ cooler uses seven heat pipes running into a large fin stack. The power delivery circuitry gets coupled to this stack using a thermal pad of its own.
The GPU itself comes in contact with a polished, solid baseplate.
To power the Strix, Asus taps the common uPI uP9512 PWM controller. This chip talks to 10 TI NextFET CSD95481RWJ integrated power stages (possibly via some PWM doublers). These highly efficient power stages are rated for 60 A of maximum output each, and even eight of them dedicated to powering the GPU would rival some high-end motherboards for current-delivery potential. On the Strix, I expect these ICs will deliver smooth power while generating minimal heat.
Overall, Asus has built an undeniably high-end and well-considered take on the GeForce RTX 2070 with this ROG Strix card. For the privilege, the company asks $630 for its halo card at e-tail, making it one of the most expensive RTX 2070s around. Let’s have an in-depth look at the performance this card delivers now.
Our testing methods
If you’re new to The Tech Report, we don’t benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.
What’s more, we don’t typically rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out test scenarios that are typical of what one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.
Most of the frame-time data you’ll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn’t suit our needs, we relied on the PresentMon utility.
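To make the reduction from raw captures to the figures in our charts concrete, here's a minimal sketch with made-up frame times. It assumes a nearest-rank percentile and a median across three runs; the exact percentile method and sample data are illustrative, not our production tooling:

```python
import statistics

def percentile(values, pct):
    """Nearest-rank percentile of a list of frame times (ms)."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Three hypothetical runs' worth of frame times, in milliseconds.
runs = [
    [12.1, 13.4, 11.8, 29.0, 12.6],
    [12.3, 12.9, 12.0, 25.5, 12.4],
    [12.8, 13.1, 11.9, 31.2, 12.2],
]

# One 99th-percentile figure per run, then the median across runs.
per_run = [percentile(run, 99) for run in runs]
final = statistics.median(per_run)
print(f"99th-percentile frame time: {final:.1f} ms")
```

A real one-minute capture holds thousands of frame times per run, but the reduction works the same way.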
As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:
| Component | Details |
| --- | --- |
| Processor | Intel Core i9-9900K |
| Motherboard | MSI Z370 Gaming Pro Carbon |
| Memory size | 16 GB (2x 8 GB) |
| Memory type | G.Skill Flare X DDR4-3200 |
| Memory timings | 14-14-14-34 2T |
| Storage | Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games) |
| Power supply | Seasonic Prime Platinum 1000 W |
| OS | Windows 10 Pro version 1809 |
Thanks to Intel, Corsair, G.Skill, and MSI for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, Asus, Gigabyte, and EVGA supplied the graphics cards for testing, as well.
To serve as a foil for the Strix, we’ve fired up Gigabyte’s RTX 2070 Gaming OC 8G card. Unlike the Strix, the Gaming OC 8G doesn’t break out of the two-slot mold, so it can fit into smaller systems where both space and delivered performance are paramount. Gigabyte’s 1725-MHz specified boost clock might seem significantly lower than the Strix’s 1815 MHz in both cards’ default “Gaming Mode” clock profiles, but we know from experience that Nvidia’s GPU Boost dynamic voltage and frequency scaling logic tends to push clocks higher than nameplate specs. If there’s a meaningful difference in delivered performance from these cards, our frame-time-focused benchmarking methods ought to tease it out. Further, this card sells for $530, giving us a look at what an RTX 2070 closer to its suggested price can deliver.
| Graphics card | Boost clock | Graphics driver version |
| --- | --- | --- |
| EVGA GeForce GTX 1070 SC2 Gaming | 1784 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1070 Ti Founders Edition | 1683 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1080 Founders Edition | 1733 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce GTX 1080 Ti Founders Edition | 1582 MHz | GeForce Game Ready 417.35 |
| Gigabyte GeForce RTX 2070 Gaming OC 8G | 1725 MHz | GeForce Game Ready 417.35 |
| Asus ROG Strix GeForce RTX 2070 O8G Gaming | 1815 MHz | GeForce Game Ready 417.35 |
| Nvidia GeForce RTX 2080 Founders Edition | 1800 MHz | GeForce Game Ready 417.35 |
| AMD Radeon RX Vega 56 | 1471 MHz | Radeon Software Adrenalin 2019 Edition 19.1.1 |
| AMD Radeon RX Vega 64 | 1546 MHz | Radeon Software Adrenalin 2019 Edition 19.1.1 |
Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 2560×1440 and 144 Hz, unless otherwise noted.
The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Monster Hunter: World
Monster Hunter: World has a reputation as a hog of a game, and our results bear that out. Only the RTX 2080 can crack 60 FPS on average, and none of our cards turn in a 99th-percentile frame time that would suggest an entirely smooth ride. Even so, our Turing cards open a wide lead over their Pascal predecessors. Perhaps that’s attributable in part to the RTX 2070’s copious 448 GB/s of memory bandwidth versus the GTX 1080’s 320 GB/s peak figure. We also have to keep Turing’s claimed superiority in delta color compression in mind. Either way, the RTX 2070 starts off with a nice lead over its dollar-for-dollar match in the Pascal lineup.
These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. To fully appreciate this data, recall that our graphics card tests all consist of one-minute test runs and that 1000 ms equals one second.
The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then you’re likely to perceive a slowdown. Note that 33 ms correlates to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you’re into the bad voodoo of quantization slowdowns. Also, 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
In less demanding or better-optimized titles, it’s useful to look at our strictest graphs: 8.3 ms corresponds to 120 FPS, the lower end of what we’d consider a high-refresh-rate monitor. We’ve recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today’s high-refresh-rate gaming displays.
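The "time spent beyond X" figures are straightforward to compute from a frame-time capture: for every frame that takes longer than the threshold, accumulate the excess. A minimal sketch, with made-up frame times:

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Sum the portion of each frame's render time past the threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical capture: mostly-smooth frames plus a few spikes.
capture = [14.0, 15.5, 16.0, 22.0, 18.0, 35.0, 15.0]

# The thresholds used in our charts, from worst "badness" to strictest.
for threshold in (50.0, 33.3, 16.7, 11.1, 8.3):
    badness = time_spent_beyond(capture, threshold)
    print(f"beyond {threshold} ms: {badness:.1f} ms")
```

A card can post a decent FPS average yet still rack up time at the 33.3-ms or 50-ms thresholds, which is exactly the kind of unsmoothness this metric exposes.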
Our time-spent-beyond analysis shows the superiority of Turing cards in this title. Where the GTX 1080 spends a whopping 16 seconds delivering frame rates below 60 FPS, the RTX 2070 duo cuts that time roughly in half. The RTX 2080 takes the GTX 1080 Ti’s already impressive 2.8 seconds or so past 60 FPS and spends nearly one-fifth the time huffing and puffing.
Hitman 2
Hitman 2 may not have the DirectX 12 rendering path of its predecessor, but it can still put the hurt on any modern graphics card. We cranked image quality settings to the max to fully flesh out the game’s world of assassination.
Our RTX 2070 duo turns in another fine performance in Hitman 2. Both cards shadow the GTX 1080 Ti and RTX 2080, but the GTX 1080 and RX Vega 64 aren’t far behind. The 99th-percentile frame times from this group impress, too: Even the GTX 1070 comes in right at the 16.7-ms mark, and results improve from there.
Our 50-ms and 33-ms thresholds catch a couple tough frames that the RTX 2070s had trouble with, but that’s hardly a disqualifying result when all of our cards spend about a tenth of a second or less working on frames that take longer than 16.7 ms to render. The magic really happens in this game at 11.1 ms, and the RTX 2070s shave nearly a second off the GTX 1080’s time spent working on tough frames. It’s a small improvement, but we’ll take it.
Far Cry 5
The RTX 2070s turn in about a 10% performance boost over the GTX 1080 Founders Edition, although the RX Vega 64 tails them closely. Each of our contenders delivers a gaming experience smooth enough to match its average frame rate, too. Only the GTX 1070’s 99th-percentile frame time misses the 16.7-ms mark, and not by much, to boot.
At 16.7 ms, all of our cards put a bit of time on the board thanks to some infrequent frame-time spikes in this title. The really interesting point, as with Hitman 2, is at the 11.1-ms mark: Once again, the RTX 2070s spend half as much time as their similarly priced Pascal predecessor does on frames that take longer than 11.1 ms to render.
Forza Horizon 4
Forza Horizon 4 drops drivers into a lovingly rendered rendition of the English countryside. Consequently, it can be rather demanding on graphics hardware if you put the quality-settings pedal to the metal. That’s just what we did to see how our graphics cards perform in this title.
At launch, AMD touted Forza Horizon 4 as a major performance win for its graphics cards, but somewhere between then and now, Radeons’ performance in this title seems to have regressed to parity with the green team’s players for whatever reason. Our Turing contenders also can’t open much of a lead over Pascal cards in this title, whether we consider average FPS or the delivered smoothness.
Not even a trip into our advanced metrics can put much light between Pascal and Turing, or indeed between red versus green. We’ll call this one a push and move on.
Assassin’s Creed Odyssey
Like Monster Hunter: World, Assassin’s Creed Odyssey has a deserved reputation as a difficult game to run well with the eye candy cranked. To approach or exceed 60 FPS with maximum settings in this game, Turing cards prove to be the best bet, but 99th-percentile frame times suggest even those cards’ performance is tempered by some stormy sailing.
A look at our time-spent-beyond-16.7-ms threshold shows only mild chop for our Turing trio. The RTX 2070s spend about seven seconds less than the GTX 1080 does on frames that take longer than 16.7 ms to render during our one-minute test run, and that’s an improvement in delivered smoothness that should prove easily felt.
Gears of War 4
Gears of War 4 really seems to favor Turing cards with its most crushing settings enabled. The RTX 2070s turn in a massive performance boost against the GTX 1080, and the RTX 2080 similarly speeds past the GTX 1080 Ti. We wouldn’t normally recommend cranking screen-space reflections or depth of field to their “Insane” maximums in this title, but if you’ve been waiting to try it for whatever reason, Turing cards make it practical even at 2560×1440.
Our time-spent-beyond-16.7-ms graphs suggest that one truly would need to be insane to push Gears of War 4‘s settings to the max on Pascal and Vega cards, but Turing pixel-pushers just aren’t much bothered by these conditions. The RTX 2070s’ time spent beyond 16.7 ms on tough frames is nearly one-tenth that of the GTX 1080’s, while a click over to the 11.1-ms threshold shows that the RTX 2080 enjoys similar superiority over the GTX 1080 Ti. To be fair, one can enable Gears’ Ultra preset and gain a ton of performance on any of these cards with minimal visual impact, but our benchmarking is trying to push the limits.
Battlefield V
On top of being the marquee title for Nvidia’s RTX effects so far, Battlefield V boasts a cutting-edge DirectX 12 renderer as part of EA’s Frostbite engine.
Battlefield V was plagued by reports of stuttering in its DirectX 12 mode at launch, and it seems that with DirectX Raytracing effects off, Nvidia cards still suffer from that problem to some degree. While the RTX 2070 duo turns in a high average frame rate, their delivered smoothness can’t match AMD’s Vega duo. The RX Vega 56 puts up a 99th-percentile frame time that trails only the GTX 1080 Ti, while the RX Vega 64’s delivered smoothness rivals the RTX 2080.
Our frame-time graphs and 99th-percentile frame-time measurements clue us in to potential unsmoothness, and our time-spent-beyond-X graphs let us see just how bad it is in practice. At the 16.7-ms mark, however, there really isn’t much to worry about from either RTX 2070. A look at the 11.1-ms threshold actually shows that the RTX 2070s spend less time than the Vega cards do on tough frames that take longer than that to render. Despite their higher 99th-percentile frame times overall, then, the RTX 2070s should actually deliver performance similar to, if not smoother than, the RX Vega 64’s.
DirectX Raytracing performance with Battlefield V
Nvidia and EA weren’t kidding about the performance improvements promised in Battlefield V‘s Tides of War Chapter 1: Overture update. The RTX 2080 Ti used to be the only card that could hit 60 FPS on average at 2560×1440 with Battlefield V‘s initial DirectX Raytracing deployment, but the RTX 2070 is now plenty capable of hitting 60 FPS on average at low or medium DXR presets. We didn’t see a huge difference in visual quality with DXR set to high or ultra, but the game certainly doesn’t become unplayable at those settings on an RTX 2070. For its part, the RTX 2080 can handle all DXR settings with fine average frame rates nowadays.
As 99th-percentile frame times go, the RTX 2070 still doesn’t quite manage the golden 16.7-ms mark that we’d hope for at its optimal low and medium DXR presets, but delivered smoothness is far, far better than it was back in November. Enabling Battlefield V‘s low DXR preset on the RTX 2070 at that time resulted in a 27.7-ms 99th-percentile frame time, and kicking things up to medium made the game unplayable. Now, you just have to decide whether you want 60-FPS fluidity or not when dialing in DXR on this card. Either way, the tradeoffs between low, medium, high, and ultra are no longer catastrophic for performance.
Despite putting up 99th-percentile frame times higher than we’d like to see for a perfect 60-FPS-or-better gameplay experience, the RTX 2070 duo comes close to delivering when we look at the time they spend past 16.7 ms. At the optimal low or medium DXR presets, both cards spend under two seconds of our one-minute test run rendering frames that take longer than 16.7 ms to finish. Cranking DXR to high or ultra will result in a less than ideal experience, but it won’t cause any time to show up on the board at the least-desirable 50-ms or 33.3-ms thresholds. That’s significantly better news for Nvidia’s nascent ray-tracing architecture than our initial round of tests. We hope future games with DirectX Raytracing support perform as well out of the gate.
Software and overclocking
Most every major graphics card manufacturer has their own spin on an overclocking utility these days, and Asus’ is called GPU Tweak II. Like the popular MSI Afterburner, GPU Tweak II offers the usual array of monitoring and overclocking tools we’ve come to expect for tweaking any graphics card, including sliders for power and temperature limits, core and memory clocks, and fan speeds. For Asus’ cards, GPU Tweak II also offers access to any prebaked clock profiles on a given card.
I started my TU106 overclocking efforts with these profiles. Asus’ prebaked “OC Mode” profile bumps delivered clocks to 1935 MHz from the 1905 MHz that GPU Boost 4.0 delivers in the card’s default “Gaming Mode.” That’s about a 1.5% increase. Pardon me if this result doesn’t exactly blow my hair back. For comparison, the Gigabyte Gaming OC 8G card turned in 1890-MHz boost clocks in its default “Gaming Mode” profile, while its “OC Mode” caused clocks to bounce between 1905 MHz and 1920 MHz. Asus has a slight edge for delivered clocks on paper, but the preceding pages suggest that edge doesn’t translate into any real performance edge.
GPU Tweak II also includes an implementation of Nvidia’s new-with-Turing Scanner SDK. Scanner automatically performs the arduous process of adjusting every point on the voltage-and-frequency curve that Nvidia makes available for adjustment. For all those smarts, however, Scanner turned in only a 1965-MHz observed boost clock, or about a 3.1% increase over out-of-the-box speeds. That’s better than Asus’ own overclocking profiles, to be sure, but still nothing to get excited about.
To extract something resembling an actual overclock from the Strix card, I had to turn to the usual process of maxing out power and temperature targets before manually increasing the clock speed slider in GPU Tweak II. To test overclocked stability, I turned to the 2016 reboot of Doom, whose lightning-fast code tends to quickly reveal any instability from an overclock. I pushed up clocks until the game began to freeze or crash, ultimately arriving at a software setting of 1950 MHz. That yielded a boost clock of 2010-2040 MHz in games, depending on the workload—a 6.3-7% increase over stock. Next, I turned to memory overclocking. Again, I simply pushed the memory slider in GPU Tweak II up until I observed artifacts or instability in Doom. Once all was said and done, our card’s memory was running at 15.75 GT/s per pin, a 12.5% boost from stock. Simple.
To get some perspective on just how good the Strix card’s overclocking chops were, I also ran through the same manual overclocking regime on the Gigabyte RTX 2070 Gaming OC 8G. After my tweaking was done, the Gigabyte card yielded a boost clock in the range of 1980-1995 MHz and a memory overclock of 15.71 Gb/s per pin. There was just one wrinkle, though: I had to crank the Gigabyte card’s fans to 80% of their maximum duty cycle to keep temperatures in the same 70° C range that seemed to be optimal for stability. Given that the Gigabyte card is a strictly two-slot affair and seems designed more for silence and space-saving than for peak overclocking performance, I’m not surprised that I had to push up the card’s fan speed to help it keep its cool.
For all that, the Gigabyte RTX 2070 Gaming OC 8G and the Strix stayed neck and neck after my manual overclocking efforts. Both cards turned in 6% higher frame rates post-overclocking, on average, and both shaved a millisecond off their 99th-percentile frame times, which in both cases is a roughly 5% reduction. That isn’t nothing, but to me it hardly seems worth the higher power draw and potential instability of an overclocked graphics card. Let’s set aside performance differences for a moment and see whether noise levels and power consumption help distinguish these two cards a bit.
To measure noise levels, we used the Faber Acoustical SoundMeter application running on an iPhone 6S. We measured 18″ from each card’s fans on an open test bench.
At stock speeds, the Asus card’s quiet BIOS really does work. By a hair, its massive cooler and low fan speeds make it the quietest card on the bench. The Gigabyte RTX 2070 isn’t far behind, though, and we doubt we could hear its 1.3 dBA of extra noise. Even better, the quiet BIOS doesn’t result in a large performance tradeoff. With the quiet BIOS enabled, the boost clocks we observed from the card fell to 1875 MHz from 1905 MHz. That’s good, because the card’s out-of-the-box performance BIOS makes it much louder than its Gigabyte competitor for little performance payoff. We ran all of our gaming tests with the performance BIOS enabled, and the extra fan noise didn’t translate into noticeably higher frame rates or lower 99th-percentile frame times. Unless you want to overclock or live in an environment with high ambient temperatures, clicking over to the Strix’s quiet BIOS seems like a no-brainer to us, since it enables the fan-stop-at-idle mode we’ve come to know and love on most modern graphics cards. That feature is lacking in the performance BIOS.
For overclocking, however, the Strix’s performance BIOS adds only 2.5 dBA to its baseline load noise levels, while the overclocked Gigabyte card gets loud enough that it trails only the hair-dryer cooler of the RX Vega 64. Presuming you want to extract the last few percent of performance potential from your graphics card, then, the Strix will let you do it quietly.
For an idea of how efficient each of these graphics cards is at pushing pixels, we turned to the 2016 release of Doom and monitored our test system’s power draw at the wall with our trusty Watts Up power meter.
We don’t have a reference RTX 2070 to compare our custom contenders against, but blow-for-blow, the Asus card lets our test system pull slightly less power at the wall than its competitor. For all intents and purposes, though, the Asus and Gigabyte cards perform about identically at stock speeds when it comes to performance per watt. The Asus card does extend a slight lead over the Gigabyte when we overclock both cards to their limits, though.
Summing up the Asus ROG Strix RTX 2070
Asus has created an exceedingly fine example of the GeForce RTX 2070 with its top-end ROG Strix RTX 2070 O8G. There’s just one problem: All of its fins and finery don’t let the Strix deliver meaningfully different performance at stock clocks or when overclocked versus Gigabyte’s much smaller and much cheaper RTX 2070 Gaming OC 8G.
If we set the performance horse race aside and focus on noise, vibration, and harshness, the Strix stands out in other ways. The second most massive version of Asus’ ROG Strix cooler is more than up to the job of cooling the ostensibly 175-W RTX 2070. As one would expect from its formidable fin stack, the Strix cooler runs politely for the performance it delivers out of the box. Asus kept coil whine to a pleasant minimum, too.
For folks who demand the absolute minimum of fan noise from their systems, Asus also includes a switchable quiet BIOS on the Strix RTX 2070 that enables a zero-RPM-at-idle mode and a quieter-under-load fan curve. In fact, the Asus’ quiet BIOS turned it into the quietest card on our bench by a hair. I saw only the slightest loss of clock speed from this quiet BIOS with the card in stock trim, too.
Those who want to void warranties will find that both Asus’ prebaked OC modes and its implementation of Nvidia’s Scanner tech don’t add more than a couple percentage points’ worth of clock speed to the Strix’s stock capabilities. I had to manually tune the card to get more than a paltry clock boost, and even then, my 2010-MHz final result was just a few percent higher than this card’s stock speeds. Memory overclocking proved more fruitful, as I took the Strix’s GDDR6 from 14 GT/s per pin to 15.75 GT/s per pin.
All told, that overclocking work nabbed about 6% higher performance from the card—nothing to sniff at, but a sign one isn’t missing much by enjoying out-of-the-box speeds, either. Your overclocking mileage will vary, of course, but I’d say a given Strix card is more than ready to squeeze out whatever latent performance potential remains in its GPU and memory chips. Should you push your Strix to the limit, its massive cooler will remain quiet under load—something we can’t say for the Gigabyte card we tested.
The ROG Strix O8G Gaming we tested normally runs $630 at e-tail, making it among the most expensive RTX 2070s on the market. Recall that the suggested price for reference-class RTX 2070s is supposed to be $500, and indeed, there are many RTX 2070s selling for that price at retail right now. Among them, the Gigabyte RTX 2070 Gaming OC 8G we pitted the Strix against is available for $530.
If it can’t win on performance, does the Strix make its $130 premium worth it in other ways? Asus puts a pair of GPU temperature-controlled fan headers on this card that are unique in the industry, to my knowledge, and the hardware switches for its RGB LEDs and dual BIOSes are also nice extras to have for those allergic to software. On top of its fine stock performance and the easy overclocking allowed by its cooler, the Strix both looks and feels like a top-of-the-line product. Implementation details matter as custom graphics cards get harder and harder to distinguish by performance, and it’s clear to me that Asus sweated every inch when it put together the Strix RTX 2070.
I honestly can’t find a single fault with this Strix card beyond its sky-high price tag, and that’s a rare accolade from a perfectionist like yours truly. At the same time, I think all but the most noise-abhorring or overclocking-inclined builders will find greater satisfaction in putting anywhere from $100 to $130 over the RTX 2070’s suggested price into other parts of their systems. It’s worth noting that recent sales have cut as much as $100 off its typical price, and the closer the Strix gets to Nvidia’s suggested price, the more highly I would recommend it. At the Strix’s full asking price, though, buyers will need to weigh whether having one of the quietest, most feature-packed RTX 2070s around is worth Asus’ massive premium.
Let’s sum up the RTX 2070’s position in today’s graphics-card market more generally. We’ve created plots of price versus performance by taking the geometric mean of each card’s average FPS and 99th-percentile frame times in all of the games we tested and plotting that value against each card’s suggested price (if stock is no longer available) or against its retail price on Newegg (if stock is available). To make our higher-is-better approach work, we’ve converted the geometric mean of 99th-percentile frame times into frames per second.
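The index described above can be sketched in a few lines. This is an illustration with hypothetical per-game figures, not our actual dataset; the key step is inverting 99th-percentile frame times into FPS so both axes read higher-is-better:

```python
import math

def geomean(values):
    """Geometric mean, computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-game results for a single card across a test suite.
avg_fps = [66.0, 88.5, 102.0, 71.3]
p99_frame_times_ms = [21.0, 14.2, 12.5, 19.8]

# Convert 99th-percentile frame times (ms) into FPS-equivalent figures.
p99_fps = [1000.0 / t for t in p99_frame_times_ms]

overall_avg = geomean(avg_fps)
overall_p99 = geomean(p99_fps)
print(f"avg-FPS index: {overall_avg:.1f}, 99th-percentile-FPS index: {overall_p99:.1f}")
```

Each card's two index values are then plotted against its suggested or current retail price, as appropriate.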
Contrary to popular opinion, the RTX 2070 isn’t just a GeForce GTX 1080 with ray-tracing acceleration tacked on when it comes to performance. In our final reckoning, the RTX 2070 delivers a nice boost in both 99th-percentile frame times and average FPS compared to its Pascal predecessor, dollar for dollar, and its ray-tracing and tensor-processing hardware open up new rendering capabilities that the GTX 1080 simply doesn’t have. Sorry, but we’re not humoring the idea that this is a GTX 1070 successor on the basis of its name alone. Performance per dollar is what matters most when considering a graphics card, not performance per model number. At a $499.99 suggested cost, the RTX 2070 is priced like a GTX 1080 successor, so that’s how we’re analyzing it.
Although we haven’t gotten our hands on the TU106-powered RTX 2060 to see how it compares to the RTX 2070, we don’t think the lowest-priced Turing card so far obviates its fully fledged sibling. On top of its naturally higher performance, the RTX 2070 may have greater longevity than the RTX 2060, thanks to its 8 GB of GDDR6 RAM. Texture sizes aren’t getting any smaller, and other sites have already uncovered suggestions that the 2060’s complement of GDDR6 can stand in the way of achieving the best performance in Battlefield V with RTX effects enabled. If you play at 2560×1440 or above and can’t stand anything but the highest texture quality in the games of today and tomorrow, the step up to the RTX 2070 might make sense. It also helps that nothing else competes in the $500-or-so price bracket right now. High-end Pascal stock is drying up across the board, and AMD is deeply discounting the RX Vega 64 to hold the line against the apparently superior RTX 2060 in the $350-400 range. If you want to spend $500-ish on a graphics card at the moment, the RTX 2070 is your oyster.
The RTX 2070 isn’t just a winner by default, though. Some might scoff at the fact that this card delivers only 12% higher frame rates on average in our tests compared to the GTX 1080, but that doesn’t fully account for the gaming experience on these cards, as any long-time reader of The Tech Report will know. Indeed, aggregating our time-spent-beyond-16.7-ms data shows that the RTX 2070 provides an unparalleled experience at its price point for gaming smoothness at 2560×1440, and that’s something that comparing geomeans of average FPS or even 99th-percentile frame times can’t fully capture. On top of that smoothness, our re-examination of RTX performance in Battlefield V suggests that even with their performance hit versus purely rasterized rendering, RTX effects are no longer an obstacle to a pleasant gaming experience. No matter how you slice it, the RTX 2070 is a capable and enjoyable graphics card for its price.
We still think Nvidia’s Turing name games are a silly way to make it seem as though these cards are delivering greater generation-on-generation performance improvements than they are, but if we ignore this card’s name and focus on its performance per dollar, it’s easy to see that the RTX 2070 still gives you more for your money than the GTX 1080 did at its final suggested price, be it in fidelity, fluidity, or smoothness. Unless AMD detunes its upcoming Vega 20 GPU to compete with the RTX 2070, Nvidia seems poised to enjoy unchallenged domination of this price point for the foreseeable future. If you have $500 or so to spend on a graphics card right now, the RTX 2070 is the one to get.