Nvidia’s GeForce GTX 1070 graphics card reviewed

Nvidia’s GeForce GTX 1080 set a new high-water mark for single-GPU graphics card performance in our review, and gamers will pay for the privilege of owning one. The card has been difficult to find in stock ever since its release, and prices on the models in stock at Newegg tend to start at the $699.99 suggested sticker for the Founders Edition card. The $599.99 suggested price for custom cards is nowhere to be seen—those models are selling for well over $800 when they’re available. Owning the fastest thing around doesn’t come cheap.

The MSI GTX 1070 Gaming Z

The second consumer Pascal card has a different mission. When it launched the GTX 1070 at DreamHack, Nvidia promised performance greater than a GeForce GTX Titan X for a $379.99 starting price (or $449.99 for the Founders Edition reference card). The Titan X sold for $1000 when it was still available, but a more relevant point of reference for most gamers is the other GM200-powered consumer graphics card: the GeForce GTX 980 Ti. That card listed for $650 when it first hit the market. No matter how we slice the numbers, the GTX 1070 should offer potent performance at a more accessible price point, much like the GTX 970 did when it launched alongside the GTX 980.

Card | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process
GTX 970 | 52 | 104/104 | 1664 | 4 | 224+32 | 5200 | 416 (398) | 28 nm
GTX 980 | 64 | 128/128 | 2048 | 4 | 256 | 5200 | 416 (398) | 28 nm
GTX 1070 | 64 | 120/120 | 1920 | 3 | 256 | 7200 | 314 | 16 nm
GTX 1080 | 64 | 160/160 | 2560 | 4 | 256 | 7200 | 314 | 16 nm
GTX Titan X | 96 | 192/192 | 3072 | 6 | 384 | 8000 | 601 | 28 nm

To hit that more affordable sticker price, Nvidia lopped off a full graphics processing cluster from the GP104 GPU. That gives the GTX 1070 1920 stream processors and 120 texture units, compared to the GTX 1080’s complement of 2560 SPs and 160 texture units. The engineers left the chip’s ROP count and memory controllers alone, so the GTX 1070 ships with GP104’s 64-ROP complement and 256-bit path to memory intact. Here’s a picture of what that might look like as a block diagram: 

Compared to the GeForce GTX 970 before it, the GTX 1070 has 15% more raw shading resources and 15% more texture units at its disposal. Pair those with the considerable clock boost that Pascal brings, though, and the card’s theoretical performance slightly eclipses both the GTX 980 Ti and the Titan X. Nvidia clocks the GeForce GTX 1070 Founders Edition card at 1506-MHz base and 1683-MHz boost speeds. Custom cards from Nvidia’s board partners can run faster still. We’ll examine just what that means for the GTX 1070’s theoretical performance in a bit.

Nvidia also kept the GTX 1070’s price in check by relying on 8GB of good old GDDR5 RAM instead of the GDDR5X memory found on the GTX 1080. In its reference design, the GTX 1070 clocks that RAM at 8 GT/s, up from the 7 GT/s memory speeds we saw with the GTX 980 and GTX 970. Pair that with GP104’s 256-bit memory bus, and we get 256 GB/s of potential bandwidth on tap.
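For the curious, that figure falls straight out of the transfer rate and the bus width. Here’s a quick back-of-the-envelope check in Python (a minimal sketch, not anything from our test tooling; the same math with the Gaming Z’s 8.1 GT/s memory works out to about 259 GB/s):

```python
# Memory bandwidth = transfer rate (GT/s) x bus width (bits) / 8 bits per byte.
# Figures below are for the reference GTX 1070.
transfer_rate_gts = 8      # billions of transfers per second
bus_width_bits = 256       # GP104's memory interface
bandwidth_gbs = transfer_rate_gts * bus_width_bits / 8
print(bandwidth_gbs)       # 256.0 GB/s
```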

Aside from those changes, the GTX 1070 offers the same improvements from the move to Pascal we detailed in our review of the GTX 1080. If you’re not already familiar with that card and the GP104 GPU, you should brush up on those changes before moving on here. Next, let’s take a look at the MSI GTX 1070 Gaming Z card we’ll be using to test out this configuration of the GP104 GPU.

 

The MSI GeForce GTX 1070 Gaming Z

We’re able to review the GTX 1070 today because MSI sent over its spiffy GTX 1070 Gaming Z card. Behold:

This GTX 1070 is the highest-end card in the company’s lineup. It uses a dual-fan “Twin Frozr VI” cooler to keep the GP104 chip in check, and it’s got LED accents scattered across its surface to please buyers’ inner 13-year-olds. In all seriousness, though, we think the card looks quite sporty. Those red lightning-bolt-ish things that surround the forward fan light up when the card is powered on.

The back side of the GTX 1070 Gaming Z is almost entirely covered with a metal backplate. The dragon crest embedded in the plate is backlit by RGB LEDs for easy color-coordination with other lighting in a build, and the MSI logo on the side of the card is similarly bedazzled. You can see the six-pin and eight-pin power connectors this card needs to do its thing from this angle, too.

MSI got in hot water a while back for sending reviewers graphics cards with a special BIOS that activates an “OC mode” clock profile by default. Our card is flashed with such a BIOS. Most custom graphics cards built these days feature several such clock speed profiles that can be selected in a proprietary companion utility, but reviewers don’t generally install that software. For what it’s worth, MSI’s rationale for sending reviewers cards configured this way is that it doesn’t want the press to miss out on the full performance potential of its products.

We’re mostly neutral about this practice, since the clock speed differences often amount to a few percent at best. Still, folks who go out and pick up an MSI GTX 1070 will want to install the MSI Gaming App and enable the card’s OC Mode if they’re looking for performance identical to what we measure in this review. We’ve detailed the changes that OC Mode produces in the following table. We’ve also tossed the GTX 1070 Founders Edition card in for comparison.

Card | GPU base clock | GPU boost clock | Memory config | Memory transfer speed | PCIe aux power | Peak power draw | E-tail price
MSI GeForce GTX 1070 Gaming Z (gaming mode) | 1632 MHz | 1835 MHz | 8GB GDDR5 | 8.1 GT/s | 1x 6-pin, 1x 8-pin | 150W | $469.99
MSI GeForce GTX 1070 Gaming Z (OC mode) | 1657 MHz | 1860 MHz
GeForce GTX 1070 Founders Edition | 1506 MHz | 1683 MHz | 8GB GDDR5 | 8 GT/s | 1x 6-pin | 150W | $449.99

 

As we begin to field-strip the GTX 1070 Gaming Z, you can see the RGB LED backlight for the MSI crest and its associated connector. Otherwise, all that’s back here is a matte-black PCB. You can get some sense of how unusually wide the PCB on this card is by taking a look at how far it extends past the mounting bracket—we’re talking Asus Strix GTX 980 Ti proportions here.

Taking off the heatsink reveals five heatpipes running over a flat base plate. MSI also applies finned heatsinks to much of the card’s power circuitry. A flat metal plate transfers heat from the GDDR5 RAM ringing the GP104 GPU. You can also see the generous dollop of “premium thermal compound” MSI applies to the chip itself.

Removing these auxiliary heatsinks and cleaning off the thermal paste gives us a better look at the Gaming Z card’s 10-phase power-delivery subsystem, the GDDR5 memory, and the GP104 GPU itself. To send power to those VRMs, MSI uses a pair of PCIe power connectors: a six-pin and an eight-pin plug. The GTX 1070 Gaming Z also offers a DVI output, three DisplayPort 1.3 outputs, and a gold-plated HDMI 2.0 connector.

Overall, MSI makes the Gaming Z card feel worthy of its $469.99 price tag, and that’s no surprise. We’ve enjoyed the company’s custom GeForces for quite some time. MSI’s GeForce GTX 970 Gaming 4G card was a perennial feature of our System Guides when that card was current, and we think the GTX 1070 Gaming cards continue that tradition.

Along with the Gaming Z, MSI also offers a couple milder GTX 1070s. The GTX 1070 Gaming card has a simpler backplate, and its 1531-MHz base and 1721-MHz boost clocks are less aggressive than the top-end model’s. This card also doesn’t have the multiple clock profiles of MSI’s fancier 1070s. The Gaming X card keeps the simple backplate of its less-expensive cousin, but it pushes clocks to 1582-MHz base and 1771-MHz boost speeds in its “gaming mode.” Ticking the “OC mode” checkbox pushes those clocks to a 1607-MHz base and a 1797-MHz boost.

All of these cards appear to use similar PCB designs and coolers, so the question of which one to select comes down to your budget and desired clock speeds. The GTX 1070 Gaming card sells for $439.99, while the Gaming X card is $449.99 and the Gaming Z sells for the aforementioned $469.99. We suspect the Gaming card could probably be overclocked to match or beat its more expensive cousins, but folks who don’t want to mess with Afterburner could be forgiven for dropping the extra $10 on a Gaming X. The additional $20 for the Gaming Z card gets you extra bling and slightly higher clocks still. We think it’s hard to go wrong with any of these cards, so let your wallet decide.

Now that we’ve seen the GTX 1070 Gaming Z, let’s see what the cut-down GP104 GPU can do.

 

Our testing methods

As always, we did our best to deliver clean benchmarking results. Our test system was configured as follows:

Processor | Core i7-5960X
Motherboard | Asus X99 Deluxe
Chipset | Intel X99
Memory size | 16GB (4 DIMMs)
Memory type | Corsair Vengeance LPX DDR4 SDRAM at 3200 MT/s
Memory timings | 16-18-18-36
Chipset drivers | Intel Management Engine 11.0.0.1155, Intel Rapid Storage Technology V 14.5.0.1081
Audio | Integrated X99/Realtek ALC1150 with Realtek 6.0.1.7525 drivers
Hard drive | Kingston HyperX 480GB SATA 6Gbps
Power supply | Fractal Design Integra 750W
OS | Windows 10 Pro

 

Card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB)
Asus Strix Radeon R9 Fury | Radeon Software 16.6.1 | 1000 | N/A | 500 | 4096
Radeon R9 Fury X | Radeon Software 16.6.1 | 1050 | N/A | 500 | 4096
Gigabyte Windforce GeForce GTX 980 | GeForce 368.39 | 1228 | 1329 | 1753 | 4096
MSI GeForce GTX 980 Ti Gaming 6G | GeForce 368.39 | 1140 | 1228 | 1753 | 6144
MSI GeForce GTX 1070 Gaming Z | GeForce 368.39 | 1657 | 1860 | 2002 | 8192
GeForce GTX 1080 Founders Edition | GeForce 368.39 | 1607 | 1733 | 2500 | 8192

Our thanks to Intel, Corsair, Asus, Kingston, and Fractal Design for helping us to outfit our test rigs, and to MSI, Nvidia, and AMD for providing the graphics cards for testing, as well.

For our “Inside the Second” benchmarking techniques, we use the Fraps software utility to collect frame-time information for each frame rendered during our benchmark runs. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups. We filter our Fraps data using a three-frame moving average to account for the three-frame submission queue in Direct3D. If you see a frame-time spike in our results, it’s likely a delay that would affect when a frame reaches the display.
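For illustration, here’s a minimal sketch of that sort of three-frame moving average. The function name and the frame times are made up for the example, and this isn’t the exact code behind our tools:

```python
def moving_average(frame_times_ms, window=3):
    """Smooth a list of frame times (ms) with a trailing moving average."""
    smoothed = []
    for i, _ in enumerate(frame_times_ms):
        chunk = frame_times_ms[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A single 50-ms spike gets spread across three frames, mimicking how
# Direct3D's three-frame submission queue can absorb a momentary delay.
print(moving_average([16.7, 16.7, 50.0, 16.7, 16.7]))
```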

You’ll note that aside from the Radeon R9 Fury X and the GeForce GTX 1080, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past for some reason, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, so we think they provide the most relatable performance numbers for our reader base.

To make things simple, when you see “GTX 1070,” “GTX 980,” or “GTX 980 Ti” in our results, just remember that we’re talking about custom cards, not reference designs. You can read more about the MSI GeForce GTX 980 Ti Gaming 6G in our roundup of those custom cards. We also reviewed the Gigabyte Windforce GeForce GTX 980 a while back, and the Asus Strix Radeon R9 Fury was central to our review of that GPU.

Each title we benched was run in its DirectX 11 mode. We understand that DirectX 12 performance is a major point of interest for many gamers right now, but the number of titles out there with stable DirectX 12 implementations is quite small. DX12 also poses challenges for data collection that we’re still working on. For a good gaming experience today, our money is still on DX11.

Finally, you’ll note that in the titles we benched at 4K, the Radeon R9 Fury is absent. That’s because our card wouldn’t play nicely with the 4K display we use on our test bench for some reason. It’s unclear why this issue arose, but in the interest of time, we decided to drop the card from our results. Going by our original Fury review, the GTX 980 is a decent proxy for the Fury’s performance, which is to say that it’s not usually up to the task of 4K gaming to begin with. You can peruse those numbers and make your own conclusions.

 

Sizing ’em up

Take some clock speed information and some other numbers about per-clock capacity from the latest crop of high-end graphics cards, and you get this neat table:

Card | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s)
Radeon R9 290X | 64 | 176/88 | 4.0 | 5.6 | 320
Radeon R9 Fury | 64 | 224/112 | 4.0 | 7.2 | 512
Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 | 512
GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 | 336
Gigabyte GTX 980 | 85 | 170/170 | 5.3 | 5.4 | 224
MSI GeForce GTX 980 Ti | 108 | 216/216 | 7.4 | 6.9 | 336
GeForce Titan X | 103 | 206/206 | 6.5 | 6.6 | 336
MSI GeForce GTX 1070 | 117 | 220/220 | 5.5 | 7.0 | 259
GeForce GTX 1080 | 111 | 277/277 | 6.9 | 8.9 | 320
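As a sanity check on where those figures come from, the GTX 1070’s row can be reproduced from the unit counts in the spec table earlier in this review and the Gaming Z’s 1835-MHz gaming-mode boost clock. This is a rough sketch of the arithmetic, not our actual spreadsheet:

```python
boost_ghz = 1.835        # MSI GTX 1070 Gaming Z, gaming-mode boost clock
rops, tex_units, sps, gpcs = 64, 120, 1920, 3

pixel_fill = rops * boost_ghz                 # ~117 Gpixels/s
texel_rate = tex_units * boost_ghz            # ~220 Gtexels/s (int8 and fp16)
raster_rate = gpcs * boost_ghz                # ~5.5 Gtris/s (one triangle per GPC per clock)
shader_tflops = 2 * sps * boost_ghz / 1000    # ~7.0 tflops (one FMA counts as two flops)
bandwidth = 8.1 * 256 / 8                     # ~259 GB/s

print(pixel_fill, texel_rate, raster_rate, shader_tflops, bandwidth)
```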

In the custom-card form we’re looking at today, the GTX 1070 delivers on its promise of beating out the Titan X in many regards, at least on paper. Its peak pixel fill rate even exceeds that of the GTX 1080 Founders Edition. Since it’s down a GPC—and therefore a raster unit—compared to the GTX 1080, however, the GTX 1070 can’t move as many triangles as its bigger brother, and it also falls short of the much larger GM200 chip. Let’s see how these numbers translate to performance in our directed Beyond3D benchmark suite.

Despite its peak numbers in the table above, the GTX 1070 falls slightly behind the GTX 980 Ti in our directed pixel-fill benchmark. This may be one of the areas where being down a GPC on the GTX 1080 hurts the 1070 a bit.

Since the GTX 980 Ti and the GTX 1070 both rely on GDDR5 memory, it’s easy to make cross-generational comparisons here. The GTX 1070’s higher-clocked GDDR5 and color-compression mojo can’t make up for the fact that its bus width is significantly narrower than that of the GM200 chip on board the 980 Ti.

The GTX 1070 ekes out a win over the GTX 980 Ti in our texturing tests, thanks to its prodigious clock speeds.

Despite its theoretical disadvantage in polygon throughput, the GTX 1070 can also move considerably more triangles than the GTX 980 Ti.

The GTX 1070 lives up to its number-crunching potential in our directed ALU tests, although the GTX 980 Ti isn’t far behind. It’s impressive that the GTX 1070 can deliver this kind of performance despite being down 896 stream processors on the GTX 980 Ti. Now that we’ve teased out the differences that disabling a GPC on GP104 makes, let’s run the GTX 1070 through some real-world tests.

 

Grand Theft Auto V
Grand Theft Auto V has a huge pile of image quality settings, so we apologize in advance for the wall of screenshots. We’ve re-used a set of settings for this game that we’ve established in previous reviews, which should allow for easy comparison to our past tests. GTA V isn’t the most demanding game on the block, so even at 4K you can expect to get decent frame times out of higher-end graphics cards.


Out of the gate, the GTX 1070’s average frame rates come out a nose ahead of the GTX 980 Ti’s. The card also delivers ever-so-slightly better frame times than the GM200 card in our 99th-percentile measure, and that result is backed up by our frame time graph. Neither card has any trouble delivering smooth gameplay at 4K with GTA V, although folks who want to break the 60-FPS barrier with such a high-res monitor will still need to step up to the GTX 1080.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
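For readers who want to compute this sort of metric from their own frame-time logs, here’s one plausible way to do it. It’s a minimal sketch with a made-up function name and frame times, and it counts only the portion of each frame that runs past the threshold, which may not exactly match the accounting in our own tools:

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Sum the time spent past threshold_ms across all frames (in ms)."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical frame times: two frames blow past 16.7 ms, one of them past 33.3 ms.
frames = [15.0, 16.0, 45.0, 18.0, 16.2]
for threshold in (50.0, 33.3, 16.7):
    print(threshold, round(time_beyond(frames, threshold), 1))
```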

The GTX 1070 spends less time beyond the critical 16.7-ms mark than the GeForce GTX 980 Ti, more or less halving that card’s time spent at frame rates lower than 60 FPS. Our older numbers with the Radeon R9 Fury and Fury X might look better with the latest 16.7.2 driver, which contained targeted updates designed to improve smoothness in GTA V with the Radeon RX 480. As things stand, the Fury and Fury X sandwich the GTX 980 in our results. All three of those cards deliver considerably less smooth experiences in GTA V than one might get with a GTX 980 Ti, a GTX 1070, or a GTX 1080.

 

Crysis 3
Crysis 3 is an old standby in our benchmarks. Even though the game was released in 2013, it still puts the hurt on high-end graphics cards. Unfortunately, our AMD Radeon R9 Fury and the 4K display on our test bench have a disagreement of some sort, so the red team is only represented by the Fury X on this set of benches.


In Crysis 3, the GeForce GTX 980 Ti takes a razor-thin lead over the GTX 1070 in both our average FPS and 99th-percentile frame time measures. For most intents and purposes, the cards perform identically in this title, but our advanced frame-time metrics show where they differ.


The GTX 1070 spends barely any time past the 33.3-ms mark in our testing, but it spends slightly more time than the GTX 980 Ti churning on frames that drop the frame rate below 60 FPS. The GeForce GTX 1080 continues to lead the pack in both potential and delivered performance. The Radeon R9 Fury X has a slightly harder time of it still, while the GTX 980 is winded by Crysis 3 at 4K if we use 16.7 ms as our threshold.

 

Rise of the Tomb Raider
Rise of the Tomb Raider is the first brand-new game in our benchmarking suite. To test the game, we romped through one of the first locations in the game’s Geothermal Valley level, since it offers a diverse backdrop of snow, city, and forest environments. RotTR is a pretty demanding game, so we took this opportunity to dial the resolution back to 2560×1440. We also turned off the AMD PureHair feature to avoid predisposing the benchmark toward one card or another in this test, since Nvidia’s HairWorks has created significant performance deltas in past Tomb Raider games when we’ve had it on.


Noticing a pattern yet? The GeForce GTX 1070 takes back the average-FPS lead by a hair in Rise of the Tomb Raider, and it turns in a slightly better 99th-percentile frame time than the GTX 980 Ti, as well.


In our measures of “badness,” the GTX 1070 eclipses the GTX 980 Ti in the time spent working on frames past the critical 16.7-ms mark. The cards are still quite closely matched, though. Both Radeons deliver considerably less smooth experiences than any of the GeForce cards in our test suite—RoTR really seems to favor Nvidia cards.

 

Fallout 4


The GeForce generations flip again in our Fallout 4 results. The gap is the widest it’s been in our results yet, though. The GTX 980 Ti is about 5% faster in our average FPS measure, and it delivers a better 99th-percentile frame time than the GTX 1070, too. Just goes to show the GTX 1070 can’t win ’em all, we suppose.


In our measures of “badness,” none of our cards spend any time past the 50-ms or 33.3-ms marks. The GTX 1070 spends about twice as much time past 16.7 ms working on challenging frames, but neither it nor the GTX 980 Ti spends more than a second doing so in absolute terms. The GTX 1080 is still the smoothness champion for Fallout 4 if you’re targeting 4K and 60 FPS.

 

The Witcher 3
The Witcher 3 is another benchmark where we’ve re-used the settings that we’ve settled on in past reviews. We also chose to test this title at 2560×1440 rather than 4K. We didn’t crank the resolution in part because we wanted to maintain consistency with the numbers we produced in our Radeon R9 Fury X review, but also because the game is demanding enough that playing it at 4K with high settings wasn’t a great experience even on newer high-end cards.


The GTX 980 Ti turns in another razor-thin victory over the GTX 1070 in The Witcher 3. Let’s see if our frame-time metrics can tell us why.


As it happens, the GTX 1070 spends a bit more time working on frames past the 16.7-ms mark. Both cards still turn in admirable performance here, though. You’d have to be paying close attention to discern which card was painting Geralt on your screen with these numbers.

 

Hitman

The 2016 version of Hitman closes out our test suite. We chose to bench this demanding title at 4K to really make our graphics cards sweat.


Hitman continues to be a major challenge for modern graphics cards to run at 4K. The GTX 1070 takes a tiny lead over the GTX 980 Ti here, but both cards are closer to the ragged edge of playability than we’d like.


In our measures of “badness,” we can see that the GTX 1070 delivers a slightly smoother experience overall than the GTX 980 Ti. Both cards spend considerable amounts of time past the 16.7-ms mark, though, so a smooth 60-FPS experience at 4K remains elusive.

Now that we’ve wrapped up our performance analysis, let’s see how the MSI GeForce GTX 1070 Gaming Z card performs on the power, noise, and temperature fronts.

 

Power consumption

Let’s take a look at how much juice the GTX 1070 needs to do its thing. Our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

At idle, our test system consumes a few more watts with the overclocked GTX 1070 inside than it does with the stock-clocked GTX 1080 Founders Edition. That shouldn’t come as any surprise.

Loading the GTX 1070 up with Crysis 3 brings our system power draw within 10 watts of the GTX 1080 FE. Impressively, we can see that the GTX 1070 delivers GTX 980 Ti-class performance while allowing our test system to draw 27% less power under load. Process shrinks are a wonderful thing.

Noise levels

At idle, all of our cards except the GTX 1080 stop their fans, so the noise floor of our lab is the limiting factor in our measurements. Under load, however, the GTX 1070 Gaming Z card produces noise levels bested only by the Radeon R9 Fury X’s closed-loop liquid cooler. MSI’s GeForce GTX 980 Ti was already among the quietest cards of that type we’ve tested, so it’s good to see the company continuing its commitment to quiet gaming.

Most importantly, the sound from MSI’s cooler is also a pleasant, broad-spectrum one. Even when we pushed them to full blast for the heck of it, the twin fans didn’t lose their polite character. Overall, MSI deserves high praise for the noise, vibration, and harshness characteristics of its latest cooler.

GPU temperatures

Of course, quiet computing is just one of the things a good graphics-card heatsink needs to deliver. The Radeon R9 Fury X and its liquid cooler aside, the GTX 1070 Gaming Z is only bested by the much louder and more aggressive cooler on the Gigabyte GTX 980 Windforce card. While it’s not an entirely fair comparison, it’s clear from these results just how much of a drop in temperatures one can get from a custom card compared to Nvidia’s reference design, too. The Founders Edition GTX 1080 runs a full 15° C hotter than the GTX 1070 Gaming Z, even if that card does have the benefit of exhausting hot air directly from the case. We may have to see how MSI’s cooler performs atop a fully-enabled GP104 chip at some point.

 

Conclusions

Let’s sum up our results with a couple of our famous scatter plots. The best values tend toward the upper-left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS to make our higher-is-better system work.
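That conversion is just the reciprocal of the frame time. A trivial sketch with a made-up figure:

```python
frame_time_ms = 20.0                     # hypothetical 99th-percentile frame time
fps_equivalent = 1000.0 / frame_time_ms  # 50 FPS on the scatter plot's y axis
print(fps_equivalent)
```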

To account for the actual conditions of the graphics card market right now, we’ve surveyed Newegg and averaged the price of all in-stock models of the GeForce GTX 980 Ti and the Radeon R9 Fury X. We’ve used the actual price of the Gigabyte Windforce GTX 980 card we employ in our testing, since it’s one of the few GTX 980s still available. We’ve also used the $469.99 retail price of the MSI GTX 1070 Gaming Z card itself in these graphs, since pricing is so wildly variable on GTX 1070s right now.

Unless you skipped straight to our conclusion, the plot of FPS per dollar above should come as no surprise. The GTX 1070 comes eerily close to duplicating the performance of the GTX 980 Ti for less money than that card demanded at launch. The GTX 1070’s dot would move even further to the left if we consider that some of those cards are selling for as little as $410 on Newegg right now. Products that deliver more bang for the buck are always easy to get excited about, and the GTX 1070 offers that improvement in spades. The $470 retail price of the fancy MSI card we tested skews the results a bit, though.

In our 99th-percentile FPS per dollar plot, the GTX 1070 just edges out the GTX 980 Ti. We’d be hard-pressed to call one card or the other better by this measure—they both deliver exceptionally smooth gameplay for the money. That said, we’d always pick the Pascal card over a deeply-discounted GTX 980 Ti for its higher power efficiency and new architectural features. Meanwhile, the Radeon R9 Fury X trails the GTX 1070 in both our average FPS-per-dollar and 99th-percentile FPS-per-dollar measures.

Since the GTX 1070 delivers about the same performance as hot-clocked GTX 980 Tis, folks with those cards have no reason to ditch them for this Pascal-powered product. Those gamers won’t get a performance boost without stepping up to a GTX 1080. Owners of GTX 770, GTX 780, and perhaps even GTX 780 Ti cards—and similar products from AMD—will likely find the GTX 1070 a compelling upgrade for the money, though. It’s also the only sensible step up from the just-released GTX 1060 for builders who want more performance from an Nvidia card without dropping $600 or more on a GTX 1080.

For better or for worse, the GTX 1070 also completes Nvidia’s stranglehold on the high-end graphics card market. Some Radeon R9 Fury X cards are now available for prices similar to some GTX 1070s, but the Fury X’s performance in our tests often trails the Pascal card, and it needs over 100W more power to hang in there under load. We’d love to see the same kind of vigorous competition at this price point that we’re seeing around $200 to $250 thanks to the Radeon RX 480 and the GTX 1060, but our best guess is that AMD’s next high-end graphics card is a while off yet. 

It’s not all bittersweet news today, though. The GTX 1070 Gaming Z card that MSI sent us shows off what’s possible when Nvidia’s board partners work their magic with Pascal. It’s an excellent piece of hardware in every regard: quiet, cool, and fast. Thanks to the strong demand for Pascal cards, however, it’s selling for $90 more than Nvidia’s $380 suggested price for custom GTX 1070s. We still think it’s an OK value at that price compared to the Radeon R9 Fury X and the GTX 1080, but we think you’d really have to like the RGB LED lighting on the backplate to make it worth the $20 price jump over the nearly identical Gaming X variant.

Still, MSI has carried over the mojo that made its GTX 970 Gaming 4G card one of our favorites of its generation. We think the Gaming X and Gaming Z cards are well worth a look if you’re in the market for a GTX 1070.  

Comments closed
    • PBCrunch
    • 3 years ago

    Are frames per second (FPS) and 99th percentile (99) figures from individual games normalized when the FPS per dollar and 99 per dollar graphs are generated?

    It seems like these individual FPS/99 values should be normalized before the averages are calculated, but I didn’t see any mention of this in the article.

    If individual game FPS/99 values are used in their raw form, a single high FPS benchmark with a large raw difference between two cards could easily wash out a number of smaller differences in titles with lower FPS.

    Example:
    Let’s consider a benchmark suite of five titles. Three of these titles are very punishing on the graphics cards being reviewed. One of them is older and less demanding. The last title is a super-high FPS competitive shooter.

    Demanding Strategy Title (DST):
    Card A1: 33.2 FPS
    Card A2: 34.1 FPS
    Card N1: 31.8 FPS
    Card N2: 33.1 FPS

    Demanding Open World Title (DOWT):
    Card A1: 35.2 FPS
    Card A2: 36.1 FPS
    Card N1: 33.8 FPS
    Card N2: 35.1 FPS

    Demanding Third Person Title (DTPT):
    Card A1: 34.2 FPS
    Card A2: 35.1 FPS
    Card N1: 32.8 FPS
    Card N2: 34.1 FPS

    Older Title (OT):
    Card A1: 64.2 FPS
    Card A2: 65.1 FPS
    Card N1: 62.8 FPS
    Card N2: 64.1 FPS

    High Framerate Shooter (HFS):
    Card A1: 219.5 FPS
    Card A2: 225.3 FPS
    Card N1: 251.8 FPS
    Card N2: 264.7 FPS

    If we consider these values, card A2 consistently outperforms the Nx cards in the first four titles by differences that are small in magnitude. The Nx cards outperform the Ax cards by a large margin in the last title, a competitive online shooter that delivers 30 FPS on a potato.

    To my eye, this means the card A2 is the best performer; it delivers superior performance in a wider range of (completely made up) titles. Furthermore, card A2 performs better in the titles where it really matters.

    If you average the raw FPS figures, both Nx cards come out ahead of card A2. The results in the last test, which are probably completely imperceptible, completely overrun the results of all other tests. By this metric, the Nx cards beat both Ax cards in average FPS, despite the fact that A2 takes the crown in four of five titles.

    Raw averages:
    A1: 77.3 FPS
    A2: 79.1 FPS
    N1: 82.6 FPS
    N2: 86.2 FPS

    However, if the average FPS for each title is normalized to between 0 and 1 (*), where 0 represents the lowest FPS score for any card in that title, and 1 represents the highest FPS of any card in the same title, we get results that are more representative of the true pecking order of these completely made up graphics cards:

    Normalized averages:
    A1: 0.49
    A2: 0.83
    N1: 0.14
    N2: 0.65

    When the data is normalized, card A2 is clearly the top dog, and card N1 is clearly at the bottom of the pack. Cards A1 and N2 are pretty close in the middle, which is a pretty close representation of their relative performance.

    Maybe you guys already perform this normalization. If so, it doesn’t seem to be mentioned anywhere. Normalization is a common early step in scientific comparisons.

    (*) Normalization formula: (x_i – x_min) / (x_max – x_min)
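    For what it’s worth, a few lines of Python reproduce the normalized averages above. This is just a sketch of the same min-max normalization, using the made-up FPS figures from the example:

```python
def normalize(scores):
    """Min-max normalize one title's per-card FPS scores to the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

# Per-title FPS for cards A1, A2, N1, N2, in that order.
titles = [
    [33.2, 34.1, 31.8, 33.1],      # DST
    [35.2, 36.1, 33.8, 35.1],      # DOWT
    [34.2, 35.1, 32.8, 34.1],      # DTPT
    [64.2, 65.1, 62.8, 64.1],      # OT
    [219.5, 225.3, 251.8, 264.7],  # HFS
]
per_card = zip(*[normalize(t) for t in titles])
print([round(sum(card) / len(titles), 2) for card in per_card])  # [0.49, 0.83, 0.14, 0.65]
```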

    • Mr Bill
    • 3 years ago

    Is it an architectural difference that allows the int8 and fp16 bars to be nearly identical lengths for the 1070 (as compared to AMD Fiji, where the fp16 rate is half the int8 rate)?
    see [url=https://techreport.com/review/30413/nvidia-geforce-gtx-1070-graphics-card-reviewed/4]Beyond3D Suite Textural Filtering Rate[/url] (edit: fixed link)

    • MustSeeMelons
    • 3 years ago

    Great card, no doubt about that, but the pricing is horrible. Why is everyone accepting that simply as fate?

    • I.S.T.
    • 3 years ago

    Questions about Fallout 4’s test: why is Weapon Debris left off, and is that setting more GPU dependent or CPU dependent?

    • ronch
    • 3 years ago

    Look at that card!! SO FIERCE!!! It’s like it’ll grab you by the neck and gobble you up!!

    • fellix
    • 3 years ago

    “Despite its theoretical disadvantage in polygon throughput, the GTX 1070 can also move considerably more triangles than the GTX 980 Ti.”

    Geometry processing in Nvidia architectures since Fermi is distributed among the SMs, so the throughput is determined by the number of SMs and the time needed for each SM to complete a single polygon.
    In the GM200 GPU, each SM needs 3 cycles to process a single triangle. For the rest of the Maxwell family, as well as the whole Pascal line, that time is down to 2 cycles. A simple calculation bears out the empirical observations. Of course, as far as rasterizing the geometry goes, GM200 still holds a slight advantage despite its clock-rate deficit against GP104, but Pascal is the new undisputed champion in rejecting occluded geometry by a wide margin.
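    Taking those cycle counts at face value, the back-of-the-envelope numbers do line up. In the sketch below, the SM counts are derived from each card’s shader count at 128 shaders per SM, and the clocks are the boost speeds used elsewhere in this review:

```python
# Geometry (setup/cull) throughput per the cycles-per-triangle figures above,
# versus the raster back-end limit of one triangle per GPC per clock.
cards = {
    # name:        (SMs, cycles per triangle, boost GHz, GPCs)
    "GTX 980 Ti": (22, 3, 1.228, 6),
    "GTX 1070":   (15, 2, 1.835, 3),
}
for name, (sms, cycles, ghz, gpcs) in cards.items():
    geometry_rate = sms / cycles * ghz   # Gtris/s through the SMs
    raster_rate = gpcs * ghz             # Gtris/s through the rasterizers
    print(f"{name}: ~{geometry_rate:.1f} Gtris/s geometry, ~{raster_rate:.1f} Gtris/s raster")
```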

      • chuckula
      • 3 years ago

      Are there still worthwhile gains to be made from amping up the rasterization hardware at the end of the pipeline for regular monitors (and that includes 4K and potentially 120Hz 4K) or are the main bottlenecks now further back along the pipeline in the shaders or other parts of the GPU?

        • sweatshopking
        • 3 years ago

        No.

          • chuckula
          • 3 years ago

          YOU MISSPELLED NO!

        • fellix
        • 3 years ago

        AMD does have some performance deficiency in geometry processing, rasterization, and tessellation — those stages of the pipeline are still serialized, though now they offer up to four setup pipes. That’s still well below the widely distributed approach Nvidia uses (which scales perfectly with pixel processing), but it’s good enough for current market demands, considering that games still employ tessellation and geometry shaders quite sparingly. The majority of geometry assets are still baked-in and predefined, fed by the CPU.

          • chuckula
          • 3 years ago

          Thanks! I like to read about the basics of the 3D rendering process and it’s fascinating but incredibly complex down in the details too.

    • odizzido
    • 3 years ago

    Going to post this message for every review until this gets done.

    Please make the lines showing which colour belongs to which card thicker. At the distance I prefer to read from I cannot tell them apart.

      • flip-mode
      • 3 years ago

      I’ve been mentioning this for years, although certainly not in every review. Maybe that’s what it will take. I’ll try to more consistently mention this too.

    • Mat3
    • 3 years ago

    DX11 only: The way it’s meant to be reviewed.

    Could you guys be anymore obvious with your agenda?

      • NTMBK
      • 3 years ago

      Could you be any more obvious with your trolling?

        • Billstevens
        • 3 years ago

        I wouldn’t call him a troll; people are just used to bad sites glossing over Dx12 results. TR’s methodology and history show they are not a biased or bad review site. But without knowledge of both those facts, this review would seem biased.

        While Dx11 is still dominant, many major AAA games now have a Dx12 or Vulkan mode which strongly favors AMD GCN. Those with AMD cards can typically use Dx12 or Vulkan and get performance that surpasses Nvidia in Dx11 or OpenGL. And it is worth noting that in certain games where the Fury X lost its ass, like Hitman, switching to Dx12 has it comfortably stealing second place.

        Every time you pick a graphics card there are trade-offs, and while a 1070 and 1080 are a safer buy across the board than a Fury X, the same can not be said for the 980 or even 980 Ti over the next year. This review does not show that at all due to a lack of Dx12 benches.

        There are at least 2 games in the review that use Dx12, and hopefully TR figures out a good way to bench them with frame times for the next round. Most review sites are benching both frame times and fps in Dx12, and all major new games moving forward are likely to launch with a Dx12 mode.

        Still, the ultimate conclusion is that a 1070, once its price comes down to within range of its MSRP for non-FE boards, will be the best-value high-end card until AMD gets a chance to push back with Vega.

          • Voldenuit
          • 3 years ago

          [quote]While Dx11 is still dominant, many major AAA games now have a Dx12 or Vulkan mode which strongly favors AMD GCN. Those with AMD cards can typically use Dx12 or Vulkan and get performance that surpasses Nvidia in Dx11 or OpenGL.[/quote]

          Hitman DX12: RX 480 > GTX 1060
          Rise of the Tomb Raider DX12: GTX 1060 > RX 480
          Ashes of the Singularity DX12: GTX 1060 = RX 480
          Total War: Warhammer DX12: RX 480 > GTX 1060
          Time Spy DX12: GTX 1060 = RX 480
          DOOM Vulkan: RX 480 > GTX 1060

          Using benchmarks from TPU, G3D, and PCPer, except for the DOOM Vulkan numbers, which are rarer and which I plucked from pcgamer.com. I'm calling small (<5%) differences a wash.

          While the RX 480 gets a big boost from DX12, I'd say it's too early to call either side a straight win for DX12 at the moment, and it would be folly to extrapolate future performance based on the small sample of games currently available, and how buggy many of the DX12 games are right now.

      • K-L-Waster
      • 3 years ago

      Did you conveniently skip this on the “Testing Methods” page?

      [quote]Each title we benched was run in its DirectX 11 mode. We understand that DirectX 12 performance is a major point of interest for many gamers right now, but the number of titles out there with stable DirectX 12 implementations is quite small. DX12 also poses challenges for data collection that we're still working on. For a good gaming experience today, our money is still on DX11.[/quote]

      I'm sure all 8 people who are actively gaming in DX12 today will be broken up over the omission.

        • Mat3
        • 3 years ago

        Ashes
        Total War
        Hitman
        Tomb Raider
        Quantum Break
        Doom (Vulkan)

        Not enough titles? Can’t test them? You’d be hard pressed to find another tech site with a recent GPU review that excluded them completely.

          • Ninjitsu
          • 3 years ago

          http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/21
          http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/27

          Read that and take it easy. BTW, as you point out yourself, DOOM is Vulkan and not DX12.

    • jihadjoe
    • 3 years ago

    Just a suggestion: Since this isn’t a reference design, then the graphs and perhaps the article title ought to specifically say that this is the ‘MSI GTX 1070 Gaming Z’ and not just ‘Nvidia GTX 1070’.

      • Shobai
      • 3 years ago

      On Pg 3:

      [quote]To make things simple, when you see "GTX 1070," "GTX 980," or "GTX 980 Ti" in our results, just remember that we're talking about custom cards, not reference designs. You can read more about the MSI GeForce GTX 980 Ti Gaming 6G in our roundup of those custom cards. We also reviewed the Gigabyte Windforce GeForce GTX 980 a while back, and the Asus Strix Radeon R9 Fury was central to our review of that GPU.[/quote]

      You may have overlooked this.

        • jihadjoe
        • 3 years ago

        I’m gonna have to admit skipping pages 2-4 after the introduction, but surely having clearly labelled graphs isn’t a bad idea.

        It’ll allow the charts to stand on their own without having to qualify items in the list. It’s easy now because we’re talking in the same article’s comments, but if a Vega or a 1060 review comes out later on and those 1070 graphs are quoted then it’ll be easy for the figures to be misinterpreted.

    • Shobai
    • 3 years ago

    Apologies for banging this drum again, but your Frame Number plots could be improved:

    – You’ve got 4 plots on the GeForce charts and 2 on the Radeon charts. In some charts of previous reviews you’ve had a plot common to both; could you please consistently add the GTX 1070 plot to both charts, so that it’s easy to make comparisons?

    – For each of the 6 games, both the x and y axes of the frame number charts are different. That’s an achievement, for sure, but it does make comparisons between cards harder to do at a glance. To my mind, at least, it negates one of the benefits for having buttons to switch between charts.

    – Some of the charts appear to have odd x axis ranges, in that they don’t finish on a scale increment.

    – For Fallout 4, it is impossible to determine whether the GTX 1070 plot on the Radeon chart is being truncated.

    Again, thank you for the review. Please see this feedback as an attempt to help improve the quality of your reviews, and not as a personal attack.

      • MrJP
      • 3 years ago

      Seconded. The scales on the frametime plots really need to be the same between Geforce and Radeon to make flipping between them useful. Thanks.

    • Generic
    • 3 years ago

    I can only see the first paragraph?

    Article currently ends with:
    “…through the wringer to see how a more affordable Pascal card performs.”

      • Shobai
      • 3 years ago

      Click through to the review: click the picture to the right, for instance.

      [it sounds like you got here by clicking the comments link, rather than the article link]

        • Generic
        • 3 years ago

        Thank you Shobai!

        I beg the internets’ pardon for my silliness. /bows deeply

          • Shobai
          • 3 years ago

          No probs, Bob. Happens to the best of us

    • Lord.Blue
    • 3 years ago

    “Taking off the heatsink reveals six heatpipes running over a flat base plate.”

    I only saw 5…3 on top, and 2 on the bottom. Other than that, great review.

      • Jeff Kampman
      • 3 years ago

      That’s what I get for going by MSI’s images. Fixed.

    • chuckula
    • 3 years ago

    Bonus review.
    Thanks guys!

    Good to see the slots getting filled in.

    Now about that GTX-1060… [url=https://techreport.com/forums/viewtopic.php?f=3&t=118247]permit me to make a modest proposal[/url].

      • anotherengineer
      • 3 years ago

      Where is the let Phoronix test it option????

      😉

        • chuckula
        • 3 years ago

        Those spoiled sellouts over at Phoronix already got a GTX 1060, they don’t need another one since they can’t do SLI anyway!

    • xeridea
    • 3 years ago

    Once again, the efficiency is misstated. You are still doing:

    Card A system Power / Card B system Power
    254/350

    which gets you 72.6% power used. However, this is total system power. With an estimated 100W of non-GPU power, the comparison to the 980 Ti is
    154/250 = 61.6%, or 38.4% less power used, instead of 27.6% less power used.

    In case you were wondering, this would make it 62.3% more efficient. I know you don’t have advanced power meters, but please at least do the correct math.
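    Spelled out in code, the two ways of slicing the same measurement look like this (a sketch using the rounded figures above):

```python
system_1070, system_980ti = 254, 350   # total system draw under load (W)
non_gpu_estimate = 100                 # assumed non-GPU share of the system (W)

whole_system_saving = 1 - system_1070 / system_980ti   # ~27% less at the wall
card_only_saving = 1 - (system_1070 - non_gpu_estimate) / (system_980ti - non_gpu_estimate)
print(round(whole_system_saving, 3), round(card_only_saving, 3))   # 0.274 0.384
```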

      • RAGEPRO
      • 3 years ago

      The system uses 27% less power under load. That’s all that was stated. Where’s the problem, again?

        • xeridea
        • 3 years ago

          [quote]Impressively, we can see that the GTX 1070 delivers GTX 980 Ti-class performance for 27% less power under load.[/quote]

          This is saying the card uses 27% less power than the card it is replacing.

          • Jeff Kampman
          • 3 years ago

          Sorry, I should have used clearer language here. Corrected.

    • anotherengineer
    • 3 years ago

    10-phases, wow.

      • maxxcool
      • 3 years ago

      I know right !? My GB motherboard only has 8 … I bet this thing will overclock like a freak.

        • BoilerGamer
        • 3 years ago

        Not on the voltage-locked BIOS it has now; once it is given 1.25V, we will see how far it can go.

        • anotherengineer
        • 3 years ago

        https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_X/27.html

          • chuckula
          • 3 years ago

          That is “only” the X model. I’m not 100% sure if it comes with the insane power delivery of the Z model or is using the reference PCB.

            • anotherengineer
            • 3 years ago

            Ahh just noticed.

            Looks like it though
            https://www.techpowerup.com/reviews/MSI/GTX_1070_Gaming_X/4.html

          • maxxcool
          • 3 years ago

          jebus .. 2400mhz video cards ..

            • stefem
            • 3 years ago

            2400MHz for the ram, the GPU reached a ridiculous 2100MHz clock

            • Voldenuit
            • 3 years ago

            Yeah, my Gigabyte GTX 1070 G1 is boosting to 2065 MHz w a mild overclock, but I haven’t explored the upper limits of overclocking on it (just dialed in +100 MHz core and +500 MHz mem from day one and left it there). Stable boost seems to hover around 2012-2025 MHz, depending on how hot it is that day. My old 970 (another gigabyte) was also running a +120/+500 MHz daily OC (up from factory OC), but I had gone as high as +200 on it before.

            Maxwell and Pascal seem to have a lot of headroom.

      • tipoo
      • 3 years ago

      So that’s where all the phases from the 3 phase 1060 went!

    • Voldenuit
    • 3 years ago

    Yay! TR review! Glad to see it up, Jeff.

    However, I’m a bit disappointed that TR only posted results for one resolution, and it’s not always the same one.

    I doubt a 4K display owner would consider playing at 1440p instead of, say 1080p or native 4K with settings turned down. Likewise, a 1440p or 1080p monitor owner would be playing at their native res, and as cards can shift relative performance quite dramatically between resolutions, presenting only a single resolution for each game is not necessarily the best way to convey which is the best card a given user should get.

    I get that multiple resolutions might make the average performance/price plots more tricky, but I think there are several ways to address that (averaging values, separate plots for target resolution, etc).

    Also, no DOOM Vulkan tests?

      • DPete27
      • 3 years ago

      TR has always tested each game at one selected resolution because they’re trying to hit a playable framerate for each class of GPUs. If anything I like that they flip between 1440p and 4k on various games. It’s inclusive to more readers, 1440p monitor owners and 4k monitor owners (or 1440p and 1080p in the mid-tier reviews)

      Games come and go and you can only test so many. It’s more important to draw a comparison across a range of GPUs than to know, “Ok, I want to play Rise of Tomb Raider at 1440p and max settings at 60fps. Oh good! I should get a GTX 1070!” What happens when you’re done with that game and on to the next?

        • Voldenuit
        • 3 years ago

        [quote]TR has always tested each game at one selected resolution because they're trying to hit a playable framerate for each class of GPUs. If anything I like that they flip between 1440p and 4k on various games. It's inclusive to more readers, 1440p monitor owners and 4k monitor owners (or 1440p and 1080p in the mid-tier reviews)[/quote]

        You do realize that publishing results for 1080p, 1440p and 2160p would be even [i]more[/i] inclusive, right? I get that it triples the amount of runs required, but I do like how sites like pcper, guru3d, tweaktown and techpowerup present information in a way that is value-added to someone looking to build or upgrade their system.

        • Ninjitsu
        • 3 years ago

        The problem with adjusting settings and resolution to hit playable framerates (which may be subjective) is that as a whole the picture gets muddied – if you simply glance over the graphs then you’ll probably assume they’re all talking about the same resolution or similar settings (same level of AA, for example). This is of course, not the case.

        IMO testing 4K is fairly pointless. Maybe flagships should get a 4K test set, so the 1080 and the upcoming Vega and Titan. For everything else it’s of questionable utility to most readers.

    • derFunkenstein
    • 3 years ago

    Thanks for covering all the bases with the OC mode issue. Much like a whole host of other “issues”, this is one for the fanboys to fight out. It’s a ~2% clock speed difference and no change to memory, so I don’t even understand why they did this in the first place. All the results will be within a margin of error anyway.

    That said, the 1070 is quite a handsome card that will be made even more handsome once pricing comes down to MSRP after the initial frenzy ends. Everything is awesome.

    edit: I’m a dumbass, so I removed part of this comment.

      • XTF
      • 3 years ago

        The differences between cards are small; 2% higher clocks make it far more likely this card ‘wins’ over a normally clocked one.

        • derFunkenstein
        • 3 years ago

        I always look at those tiny margins and then I compare prices. After that, I buy the cheapest one anyway, because I’m fly like that.

          • Voldenuit
          • 3 years ago

          [quote]I always look at those tiny margins and then I compare prices. After that, I buy the cheapest one anyway, because I'm fly like that.[/quote]

          I tend to pick based on cooler, prices, and availability. Also size where relevant (I recently migrated to a larger case, so size is less of an issue for me right now). Ended up getting a Gigabyte G1 Gaming GTX 1070 for $429. The MSI Gaming 8G was $10 more at $439 but not in stock at the time I ordered it (it actually came into stock the very next day, so my advice to shoppers is not to get discouraged by items coming in and out of stock). From what I've read at tpu and tweaktown, MSI's Twin Frozr VI cooler is quieter but possibly slightly less effective than Gigabyte's Windforce 3X cooler, so pick whichever suits your needs, or, as is probably going to be more likely, buy whichever is in stock for now.

            • derFunkenstein
            • 3 years ago

            I’ve almost always ended up with eVGA cards going down my path, and they’ve been pretty great for me. My current one is an ACX 2.0 version GTX 970, and before that it was a custom-cooled GTX 760. Both have been great.

            $430 isn’t bad, all things considered.

    • XTF
    • 3 years ago

    Wouldn’t one eight-pin plug be more than enough? And why’s the peak power draw lower than reference?

    > MSI uses a pair of PCIe power connectors: a six-pin and an eight-pin plug.

      • dragontamer5788
      • 3 years ago

      Probably there for the overclockers.

        • XTF
        • 3 years ago

        Wouldn’t 225W be enough for OC? The reference card is only 150W..

          • f0d
          • 3 years ago

          umm nope

          • travbrad
          • 3 years ago

          Depends how far you OC it. 🙂

          • jihadjoe
          • 3 years ago

          Power really shoots up once you exceed a certain threshold. To use CPUs as an example, the [url=http://www.anandtech.com/show/8864/amd-fx-8320e-cpu-review-the-other-95w-vishera/2]FX8320E[/url] uses 86W at stock, but OC to 4.8 and it eats 260W easy.

      • Jeff Kampman
      • 3 years ago

      We’re using a GTX 1080 Founders Edition card in our results, not a GTX 1070 FE. It makes sense that a hopped-up GTX 1070 is going to consume about as much power as the GTX 1080 FE.

        • derFunkenstein
        • 3 years ago

        Speaking of which, how does this thing OC? I’m guessing that cooler would give the GPU all the thermal headroom it needs to go sky-high.

        • XTF
        • 3 years ago

        MSI GeForce GTX 1070 Gaming Z: 150W
        GeForce GTX 1070 Founders Edition: 180W

          https://techreport.com/review/30413/nvidia-geforce-gtx-1070-graphics-card-reviewed/2

          • Jeff Kampman
          • 3 years ago

          Sorry, that was an artifact of the GTX 1080 review. The proper figure is 150W.

      • chuckula
      • 3 years ago

      You have a point that at least the configuration TR tested doesn’t appear to need that 6-pin cable under load.

      Incidentally, I know TR likes to separate products into different review categories, but it appears to be the same power-draw benchmark running Crysis 3 and the same test rig so:

      GTX 1070 (with whatever OC MSI throws in by default): 61W idle / 254W load.
      Rx 480 (stock reference card): 74W idle / 262W load.

      [All numbers total system consumption]

      Source: https://techreport.com/review/30328/amd-radeon-rx-480-graphics-card-reviewed/12

      • f0d
      • 3 years ago

      its for insane overclockers like me that will put in a lot of voltage and clock it as high as they possibly can

      how does having another cable hurt anyways? most PSU’s come with plenty of extra pcie cables

        • travbrad
        • 3 years ago

        Yep better to have too much power available than too little *cough* 480

      • floodo1
      • 3 years ago

      Yes one 8 pin is more than enough. I have my GTX1080 FE running at 2.1ghz on the single 8pin connector without any issues (-8

        • f0d
        • 3 years ago

        how much extra voltage though?

          • floodo1
          • 3 years ago

          zero extra voltage. Just turn up the clock sliders and power limits using PrecisionX OC (-8

            • f0d
            • 3 years ago

            some people like to put in extra voltage for even more overclock
            for example i have 1.422v in my R9-290 which runs at around 1v standard

            when overclocking and overvolting the power requirements skyrocket

    • RAGEPRO
    • 3 years ago

    Nice review boss. Wish I had the cash for one of these, although it would break my lifelong streak of only buying fully-enabled “x80” GPUs.

      • PrincipalSkinner
      • 3 years ago

      A man’s gotta have principles.

    • DrCR
    • 3 years ago

    This 1070 seems both superior and inferior to the 1060 Air Edition to which Jeff referred.

      • superjawes
      • 3 years ago

      The 1060 Air Edition gets a TR recommendation for quietest GPU ever.

        • maxxcool
        • 3 years ago

        And lowest power draw, being water proof.. 😉

          • Voldenuit
          • 3 years ago

          Air is not water proof, so much humidity right now…

            • maxxcool
            • 3 years ago

            Oh fine be all technical … 😛

          • superjawes
          • 3 years ago

          Too bad the warranty is non-existent, though…

          • jihadjoe
          • 3 years ago

          Performance per watt is a huge problem, though.

        • DrCR
        • 3 years ago

        Unfortunately total vaporware though.

      • maxxcool
      • 3 years ago

      LOL “air launch”…

    • sweatshopking
    • 3 years ago

    Moeny
    Also, too bad 480 and 1060 data isn’t included in these charts. I know it is there, but would be handy to have them in one place.

      • DPete27
      • 3 years ago

      My guess is the 1060 review will have all those combined.

    • tsk
    • 3 years ago

    Nice, now just support adaptive-sync.

      • JosiahBradley
      • 3 years ago

      I’d seriously consider buying a 1070 or 2 if they worked with the fancy monitor I have, but I don’t see buying another monitor that cost hundreds more than my current one on top of new GPUs. AMD has no upgrade path for me right now, so I guess I just get to wait for VEGA and hope it is faster than my current setup.

      • EzioAs
      • 3 years ago

      I believe you’re talking about variable refresh rate, similar to FreeSync/G-Sync. Nvidia has had adaptive v-sync for years.

        • tsk
        • 3 years ago

        Yes that is what I am talking about, and the VESA standard is called adaptive-sync.

      • floodo1
      • 3 years ago

      Except that there are now quite a few monitors where the g-sync version offers higher refresh rates than the free/adaptive-sync version (i.e. XR341CK with freesync = 75hz while X34 with g-sync = 100hz)

        • tsk
        • 3 years ago

        So? It still doesn’t work with my FreeSync monitor. I’m not saying G-sync needs to go away, but with the flip of a switch, nvidia could support adaptive-sync(freesync) tomorrow in a driver update if they wanted.

          • floodo1
          • 3 years ago

          Good point, Nvidia could do both! Would be nice because of how much cheaper FreeSync monitors are!

            • Voldenuit
            • 3 years ago

            I’d definitely be up for both companies supporting both standards (and yes, some people will debate the semantics of G-sync being a “standard”, I just meant the electrical and electronic protocol).

            Unfortunately, that scenario is entirely contingent upon nvidia playing nice, and well… (I use their cards, but I have no illusions about their corporate beneficence).

            EDIT: and @ tipoo, since the G-sync controller is an FPGA, I would also imagine that a firmware update could technically reprogram the monitors to accept both G-sync and Freesync signals, but… ngreedia (said without irony, this time).

        • tipoo
        • 3 years ago

          I don’t think many people deny its technological superiority, but many are still locked into one or the other with a monitor buy, and the MSRPs of the G-Sync ones are still higher. So adding support for the open standard as well, even if it’s not quite as good, would still be a boon.

          • floodo1
          • 3 years ago

          Yeah good point … Nvidia could support FreeSync because those monitors are cheaper and support G-Sync because those monitors frequently perform better … would be a huge win for consumers!

      • odizzido
      • 3 years ago

      Nvidia isn’t about giving customers what they want and improving the market they’re in.

      • derFunkenstein
      • 3 years ago

      Yep. The price of GSync has kept me from the variable refresh party.

        • tsk
        • 3 years ago

        I have a freesync monitor and I’m sticking with team red until nvidia decides to support the open standard, can’t do anything but vote with my wallet.
        I would have bought a GTX 1060 instead of the RX 480 if it supported my monitor.

          • f0d
          • 3 years ago

          i have freesync and it makes such little difference that im still going to get the best gaming card for the money i have

          the extra fps you get for having the best gaming card you can get with your dollar is more important to me than freesync/gsync – no matter which company makes it

          freesync makes a difference below 50fps but i never play games at such a low framerate anyways – ill turn down the settings until im at 75 minimum

            • Voldenuit
            • 3 years ago

            I’m of the opposite opinion. After playing games on a G-Sync monitor with a 970 and a 1070, I simply can’t go back to 60 Hz fixed-refresh displays (I’m using my old 60 Hz monitor as a secondary monitor, and anything with motion just looks “off” on it now).

            Overwatch at 90 fps (on the 970) and 130 fps (1070) is ridiculously smooth, and even watching movies and tv shows at 120 Hz (which divides more nicely into 24 and 30 fps content than 60 Hz) is much improved.

            Wrt low framerates, I agree that 40 fps on a VRR monitor is a lot smoother than 40 fps on a 60 Hz display, but it’s still 40 fps. Like you, I prefer to turn down settings to get a higher framerate, but unlike you, I find the best experience on my monitor to be when I’m at 90-144 fps.

            • f0d
            • 3 years ago

            i think you misunderstood as what you said is exactly what i intended to say
            framerate matters more than having freesync

            we only differ on what our minimum (lowest) framerate should be, the lowest i would go is 75 and the lowest it seems you would go is 90, i cant seem to help the 75 minimum as no matter how much hardware i throw at planetside 2 when there is hundreds of people on my screen and explosions and stuff going off everywhere it dips down to 75

            60 is horrible and i dont want to touch it with a ten foot pole

            • Voldenuit
            • 3 years ago

            Ah, I get it. You’re saying that having freesync/g-sync doesn’t fully compensate for low framerates, a sentiment with which I am completely in agreement.

            And I’m willing to go lower than 90 or even 75, depending on the genre of the game. But it’s hard to go back to fixed refresh once you get spoiled on VRR.

          • Voldenuit
          • 3 years ago

          I’d be with you on this if every Freesync monitor did 40-144 Hz, or better.

          Monitors with 40-75 or 45-90 Hz VRR shouldn’t qualify for the Freesync moniker, IMO. And manufacturers need to be more upfront about which frequencies they support. Sometimes, it’s not even in the published tech specs.

          EDIT: Added “IMO” to the end of a sentence.

            • derFunkenstein
            • 3 years ago

            I don’t think it has to be one or the other. No reason to kill GSync when it has technical advantages that somebody wants to use (like you), but they could also help a brother out by turning on adaptive sync for the rest of us plebs.

            I have a GTX 970, so there’s no real attractive AMD upgrade for me right now. On top of that, nothing I play is terribly strenuous. Buying a G-Sync monitor vs buying a new AMD card + FreeSync is roughly the same cost, and neither is a cost-effective decision for me because I just want to play with something. So I sit and I wait.
