Nvidia’s GeForce GTX 1080 graphics card reviewed

Pascal is here. After a long, long stop at the 28-nm process node, Nvidia is storming into a new era of graphics performance with a freshly-minted graphics architecture manifested on TSMC’s 16-nm FinFET process. Forget the modest introduction of Maxwell on board the GeForce GTX 750 Ti. The first consumer Pascal graphics card—the GeForce GTX 1080—is a high-performance monster that’s so fast, it’s practically warping the high-end graphics card market in its wake.

We can say all of this now because we’ve seen the numbers other reviewers have generated over the past few weeks, just as you have. Our goal today isn’t to produce a bunch of average frame rate results and crown the card a winner, though—we already know that the GTX 1080 is the fastest single-GPU graphics card around by that measure. Instead, we’ll be using our frame-time benchmarking methods to characterize just how smooth a gaming experience the GTX 1080 delivers along with its world-beating speed.

First, though, let’s discuss the improvements that Nvidia made under the hood with Pascal to deliver the kinds of performance we’ll be seeing from our test bench. You should check out our Pascal architecture deep-dive to get a broad idea of where Nvidia is coming from with its latest generation of products if you haven’t already—we won’t be revisiting all of the information presented there in this review.

A new GPU: GP104

Nvidia isn’t using its largest Pascal chip to power the GTX 1080. That GP100 GPU is only available as part of a Tesla P100 card for high-performance computing systems right now, and for good reason. That enormous chip (610 mm²!) is full of double-precision hardware that’s of no use to most gamers, and Nvidia apparently has no problem selling every one of those chips it can make to large businesses that need double-precision speed.

The GTX 1080 and its GTX 1070 sibling are both powered by a smaller chip called GP104. This 314-mm² chip has a smaller footprint than the GM204 before it, but thanks to the wonders of Moore’s Law, it packs more resources into that smaller space. The fully-enabled GP104 chip in the GTX 1080 has 20 Pascal SMs for a total of 2560 stream processors, up 25% from GM204’s 2048 and about 17% fewer than the 3072 in the fully-enabled GM200 on the Titan X.

A block diagram of the GP104 GPU. Source: Nvidia

Nvidia has also bumped GP104’s texturing capabilities a bit. This chip has 160 texture units, up from 128 in GM204. Its complement of 64 ROPs is the same as the middleweight Maxwell’s, though, and that ROP count is still down on the 96 of the GM200 chip in the Titan X and the GTX 980 Ti.

|        | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process |
|--------|------------------|----------------------------------|-------------------|----------------------------|-------------------------------|---------------------------------------|----------------|-------------|
| GM204  | 64 | 128/128 | 2048 | 4 | 256  | 5200 | 416 (398) | 28 nm |
| GP104  | 64 | 160/160 | 2560 | 4 | 256  | 7200 | 314       | 16 nm |
| GM200  | 96 | 192/192 | 3072 | 6 | 384  | 8000 | 601       | 28 nm |
| Hawaii | 64 | 176/88  | 2816 | 4 | 512  | 6200 | 438       | 28 nm |
| Fiji   | 64 | 256/128 | 4096 | 4 | 4096 | 8900 | 596       | 28 nm |

What’s most eye-popping about GP104 isn’t its resource allocations, impressive though they might be. It’s the chip’s clock speeds. The reference GTX 1080 runs at a bonkers 1607 MHz base and 1733 MHz boost clock. Recall that the GM204 chip in the GTX 980 ran at 1126 MHz base and 1216 MHz boost clocks in its reference design. Nvidia has also demonstrated considerable overclocking headroom on GP104. The company showed off a card running at 2.1 GHz—on air, no less—during its DreamHack keynote.

That clock jump is partially thanks to the move to the 16-nm FinFET process, but Nvidia says its engineers worked hard on boosting clock speeds in the chip’s design process, too. The company says the finished product’s clock speed boost is “well above” what the process shrink alone would have produced.

In general, a move to a smaller process gives chip designers the ability to extract the same performance from a device that consumes less power, or to get more performance from the same power budget. Given the choice, it’s not surprising that Nvidia’s engineers appear to be pushing the performance envelope this time around. The GTX 1080’s 180W board power has crept up a bit from the GTX 980’s 165W figure, but it’s still frugal enough that the green team only needed to put a single eight-pin PCIe power connector on the card. We’ve long praised the company’s Maxwell cards for their efficiency, so we’ll forgive the GTX 1080 its slightly higher power requirements on paper.

New memory, too: GDDR5X

While the Tesla P100 is packaged with 16GB of HBM2 RAM, Nvidia uses GDDR5X RAM on the GTX 1080. GDDR5X is an evolution of the GDDR5 standard we know and love, and it achieves higher transfer rates per pin (10 to 14 GT/s) than GDDR5. Nvidia runs these chips at 10 GT/s and pairs them with a 256-bit memory bus. That’s good for a theoretical 320 GB/s of bandwidth. That figure is a major improvement over the GTX 980’s 224 GB/s, though it’s a bit short of the GeForce GTX 980 Ti’s 336 GB/s and well behind the Radeon R9 Fury X’s 512 GB/s.

Source: Nvidia
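For those keeping score at home, the bandwidth figures above fall straight out of the per-pin transfer rate and the bus width. Here’s a quick back-of-the-envelope check in Python (our own arithmetic, not anything from Nvidia’s tooling); the 7 GT/s GDDR5 and 1 GT/s HBM rates for the older cards are the usual published figures.

```python
# Theoretical peak bandwidth = per-pin transfer rate * bus width / 8 bits per byte.
def peak_bandwidth_gbs(transfer_rate_gtps, bus_width_bits):
    """Return theoretical peak memory bandwidth in GB/s."""
    return transfer_rate_gtps * bus_width_bits / 8

print(peak_bandwidth_gbs(10, 256))   # GTX 1080, GDDR5X: 320.0 GB/s
print(peak_bandwidth_gbs(7, 256))    # GTX 980, GDDR5:   224.0 GB/s
print(peak_bandwidth_gbs(7, 384))    # GTX 980 Ti:       336.0 GB/s
print(peak_bandwidth_gbs(1, 4096))   # R9 Fury X, HBM:   512.0 GB/s
```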

Raw transfer rates don’t tell the whole story in Pascal, though. This new architecture has a souped-up version of the delta-color-compression techniques that we’ve seen adopted across the industry. Pascal can apply its 2:1 compression more often, and it includes two new compression modes. Nvidia says the chip can employ a new 4:1 compression mode in cases where per-pixel deltas are “very small,” and an 8:1 compression mode “combines 4:1 constant color compression of 2×2 pixel blocks with 2:1 compression of the deltas between those blocks.”

An example of Pascal’s color compression in action. Pink portions of the frame are compressed. Source: Nvidia

The net result of that compression cleverness is that Pascal can squeeze down more of the color information in a frame than Maxwell GPUs could. That lets the card hold more data in its caches, reduce the number of trips out to its onboard memory, and reduce the size of data transferred across the chip. Nvidia says these improvements are good for a roughly 20% increase in “effective bandwidth” above and beyond the move to GDDR5X alone.
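To make the idea behind those compression modes a bit more concrete, here’s a toy, single-channel sketch of delta color compression in Python. It’s our own illustration of the principle, not Nvidia’s hardware algorithm: store one anchor pixel per tile plus small per-pixel deltas, and fall back to raw storage when the deltas don’t fit.

```python
def compress_tile(tile):
    """tile: four pixel values from a 2x2 block (one channel, for simplicity).

    Returns a tag describing how the tile would be stored: a single constant
    color, an anchor plus small deltas, or raw (incompressible) data."""
    anchor = tile[0]
    deltas = [p - anchor for p in tile[1:]]
    if all(d == 0 for d in deltas):
        return ("constant", anchor)              # best case: one value per tile
    if all(-128 <= d < 128 for d in deltas):
        return ("delta", anchor, deltas)         # anchor plus byte-sized deltas
    return ("raw", tile)                         # compression doesn't help here

print(compress_tile([200, 200, 200, 200]))       # flat color compresses best
print(compress_tile([200, 203, 198, 201]))       # smooth gradients compress well
print(compress_tile([0, 65535, 42, 31000]))      # noisy content stays raw
```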

 

Pascal architectural improvements

Getting asynchronous

Anybody attuned to the enthusiast hardware scene over the past few months has doubtless heard a ton about graphics cards’ asynchronous compute capabilities, namely Radeons’ prowess and GeForces’ apparent shortcomings on that point. However much stock you place in this argument, Pascal appears to offer improved asynchronous compute capability versus Maxwell chips.

First, we should talk a little bit about the characteristics of an asynchronous compute workload. Nvidia suggests that an asynchronous task might overlap with another task running on the GPU at the same time, or it might need to interrupt a task that’s running in order to complete within a given time window.

Source: Nvidia

One example of such a compute task is asynchronous timewarp, a VR rendering method that uses head-position data to slightly reproject a frame before sending it out to the VR headset. Nvidia notes that timewarp often needs to interrupt—or preempt—a task in progress to execute on time. On the other hand, less time-critical workloads, like physics or audio calculations, might run concurrently (but asynchronously) with rendering tasks. Nvidia says Pascal chips support two major forms of asynchronous compute execution: dynamic load-balancing for overlapping workloads, and pixel-level preemption for time-sensitive ones.

It’s here that we actually learn a thing or two about what Maxwell could do in this regard—perhaps even in more depth than we ever did while those chips were the hottest thing on the market. Nvidia says Maxwell provided overlapping workloads with a static partitioning of resources: one partition for graphics tasks, and another for compute. The company says this approach was effective when the partitioning scheme matched the resources needed by both graphics and compute workloads. Maxwell’s static partitioning has a downside, though: mess up that initial resource allocation, and a graphics task can complete before a compute task, causing part of the GPU to go idle while it waits for the compute task to complete and for new work to be dispatched.

It might seem obvious to say so, but like any modern chip, GPUs want all of their pipelines filled as much of the time as possible in order to extract maximum performance. Idle resources are bad news. Nvidia admits as much in its documentation, noting that a long-running task in one resource partition might cause performance for the concurrent tasks to fall below whatever the potential benefits of running them together might have offered. Either way, if you were wondering what exactly was going on with Maxwell and async compute way back when, it appears this is your answer.

Source: Nvidia

Pascal looks like it’s much better provisioned to handle asynchronous workloads. For overlapping tasks, the chip can now perform what Nvidia calls dynamic load balancing. Unlike the rather coarse-sounding partitioning method outlined above, Pascal chips can dispatch work to idle parts of the GPU on the fly, potentially keeping more of the chip at work and improving performance.
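The difference between those two approaches is easy to see with a little arithmetic. The sketch below is our own toy model, not Nvidia’s scheduler: we guess a static split of GP104’s 20 SMs badly, then compare when the combined graphics-plus-compute workload finishes against a dynamically balanced pool.

```python
# Work amounts are in SM-milliseconds; SM counts are how many units each queue gets.
def finish_time_static(gfx_work, compute_work, gfx_sms, compute_sms):
    """Maxwell-style static partitioning: each partition only runs its own queue,
    so the GPU is done only when the slower partition finishes."""
    return max(gfx_work / gfx_sms, compute_work / compute_sms)

def finish_time_dynamic(gfx_work, compute_work, total_sms):
    """Pascal-style dynamic load balancing: SMs that run out of work pick up the
    other queue, so the whole pool stays busy until everything is done."""
    return (gfx_work + compute_work) / total_sms

# A badly guessed 16/4 split: graphics finishes early and 16 SMs sit idle.
print(finish_time_static(gfx_work=100, compute_work=100, gfx_sms=16, compute_sms=4))  # 25.0 ms
print(finish_time_dynamic(gfx_work=100, compute_work=100, total_sms=20))              # 10.0 ms
```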

Nvidia doesn’t go into the same depth about Maxwell’s preemption capability as it does for the architecture’s methods for handling overlapping workloads, but given friend-of-TR David Kanter’s now-infamous comment about preemption on Maxwell being “potentially catastrophic,” perhaps we can guess why. Pascal’s preemption abilities seem to be much better, though. Let’s talk about them.

Source: Nvidia

For one, Nvidia claims Pascal is the first GPU architecture to implement preemption at the pixel level. The company says each of the chip’s graphics units can keep track of its intermediate state on a work unit. That fine-grained awareness lets those resources quickly save state, service the preemption request, and pick up work where they left off once the high-priority task is complete. Once the GPU is finished with the work that it can’t save and unload, Nvidia says that task-switching with preemption can finish in under 100 microseconds. Compute tasks also benefit from the finer-grained preemption capabilities of Pascal cards. If a CUDA workload needs to preempt another running compute task, that interruption can occur at the instruction level.

Simultaneous multi-projection, single-pass stereo, and VR

One of the biggest architectural changes in Pascal is a new component in the PolyMorph Engine, the geometry processor that first arrived in Fermi GPUs. That processor now benefits from a feature called the Simultaneous Multi-Projection Engine, or SMPE. This hardware can take geometry information from the upstream graphics pipeline and create up to 16 separate pre-configured projections of a scene across up to two different camera positions. It efficiently performs a task that would previously have required generating geometry for as many separate projections as a developer wanted to create—a prohibitively performance-intensive proposition.

Source: Nvidia

All that jargon essentially means that in situations where a single projection might have caused weird-looking perspective errors, like one might see with a three-monitor surround setup, Pascal can now account for the angle of those displays (with help from the application programmer) and create the illusion of a continuous space across all three monitors with no perspective problems.

Surround gaming is just one application for this technology, though—it also has major implications for VR performance. You’ll remember that the SMPE can create projections based on up to two different camera positions. Humans have two eyes, and if we put on a VR headset, we end up looking at two different screens with slightly different views of a scene. Before Pascal hit the market, Nvidia says graphics cards had to render for each eye’s viewpoint separately, resulting in twice as much work.

An example of how the same scene needs to look for different eyes in VR. Source: Nvidia

With Pascal, however, the SMPE enables a new capability called Single-Pass Stereo rendering for VR headsets. As Nvidia puts it, Single-Pass Stereo lets an application submit its vertex work just once. The graphics card then produces two positions for each vertex and matches each one up with the correct eye. This feature essentially cuts the work necessary to render for a VR headset in half, presuming a developer takes advantage of it.
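Conceptually, the trick looks something like the sketch below. This is our own simplified illustration in Python with NumPy, not Nvidia’s driver or hardware path, and the identity projection matrix and 64-mm interpupillary distance are placeholder values: a vertex is submitted once, and two per-eye positions fall out of two view matrices that differ only by the eye offset.

```python
import numpy as np

def eye_view_matrix(eye_offset_x):
    """A trivial view matrix that just shifts the camera sideways by one eye's offset."""
    view = np.eye(4)
    view[0, 3] = -eye_offset_x
    return view

def single_pass_stereo(vertex, projection, ipd=0.064):
    """Take one submitted vertex and return a clip-space position for each eye."""
    v = np.append(vertex, 1.0)                       # homogeneous coordinates
    left = projection @ eye_view_matrix(-ipd / 2) @ v
    right = projection @ eye_view_matrix(+ipd / 2) @ v
    return left, right

projection = np.eye(4)                               # placeholder projection matrix
left, right = single_pass_stereo(np.array([0.1, 0.0, -2.0]), projection)
print(left, right)                                   # same vertex, two eye positions
```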

An example VR scene, before and after traditional post-processing for a VR headset display. Source: Nvidia

SMPE and its effects on VR don’t end there, however. The technology also lets developers take advantage of a feature called Lens Matched Shading, or LMS for short. Prior to Pascal, graphics cards had to render the first pass of an image for a VR viewport assuming a flat projection. Because VR headsets rely on distorting lenses to create a natural-looking result, however, a pre-distorted image then has to be produced from that flat initial rendering to create a final scene that looks correct through the headset. This step throws away data. Nvidia says a traditional graphics card might render a 2.1MP image for a VR scene, but after post-processing, that image might be only 1.1MP. That’s a huge amount of extra work for pixels that are just going to be discarded.

An example of Lens Matched Shading in action. Source: Nvidia

LMS, on the other hand, takes advantage of the SMPE to render a scene more efficiently. It first slices the viewport into quadrants, then generates a projection for each one that closely approximates the portion of the lens through which that part of the image will eventually be viewed. With this multi-projection rendering, the preliminary image in Nvidia’s example is just 1.4MP before it goes through the final post-processing step—a major increase in efficiency.
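Nvidia’s example numbers make the savings easy to quantify. The arithmetic below simply restates the figures from the paragraphs above; the percentages are ours.

```python
# Megapixels rendered vs. megapixels that survive the lens-distortion pass,
# using the example figures Nvidia quotes for a single VR viewport.
flat_render_mp, final_image_mp, lms_render_mp = 2.1, 1.1, 1.4

wasted_flat = flat_render_mp - final_image_mp     # pixels shaded, then thrown away
wasted_lms = lms_render_mp - final_image_mp

print(f"Flat projection: {wasted_flat:.1f}MP wasted "
      f"({wasted_flat / flat_render_mp:.0%} of the shading work)")
print(f"Lens Matched Shading: {wasted_lms:.1f}MP wasted "
      f"({wasted_lms / lms_render_mp:.0%} of the shading work)")
```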

 

A grab bag of other improvements and changes

The GTX 1080 and the Pascal architecture introduce a number of smaller improvements and changes, as well. We won’t be covering these in depth today, but some of them are worth taking a brief look at. For more information, we’d recommend checking out Nvidia’s excellent GeForce GTX 1080 whitepaper.

Fast Sync

Nvidia notes that competitive gamers who run titles like Counter-Strike: Global Offensive at high frame rates often leave vsync off to let the graphics card run as fast as possible and to minimize input latency, at the expense of introducing tearing. When you’re rendering frames at multiple hundreds of FPS, it makes sense that tearing would be rampant. Fast Sync is a new frame-output method that’s meant to eliminate tearing while maintaining most of the competitive benefits of running with vsync off. To accomplish that ideal, Nvidia says it decoupled the rendering and display stages of the graphics pipeline. With Fast Sync on, the card can still render frames as fast as possible, but it’ll only send completed frames to the display, avoiding tearing.

Source: Nvidia

To make that principle work, the Fast Sync logic adds a third buffer—the “last rendered buffer”—to the traditional front and back buffers of a graphics pipeline with vsync on. This new buffer contains the last complete frame written to the back buffer. It holds this frame until the front buffer finishes sending a frame to the display, at which point it’s renamed to the front buffer and the display begins scanning out the completed frame held within.

Nvidia emphasizes that no copying between buffers occurs in this process. Rather, the company notes that it’s much more efficient to simply rename buffers on the fly. Once the last rendered buffer becomes the front buffer and scanout begins, a bunch of buffer-naming musical chairs occurs in the background while the display is scanning out so that rendered frames have places to go in the meantime. When the display scanout completes and the music stops, whichever buffer had assumed the role of the “last rendered buffer” prior to that point becomes the front buffer, and the cycle repeats. Nvidia says new flip logic in Pascal is responsible for managing this process.
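Here’s a minimal sketch of that buffer-renaming dance, written by us to illustrate the principle rather than to mirror Nvidia’s flip logic. The renderer always has somewhere to draw, finished frames rotate into a “last rendered” slot, and at each refresh the newest complete frame simply takes over the front-buffer role; no frame data is copied.

```python
class FastSyncBuffers:
    """Toy model: buffers are represented by the IDs of the frames they hold."""
    def __init__(self):
        self.front = None            # frame currently being scanned out
        self.last_rendered = None    # newest complete frame, waiting for a refresh
        self.next_frame_id = 0

    def render_frame(self):
        # The game renders as fast as it likes. Each completed frame replaces
        # the previous "last rendered" frame; stale frames are simply dropped.
        self.next_frame_id += 1
        self.last_rendered = self.next_frame_id

    def vsync(self):
        # At the refresh boundary, the newest complete frame is "renamed" to the
        # front buffer. Only the buffer's role changes, not its contents.
        if self.last_rendered is not None:
            self.front = self.last_rendered
        return self.front

buffers = FastSyncBuffers()
for _ in range(5):              # five frames rendered between two refreshes
    buffers.render_frame()
print(buffers.vsync())          # the display only ever sees the newest frame: 5
```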

Source: Nvidia

Fast Sync adds a bit of latency to the gameplay experience, as the annoyingly vague chart above purports to show. Still, as someone who’s exceptionally sensitive to tearing, I’d welcome slightly more input latency in trade for banishing that ugly visual artifact from my life.

To be clear, Fast Sync is not a replacement for G-Sync or FreeSync variable-refresh-rate monitors—it’s an interesting but separate complement to those technologies. We’ll need to play with this tech and see how it works in practice.

SLI and Pascal

We’ve already covered the changes to SLI that Nvidia is making with its Pascal cards. To recap, the company is discontinuing internal development of SLI profiles for three- and four-way SLI setups and putting its weight entirely behind two-way SLI instead. Extreme benchmarkers will still be able to get three- and four-way SLI profiles for use with apps like 3DMark, but for all intents and purposes, two-way SLI is the way of the future.

HB SLI bridges. Source: Nvidia

Running two-way SLI at its maximum potential with the GTX 1080 requires a new “high-bandwidth” SLI bridge that links both sets of SLI “fingers” present on GTX 1080s. With the proper bridge, the GTX 1080’s SLI link runs at 650MHz. Older “LED SLI bridges” will also run at this speed, but the ribbon-cable bridge included with many motherboards will only run at 400MHz with Pascal cards. Nvidia says the net result of this change is a doubling in SLI bandwidth compared to past implementations of the technology.

Source: Nvidia

If Nvidia’s internal numbers are to be believed, HB SLI has tangible benefits for in-game smoothness. The company ran Shadow of Mordor on an 11520×2160 display array to show off the feature, and the frame-time plot of that benchmark suggests that the added bandwidth helps reduce worst-case latency spikes.

HDR content and higher-res display support

One of the major updates that AMD has been touting for its next-gen Radeons has been a bevy of features related to high-dynamic-range gaming and video playback, and Pascal appears just as ready for that next-generation content.

Nvidia’s Maxwell cards already came with support for 12-bit color, the BT.2020 wide color gamut, and the SMPTE 2084 electro-optical transfer function (EOTF). Pascal adds support for 60Hz 4K HEVC video decoding with 10- or 12-bit color, 60Hz 4K HEVC encoding with 10-bit color for recording or streaming HDR content, and DisplayPort 1.4’s metadata transport spec for HDR over that connection. Pascal cards will also be able to perform game-streaming in VR with a compatible device, like Nvidia’s Shield Android TV.

Nvidia is also beefing up high-res display support with Pascal and the GTX 1080. While the new card can still run only four active displays, it can now run monitors that max out at 8K (7680×4320) using two DisplayPort 1.3 cables. The GTX 1080 supports HDMI 2.0b with HDCP 2.2, and though it’s just DisplayPort 1.2-certified right now, it’s ready for the upcoming, higher-bandwidth DisplayPort 1.3 and DisplayPort 1.4 standards.

Now that we’ve covered some of the biggest changes in Pascal for consumers, let’s talk about the GTX 1080 Founders Edition card itself.

 

The GeForce GTX 1080 Founders Edition

When Nvidia first announced the GeForce GTX 1080, the company introduced a new concept called a “Founders Edition” card. At the time, this new name gave rise to speculation the card would have some kind of special sauce inside, but we now know that it’s really just a different designation for what we used to call “reference coolers.” The Founders Edition card also has a $699.99 suggested price tag, a $100 premium over the $599.99 suggested price for custom cards from Nvidia’s board partners.

|                  | GPU base clock | GPU boost clock | Shader processors | Memory config | PCIe aux power | Peak power draw | E-tail price |
|------------------|----------------|-----------------|-------------------|---------------|----------------|-----------------|--------------|
| GeForce GTX 1080 | 1607 MHz       | 1733 MHz        | 2560              | 8GB GDDR5X    | 1x 8-pin       | 180W            | $699.99      |

The GeForce GTX 1080 Founders Edition has a pretty standard Nvidia reference board look, much like the GeForce GTX 980 Ti before it. Nvidia devised a fancy new polygonal design for the aluminum cooler shroud, but even with the new aesthetics, it could hide among other recent Nvidia reference boards without arousing too much suspicion.

The new reference cooler’s internals are pretty similar to what’s come before. The heatsink that the blower-style fan cools looks the same as those inside the company’s older, higher-end reference cards. Nvidia touts this “vapor chamber” heatsink as something special, and the card does appear to use one, but that same basic heatsink has been present in cards dating back to the GTX 780, at least. 

The one major upgrade from past Nvidia reference designs is the inclusion of a backplate on the card, which is a nice touch. The backplate is a pretty standard plastic-coated metal job.

After we remove the million tiny screws that hold it in place, the backplate comes off to reveal the PCB itself. Four Phillips-head screws hold down the heatsink on the GPU, and thankfully, that heatsink can be removed without disassembling the whole card. The cooler assembly itself comes off as a unit.

Back on the front of the board, a few hex-key screws later, we can pull off the acrylic shroud and the metal trim piece that cover the heatsink. With the four Phillips screws removed from the back, we can fully expose the GPU die.

After removing even more screws, we can remove the shroud and see the exciting bits of the 1080. This view shows the GP104 die and the GDDR5X RAM that rings it. We also get a look at the 5+1-phase power delivery subsystem of the card, plus the single eight-pin PCIe power connector the 1080 uses to get the extra juice it needs from the PSU.

We were able to successfully reassemble our Founders Edition card without too much trouble after dissecting it. While we’ll be moving on to testing next, we actually performed this disassembly after we concluded our benchmarking, so the results you’ll see in the following pages all come from a factory-fresh GTX 1080. Let’s see what it can do.

 

Our testing methods

As always, we did our best to deliver clean benchmarking results. Our test system was configured as follows:

Processor Core i7-5960X
Motherboard Asus X99 Deluxe
Chipset Intel X99
Memory size 16GB (4 DIMMs)
Memory type Corsair Vengeance LPX DDR4 SDRAM at 3200 MT/s
Memory timings 16-18-18-36
Chipset drivers Intel Management Engine 11.0.0.1155, Intel Rapid Storage Technology V 14.5.0.1081
Audio Integrated X99/Realtek ALC1150 with Realtek 6.0.1.7525 drivers

Hard drive Kingston HyperX 480GB SATA 6Gbps
Power supply Fractal Design Integra 750W
OS Windows 10 Pro

 

|                                    | Driver revision        | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|------------------------------------|------------------------|---------------------------|-----------------------|--------------------|------------------|
| Asus Strix Radeon R9 Fury          | Radeon Software 16.6.1 | 1000 |      | 500  | 4096 |
| Radeon R9 Fury X                   | Radeon Software 16.6.1 | 1050 |      | 500  | 4096 |
| Gigabyte Windforce GeForce GTX 980 | GeForce 368.39         | 1228 | 1329 | 1753 | 4096 |
| MSI GeForce GTX 980 Ti Gaming 6G   | GeForce 368.39         | 1140 | 1228 | 1753 | 6144 |
| GeForce GTX 1080                   | GeForce 368.39         | 1607 | 1733 | 2500 | 8192 |

Our thanks to Intel, Corsair, Asus, Kingston, and Fractal Design for helping us to outfit our test rigs, and to Nvidia and AMD for providing the graphics cards for testing, as well.

For our “Inside the Second” benchmarking techniques, we use the Fraps software utility to collect frame-time information for each frame rendered during our benchmark runs. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups. We filter our Fraps data using a three-frame moving average to account for the three-frame submission queue in Direct3D. If you see a frame-time spike in our results, it’s likely a delay that would affect when a frame reaches the display.
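For readers curious what that filtering step looks like, here’s a minimal sketch of a three-frame moving average in Python. It’s our own simplified illustration of the principle, not the exact code behind our charts.

```python
def smooth_frame_times(frame_times_ms, window=3):
    """Apply a trailing moving average over the last `window` frame times."""
    smoothed = []
    for i in range(len(frame_times_ms)):
        chunk = frame_times_ms[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [16.5, 16.8, 16.4, 45.0, 16.7, 16.6]    # one nasty spike in the middle
print(smooth_frame_times(raw))                # the spike gets spread over three frames
```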

You’ll note that aside from the Radeon R9 Fury X and the GeForce GTX 1080, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past for some reason, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, so we think they provide the most relatable performance numbers for our reader base.

To make things simple, when you see “GTX 980” or “GTX 980 Ti” in our results, just remember that we’re talking about custom cards, not reference designs. You can read more about the MSI GeForce GTX 980 Ti Gaming 6G in our roundup of those custom cards. We also reviewed the Gigabyte Windforce GeForce GTX 980 a while back, and the Asus Strix Radeon R9 Fury was central to our review of that GPU.

Each title we benched was run in its DirectX 11 mode. We understand that DirectX 12 performance is a major point of interest for many gamers right now, but the number of titles out there with stable DirectX 12 implementations is quite small. We had trouble getting Rise of the Tomb Raider to even launch in its DX12 mode, and other titles like Gears of War: Ultimate Edition still seem to suffer from audio and engine timing issues on the PC. DX12 also poses challenges for data collection that we’re still working on. For a good gaming experience today, our money is still on DX11.

Finally, you’ll note that in the titles we benched at 4K, the Radeon R9 Fury is absent. That’s because our card wouldn’t play nicely with the 4K display we use on our test bench for some reason. It’s unclear why this issue arose, but in the interest of time, we decided to drop the card from our results. Going by our original Fury review, the GTX 980 is a decent proxy for the Fury’s performance, which is to say that it’s not usually up to the task of 4K gaming to begin with. You can peruse those numbers and make your own conclusions.

 

Sizing ’em up

Take some clock speed information and some other numbers about per-clock capacity from the latest crop of high-end graphics cards, and you get this neat table:

|                        | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s) |
|------------------------|-----|---------|-----|-----|-----|
| Radeon R9 290X         | 64  | 176/88  | 4.0 | 5.6 | 320 |
| Radeon R9 Fury         | 64  | 224/112 | 4.0 | 7.2 | 512 |
| Radeon R9 Fury X       | 67  | 269/134 | 4.2 | 8.6 | 512 |
| GeForce GTX 780 Ti     | 37  | 223/223 | 4.6 | 5.3 | 336 |
| Gigabyte GTX 980       | 85  | 170/170 | 5.3 | 5.4 | 224 |
| MSI GeForce GTX 980 Ti | 108 | 216/216 | 7.4 | 6.9 | 336 |
| GeForce Titan X        | 103 | 206/206 | 6.5 | 6.6 | 336 |
| GeForce GTX 1080       | 111 | 277/277 | 6.9 | 8.9 | 320 |

Those are theoretical peak capabilities for each of the measures above. We won’t be testing every card in the table, but we’re leaving some older cards in to show how far we’ve come since Kepler. As you can see, the GTX 1080 provides a nice increase in pretty much every measure over GM204 and the GTX 980, and it’s even better in some regards than the GM200 GPU on board the Titan X and GTX 980 Ti. Let’s see how our calculations hold up with some tests from the Beyond3D suite.
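Before we do, a quick note on where the GTX 1080’s row in that table comes from: the peaks fall straight out of the boost clock and the unit counts. The Python check below is our own arithmetic using the published specs; real-world boost clocks wander above and below the rated figure.

```python
# GTX 1080 published specs (boost clock in GHz).
boost_ghz, rops, texture_units, shaders = 1.733, 64, 160, 2560
bus_width_bits, mem_rate_gtps = 256, 10

print(f"Pixel fill:   {rops * boost_ghz:.0f} Gpixels/s")              # ~111
print(f"Texel filter: {texture_units * boost_ghz:.0f} Gtexels/s")     # ~277
print(f"Shader rate:  {2 * shaders * boost_ghz / 1000:.1f} tflops")   # ~8.9 (FMA = 2 flops)
print(f"Bandwidth:    {mem_rate_gtps * bus_width_bits / 8:.0f} GB/s") # 320
```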

The GTX 1080 has the same number of ROPs as the GTX 980, but its substantially higher clocks and SM count allow it to deliver a sizable increase in pixel fill rate over that card, pushing past even the GM200-powered GeForce GTX 980 Ti. Good grief.

This bandwidth test measures GPU throughput using two different textures: an all-black surface that’s easily compressed and a random-colored texture that’s essentially incompressible. Throw an incompressible texture at the GTX 1080, and it produces a nice boost over the GTX 980. The GTX 980 Ti still comes pretty close, though, and the Fiji cards pull ahead. Once the card can take advantage of its compression mojo, however, the amount of throughput gets a little ridiculous. It appears the new delta-color-compression techniques Nvidia implemented in Pascal are definitely doing their thing.

All of the graphics cards tested come close to hitting their peak texture-filtering rate in this test. The GTX 1080 edges out the prodigious power of the R9 Fury X here, and it holds that lead with both simple and more complicated formats. It also speeds way past the GTX 980 and GTX 980 Ti, for the most part. In fact, we can already say that the GTX 980 Ti is the GTX 1080’s most natural competitor in these tests—the GTX 980 just can’t keep up.

As we’ve seen in past reviews, our GeForce cards actually slightly exceed their theoretical peaks in this polygon throughput test—substantially so, in the case of the GeForce GTX 1080. We’ve guessed that this test is especially amenable to GeForces’ GPU Boost feature in the past, so it’s possible the cards are just running really fast. Regardless, the GTX 1080 turns in some impressive numbers.

The situation is more normal in our ALU throughput tests, where all of the cards more or less hit their peak theoretical numbers.

All told, the GeForce GTX 1080 is an exceptionally potent graphics card by every theoretical measure we can throw at it. Let’s see how that performance carries over to some real games.

 

Grand Theft Auto V
Grand Theft Auto V has a huge pile of image quality settings, so we apologize in advance for the wall of screenshots. We’ve re-used a set of settings for this game that we’ve established in previous reviews, which should allow for easy comparison to our past tests. GTA V isn’t the most demanding game on the block, so even at 4K you can expect to get decent frame times out of higher-end graphics cards.



These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms correlates to 30 FPS or a 30-Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
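For the curious, tallying that metric from a list of frame times is straightforward. The snippet below is a sketch of the principle behind our charts, not the exact tooling we use, and the frame times in it are made up.

```python
def time_beyond(frame_times_ms, threshold_ms):
    """Sum the portion of every frame time that exceeds the given threshold."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [15.2, 16.1, 35.0, 14.8, 52.3, 16.0]       # made-up frame times
for threshold in (50, 33.3, 16.7):
    print(f"Time spent beyond {threshold} ms: {time_beyond(frames, threshold):.1f} ms")
```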

The one little hitch in the GTX 1080’s frame-time graph above causes it to spend 4 ms beyond the 33.3-ms mark—barely worthy of note. The Fury X delivers similar performance. What’s really spectacular about the GTX 1080 is that it spends just 76 ms beyond that golden 16.7-ms mark, meaning we can be assured of a near-constant 60 FPS. With those smooth frame times, most of GTA V’s gameplay experience on the GTX 1080 is fantastic at 4K. Even with a variable-refresh-rate display, the difference in fluidity between the GTX 1080 and the GTX 980 Ti can be felt in normal gameplay.

All of our contenders commendably spend no time past the 50-ms mark, and only the Radeon R9 Fury spends a significant number of milliseconds past the 33.3-ms hurdle. Move down to the 16.7-ms barrier, though, and nothing comes close to the GTX 1080’s smoothness. Even the GTX 980 Ti struggles a bit here.

 

Crysis 3
Crysis 3 is an old standby in our benchmarks. Even though the game was released in 2013, it still puts the hurt on high-end graphics cards. Unfortunately, our AMD Radeon R9 Fury and my 4K display have a disagreement of some sort, so the red team is only represented by the Fury X on this set of benches.


Would you look at that? The GTX 1080 is the only card that doesn’t show the significant frame-time “fuzziness” typical of inconsistent frame delivery throughout our Crysis 3 run. The average FPS metric is right where we’d like to see it for smooth gameplay, too. Our 99th percentile numbers suggest that there’s more to that 60-FPS average than meets the eye, though.


Even running this game at 4K, it’s hard to talk about “badness” with the 1080. We didn’t manage to catch a single frame that took longer than 50 ms, or even one that took longer than 33.3 ms. Once we hit the 16.7-ms mark, however, we can at least prove that the charts are still working. Here, we can see that the GTX 1080 spends a fair bit of time past 16.7 ms working on frames: about 2.7 seconds. For comparison, though, the GeForce GTX 980 Ti spends almost 10 seconds past the 16.7-ms mark, and the R9 Fury X has to work even harder.

This major difference in our “badness” metric makes the GTX 1080 the first card we can really call smooth in Crysis 3 at 4K with high settings. The GTX 980 Ti we used is perfectly playable at these settings, but the subjective difference in smoothness between the two cards is definitely noticeable.

 

Rise of the Tomb Raider
Rise of the Tomb Raider is the first brand-new game in our benchmarking suite. To test the game, I romped through one of the first locations in the game’s Geothermal Valley level, since it offers a diverse backdrop of snow, city, and forest environments. RotTR is a pretty demanding game, so I took this opportunity to dial the resolution back to 2560×1440. We also turned off the AMD PureHair feature to avoid predisposing the benchmark toward one card or another in this test, since Nvidia’s HairWorks has created significant performance deltas in past Tomb Raider games when we’ve had it on.


Our frame-time chart and 99th percentile chart make two things pretty clear about this test. For one, the GeForce GTX 1080 performs admirably. For two, this game seems to play quite well with Nvidia’s cards. The delta between the 980 Ti and the 1080 in 99th percentile frame time isn’t huge, but it’s still worthy of note. For a clearer comparison, we need to look at the specific frame-time thresholds.


Well, that’s more like it. The 99th percentile frame time alone hides an important point of comparison between the two fastest cards in this metric. While all three of the GeForces we tested keep their frame times below 33.3ms, the GTX 980 Ti spends much more time past 16.7ms than the GTX 1080. This test indicates that you don’t have to have a 4K display to benefit from the added grunt of the 1080, presuming you want to turn those quality sliders way up.

 

Fallout 4


Even at 4K with its settings turned up, Fallout 4 just isn’t the most demanding game out there. The GTX 1080 makes pretty handy work of it. Once again, we see 99th-percentile frame times just barely above the magic 16.7-ms mark. (Fallout 4 normally has a 60-FPS cap enabled by default, but we disabled it for these tests.)


No matter which way you squint at it, the GTX 1080 does a fantastic job of running Fallout 4 at respectable frame rates. The GTX 980 Ti also does a pretty respectable job, although it spends about three times as much time past the 16.7ms mark. In absolute terms, though, you’d be hard-pressed to notice frame-delivery roughness from either card. The Fury X struggles a bit here, and the GTX 980 brings up the rear. Once again, the GTX 1080 is the card to beat for smooth gaming at 4K in our tests.

 

The Witcher 3
The Witcher 3 is another benchmark where I re-used the settings that we’ve settled on in past reviews. We also chose to test this title at 2560×1440 rather than 4K. We didn’t crank the resolution in part because we wanted to maintain consistency with the numbers we produced in our Fury X review, but also because the game is demanding enough that playing it at 4K with high settings wasn’t a great experience even on newer high-end cards.


The GTX 1080 is rapidly depleting my reserve of ways to say “well, that went well.” Its average FPS numbers for this test are impressive, and its 99th-percentile frame time hovers just above the magic 16.7-ms mark. Remember that as 99th-percentile times get lower, it gets harder to push down the absolute values. While the difference between the GTX 980 Ti’s 99th-percentile result and the GTX 1080’s is only 1.5 ms, that difference can still have a noticeable effect on smoothness.


Sorry, but the numbers above already tell the story here. The GTX 1080 is smooth as butter. In fact, the most interesting thing to note about these results is that the Fury X has significantly improved its showing since our initial review, both in its 99th-percentile frame times and in its “badness” performance. Even so, it can’t catch the GTX 980 Ti or the GTX 1080 for smoothness. The GTX 1080 spends just 71 ms past the 16.7ms mark in The Witcher 3, and that makes for some wonderfully smooth gameplay on Nvidia’s latest.

 

Hitman

The 2016 version of Hitman closes out our test suite. We chose to bench this demanding title at 4K to really make our graphics cards sweat.


The GTX 1080 finally struggles a bit in this test, but so do the rest of our sample cards. The GTX 1080 opens up about the same lead over the GTX 980 Ti we’ve seen in our other tests, but neither card turned in a particularly impressive 99th-percentile frame time, all things considered. The Radeon R9 Fury X seems to be making a good showing based on FPS alone, but its relatively spiky frame-time plot tells a different story.


Going by our “badness” metrics, the GTX 1080 doesn’t quite turn in the glass-smooth performance it has in most of our other games. It’s still smoother than the other cards we tested, though, and by a significant margin. The 980 Ti and the Fury X both produce admirable average FPS figures, but they spend quite a bit of time churning away on frames that can’t be completed in under 16.7 ms. That data is consistent with the 99th-percentile frame times we gathered.

While these numbers might be discouraging for readers hoping for playable frame rates at 4K in Hitman, lowering the resolution and graphics settings to a saner level produces predictably better performance, going by our informal tests. That said, it is clear that we can’t just assume that the GTX 1080 will be able to deal with whatever we throw at it without breaking a sweat.

 

Power consumption

Let’s take a look at how much juice the GTX 1080 needs to do its thing. Our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

Here we can see another advantage of Pascal’s 16-nm lithography. Not only does the 1080 roundly outperform the rest of the cards we tested, it does so while using far less power. In our Crysis power-test run, peak power draw on the GTX 1080 was quite a bit lower than any of its competitors. Nvidia isn’t giving up the efficiency crown with this new generation of chips, to be certain.

Noise levels and GPU temperatures

Because of the move to a slightly different test rig, the noise floor on our test system is rather high, thanks to its closed-loop liquid cooler. Even with a passive graphics card, just the pump and its fan left us with a 40-dBA noise floor. Still, some cards showed a significant increase from that floor when under load. Our test rig also happens to be down in sunny Alabama, and between high temperatures and a creaky old house, the ambient temperature in our testing environment was about 80° F (or about 27° C), so the zero point for our temperature numbers is a bit higher than in previous reviews. Those caveats aside, let’s see how loud and hot our cards get under load.

The Founders Edition cooler on the GTX 1080 sadly doesn’t do a great job of keeping the GPU underneath cool, or even all that quiet. The card’s load noise levels are only exceeded by the triple-fan cooler on the Gigabyte Windforce GTX 980 we have on hand, and its load temperatures are the worst of the pack by a wide margin. The sound from the single blower-style fan also has an unpleasant grinding quality, something our absolute noise measurements can’t convey. The Fury X produces a similarly unusual and annoying sound: a high-pitched whine that we’ve picked up on in past reviews.

Now’s as good a time as any to talk about the GTX 1080’s overclocking potential. While the silicon lottery certainly plays a role, it’s equally important to have a good cooler strapped onto the GPU you’re trying to tweak. While the GP104 chip itself might have plenty of overclocking potential on tap, we’d be wary of trying to push it too far with the Founders Edition cooler given our stock-clocked results. Plenty of Nvidia’s board partners are now selling GeForce GTX 1080s, and the custom coolers on those boards might unlock the thermal headroom one would want to really push the clocks skyward. For now, we’re reserving judgment on the GTX 1080’s overclocking prowess.

 

Conclusions

Before we dive into our conclusions, let’s take a look at our famous value scatters to see what kind of performance the GeForce GTX 1080 gets you—at least, in its Founders Edition form. Since prices have fallen on the GeForce GTX 980, GeForce GTX 980 Ti, and the Radeon R9 Fury X since their launches, we’ve surveyed all of the in-stock cards of each model available from Newegg right now and averaged their prices to present what we feel is a fair picture of the high-end graphics market today. (Since the Radeon R9 Fury couldn’t participate in all our tests, we’re leaving it off these charts.) The best values in these charts congregate toward the top left corner, where performance is high and prices are low.

First, let’s look at the potential performance that each card has on tap in the form of average FPS per dollar. Surprising nobody who’s read the past few pages, the GTX 1080 rockets to the top right corner of the chart. If you’re already in possession of a hot GTX 980 Ti, the GTX 1080 Founders Edition doesn’t offer a giant step up in performance, but it’s still a significant one. Our results suggest you really want a GTX 1080 to game smoothly at 4K with the titles we tested, and this card also has an advantage in demanding games running at 2560×1440.

Next, let’s crunch some numbers from our advanced frame-time metrics to determine how smooth a gaming experience the GTX 1080 delivers for its price tag. To make our “higher is better” arrangement work with frame times, we’ve converted the geometric mean of each card’s 99th-percentile frame times in our tests into an FPS value.
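As a rough sketch of how that conversion works: we take a 99th-percentile frame time for each game, average them with a geometric mean, and flip the result into an FPS figure. The Python below is our own illustration of the method described above, with made-up frame-time data, not the exact script behind the scatter plots.

```python
from statistics import geometric_mean

def percentile_99(frame_times_ms):
    """Nearest-rank 99th-percentile frame time for one benchmark run."""
    ordered = sorted(frame_times_ms)
    return ordered[int(round(0.99 * (len(ordered) - 1)))]

def smoothness_fps(per_game_frame_times):
    """Geometric mean of per-game 99th-percentile frame times, expressed as FPS."""
    p99s = [percentile_99(times) for times in per_game_frame_times]
    return 1000.0 / geometric_mean(p99s)      # ms per frame -> frames per second

# Two fake benchmark runs, just to show the shape of the calculation.
example_runs = [[16.5, 17.0, 18.2, 16.8] * 25, [20.1, 21.5, 19.8, 22.0] * 25]
print(f"{smoothness_fps(example_runs):.1f} FPS")
```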

No surprises here, either. The GTX 1080’s 99th-percentile-FPS-per-dollar figure is so high that we had to add a rung to our chart to make it visible. It delivers unparalleled smoothness in our tests. Again, the GTX 980 Ti isn’t too far behind, but you are getting significantly smoother frame delivery for your money when you pay for a GTX 1080. Meanwhile, the Radeon R9 Fury X doesn’t deliver gameplay that’s any smoother than the GeForce GTX 980 we tested. Both of those cards are considerably outclassed by Nvidia’s latest.

While the GTX 1080 Founders Edition does deliver world-beating performance for a single-GPU card, it leaves a bit to be desired in the noise, vibration, and harshness department. Its blower-style heatsink isn’t particularly quiet or pleasant-sounding, and it also doesn’t keep the GP104 chip all that cool under load. We have some difficulty accepting those facts. $700 is a lot of money for a graphics card, and considering the premium that Nvidia charges for these cards over ones with custom third-party coolers and factory tuning jobs, we don’t think they represent a great value. Sure, the new reference heatsink design looks nice, but we don’t think good looks are enough reason for buyers to fork over the extra cash. 

Happily, Nvidia’s board partners are starting to deliver a diverse range of custom-cooled GTX 1080s themselves, and our experience with those cards so far has been positive. GTX 1080 custom jobs can cost significantly less than the Founders Edition, and they tend to come with beefier heatsinks and factory clock boosts. Unless you’re really into the Founders Edition look, we think most will be happier with one of these hot-rodded GTX 1080s. Those cards technically carry a $599.99 suggested price, but the GTX 1080’s popularity and low stock have conspired to bring the retail prices for most of those hot rods closer to the Founders Edition’s $700 sticker. If you’ve gotta have a GTX 1080 now, though, we think you still ought to opt for a custom card. They’re just better values.

No matter what flavor of GTX 1080 buyers end up with, Nvidia deserves high praise for pushing the envelope of graphics performance to new heights with the GTX 1080, the GP104 GPU, and the move to next-generation process tech. Even better, the company is offering that performance in a relatively affordable package for a high-end graphics card. If you’ve got a big wad of cash burning a hole in your pocket from the long pause at the 28-nm process node, you’ll be richly rewarded by the smoothness and performance that the GTX 1080 offers if you choose to spend it now.

Comments closed
    • 2x4
    • 3 years ago

    great card but where are the new titles bring it to its knees???? they are still using years old games. anything new coming up soon to replace hitman, fallout 4 or the witcher 3???

    • willmore
    • 3 years ago

    [quote<]We were able to successfully reassemble our Founders Edition card without too much trouble after dissecting it. While we'll be moving on to testing next, we actually performed this disassembly after we concluded our benchmarking, so the results you'll see in the following pages all come from a factory-fresh GTX 1080. Let's see what it can do.[/quote<]

    I can't express how happy I was to see this added in. My fear that a reviewer did that backwards always hangs in the back of my mind when I see an introduction to the card (to be tested) and there are pics of it taken apart. "Please tell me they didn't do that before testing, oh, please, please..." murmurs that fear all through my reading of the article.

      • tipoo
      • 3 years ago

      Yeah, especially with the noted fan grinding noise, nice to know it was done the right way

      • chuckula
      • 3 years ago

      Not only did they reassemble it, but they had three BONUS SCREWS left over at the end!

      That’s what I call efficiency.

    • dragosmp
    • 3 years ago

    Great work guys, I enjoyed reading the review. When/if I’ll buy this card I’ll certainly keep this article as a reference.

    • cynan
    • 3 years ago

    [quote<]The GTX 1080's 99th-percentile-FPS-per-dollar figure is so high that we had to add a rung to our chart to make it visible. It delivers unparalleled smoothness in our tests. Again, the GTX 980 Ti isn't too far behind, but you are getting significantly smoother frame delivery for your money when you pay for a GTX 1080.[/quote<]

    Small nitpick. At least if comparing 99th-percentile-FPS per dollar on a linear scale, the 980Ti comes out ahead, with the 980 and the 1080 neck and neck. Did card pricing change? or are higher 99th-percentile-FPS given more weight? Suffice to say, it's not clear from the scatter plot that the 99th-percentile-FPS-per dollar is best with the GTX 1080 as the article seems to imply. Great review!

    • brucek2
    • 3 years ago

    I’ve had this card for a few days now. My only complaint is that now when I turn off my second monitor (BenQ), my primary monitor (ASUS) also goes dark. Maybe this is someone’s idea of a convenience feature? But I find it much more aggravating then convenient, because sometimes I just only need the one monitor.

    This didn’t happen with my prior card (780), or for that matter when I turn off the primary.

    Any ideas on how I could keep this from happening?

    • sophisticles
    • 3 years ago

    Really well done review, I especially like the videos of the card in action in various games. I would like to have seen some compute benchmarks, including some benchmarks running a video editing suite like Premiere Pro while using gpu accelerated filters or some encoding benchmarks like AME when using the gpu accelerated rendering option while applying an LUT, something extremely cpu and gpu intensive (in fact this even makes a good cpu benchmark for many cores).

    Perhaps a few distributed computing benchmarks, like Einstein@home or Folding@home or some crypto-mining benchmarks would have been nice.

    But as far as a review of a card from a pure gaming standpoint, it’s as good as we could expect.

    • djayjp
    • 3 years ago

    If you guys could do a comparison in latency between triple buffering (that you can force on via inspector or riva tuner for DirectX apps) and fast sync that would be great!

    • Shobai
    • 3 years ago

    Quick question: what defines your x-axis for the ‘frame number’ plots? They seem overlong in some cases. The plot for Hitman looks particularly weird, with the plotted lines filling less than half the width (on mobile, at least).

    Other than that, thanks for the review! It was well worth the wait.

      • Meadows
      • 3 years ago

      I’d guess it’s unintentional since the plot is scaled correctly for 3 out of 6 titles, and a bit oddly for the other 3.

      One could argue it might be useful if the lowest common denominator is used as the x-axis limit (the game that produced the most frames of all those tested), so that tab switching back and forth could compare how different games perform compared to one another, but that doesn’t seem to be the idea here.

        • Shobai
        • 3 years ago

        I think that’s what I had expected to see, I guess. It would makes the graphs more useful.

        • Shobai
        • 3 years ago

        Actually, I had a chance to go back and look a bit more closely at each of the frame number plots in this review: I would say that there are issues with every one of them.

        We have wasted space on some, which is pretty minor. There’s the issue with the scale being different between tabs of the same plot, also fairly minor. Fairly major, however, are the plots where it appears that some of the data for at least one data set [most often the GTX 1080] has been truncated.

        Hmm. It might seem like a little thing but I do hope that they come back to fix this, and take that fix forward with future reviews.

    • Mr Bill
    • 3 years ago

    Looks like the GDDR5’s and VRM’s are heatsinked to the fan shroud plate and the vapor chamber reaches down to the GPU but does not seem to be a sink for the upper side of the plate.

    • chuckula
    • 3 years ago

    Hey Jeff, regarding frame time measurement issues with DX12, is Intel’s PresentMon utility helpful here?

    [url<]https://techreport.com/news/29830/presentmon-gives-us-a-peek-under-the-hood-of-directx-12-games[/url<]

      • Jeff Kampman
      • 3 years ago

      It totally is. We’re just working on ways to make it usable in the context of our benchmarking methods.

        • chuckula
        • 3 years ago

        Good to hear that.
        Andrew will be happy too.

    • Mr Bill
    • 3 years ago

    I’m going to savor this review by reading all the comments first, and then the review.

    • NeelyCam
    • 3 years ago

    Yep – it’s a monster. I can’t wait to see how 1070 is doing.

    • Krogoth
    • 3 years ago

    1080 is interesting GPU but it is not as game changing as the hype suggests.

    I would avoid paying the early adopter tax until supplies and demand stabilizes. It is going for almost $900 on ebay.

    It is 5870 redux expect that Nvidia knows how to profit from it.

      • NovusBogus
      • 3 years ago

      Yeah, nice card for sure but looks like that whole ER MA GHERD 2X FPS was traditional NV marketing braggadocio, at least for the settings that enthusiasts are realistically going to use. Shocking, I know.

      Curious how the 1070 stacks up, but all in all I’m not regretting having picked up a discounted 960 in the run-up to Pascal launch. Maybe this time next year things will have settled down enough to make an informed decision about the $200-300 range.

      • paulWTAMU
      • 3 years ago

      I have a hard time viewing any 700 dollar GPU as ‘game changing’ for gamers. It may herald some neat new tech but until it’s in the sub 300 dollar card market, most of us aren’t buying it. Great on the people that have the cash for that, but man, my last build was only like 900 bucks all told.

      • Airmantharp
      • 3 years ago

      Exactly. It’s not game-changing, it just means that Nvidia didn’t screw the pooch going to 14nm.

    • Krogoth
    • 3 years ago

    Happy impatient kiddos?

    • AnilMahmud
    • 3 years ago

    Please can anybody cite the source that says that the 1080 GTX can process 4 triangles per clock as mentioned in this article ? It will be very helpful for me.

    Each polymorph engine in the 680GTX was producing 1 polygon in two cycles and there were 8 polymorph engines producing 4 triangles per cycle.

    In the 1080 GTX there are 20 polymorph engines thus it is supposed to be capable of processing 10 triangles per cycle and in theory should be processing 16 billion triangles per second. The tests show that it can process around 11 billion triangles per second.

      • mtcn77
      • 3 years ago

      I’d like to inquire whether Beyond3D suite tests changed between reviews. Texture compression results are much higher across the board on Nvidia cards.

    • rudimentary_lathe
    • 3 years ago

    Thanks for a good review.

    I appreciate that you only tested this card at higher resolutions, as I’d imagine just about everyone buying this card would have a 1440p or 4K monitor to pair up with it. It’s not possible for you guys to cover every permutation of configurations.

    I’m impressed by the significantly decreased power usage over the 980 Ti at load. I would guess GGDR5X accounts for a big part of the improvement, with the rest coming from the 16nm node.

    I’m rather puzzled that they’re charging more for the 1080 FE when most vendors will have better coolers, meaning lower temps, less noise, and/or more overclocking potential. I suppose some people prefer the blower style coolers.

    There’s no doubt the 1080 is the king of the hill for the time being. It’s the “money is no object” card. Personally I can’t justify the $700USD/$1000CAD sticker price. I’m still holding out hope that the RX 480 will deliver 980 performance while destroying the price/performance ratio at $229USD/$300CAD. Heck, even that’s a lot for me as I usually aim for the best bang for the buck card in the $200CAD range and overclock the bejesus out of it.

    • Chrispy_
    • 3 years ago

    Nice review chaps.

    Nobody’s happy that it’s late but we all know the reasons for it and hope that you’re finally getting on top of things again.

    All eyes will be on you for the RX480 launch. Not only did Scott leave to work for AMD, but through this site he's been instrumental in making AMD focus on [i<]better[/i<] framerates rather than higher framerates and hopefully his influence there might give you two better access to insider information than other review sites.

    Not to belittle the GTX1080 or this review, the RX480 and its review are the more important of the two because it will do something we haven't seen in over five years: It will redefine the performance/$ of the entire graphics market by finally bringing some decent competition again. TR's GPU benchmarking metrics, specifically the emphasis on 99%FPS/$ will be the most important stat to read on the 29th.

    Good luck with your Polaris review, I hope it goes smoothly and is ready to publish when the NDA lifts. Both TR's loyal fans and TR's future success are probably all counting on it!

    • credible
    • 3 years ago

    Finally a review that is mostly devoid of hyperbole, great job guys:)

    • Meadows
    • 3 years ago

    Jeff, you started the review wrong. You need to use all caps for the first 5 words or the first sentence, whichever comes first.

    Other than that, well done for finishing it.

      • Jeff Kampman
      • 3 years ago

      I fixed it just for you, dear.

        • chuckula
        • 3 years ago

        SSK IS PROUD OF YOU TOO!

          • sweatshopking
          • 3 years ago

          MY BABY IS GROWING UP!

        • Meadows
        • 3 years ago

        Thanks! Mr Wasson had been doing it for reviews for as long as I can remember and I do like if some things stay the same, if that makes sense.

          • tipoo
          • 3 years ago

          Ohh, for some reason I thought this was a lead up to the TR sneaking the word FIRST into the first post meme.

            • Meadows
            • 3 years ago

            And just why would I do such a thing.

    • themattman
    • 3 years ago

    Great review, thanks for not rushing it. Frame time benchmarking is much more interesting than straight-up FPS results, which is what almost everyone else did to get their reviews out when the embargo lifted.

    • Ninjitsu
    • 3 years ago

    So…what’s the difference between Fast Sync and Triple Buffering?

      • chuckula
      • 3 years ago

      There is a difference, and this video explains it pretty well: [url<]https://youtu.be/oTYz4UgzCCU[/url<]

      TL;DR version: regular triple buffering doesn't play ping pong like fastsync does.

        • tipoo
        • 3 years ago

        Hm, someone should look into whether the ping-ponging is a noticeable user benefit.

        • Ninjitsu
        • 3 years ago

        Hmmm. I also remember triple buffering being recommended with vsync on, for some reason.

        Thanks for the video, though. Somehow I still want Andrew or Jeff to explain it a bit lol.

        • webs0r
        • 3 years ago

        They go over it very quickly in the video. I was wracking my brains a bit to try to understand it, and I think I’ve got it.

        The ping pong just refers to the renaming of the buffers to avoid a flip (memory copy). So while it is a difference, it doesn’t cut to the core of the difference. Let’s recap:

        So the key basic differences:
        1. Triple buffering has 3 buffers; Fast Sync can have n, but really 3 are key (front/last rendered/back)
        2. Once triple buffering's 3 buffers are full, backward pressure is applied to force the game engine to slow down. Fast Sync avoids this, as it will discard frames or ignore them in favour of the last rendered frame
        3. Renaming of buffers in the Fast Sync situation would likely save some microseconds, avoiding the memory copy

        [b<]BUT WHAT DOES IT REALLY MEAN?[/b<]

        [i<]Example of a 60 Hz refresh case with VSYNC ON & DOUBLE BUFFERING[/i<]

        If you have a new frame ready every time in <16 ms, you will hit the 60 Hz vsync in time to give 60 fps. If frames generally take >16ms to render, you will always miss 1 vsync. When the game completes a frame, say at 20 ms, the video card then has to wait until the next vsync (33ms) to present it! What's more, the game then has to wait, as the buffer is full and it can't write a new frame into it! So this "backward pressure" stops it from rendering a new frame until 33 ms, where the buffer is cleared. Effectively you get a halving of the frame rate - 30 fps - because of this, as you are only putting a new frame out every 2 vsyncs.

        [i<]Example of a 60 Hz refresh case with VSYNC ON & TRIPLE BUFFERING[/i<]

        Triple buffering allows the game to render a new frame into a 3rd buffer in the case where the game engine had to wait for the next vsync. So in the example above, where the new frame is ready at 20ms and the game had to wait 13 ms to begin rendering the next one, now the game can start immediately. You can see how this saves some time. It puts this in the triple buffer, as the double buffer is waiting to be presented at 33ms. But if the new frame takes 20ms again, then it is ready at 40 ms. The next vsync is at 50ms. It can only be shown then. What this achieves is that the game can render more of the time, and thus the frame rate can land anywhere between 30-60 fps. If the game can render <16ms frames, the triple buffer is not used! The double buffer is ready to be written to after every vsync. There are interesting observations about input latency here which I won't go into, but I'll just point out that the frame presented at 50ms was based on inputs prior to 20ms.

        [i<]Now, what about FAST SYNC[/i<]

        This case is really quite separate, as it deals with what happens if the game can render way faster than the vsync interval! Let's say it pumps out a frame every 5 ms. To hit the next vsync, the game has to wait 11 ms before it begins rendering the next frame. Triple buffering isn't used, as the double buffer is always available. So this wasted 11 ms adds latency. Which is why people would run with VSYNC off - in this case the game writes into the buffer at 5 ms, 10ms and 15ms, and is 20% into writing the next frame when 16ms hits, thus you see 20% of the new frame and 80% of the last frame rendered, giving you that nice "tear" on the screen. So, Fast Sync just allows the driver to show the 100% complete frame that finished at 15ms! Avoid the tear and reduce latency.

        [b<]So, in the end what is the real benefit today?[/b<]

        If you have a high refresh rate monitor - say 144 Hz - then vsyncs are available every 7ms. There could be a benefit, but it would be limited to 7 ms minus game frame time. The game would have to be super simple though - it would have to achieve average frame rates of >144 fps to get a benefit. So there is a benefit, but I would say it is very minor and limited to very simply rendered games.

        If you have a high refresh rate monitor - 144 Hz - and GSYNC, it is the same as above. GSYNC only helps time the monitor refresh to when the frame is ready, up to the maximum refresh of the monitor. So FAST SYNC could still benefit in that it shows the last rendered frame. At 7 ms per sync though, it would have to be rendering a frame in 3.5 ms or less to have completed a 2nd frame within the refresh interval ... that is an instantaneous rate of 286 fps to actively get a benefit!!!!

        So my conclusion is - nothing to see here, move along. The only big benefits will be for people with 60 Hz monitors. And I can't predict how it 'feels' in terms of smoothness (will it be jumpy/irregular?). This move should have been made years ago when all we had were 60 Hz monitors. Oh well.
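
        As a rough illustration of the scheduling described above (a toy model, not how any actual driver implements it), the following Python sketch assumes a fixed per-frame render time and, for the buffered cases, an engine that renders ahead whenever a buffer is free. The policy names and numbers are only indicative.

        import math

        REFRESH_HZ = 60.0
        VSYNC = 1000.0 / REFRESH_HZ              # ~16.7 ms between refreshes

        def vsync_after(t):
            """First vsync instant at or after time t (ms)."""
            return math.ceil(t / VSYNC - 1e-9) * VSYNC

        def fifo_sim(render_ms, back_buffers, n_frames=60):
            """Vsync'd FIFO presentation: 1 back buffer = double buffering,
            2 back buffers = queue-style 'triple buffering'. Every frame is shown."""
            finish, present = [], []
            for i in range(n_frames):
                # rendering starts once the previous frame is done AND a buffer is free
                prev_finish = finish[i - 1] if i else 0.0
                buffer_free = present[i - back_buffers] if i >= back_buffers else 0.0
                start = max(prev_finish, buffer_free)
                finish.append(start + render_ms)
                # the display consumes at most one queued frame per refresh, in order
                earliest = max(finish[i], present[i - 1] + VSYNC if i else 0.0)
                present.append(vsync_after(earliest))
            fps = 1000.0 * (n_frames - 1) / (present[-1] - present[0])
            wait = sum(p - f for p, f in zip(present, finish)) / n_frames
            return fps, wait

        def fast_sync_sim(render_ms, n_frames=600):
            """Fast Sync-style: render flat out; each refresh shows the newest
            completed frame and silently drops any older, unshown ones."""
            finishes = [(i + 1) * render_ms for i in range(n_frames)]
            shown, wait, t, last = 0, 0.0, VSYNC, None
            while t <= finishes[-1]:
                ready = [f for f in finishes if f <= t]
                if ready and ready[-1] != last:          # a new frame to show
                    last = ready[-1]
                    shown += 1
                    wait += t - last
                t += VSYNC
            return shown / (finishes[-1] / 1000.0), wait / max(shown, 1)

        for r in (20.0, 5.0):
            print(f"--- {r:.0f} ms per frame at {REFRESH_HZ:.0f} Hz refresh ---")
            print("  double buffering : %2.0f fps shown, %4.1f ms avg wait" % fifo_sim(r, 1))
            print("  triple buffering : %2.0f fps shown, %4.1f ms avg wait" % fifo_sim(r, 2))
            print("  fast sync        : %2.0f fps shown, %4.1f ms avg wait" % fast_sync_sim(r))

        Because the "triple buffering" row here models a render-ahead FIFO (the DirectX-style queue discussed further down the thread) rather than the render-on-demand variant in the comment above, it also shows the extra wait a deeper queue adds in the fast-render case, which is exactly the latency Fast Sync is meant to avoid.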

          • Ninjitsu
          • 3 years ago

          OMG thanks for this. I couldn’t [i<]really[/i<] understand what they meant in the video. OTOH I'd say it'll be pretty good if there's a big benefit to all those with 60 Hz monitors - that's where the bulk of the market lies.

            • webs0r
            • 3 years ago

            You’re welcome 🙂 Having at least one person find it useful made it worth the time writing it this morning!!

            I googled to find something useful, but there was nothing except either people speculating about stuff they didn’t understand, or the short spiel just saying that “it’s for when the game renders faster than the refresh rate”.

            Your point is very valid. I shouldn’t assume everyone is at the enthusiast end. Big improvements for those with 60 Hz monitors.

        • Andrew Lauritzen
        • 3 years ago

        I dunno what they mean by “regular” triple buffering there. That’s how some swap chain uses worked in Windows XP through 7 (pure frame queuing), but Win8/10 unthrottled FLIP chains already work in the manner described (presenting the most recent frame at vsync time), as did the first OGL triple buffering implementations waaaay back in the day.

        So… okay, but not exactly new or unique.

      • kypetzl
      • 3 years ago

      There has been confusion when discussing triple buffering because we have been using the term “triple buffering” to refer to two different techniques. Fast Sync is the same as OpenGL triple buffering, and Fast Sync is different from DirectX triple buffering.

      In the OpenGL world, some OpenGL implementations provide a setting (read: “driver hack”) to enable a technique called triple buffering. The Fast Sync behavior described by Nvidia is the same behavior as this OpenGL triple buffering technique.

      In the DirectX world, triple buffering refers to a “length-3 swap chain.” A DirectX swap chain is a queue where completed frames wait to be scanned out to the display. It is important to note that frames in a DirectX swap chain cannot be discarded; every frame in the swap chain must be scanned out to the display.

      (The swap chain length should not be confused with the DirectX “frame latency” setting, the latter referring to the command queue holding frames before the frame is submitted for rendering.)

      Because of this policy to not discard frames, there is a latency problem when a DirectX swap chain has frames waiting for scan out and those frames can’t be displayed quickly enough. (More accurately, this problem can be understood as a version (of the latency portion) of the “Bufferbloat” problem affecting routers/switches and network packets.) This latency problem can occur in any DirectX swap chain of length-3 or longer.

      Fast Sync effectively changes the DirectX swap chain behavior to make it behave the same way as OpenGL triple buffering.

      Fast Sync and OpenGL triple buffering also solve the “Vsync quantization” problem, a problem which can occur when Vsync is on and the GPU stalls producing frames because all back buffers are full. (“Back buffer” here means the OpenGL back buffer when double-buffering, or a buffer in the DirectX swap chain.)

      DirectX triple buffering (length-3 swap chain) mitigates the “Vsync quantization” problem by providing additional buffers so the GPU can keep producing frames. I assume DirectX rendering is commonly double-buffered, aka it uses a “length-2 swap chain,” so DirectX triple buffering is simply extending the swap chain by one buffer. A length-2 swap chain is the same as “double-buffered” rendering.

      To summarize, Fast Sync is the OpenGL triple buffering technique applied to DirectX rendering. “Triple buffering” in the DirectX world refers to something else; hence the confusion.
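
      As a rough back-of-the-envelope sketch of the latency point above (a simplification, not a DirectX API example): in a non-discarding swap chain, every buffered frame ahead of yours, including the one currently being scanned out, can cost up to one full refresh interval, because the display drains exactly one frame per vsync.

      REFRESH_MS = 1000.0 / 60.0    # one 60 Hz refresh interval

      for chain_length in (2, 3, 4):
          # worst case: every other buffer already holds a finished frame
          frames_ahead = chain_length - 1
          print(f"length-{chain_length} swap chain: up to "
                f"{frames_ahead * REFRESH_MS:.1f} ms of added display latency at 60 Hz")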

        • Ninjitsu
        • 3 years ago

        Another excellent explanation, thanks!

        Yeah I just knew of the DX method of triple buffering.

        • Andrew Lauritzen
        • 3 years ago

        Yes this agrees with my understanding except you should note that it’s only true for “classic” WinXP/Win7 swap chains. Win8/10 FLIP chains work more like the “fast sync/GL” version.

    • NTMBK
    • 3 years ago

    Nice review 🙂 Looking forward to getting DX12 tests, though. I wonder if AMD frametimes in Hitman look so bad because they focused on DX12 mode.

      • Tirk
      • 3 years ago

      DX12 is a bad word around here be careful, hehe

        • chuckula
        • 3 years ago

        Oh you want DX12?

        How about this: [url<]http://www.guru3d.com/articles_pages/palit_geforce_gtx_1080_gamerock_premium_edition_g_panel_review,14.html[/url<]

        Gee, a 78% lead for the GTX-1080 over the Fury X in a DX 12 benchmark at 4K resolutions. Since TR's aggregate results didn't show that much of a lead for the GTX-1080, should we adjust TR's scatter plot to increase its lead?

          • NTMBK
          • 3 years ago

          No frametimes, not interested. This is why the world needs TR, to do benchmarking properly!

            • chuckula
            • 3 years ago

            Well a lack of frametime tools, and not some stupid conspiracy, is why TR hasn’t been publishing a lot of DX12 numbers in the first place.

            • NTMBK
            • 3 years ago

            Given that AMD have a certain Mr Wasson working for them, I hope they’re coming up some tasty tools 😉

            Edit: No phone, I did NOT mean Watson.

            • Tirk
            • 3 years ago

            Chill out. I even mentioned in my other post that Nvidia shows improvement in DX12 in some games, so you don’t need to go all biased mode on me. But it’s not enough for you that I consistently give Nvidia credit; I have to worship them as a god or something?

            Have I stated anything negative about the 1080 in these comments? No, but you have definitely made an effort bashing the Fury X in an article about a completely different GPU! I wonder who’s the biased one. Would you also like to mention the price/perf of the Titan X on the charts? I’m sure it doesn’t look pretty 😉

          • Pettytheft
          • 3 years ago

          Seriously, do you get paid for all this? Calling you out for cherry-picking benchmarks is futile, I know, but seriously. Everyone and their mom knows that the DX12 version of Tomb Raider is not up to par. It runs better in DX11 on both cards. Why not include games that were made with DX12 features in mind instead of tacked on at the end? Try Hitman, Total War: Warhammer or Ashes.

    • Unknown-Error
    • 3 years ago

    Great Review Jeff & Robert!

    1080 is an insanely powerful card. $700 actually seems quite reasonable.

      • ImSpartacus
      • 3 years ago

      Not when you notice that the 980 Ti can be had for [url=http://pcpartpicker.com/products/video-card/#c=224&sort=a8&page=1<]$410 at stock, $423 for a 10% core OC and $440 for an 18% core OC[/url<]. That $440 Zotac version, in particular, is a steal.

      Performance generally scales pretty well with core clock OCs. If that card even gets 15% better performance than a stock 980 Ti, then it's within spitting distance of a stock 1080 for less than 2/3 of the price.

      EDIT - I love how people don't want to believe that their shiny new 1080 can be seriously challenged by a <$450 hotclocked 980 Ti.

        • travbrad
        • 3 years ago

        To be fair you can OC a 1080 too. The Founders Edition/early adopter tax on both the 1080 and 1070 makes them a much worse value than they should be though, I agree on that. A lot of people buying 1080 FEs probably don’t really care about the price in the first place either. They just want the fastest GPU available.

        1070 FE makes a lot less sense to me, since that performs very similarly to a 980 TI and costs MORE right now (and if money was no object then you’d get a 1080 anyway).

        I’m not sure I’d buy a 980 TI right now either though because I care about heat output (both into my case and into my home office). I’d probably just wait it out until non-FE 1070s are available and get the good performance AND the improved heat/power efficiency. Plus depending on how close Polaris can get to a 980ti/1070, there could suddenly be a much improved supply of 1070s about 1-2 weeks from now. 😉

      • Spunjji
      • 3 years ago

      Only for the next… 4 days.

        • Mr Bill
        • 3 years ago

        LOL

        • chuckula
        • 3 years ago

        What, then it seems like a steal?

        • Mr Bill
        • 3 years ago

        Called it right. $240 for half the performance is pretty attractive.

      • Krogoth
      • 3 years ago

      The 1080 is somewhat faster than the 980Ti it replaced for an extra $100. The only other thing going for the 1080 is that it eats almost half as much power at load.

      Power users should wait until a graphics version of Big Pascal comes around.

      • credible
      • 3 years ago

      Sorry, $700 is not reasonable considering the economy. Yes, I know they can price it at whatever they want, but I also don’t have to buy it lol, and will not.

        • Waco
        • 3 years ago

        It’s reasonable considering they’re sold out constantly. 🙂

    • Jigar
    • 3 years ago

    TR nailed it – no review of the GTX 1080 comes close to yours. I have nothing but respect for you guys, and it was also a quick reminder of why I’ve visited techreport daily for the past 12 years.

    Thanks once again Jeff and Robert.

      • TwoEars
      • 3 years ago

      Indeed. Awesome review by Jeff and Robert.

      • SomeOtherGeek
      • 3 years ago

      Totally concur! Well worth the wait. Thanks Jeff and Robert.

    • Philldoe
    • 3 years ago

    Jeff, are there any plans to do further testing using the 1080 in DX12? I’d love to see TR’s input on the A-Sync issue, and what nvidia is (not) doing about it.

      • Tirk
      • 3 years ago

      Don’t mention DX12 and A-Sync; it makes Nvidia pressure developers to increase tessellation in games to make their FE 1080 shroud look smooth so they can say they’re on top 😉

      I mentioned DX12 in the forums and was promptly attacked that it was irrelevant, so don’t count on seeing any focus on it any time soon, at least until the AAA games that no one plays have it. It’s disappointing, because in truth Nvidia has seen some performance increase in some DX12 titles, so it’s not all doom and gloom for them.

      • Jeff Kampman
      • 3 years ago

      I am definitely thinking about this problem, and we’ll try to shed some light on it in a future article or review.

        • sweatshopking
        • 3 years ago

        Yeah. Wanna see how it compares in DX 12 vs the 480

    • ronch
    • 3 years ago

    Ok, RX 480 reviews in just 4 more days. Interesting times for GPU shoppers these days.

    • puppetworx
    • 3 years ago

    Can you actually buy one for $700 today? I keep hearing differently.

      • Freon
      • 3 years ago

      [url<]http://www.nowinstock.net/computers/videocards/nvidia/gtx1080/[/url<] They come in and out of stock. Still tough to get.

      [url<]http://www.nowinstock.net/computers/videocards/nvidia/gtx1070/[/url<] Bunch in stock as of the time of posting this.

      I've been watching and it's been this way the last few days. The 1080 you have to watch like a hawk; the 1070 seems to already be crossing the threshold.

      • chuckula
      • 3 years ago

      I just pre-ordered from B&H photo & waited.
      The delivery did go out when they said it would be in stock, so I saved myself some hassle.

      Another option where there is some degree of transparency for shipment dates and quantities:
      [url<]http://www.shopblt.com/cgi-bin/shop/shop.cgi?action=thispage&thispage=0110040015017_B3U8710P.shtml&order_id=573355414[/url<]

      That's a non-FE card.

    • End User
    • 3 years ago

    I’m currently playing Rise of the Tomb Raider on my GTX 1080 FE @ 2560×1440 in DX12 mode with settings maxed. My 1080 FE is running at 2000 MHz (GPU) and 11 Gbps (memory) with no throttling. It is currently sitting at 78 °C with the fan set at 80%. It is much quieter than my previous dual EVGA GTX 770 SC SLI setup. The fan on my card does not exhibit any grinding sound. From 3 feet away the fan noise is greatly reduced (granted, I do have a Corsair 550D).

    Memory usage is a crazy 7GB in Rise of the Tomb Raider.

    Edit: Holy crap, Rise of the Tomb Raider maxed out the memory

      • sweatshopking
      • 3 years ago

      How is that game? You can get it for like 9$ if you set your windows region to Poland or Ukraine and buy it on the windows store. I was considering it, but haven’t pulled the trigger. Worth it?

        • yogibbear
        • 3 years ago

        Don’t get the UWP version, the .exe steam/wherever version is good.

          • sweatshopking
          • 3 years ago

          There is no reason to avoid the uwp version. It supports everything the steam version does.

            • Voldenuit
            • 3 years ago

            [quote=”sweatshopking”<] There is no reason to avoid the uwp version. It supports everything the steam version does.[/quote<]

            Does it allow users to turn vsync off?
            Does it do true fullscreen for Freesync users (G-sync users can run VRR on windowed apps anyway)?
            Does it support CF/SLI (note the .exe version originally didn't either but was patched, and more importantly, could be hacked to support it by editing hex values in the profile even before Squeenix reacted to the shortcoming)?
            Does it support mods and DLL injectors (like SweetFX)?
            Does it expose the executable to the end user?

            I'm not being snarky; this was the state of ROTTR UWP as I am aware of it as of two months ago. Since then, I haven't heard much about the state of the game on UWP, so if you have any information to update us, I'd be appreciative.

            • sweatshopking
            • 3 years ago

            In order: yes; as I remember, yes, but I can't be bothered to check; that's up to the developer, not a limitation of UWP, and I'm not sure for this one; the Steam version doesn't "support" mods or DLLs either; and the executable isn't a support question.

            The only one that matters for 99.9% of users is vsync, which is patched.
            Again, I'd say it supports everything the Steam version does.

            • auxy
            • 3 years ago

            You’re so full of crap it disgusts me.
            [quote<]Does it allow users to turn vsync off? Does it do true fullscreen for Freesync users?[/quote<] A couple of UWP games have specific support for it. However, you still can't control this using your driver control panel or third-party software. You're completely reliant on the whims of the game developer... just like on a game console. [quote<]Does it support CF/SLI[/quote<]No. And it [b<]never will.[/b<] [quote<](Note the .exe version [...] could be hacked to support it by editing hex values in the profile even before Squeenix reacted to the shortcoming)?[/quote<]This is exactly why you should never buy UWP games. Microsoft doesn't want you have this level of control over your software. This reduces the things they and their partners can sell you as value-adds. Don't buy the stupid security narrative, it's horses--t. [quote<]Does it support mods and DLL injectors (like SweetFX)?[/quote<]Nope! and it never will. [quote<]Does it expose the executable to the end user?[/quote<]See above. Saying that the Steam version doesn't "support" mods or DLLs is such a disgustingly dishonest lie I don't even know where to begin. That's like saying my car doesn't "support" having non-OEM parts installed. Well guess what; I'm not buying your stupid car where the rims are welded to the wheel hubs and I have to take it to the dealer to have them fix anything. I hate you and everyone like you who not only bows at the altar of ignorance but preaches the gospel of stupid. You need to go read Brave New World again.

            • RAGEPRO
            • 3 years ago

            Take it down a notch, loli. I even agree with you for the most part, but you’re getting a little hot.

            • sweatshopking
            • 3 years ago

            You looking to get in trouble? Personal attacks aren’t allowed. You’re always way over the top, aggressive, militant, and then just wrong. I don’t know why I’m wasting my time.

            Vsync – I’m full of crap, but you specifically state I’m right: it does support uncapped frame rates and turning off vsync. Yeah, I’m full of crap then.

            Crossfire/SLI lacking support is not a limitation of UWP. IT NEVER HAS BEEN. Whether it comes to specific games is up to the developer. THAT’S FACTUALLY TRUE. Whether it comes is up to them. Tomb Raider might never get an SLI patch. That’s unrelated to UWP, and also irrelevant for 99% of users (literally nobody).

            I don’t want to buy add-ins. I want games to work and be cross platform. I don’t agree with your ideological rants about Armageddon.

            Half the crap you’re whining about has nothing to do with anything. Like usual.

            Maybe reading is hard for you. Your car doesn’t support many parts. THEY VOID WARRANTIES. I never said they couldn’t use them. I said they’re not “supported” AND GUESS WHAT, THEY’RE NOT. Have a problem? Good luck with support. They’ll literally tell you “MODS ARE NOT SUPPORTED.”

            You’re way over the line again. You really need to tone it down.

            • auxy
            • 3 years ago

            More words from the mouth of Belial in service of Mammon. Every word in your post is an attempt to be “technically correct” after moving the goalposts, and to twist the intentions of myself and others.

            SLI works fine on the Steam version of RotTR. UWP is a limiting factor over control of game properties like Vsync because you cannot control it with third party tools.

            Something being “unsupported” is different from being LOCKED OUT. You’re full of crap and you know it. You can pretend I’m stupid and can’t read, but at least I have a clear conscience.

            • sweatshopking
            • 3 years ago

            Please stop with the “conscience” bit. It’s really ridiculous. This isn’t a religious battle, and your over the top responses are really silly. Stop doing them. You’ve been asked like 1000 times by people here to conform to the culture of general respectful discussion. Please stop ignoring all your fellow users requests.

            I am “technically correct”. Glad we agree. Nice you were able to admit being wrong.

            Your second paragraph is correct. While it is limiting, it doesn’t prevent it, and if you’re somebody who requires third-party tools (I’d say very few people are, and I’d be right) then avoid UWP. Technically though, as we agree, the UWP version supports basically everything.

            Glad you also understand the difference between being possible and supported. I never said the same things were possible with UWP. I said what was supported, and again, I was right.

            • BurntMyBacon
            • 3 years ago

            [quote=”sweatshopking”<]I am "technically correct".[/quote<]
            Sure are ... but did you actually answer the question?

            [quote="Voldenuit"<]Does it allow users to turn vsync off?[quote="sweatshopking"<]Only one that matters for 99.9% of users is vsync, which is patched.[/quote<][/quote<]
            There's one, and it is apparent that this is the one you feel is most important.

            [quote="Voldenuit"<]Does it do true fullscreen for Freesync users (G-sync users can run VRR on windowed apps anyway)? Does it support CF/SLI ... ?[quote="sweatshopking"<]... up to developer not an limitation of uwp and not sure for this one ...[/quote<][/quote<]
            Yes, some of the current limitations on UWP games are not necessarily imposed by the platform itself (though some things are harder to do on UWP). However, the questions were in regards to the specific game called out. The takeaway here is that you are not sure.

            [quote="Voldenuit"<] Does it support mods and DLL injectors (like SweetFX)? Does it expose the executable to the end user? [quote="sweatshopking"<]... , steam version doesnt "support" mods or dlls, executable isn't a support question.[/quote<][/quote<]
            He didn't ask about Steam support with exposing the executable. Context suggests that the comment about mods and DLLs wasn't a Steam support question either.

            [quote="Voldenuit"<]I'm not being snarky; this was the state of ROTTR UWP as I am aware of it as of two months ago. Since then, I haven't heard much about the state of the game on UWP, so if you have any information to update us, I'd be appreciative.[/quote<]
            The entire post was regarding the state of what is possible in the game on each platform, not what the respective platforms themselves will provide tech support with.

            [quote="sweatshopking"<]Again, id say it supports everything the steam version does.[/quote<]
            Though, from your statements above, you clearly don't know on some of the specifics asked and don't care on others. A more accurate statement might have been that it supports everything that 99.9% of users care about (VSync). Voldenuit is not in that 99.9%, as is made obvious by the fact that he asked about the other missing features that are apparently important to him.

            As to why I care, I am interested in some of the same questions. Anyone with game-specific answers, please post. Like you (SSK), I think UWP has more potential than many people give it credit for. With some refinement, it could provide some needed competition for Steam. UPlay and Origin don't carry games from other publishers, and GOG doesn't have a lot of the newest titles. That said, to this point there has been a disparity of feature set between UWP and Steam for some games. So unfortunately, even if you are all in on UWP, you need to look at the individual game and weigh the pros and cons to figure out whether you should actually purchase it there.

            • sweatshopking
            • 3 years ago

            If the question was what is TECHNICALLY POSSIBLE, that's a different question than what is supported.

            Having an exposed executable isn't really in the same classification as vsync. It's just how an application is packaged, and an entirely different conversation imo. Not the same class at all. DLLs and SweetFX included. As far as features recognized by the vast majority of people go, this game has them all.

            I can't remember all the details for each specific game, and the internet in South eastern Africa leaves much to be desired. I answered to the best of my ability, given the circumstances.

            In the end though, the game is on sale on Steam for $40. It is available, with a region change, for $9. My position is mostly that being told it isn't worth buying at 1/5 the cost because it doesn't have SweetFX and an exposed executable is crazy.

            I appreciate the time you took to quote everything accurately, and I think we mostly agree on UWP. There is much to be done, but I welcome Steam alternatives, as well as cross play and improved installation, cleanup, etc.

            • maxxcool
            • 3 years ago

            Apparently, you are the literal devil. Lulz…

            • Captain Ned
            • 3 years ago

            Auxy:

            YGPM

            • maxxcool
            • 3 years ago

            That has to be the funniest thing you have ever posted.

      • PrincipalSkinner
      • 3 years ago

      That’s VRAM usage?

        • End User
        • 3 years ago

        Yes.

      • tipoo
      • 3 years ago

      Like many modern games, maybe it just tries to use the free VRAM it has available rather than actually *needing* 8GB. I'd be surprised if games could already max out 8GB and hit a performance cliff beyond it; the extra capacity strikes me as more about future-proofing.

        • End User
        • 3 years ago

        When I was configuring the graphics options of the game, a warning popped up stating that the options I selected demanded an 8GB or greater video card. I assume, at those settings, the game needs the GPU memory.

      • Krogoth
      • 3 years ago

      Sounds like a memory leak.

      I’m a bit skeptical that the game actually needs ~*8GiB* of VRAM.

        • End User
        • 3 years ago

        The game developer built in a warning message to display when users set textures to very high or enable SSAA to indicate high-end hardware is required (8GB of VRAM or greater).

        • UberGerbil
        • 3 years ago

        If the game has more than 8GB of assets, and uses the video card memory as cache effectively, then there's no reason why it shouldn't (eventually) be full all the time. That doesn't necessarily mean it's all in use at any given moment (though with high-enough-res textures in a sufficiently complex scene, it certainly seems possible).

    • xeridea
    • 3 years ago

    From other reviews, PureHair is about 10% performance hit @ 1440p, and it is extremely similar between cards (9-11%), so turning it on shouldn’t be an issue. There was obviously an issue with HairWorks, it performed horrendously, even on Nvidia hardware. TressFX and PureHair look and perform considerably better (even on Nvidia hardware), so it is sad that it was turned off just because HairWorks was junk.

    Hopefully we can see DX12 benchmarks in future reviews when things are more stable, to see the benefits of architecture improvements.

    • SetiroN
    • 3 years ago

    I miss Scott Wasson. 🙁

      • ronch
      • 3 years ago

      References to Scott used to rake in upvotes. What happened?

        • Unknown-Error
        • 3 years ago

        We love Scott, but what [b<]SetiroN[/b<] said can be interpreted as a slap in the face to Jeff Kampman and Robert Wild. Jeff & Robert's hard work brought this review to us, and the quality is up to the standard of what Scott did.

          • CScottG
          • 3 years ago

          ..and this is worth 48 upvotes? It seems a lot of people voting on these forums are just ******* shy of being totally ******.

          -let the down votes ensue!

      • Wirko
      • 3 years ago

      Hey, look at the bright side. Instead of a Scott-made RX480 review, you’ll soon be reading a review of a Scott-made RX480.

        • tipoo
        • 3 years ago

        I read they brought him on exactly for the type of frame time testing TR pioneered, so this will be a very interesting area of the RX 480 review for me.

          • tipoo
          • 3 years ago

          Past tipoo would be pleased, it looks like at least frame pacing isn’t a worry anymore, even edging out last gen Nvidia cards.

    • trek205
    • 3 years ago

    lol FIVE weeks later than everyone else…

      • chuckula
      • 3 years ago

      WRONG.

      They beat Anandtech.

      Game. Set. Match.

        • trek205
        • 3 years ago

          Nice try, but the Anandtech “preview” is still a review for all intents and purposes. In fact, it was as good as or better than most real reviews out there.

          • rxc6
          • 3 years ago

          If you call that a review, then the problem is with your standards. I don’t care if it is better than a lot of the crap that gets called a review nowadays. It doesn’t provide enough information for me.

            • Airmantharp
            • 3 years ago

            Agreed. I really only care about two things: that frame-times are within the expected range for the performance potential on tap, and that there aren’t any other things that are out of whack.

            Pretty much only trust TR for that.

            • ImSpartacus
            • 3 years ago

            I think it’s a fantastic compromise.

            AT covered all of the major bases AND did an admirable write-up. And they did it all over a month ago.

            You can’t have your cake & eat it too, but AT did the best with the resources that they had. Honestly, I prefer what they did to TR’s solution. But I’m a total Anandtech fanboy, so there’s that…

      • Krogoth
      • 3 years ago

      Why does it matter when you still cannot get your hands on a 1080 without resorting to scalpers on ebay?

    • Wonders
    • 3 years ago

    [quote<]Blaise it[/quote<] Hahaha, well-played.

      • chuckula
      • 3 years ago

      I’d wager Pascal’s Triangle setup engines were of high interest too.

    • f0d
    • 3 years ago

    I wonder how many people have a 4K monitor vs a high refresh monitor?
    You see lots of 4K tests on websites, but never any tests for high refresh (120/144Hz etc).

    I have no interest in 4K, but I would really like to know how well a 1080 can keep a constant 144Hz fed with 144fps.

      • travbrad
      • 3 years ago

      Silly f0d. I buy $700 graphics cards to get under 60FPS. PC MASTER RACE.

      • anotherengineer
      • 3 years ago

      Pfff 144 fps that’s it?!?

      My old clunker PC can push CS:Source at 323 avg fps according to the built in stress test 😉

      [url<]http://images.akamai.steamusercontent.com/ugc/264966045861333949/5997997615EF81A7D3E3885507CC65E67F2D0747/[/url<] (1080p screen, amd 955 cpu stock, radeon 6850 stock)

      Where is my 500Hz monitor?!?!? 😉

      On a serious note, I have a 120 Hz screen and a buddy has a 144Hz screen, and I can not tell a difference between 120 and 144. However I can easily tell a difference between 60 and 120/144 though.

        • chuckula
        • 3 years ago

        [quote<]On a serious note, I have a 120 Hz screen and a buddy has a 144Hz screen, and I can not tell a difference between 120 and 144. However I can easily tell a difference between 60 and 120/144 though.[/quote<]

        Similar experience here. My old GTX-770 could only drive my 2560x1440 display at 120Hz. The new card now does 144 Hz, and I'm pretty sure my minor perceptual difference from 120 to 144 is mostly the placebo effect. However, I can spot 60 to 120Hz without even trying.

        • f0d
        • 3 years ago

        [quote<]On a serious note, I have a 120 Hz screen and a buddy has a 144Hz screen, and I can not tell a difference between 120 and 144. However I can easily tell a difference between 60 and 120/144 though.[/quote<]

        I 100% agree, and that's why I lumped 120/144 together. I'm just saying it would be good to see high-FPS benchmarks, since I think more people have a 120/144Hz monitor than a 4K monitor.

          • Chrispy_
          • 3 years ago

          Because the [i<]median[/i<] detection limit of the visual cortex is about 83Hz. Beyond that your brain just doesn't care. The number is different for everyone and varies with age, I'd imagine, but as a kid I was very sensitive to 60Hz and 72Hz flicker on CRTs but didn't really care at 85Hz.

          There's still a point to framerates over 83Hz, but it's not for your visual cortex; it's to do with latency and timing, because with predictable actions, human latency is incredibly sensitive. The "180ms reaction time" that you learn in school as a kid is reaction speed, not prediction speed. Prediction speed is trying to stop a digital stopwatch on 10.00 seconds, and I bet that pretty much anyone capable of making a headshot on a moving target could get very close (I know I could get within 2/100ths if not nail it on the dot).

          That puts (my) human prediction latency at zero, +/- 20ms, which means that a few milliseconds of latency here or there from lower framerates is enough to be noticeable when taking timed, single shots.

            • travbrad
            • 3 years ago

            I do think I can see a visual difference a bit over 83hz (even just moving mouse around desktop), but definitely not between 120 and 144. As you said it can vary a bit depending on the person though, and the differences definitely get less noticeable the higher you go. Going from 60hz to 80hz is more noticeable than from 80hz to 100hz, etc.

            I agree the feeling of responsiveness at high refresh rates/framerates is even more significant than the visual difference too. Maybe not that important for a turn based strategy game or something that is relatively slow paced but it makes a big difference in shooters, for example.

            • anotherengineer
            • 3 years ago

            It’s really aggravating that 100Hz or 120Hz screens aren’t standard now. The cost difference is probably very small, but since they can still call it “gamer” or “high refresh” they can put a large premium on it.

            I mean, in 2009 Samsung made the 120Hz 2233RZ screen. Back then, when it was basically the only one on the market, it was about $330 iirc. 7 years later (or a millennium in PC years) it’s still about the same price (even with more competition!!).

            • Chrispy_
            • 3 years ago

            You certainly can detect differences on an LCD screen at higher framerates, and that’s because the picture actually changes. Were you to use a CRT or cinema projector, you’d realise that your visual cortex cutoff is probably in the 60-100Hz range, because a CRT has a very, very high impulse-to-blank ratio and the picture is identical regardless of what frequency it’s refreshed at. The picture changes on an LCD because in any given frame at high refresh, some of the pixel transitions will take longer than 8.3ms (for 120Hz, as an example). This means that if you refresh at 6.9ms for 144Hz, even more pixel transitions are incomplete, giving you a slightly different result.

            There’s another reason though, in addition to the pixel transition issue: LCDs work differently from CRTs in that they produce sample-and-hold blur for your brain when tracking motion, which actually means that higher framerates reduce this perceived blur slightly.

            The length of the perceived trail will be 120/144ths shorter for the 144Hz image in the case of your two framerates, but even with a 500Hz screen you’d still get sample-and-hold blur, because the always-on backlight means you’re not relying on your visual cortex to “fill in the blanks” – and it’s that process of your brain providing missing information that makes CRT motion tracking so much better than LCD, though I’ll admit that the ULMB strobing backlights are a big step in the right direction.

            • odizzido
            • 3 years ago

            I can see CRTs drawing frames at 60hz. It hurts my eyes and gives me a headache in like 45 seconds. At around 90 or so CRTs start to become usable for me. I always ran mine at 100 though.

            • Airmantharp
            • 3 years ago

            I could stand 85Hz without too much annoyance, and could stand 75Hz if need be, but certainly preferred 100Hz+ wherever possible.

            60Hz was painful.

            • Andrew Lauritzen
            • 3 years ago

            Right, VR guys have studied this in depth and for most people the cutoff is somewhere around/below 90Hz, however there is some subtlety to how the frames are presented. Part of the reason it is possible to still see differences above 90Hz on regular LCDs is because they are generally not low persistence displays, so you effectively can still perceive the “stutter” when one frame switches to the next to some extent. On low persistence displays your brain sees it more like a strobe light and fills in the missing bits smoothly which is why there’s no immediate need to go above 90Hz on the VR HMDs right now.

            120Hz is definitely a good sweet spot and happens to divide a lot of common frame rates well also (24, 30, 60). 144 is pretty much the same thing – at best you get a minor advantage in more useful divisors with V-sync (and no G-sync/freesync). But hey, if your monitor does 144 there's no real disadvantage to using it in most cases anyway; I'd agree 120 is perfectly sufficient whereas 60 is not.

            • tsk
            • 3 years ago

            The guys over at PC Perspective claimed they could notice the difference between a 144Hz and a 165Hz display. I had a really hard time believing it, but sure enough Ryan Shrout and his colleague claim so here: [url<]http://www.pcper.com/reviews/Displays/ASUS-ROG-Swift-PG279Q-165Hz-2560x1440-27-IPS-G-Sync-Monitor-Review/Testing-165Hz-Ga[/url<]

            I personally have a 144Hz display, and the 240Hz display they showed at Computex got me curious about whether I'd be able to notice a difference.

            • chuckula
            • 3 years ago

            That article lacked two vitally important words: Double Blind.

            • Andrew Lauritzen
            • 3 years ago

            It’s also hard to know if it’s *only* the refresh rate changing there… remember that the 165Hz thing is technically “overclocking” and it’s not clear if/how it interacts with any of the other monitor stuff.

            It may be just a refresh rate change, but it may affect other subtle things like the anti-ghosting logic and so on too. But again as I said, it’s certainly possible to notice differences in LCD monitors simply because they are high persistence. But that’s not a good argument long term for those refresh rates… we should instead shift more towards lower persistence ~90Hz displays.

            • tsk
            • 3 years ago

            Is lower persistence = input lag?

            • RAGEPRO
            • 3 years ago

            Persistence refers to “image persistence”, or basically, how long an image hangs around after the display, uh, displays it. LCDs by nature are high-persistence displays (as are OLEDs) because they simply continue displaying the previous frame until the next one comes up; making them low-persistence involves using some form of strobing or black-frame-insertion technology.

            Image persistence is bad because of the way our visual system works. It creates the impression of a much more blurred image than is necessary.

            • odizzido
            • 3 years ago

            well, bad if flicker doesn’t bother you. I personally hate anything that flickers. I stay away from plasma displays because they hurt my eyes.

            • anotherengineer
            • 3 years ago

            But what if you overclocked your brain though?!?!!?

            • brucethemoose
            • 3 years ago

            I can see a difference between 96Hz and 110Hz.

            As in, I boot up a game thinking I left it at 110, but I can feel that it's at 96 and have to go back and change it.

            However, I'm not running any variable refresh rate fanciness, so the actual perceived disparity may be lower.

            • Visigoth
            • 3 years ago

            Then why do people get headaches even with backlights that have PWM @ 220 Hz or less? It may not be perceptible TO YOU, but that does not mean “it’s not perceptible at all”. Big difference.

            • Chrispy_
            • 3 years ago

            Because that’s a different thing, for two separate reasons that are commonly mistaken to be the same thing, and neither of which is the same as the 83Hz rate I’m talking about.

            It’s interesting that you use 220Hz PWM flicker as an exmple, because that’s also the framerate at which [url=http://amo.net/nt/02-21-01fps.html<]a USAF study[/url<] determined pilots could identify an aircraft based on seeing just one frame. Rods and cones in the retina are capable of converting a single photon into an electrical signal, so your retina actually has a framerate of infinity, but let's stick to 220Hz for what is a useful figure, since that is a tested value at which the visual cortex can identify images. 1. A 220Hz flicker is seen by the retina whether you brain deals with it or not; The visual cortex normally filters this out but in some people the flicker causes headaches in the same barely-understood way that flashing lights can trigger hypnosis, epileptic fits, etc. Some people are troubled by flicker; We don't completely understand why, but it's Definitely A Thing™. These susceptible people are going to hate flickering regardless of what the source is. 2. Interference flicker: Your brain is not troubled by high-frequency flicker (above the ~83Hz of the visual cortex) but the 220Hz PWM flicker has a waveform that has additive intensity with your ambient lighting's waveform (if it's a dimmable LED or 60Hz fluorescent bulb that actually emits an intensity waveform). So, add a 220Hz wave to a 60Hz wave and you're going to get you a combined wave with a resonance of 20Hz (it's the remainder, or the [i<]out-of-sync-ness[/i<] of the two waveforms). The visual cortex will register this 20Hz resonance because it's well within the ~83Hz it can deal with, but it's an extra workload and it causes some people discomfort in the same way that prolonged concentration exhausts many people. In neither case have I really explained the 83Hz figure. I've told you that the eye has an infinite framerate because it's analogue, and I've told you that the visual cortex can detect single frames as short as 1/220th of a second. So where does 83Hz come from? Why is it not 220Hz? Well, simple. Your brain can detect a single 220Hz frame in isolation, but if you put changing frames back to back 220 times a second, our dumb organic brains just don't have the bandwidth to deal with it all at once; They just blur it. The whole concept of motion blur from cameras with a fixed framerate is irrelevant to a human eye with an infinite framerate, but our brains simply can't deal with infinite information coming down the optic nerve. Anything faster than ~83Hz is too much for it, so it just blurs the information together. There you go: Motion blur, in human vision, in one paragraph. That's why we such at identifying higher refresh rates beyond about 83Hz. For other reasons, we *can* just about do it, but for the most part, your typical person isn't going to notice anything significant between 85Hz or 200Hz, whilst they probably will notice a significant increase in smoothness from 60Hz to 85Hz. [i<]Edit - Sorry for the long post. I'm not trying to patronize or anything, but as someone with a rare form of colourblindness I've had more reason than most to look up the science behind human vision and in doing so have become absolutely fascinated by it. I've already learned several times more than most people will learn about it in a lifetime and yet there are still unanswered questions and things I plainly lack the mental capacity to comprehend.[/i<]

            • kvndoom
            • 3 years ago

            No apologies! That was very informative to read.

            • Zizy
            • 3 years ago

            Nice 🙂
            Some more things:
            While eye receptors actually gather individual photons, normal ones won’t fire unless there are at least ~3 detected in a sufficiently short time scale.
            220Hz+ thingy was a bright frame among darkness and was a limit to identify the type of plane (aka – is this Mig or F?). Not the limit to “see something”. No limit there afaik, but you will only see a blob of light. After image helps a lot here.
            Dark frame among whiteness didn’t work all that far. Not sure if they also tried white-image-dark and dark-image-white and how eyes (and brain) performed there.

            What form of colorblindness do you have?

            • Chrispy_
            • 3 years ago

            Yeah, it would only work in the case of a bright flash among several dark frames, since the photochemical reaction that generates an electric impulse spikes instantly upon absorption of the photon but decays under the usual rules of electricity. I could go into this in more detail but it’s off-off-topic at this point.

            Anyway, the spike and then decaying fall-off of the electrical impulse is half of the reason for our persistence of vision (the other half is all visual cortex stuff) so this definitely wouldn’t work with a dark frame interjected among several bright ones. That just confirms that retinal persistence is not entirely down to the visual cortex.

            As for me, I’m deuteranopic, rather than having the deuteranomaly that applies to about 7% of the population. It’s similar in many ways, but it’s the differences that I find far more fascinating; as a dichromat I don’t have defective (or semi-defective) green cones wasting space in my retina, so I get the small ‘advantage’ of higher-density red and blue (not that it’s any real compensation for lacking functional green cones!)

            Tests on dichromats seem to imply that short wavelength cones are more prevalent than long wave cones, so I cannot see in infra-red, but specifically, a [i<]deuteranopic dichromat[/i<] can see quite a lot further into the ultraviolet spectrum, as a result of the low gradient in the [url=https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/1416_Color_Sensitivity.jpg/300px-1416_Color_Sensitivity.jpg<]photon absorbance rate vs wavelength[/url<] for short wavelength cones. Doubling the number of cones makes a significant difference to how far left (how far into the ultraviolet end of the visible spectrum) the cutoff value is (bear in mind that's a logarithmic graph too).

            So yes, we deuteranopic dichromats see some ultraviolet - aka blacklights. It's one of the first things that tipped me off about not having the common red-green deficiency. Has it ever been useful to me? Damned if I know, but it's interesting stuff to me at least 😉

            [b<][i<]Edit: [url=http://www.ghuth.com/wp-content/trichromatic-spectrum-300x300.png<]Here is a better graph[/url<] that shows why gaining (blue) short wavelength cones is significant.[/i<][/b<]

            • Beelzebubba9
            • 3 years ago

            Great post, thank you.

        • credible
        • 3 years ago

        This is good to know, though I had bought a Dell U2713HM 2 years ago, quite the investment for me…I suppose I could sell it.

        Maybe I will lol.

      • Meadows
      • 3 years ago

      I kept saying this but they just keep testing the highest resolutions. The official explanation seems to be that extracting performance differences out of the high-end products is more important than testing real-world usage, for some reason.

        • travbrad
        • 3 years ago

        Yep, it’s a bit strange considering the frametime testing stuff was mostly about getting data that was more representative of what it feels like to play in real-world usage, and about detecting any noticeable slowdowns more accurately than framerates would. You don’t even need frametime data to see that 4K performance just isn’t there yet.

        1440p, 1080p, 1440 ultrawide, and 1080 ultrawide monitors are all more common than 4K monitors, and 4K has much worse performance (not even staying above 60 in a lot of games). Even in older games that could get good framerates you are stuck at 60 because there aren’t high refresh 4K monitors.

        If I had unlimited money to spend I would still go with 1440p 144hz (or ultrawide) for gaming at this point even with a 1080, maybe with a large 4K second monitor for other stuff.

      • Freon
      • 3 years ago

      Are you under some weird impression that reviewers are running with Vsync on? I think you may misunderstand how graphics cards are benchmarked.

        • f0d
        • 3 years ago

        No, not at all.
        I am under the impression that reviews are raising the resolutions to keep frame rates around 60fps and thinking that's relevant.

        More people have 144Hz, 120Hz, or even 75Hz monitors than 4K monitors, so how about testing at 1080p and/or 1440p resolutions to see how these cards perform, and whether settings need to be lowered to reach 120/144Hz?

        The myth that we can't see past 30/60 fps or Hz is false, and there is a massive improvement when going for a higher framerate/Hz.

    • dikowexeyu
    • 3 years ago

    A review this late should be comparing different vendors and the 1070.
    I bet some overclocked 1070s offer price/performance that makes a good bang-for-the-buck case against the 1080 Founders Edition.

      • Ninjitsu
      • 3 years ago

      I assume they didn’t want to put in the 1070 without it getting a proper review of its own. That said, I’m pretty sure the 1070 review will include this review’s data as well, so probably wait for that.

    • DeadOfKnight
    • 3 years ago

    With this card being thermally limited, you really need to test these inside a case and not on an open air bench. Other than that, great review as always.

      • Airmantharp
      • 3 years ago

      I can imagine the difficulty in controlling variables being a hindrance when trying to standardize an ‘inside the case’ test.

      • hkuspc40
      • 3 years ago

      Jayztwocents already did that and there was hardly any difference. Further there are a ton of variables that would be difficult to control (which I believe he pointed out).

    • anotherengineer
    • 3 years ago

    Very nice and clean tidy review.

    Now for some constructive feedback 😉

    1. So here [url<]https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/8[/url<] the frame-time-in-milliseconds graph has a thin pencil line for color; however, below it, the frames-by-percentile graph's color line is thicker. It would be nice if the color line beside the card name was the same height as the text, for ease of clarity.

    2. For color-deficient people and/or people with crap screens, it would probably be easier to differentiate if you were able to stick to the basic 8 colors.

    On a separate note, 3 questions:

    1. For the "Time spent beyond ... ms" graphs, why is the default 50ms or 33ms? I kind of recall Scott mentioning this way back. I just have it in my head that a 60Hz screen refreshes every 16.7ms, so I always click on that to compare.

    2. The scatter plots make it very easy to get a feel for performance/$, but why not just add an actual bar graph of the actual fps/$? So the GTX 980 would be 0.093 fps/$.

    3. In the future, do you plan to add a VRAM usage chart to the tests?

    Thanks

      • chuckula
      • 3 years ago

      While not a graph, here are my eyeballed dollars/FPS numbers for both the average and 99th-percentile graphs, sorted from winner to loser.

      [b<]Average $ per FPS[/b<]
      GTX-980Ti: $550 / 57 FPS = $9.65/fps
      GTX-980: $450 / 42 FPS = $10.71/fps
      GTX-1080: $700 / 65 FPS = $10.77/fps
      R9 Fury-X: $620 / 48 FPS = $12.92/fps

      [b<]99th Percentile $ per FPS[/b<]
      GTX-980Ti: $550 / 44 FPS = $12.50/fps
      GTX-1080: $700 / 51 FPS = $13.72/fps
      GTX-980: $450 / 32 FPS = $14.06/fps
      R9 Fury-X: $620 / 32 FPS = $19.38/fps

      Incidentally, a few notes on the above: the GTX-980Ti has obviously just undergone a large price cut in the last week, and of course there are cheaper (and faster) versions of the GTX-1080 out there, so look at that GTX-1080 figure as a worst-case number that can be beaten in the real world. For example, assuming the OC on my GTX-1080 adds zero performance, my numbers are:

      Average: GTX-1080: $650 / 65 FPS = $10.00/fps
      99th Percentile: GTX-1080: $650 / 51 FPS = $12.75/fps
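
      For anyone who wants to rerun that arithmetic with their own prices, here is a quick throwaway script; the prices and FPS figures below are just the ones eyeballed above, not fresh data.

      # price, average FPS, 99th-percentile FPS (eyeballed from the comment above)
      cards = {
          "GTX-980Ti": (550, 57, 44),
          "GTX-980":   (450, 42, 32),
          "GTX-1080":  (700, 65, 51),
          "R9 Fury-X": (620, 48, 32),
      }

      for label, idx in (("Average", 1), ("99th percentile", 2)):
          print(f"{label} $ per FPS:")
          for name, row in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][idx]):
              price, fps = row[0], row[idx]
              print(f"  {name:<10} ${price} / {fps} FPS = ${price / fps:.2f}/fps")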

        • anotherengineer
        • 3 years ago

        And techpowerup actually does it as a percentage for several resolutions.

        [url<]http://www.techpowerup.com/reviews/ASUS/GTX_1080_STRIX/26.html[/url<]

        So I don't know, there are lots of different ways to present the data. I kinda like the per-resolution performance/$ though; since I'm on 1920x1080, it gives me something easier to grasp, given that TR tests at 2560x1440 and 4K.

          • ImSpartacus
          • 3 years ago

          I’m rather fond of how TechPowerUp does their summaries: very simple and effortless to interpret.

          It’s not always perfect, but the simplicity is appreciated in many circumstances.

        • cmrcmk
        • 3 years ago

        I like the two-axis charts, since frames per second per dollar oversimplifies. To take an extreme example, if you consider an iGPU to be free, its fps/$ is infinite even though it's clearly playing at a different level. The charts that TechReport uses give you an idea of the value and the performance tier in one view (see the sketch below). Maybe this isn't everyone's preference, though.
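        A minimal sketch of that two-axis view (matplotlib, using the price/FPS figures from the $/FPS comment above rather than TR's actual data):

        import matplotlib.pyplot as plt

        # 99th-percentile FPS vs. price, borrowed from the comment above.
        cards = {'GTX-980Ti': (550, 44), 'GTX-980': (450, 32),
                 'GTX-1080': (700, 51), 'R9 Fury-X': (620, 32)}

        fig, ax = plt.subplots()
        for name, (price, fps) in cards.items():
            ax.scatter(price, fps)
            ax.annotate(name, (price, fps), textcoords='offset points', xytext=(5, 5))

        ax.set_xlabel('Price ($)')
        ax.set_ylabel('99th-percentile FPS')  # better value sits up and to the left
        plt.show()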

      • dikowexeyu
      • 3 years ago

      I’m just shooting in the dark, but if you are color deficient, have you tried EnChroma glasses?

        • anotherengineer
        • 3 years ago

        Well, I am using a 21″ 1080p Dell P2214h IPS (which is calibrated) and sit about 20″ away, so it's hard to register what color the pencil-thin line is unless I move in closer, compared with the thicker lines of the other graph.

        And I have a slight red/green deficiency, mainly in shades and pastels. Grass is green to me and cherries are red to me, as plainly as night vs. day.

        However, I do struggle with some of these:
        [url<]http://unlimitedmemory.tripod.com/sitebuildercontent/sitebuilderfiles/ishihara38.pdf[/url<]

        Some of them are easy to make out, some are trickier, and some I can't make out at all. I see colored dots, but I can't make out the number; it just blends in with the rest of the dots, like someone wearing camouflage in the bush.

        Now, looking at a screen vs. real slides under full-spectrum sunlight are two totally different things, but you get the idea.

        Also, I'd never heard of those glasses; interesting.

        edit - also, "color blindness" is a misleading term; it implies a total lack of color, or seeing in black and white, which is not the case. Most male color deficiency is red/green, but people can be deficient in other areas of the spectrum. Hunter orange is what it is because apparently it looks the same to everyone regardless of color deficiency. From what I know, I got it from my mom's dad, but my brother doesn't have it.

    • drewafx
    • 3 years ago

    No madVR tests for OpenCL benchmark? :/
    Guess I’ll just wait for AnandTech review

    • odizzido
    • 3 years ago

    Cool. I’ve not read this yet but from just skipping around it looks to be of high quality. Will need to give this a proper read later tonight.

    edit——–

    Nice review. I am glad to see that this site is still worth the wait.

    I was a little surprised to see that you didn't test DX12 with RotTR, though. Also, "rotter": what a great acronym.

    • chuckula
    • 3 years ago

    [quote<]Happily, Nvidia's board partners are starting to deliver a diverse range of custom-cooled GTX 1080s themselves, and our experience with those cards so far has been positive. GTX 1080 custom jobs can cost significantly less than the Founders Edition, and they tend to come with beefier heatsinks and factory clock boosts. Unless you're really into the Founders Edition look, we think most will be happier with one of these hot-rodded GTX 1080s.[/quote<]

    That's probably the best takeaway from the review. As I've posted about my own experiences in the forums, even a custom card that just re-uses the stock PCB from the FE cards but adds a better HSF combo can pretty much eliminate the negatives from this review. That even includes a lower price combined with an out-of-the-box OC and cooler, quieter operation.

    If you really want to go hog-wild, the crazy (and more expensive) liquid-cooled editions with additional power pins are in the pipeline too.

    • derFunkenstein
    • 3 years ago

    Really stoked about this review of a card I will probably never buy. Despite it being something I put in my Newegg basket before sanity sets in, I'm glad to finally get this level of detail. /puts away pompoms

      • ikjadoon
      • 3 years ago

      Eh, it’s the same reason we read reviews of Lamborghinis. I’d never spend the money, but the state-of-the-art is nice to hear about.

      And, maybe the GTX 2070 will offer similar levels of performance, haha.

      • chuckula
      • 3 years ago

      The custom GTX-1070 cards call to you like the Sirens…

        • derFunkenstein
        • 3 years ago

        That is….probably something I’ll think about. My 1440p display seems like a perfect match for it.

          • sweatshopking
          • 3 years ago

          It isn’t. You don’t need a 1070 to play Diablo and StarCraft.

            • derFunkenstein
            • 3 years ago

            Yeah, I know, and truth be told I probably won’t buy anything soon. BUT SHINY!!!

        • the
        • 3 years ago

        I’m still waiting for sanity to sink in but my madness is also gifted with patience: GP100 has already been announced for HPC and I can see a consumer version coming next year. At the very least, AMD’s Vega chips should come with HBM2 for massive bandwidth.

          • chuckula
          • 3 years ago

          HBM2 will launch next year.
          But you ain’t getting HBM2 parts for the GTX-1070 price.

            • the
            • 3 years ago

            I have a spare kidney. I think that’ll get me a HBM2 card come 2017.

    • Lans
    • 3 years ago

    I only focused on the things I care about and did a quick read of the rest, but it looks like a pretty good review.

    $700 is way more than I want to spend, but even if I were to spend it, I can't see myself accepting (and have not accepted) 82°C and 48 dBA under load… from either camp.

    Also, with regard to the “Prices fall on Nvidia’s Maxwell cards, but deals they’re not” article, the async detail in this review makes me say Maxwell is not a good deal unless prices come waaay down.

    • yogibbear
    • 3 years ago

    Any chance a custom 1080 appears in the upcoming 1070/480 reviews? I know I can look elsewhere, but it's kinda annoying having to piece together data points from 4 different TR reviews just to compare my current card to a future one. I mean, it's all probably moot this time because there's such a significant difference between the cards, but I guess when we get back to same-node comparisons in a year's time, the differences will be more subtle…

    • tipoo
    • 3 years ago

    That cooler definitely seems like a letdown for the extra hundo you pay for the privilege of a Founders Edition. I had assumed it would be much better on acoustics, even though I had heard of the hot running chip.

      • chuckula
      • 3 years ago

      From personal experience I [b<]highly[/b<] recommend an EVGA card with the custom cooler and dual-fan setup.

      For example, right now the fan noise is 0. I don't have a dB meter to back up that number, but it's probably accurate because the fans are literally not running while I'm doing casual desktop work. Under load, including extended runs of Unigine benchmarks that get the GPU up to a steady 75°C, the noise has been inaudible to minimal from inside my case.

      [url<]http://www.evga.com/Products/Product.aspx?pn=08G-P4-6183-KR[/url<]

        • ikjadoon
        • 3 years ago

        Is this after you compared to other custom GTX 1080s? I can’t find a roundup of custom cards for the life of me.

          • anotherengineer
          • 3 years ago

          There are 3 right here, 4 including the FE:
          [url<]http://www.techpowerup.com/[/url<]

          There's an SLI test also, although they're all separate, individual reviews.

    • blahsaysblah
    • 3 years ago

    What CPU should be paired with GTX 1080?

    Do you collect CPU stats during your runs? Do you have charts of latency vs CPU usage? Are any of the bottlenecks due to CPU not keeping up?

      • chuckula
      • 3 years ago

      I will say that the 5960X, while quite a powerful CPU, is probably not the ideal CPU to pair with the GTX-1080 in most gaming scenarios. That honor likely goes to the 4790K, 6700K, or even the rarely mentioned but interesting 5775C Broadwell with the L4 cache.

      I’m sure there’s a scenario or two where the 5960X can actually pull ahead, but in the aggregate those chips above are better most of the time. As a bonus, they are all substantially cheaper than the X99 platform + chip [the 5820K and new 6800K being at least close in price].

        • blahsaysblah
        • 3 years ago

        But is that still true when talking about 4K and DX12? Or is that just preparing for a 4K/DX12 future?

          • tipoo
          • 3 years ago

          CPUs aren’t *completely* isolated from resolution and detail increases, but generally the impact on them is pretty small, since the game logic they run is largely the same. So all of those are probably just as fine paired with it at 4K as at 1440p.

          DX12 will affect them in that more threads will be used effectively and draw-call overhead will be lower, so if anything it will lend all of them more longevity.

          Besides, at 4K the bottleneck would most certainly be the GPU.

            • chuckula
            • 3 years ago

            Typically cranking up the resolution, especially jumping to 4K, puts more and more strain on the GPU and makes for more GPU-limited scenarios.

            • Krogoth
            • 3 years ago

            CPUs haven’t mattered that much for gaming performance in the most demanding titles unless you are gaming at really low resolutions and want to crank out as much FPS as possible.

      • Airmantharp
      • 3 years ago

      Running dual 970s on a 4.5GHz 2500K, I’d drop one of these in without hesitation.

      Answer: Just about anything.

      • Krogoth
      • 3 years ago

      Any of the current CPUs will do, since games are heavily clockspeed/IPC bound under an SLI/CF setup.

      The best CPU on the market would be a Skylake chip.

    • JustAnEngineer
    • 3 years ago

    You folks really need to include a Radeon R9-390X in your price/performance charts.

      • derFunkenstein
      • 3 years ago

      It’s not as fast as the Fury. Wherever the Fury is on the list, just assume the 390X would be below it somewhere.

      You should be congratulating everyone on using stock-clocked GeForces, btw. Goodness knows you spend enough time in a circle jerk about factory OC’d cards.

      • Khali
      • 3 years ago

      You know, comparing older GPUs with the latest and greatest isn't a bad idea. It would give folks an idea of how much improvement they would get over their current GPUs if they decide to upgrade.

      Go back two or three generations of GPUs and make a few bar charts. Pretty much take all the info on the Sizing 'em up and Power/Noise/Temp pages of this review and include 600- and 700-series cards along with the already-included 900 series, plus the equivalent AMD GPUs.

      Tossing in a price/performance chart would be interesting as well. The only problem is that most of the older cards are off the market, so it would be limited to the current generation and maybe leftovers from the previous generation of GPUs.

      You wouldn't even have to do all the tests over again. Just go back to the original reviews and get the info to plug into a new chart and get all the info in one place. You could call it the Legacy GPU Comparison Roundup, or some other witty article name. Do it once a year or whenever new GPUs come out.

      I have a GTX 680 and a GTX 780 Ti and would love to see a direct comparison to the GTX 1080.

        • UberGerbil
        • 3 years ago

        [quote<]You wouldn't even have to do all the tests over again. Just go back to the original reviews and get the info to plug into a new chart and get all the info in one place.[/quote<]

        Except those older reviews will have been done with older driver versions with different optimizations -- some of which are fairly significant for the games used as benchmarks. And there are other differences, depending on how old they are. (TR doesn't change its test bed super frequently, but when it does, it creates a rift that you can't really cross back over to get any kind of comparable results.)

          • Khali
          • 3 years ago

          You know, I did overlook the effect drivers would have over time. But it seems like Techspot found a way around it, so I'm sure TR could as well.

        • Voldenuit
        • 3 years ago

        [quote<]You know, comparing older GPUs with the latest and greatest isn't a bad idea. It would give folks an idea of how much improvement they would get over their current GPUs if they decide to upgrade.[/quote<]

        This, a million times this.

        Most people don't upgrade every generation. It would be nice to see benchmarks going back two generations (or more). Someone on a 780 Ti or 7870 might be in the market to upgrade now and would like to know how much of a boost they would be getting (hint: a lot).

        The older generation doesn't even have to be run through all the benchmarks; a one-page roundup would suffice.

        • tsk
        • 3 years ago

        Here you go, 480 to 1080:
        [url<]http://www.techspot.com/article/1191-nvidia-geforce-six-generations-tested/[/url<]

          • chuckula
          • 3 years ago

          Thanks for that link.

          The rather smooth increase in performance levels from generation to generation appears to contradict the FUD that gets thrown around about how Nvidia spends most of its time killing the performance of older GPUs.

            • NTMBK
            • 3 years ago

            GP104 is very similar to GM200 architecturally, so it’s not surprising that the drivers behave similarly.

          • Ninjitsu
          • 3 years ago

          They’re doing the tests [i<]without[/i<] anti-aliasing, though. Which is extremely odd.

      • dikowexeyu
      • 3 years ago

      At least the best bang-for-the-buck card at each price point should be included.

    • brucethemoose
    • 3 years ago

    Why did a $700 GPU get such an inadequate cooler?

    They don’t even have the excuse of high power consumption, like the GTX 480 or R9 290X…

      • ikjadoon
      • 3 years ago

      NVIDIA addressed this at their GTX 1080 launch Q&A with the press. They charge a higher price so they can sustain production until the GTX 1080 EOLs. Currently, their reference-card production stops quite quickly after launch.

        • derFunkenstein
        • 3 years ago

        All the same it’s kind of a weird answer. Best Buy has been selling boxed reference designs for years, going back to at least Kepler (GTX 770 and 780), and maybe even farther.

        • xeridea
        • 3 years ago

        Their answer doesn’t make sense. How does charging more increase their production? It is the same chip, the same chip that is sold on 3rd party cards. They don’t produce the cards, other fabs do. Sustain production until the card EOLs… in a couple years? Doesn’t any card ever produced need to “sustain production until it EOLs”? The problem usually is trying to liquidate old cards to make room for newer ones. Production is only an issue for a bit after launch, not 2 years later.

        • tipoo
        • 3 years ago

        Now, how many people will have watched that, vs. watched the keynote where they seemed to imply the Founders Edition was some hot stuff? Well, it got one of those two adjectives right (wait, actually, both. Stop talking, tipoo).

      • bjm
      • 3 years ago

      The real answer is: they didn’t want to end up like 3dfx.

      If they started selling their own boards, like 3dfx did after it bought STB, then they would have to say hello to new competition, formerly known as their AIB partners. By pricing the card at $700, they can cash in on the early-adopter craze and avoid competing with their own partners.

        • Voldenuit
        • 3 years ago

        That doesn’t answer the question of why the cooler underperformed with respect to expectations in this review, though.

        And the FU editions are not doing any favors to consumers either; AIB makers seem to be taking advantage of the FU price gouging to price their own boards closer to the FU premium than the ‘standard’ MSRP (har har) prices quoted by nvidia.

          • bjm
          • 3 years ago

          Perhaps that’s an intended side effect? Genius!

            • tipoo
            • 3 years ago

            I feel near-certain it was a buffer zone. If Polaris turns out to be hugely competitive, board partners can come down to MSRP. If not, they can get away with pricing nearer to the Founders Edition.

      • Krogoth
      • 3 years ago

      It is because the Founders Edition is nothing more than an early-adopter edition. They went with the cheapest HSF to keep the profit margins as high as possible.

      The reference HSF is still sufficient to handle the 1080 at stock. Modern performance GPUs have always been toasters under load, especially after a marathon gaming session.

        • End User
        • 3 years ago

        OC’ed, my 1080 FE tops out at 78 °C. Compare that to the temps of a stock [url=http://www.guru3d.com/articles_pages/palit_geforce_gtx_1080_gamerock_premium_edition_g_panel_review,10.html<]Palit GeForce GTX 1080 GameRock Premium Edition[/url<] and the FE does not fare too badly.

          • Krogoth
          • 3 years ago

          ~78-80C loaded temps aren’t exactly good for the VRMs and silicon itself if you care about longevity.

            • brucethemoose
            • 3 years ago

            At stock volts it’s probably OK.

            But those temps would be unacceptable once you start pushing the VRMs harder and running more voltage through the GPU.

            • End User
            • 3 years ago

            [url=http://www.guru3d.com/articles_pages/palit_geforce_gtx_1080_gamerock_premium_edition_g_panel_review,10.html<]Seems to be the norm[/url<].

            OC'ing is always a gamble. I accept that. I've OC'ed every EVGA card I've owned going back to my GTX 260. I've never had a card go bad. My previous setup was two OC'ed GTX 770 4GB cards with blower fans in SLI, and they lasted 3 years. If my current EVGA card dies, I'll either get it replaced under warranty or I'll buy a new one.

            My i7-920 has been OC'ed since day one and it is still in use today. Same goes for my 3770K. High temps don't scare me.

            Keep in mind that not every title is going to max out the temps. I'm replaying Wolfenstein: The New Order at the moment and my 1080 is at 56 °C.

            • Voldenuit
            • 3 years ago

            Congrats on the new card!

            Any update on how well your 1080 handles idle clockspeeds at 120/144 Hz refresh rates?

            • End User
            • 3 years ago

            I’m not exactly sure what you mean. I have it set at 150 Hz when on the desktop and everything seems fine (the GPU is at 139 MHz).

            • DancinJack
            • 3 years ago

            I think the bug is that when you’re using multiple monitors and the refresh rate is set to 120Hz+, the card doesn’t clock down.

            edit: [url<]https://techreport.com/news/30304/nvidia-pascal-cards-still-exhibit-high-refresh-rate-power-bug[/url<]

            • End User
            • 3 years ago

            I have a dual monitor setup but the second monitor is being driven by the iGPU.

            • DancinJack
            • 3 years ago

            That’ll do it 🙂

            • End User
            • 3 years ago

            ?

            The bug is when you connect two monitors to the 1080. My second display is connected to the motherboard DisplayPort connector and powered by the iGPU.

            So, with my dual monitor setup:

            1) Primary monitor is connected to the 1080
            2) Primary monitor is at 150 Hz when idle on the Windows desktop
            3) Secondary monitor is connected to the motherboard
            4) The 1080 idles at 139 MHz / 0.625 V / 36 °C (see the quick check below)
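            For anyone who wants to spot-check the same thing, a rough sketch that just shells out to nvidia-smi (assuming it's installed and on the PATH):

            import subprocess

            # Print the current core/memory clocks, temperature, and power draw once.
            result = subprocess.run(
                ['nvidia-smi',
                 '--query-gpu=clocks.gr,clocks.mem,temperature.gpu,power.draw',
                 '--format=csv'],
                capture_output=True, text=True, check=True,
            )
            # At high-refresh desktop idle, clocks.gr should sit near its floor (~139 MHz here).
            print(result.stdout)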

            • DancinJack
            • 3 years ago

            By “that’ll do it,” I meant that you won’t run into it with the second monitor running off the iGPU. I wasn’t very clear, but that’s what I meant.

            • Voldenuit
            • 3 years ago

            I remembered from the Nvidia power-bug article that neither of us had issues with 120 Hz on our 970s, so I was curious whether the 1080 would behave as well.

            • UberGerbil
            • 3 years ago

            The people paying bleeding-edge prices for high-end video cards generally don’t care about longevity; they just need the card to last the year or so and not throw up any errors on the initial run for the person they eBay it to when they trade up again.

    • tipoo
    • 3 years ago

    Not the first, but lots of goodies in here 🙂

    It’s funny and aggravating, though, that Nvidia said async drivers were still in the works for Maxwell, leading some to believe the hardware had a capability that was simply never exploited. Given that they now trumpet async as a benefit of Pascal, it looks deliberately misleading: they underplayed something that should only grow more important (some devs recovered 5 ms of render time per frame using async), pretending Maxwell would be fine until they had a better architecture to ship. And I’m assuming they’ll just never mention it for Maxwell again…

    On the plus side, SMP/SMS seems like a real VR game changer. A very small performance hit for drawing two scenes? That’s huge, and it will be even more so for the lower-end parts closer to the minimum VR recommendations. I’m surprised that nothing like that was mentioned for Polaris, especially with the 480 aiming to be the entry point for VR.

      • the
      • 3 years ago

      Yeah, the VR efficiency gains from reducing the actual work needed are going to be a huge advantage. The RX 480 can claim to be the entry point for VR, but if it can’t match that kind of efficiency gain, the VR war is over once nVidia introduces a Pascal part in that price range.

        • tipoo
        • 3 years ago

        Exactly, assuming the feature stays throughout the entire Pascal line and isn’t artificially segmented. A 1060, or whatever ends up priced equal to the 480, would hugely benefit from that in the VR space even if it performs a tad worse on 2D monitors.

        Though I guess I also wonder how big the market is for $700 VR devices paired with $200 GPUs…

    • chuckula
    • 3 years ago

    If you posted GIFs I’d totally go IT’S HAPPENING right now!

    I’ll comment more after the lawn is mowed and the review is read. Thanks!

    • Srsly_Bro
    • 3 years ago

    In before someone complains about the RX 480 review not being posted.

    • rxc6
    • 3 years ago

    “Pascal is here”

    Ahem! Pascal’s been here (mostly out of stock) for a while 😉

    P.S. Well done, guys! Just in time for me to get this info and then wait for Polaris.

    • Waco
    • 3 years ago

    At last!

    • southrncomfortjm
    • 3 years ago

    Awesome, nicely done.

    • I.S.T.
    • 3 years ago

    I’m so god****ed happy.

      • Srsly_Bro
      • 3 years ago

      For the release of the RX480?

        • I.S.T.
        • 3 years ago

        trollololololo, lololololo

        Seriously, that was funny
