Asus’ Strix Radeon R9 Fury graphics card reviewed

New Radeons are coming into Damage Labs at a rate and in a fashion I can barely handle. The first Radeon R9 Fury card, the air-cooled sibling to the R9 Fury X, arrived less than 24 hours ago, as I write. I’m mainlining an I.V. drip consisting of Brazilian coffee, vitamin B, and methamphetamine just to bring you these words. With luck, I’ll polish off this review and get it posted before suffering a major medical event. Hallucinations are possible and may involve large, red tentacles reaching forth from a GPU cooler.

|                  | GPU boost clock | Shader processors | Memory config | PCIe aux power | Peak power draw | E-tail price |
|------------------|-----------------|-------------------|---------------|----------------|-----------------|--------------|
| Radeon R9 Fury   | 1000 MHz        | 3584              | 4 GB HBM      | 2 x 8-pin      | 275W            | $549.99*     |
| Radeon R9 Fury X | 1050 MHz        | 4096              | 4 GB HBM      | 2 x 8-pin      | 275W            | $649.99      |

Despite my current state, the health of AMD’s new R9 Fury graphics card appears to be quite good. The Fury is based on the same Fiji GPU as the Radeon R9 Fury X, and it has the same prodigious memory subsystem powered by HBM, the stacked DRAM solution that’s likely the future of graphics memory. That means the Fury has the same ridiculous 512GB/s of memory bandwidth as its elder brother. The only real cut-downs come at the chip level. The Fiji GPU on Fury has had eight of its 64 GCN compute units deactivated, taking it down to “only” 3584 stream processors and 224 texels per clock of filtering power. The Fury’s only other concession to being second in the lineup is a 1000MHz peak clock speed, 50MHz shy of the big dawg’s.

At the end of the day, the Fury still has the second-most-powerful shader array in a consumer GPU, with 7.2 teraflops of single-precision arithmetic power on tap, and it has roughly 50% more memory bandwidth than a GeForce Titan X.
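
For the curious, here's a quick back-of-the-envelope sketch (my own arithmetic, not anything from AMD's materials) showing how those headline numbers fall out of the specs in the table above:

```python
# Rough arithmetic for Fiji's headline specs. The figures come from the
# spec table above; the formulas are the standard ones for GCN parts.

def fp32_tflops(stream_processors, clock_mhz):
    # Each GCN stream processor can retire one fused multiply-add
    # (two FLOPs) per clock.
    return stream_processors * 2 * clock_mhz / 1e6

def bandwidth_gbs(bus_width_bits, transfer_rate_gts):
    # Peak bandwidth = bus width in bytes x transfers per second.
    return bus_width_bits / 8 * transfer_rate_gts

print(fp32_tflops(3584, 1000))   # R9 Fury:   ~7.2 TFLOPS
print(fp32_tflops(4096, 1050))   # R9 Fury X: ~8.6 TFLOPS
print(bandwidth_gbs(4096, 1.0))  # 4096-bit HBM at 1 GT/s: 512.0 GB/s
```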

The card we have on hand to test is Asus’ Strix rendition of the R9 Fury. Although all Fury X cards are supposed to be the same, AMD has given board makers the go-ahead to customize the non-X Fury as they see fit. Asus has taken that ball and run with it, slapping on its brand-spanking-new DirectCU III cooler. This beast is huge and heavy, with a pair of extra-thick heatpipes snaking through its array of cooling fins. The cooler is also one of the tallest and longest we’ve seen; it protrudes about two inches above the top of the PCIe slot cover, and the card is about 11.75″ long.

Like a number of aftermarket cards from the best manufacturers these days, the Strix R9 Fury’s fans do not spin until the GPU reaches a certain temperature. Generally, that means the fans stay completely still during everyday operation on the Windows desktop, which is excellent.

The Strix is premium in other ways, too many for me to retain in my semi-medicated state. I do recall something about Asus including a year-long license for XSplit Premium, so you can stream your Fury-powered exploits to the world. There’s also an “OC mode” that grants an extra 20MHz of GPU clock frequency at the flip of a switch. Oh, and I think those tentacle hallucinations may have been prompted in part by the throbbing light show under the Strix logo on the top of the cooler.

Asus expects the Strix R9 Fury to cost $579.99 at online retailers, a little more than AMD’s suggested base price for Fury cards. That sticker undercuts any GM200-based GeForce card, like the GTX 980 Ti and Titan X, and it’s roughly 50 bucks more expensive than hot-clocked GTX 980 cards based on the smaller GM204 GPU.

The Radeon R9 300 series, too

The R9 Fury isn’t the only new Radeon getting the treatment in Damage Labs. I’ve finally gotten my hands on a pair of R9 300-series cards and have a full set of results for you on the following pages.

The R9 390 and 390X are refreshed versions of the R9 290 and 290X before them. They’re based on the same Hawaii GPU, but AMD has juiced them up a bit with a series of tweaks. First, GPU clock speeds are up by 50MHz on both cards, yielding a bit more goodness. Memory clocks are up even more than that, from 5 GT/s to 6 GT/s, thanks to the availability of newer and better GDDR5 chips. As a result, memory bandwidth jumps from 320 to 384 GB/s, also putting these cards well ahead of the Titan X and anything else from the green team in terms of raw throughput. Furthermore, all R9 390 and 390X cards ship with a thunderous 8GB of GDDR5 memory onboard, just to remove any doubt.

Finally, AMD says it has conducted a “complete re-write” of the PowerTune algorithm that manages power consumption on these cards. Absolute peak power draw doesn’t change, but the company expects lower power use when running a game than on the older Hawaii-based cards.

Nope, this isn’t the Fury card I just showed you. It’s the Strix R9 390X, and Asus has equipped it with the same incredibly beefy DirectCU III cooler. This card is presently selling for $450 at Newegg, so it undercuts the GeForce GTX 980 while offering substantially higher memory bandwidth and double the memory capacity.

Meanwhile, at $330, this handsome XFX R9 390 card has the GeForce GTX 970 firmly in its sights. This puppy continues a long tradition of great-looking cards from XFX. Let’s see how it stacks up.

Test notes

You’ll see two different driver revisions listed for most of the cards below. That’s because we’ve carried over many of these results from our Fury X review, but we decided to re-test a couple of games that got recent, performance-impacting updates: Project Cars and The Witcher 3. Both games were tested with newer drivers from AMD and Nvidia that include specific tweaks for those games.

The Radeon driver situation is more complex. The Catalyst 15.15 drivers we used to test the R9 390, 390X, and Fury X wouldn’t install on the R9 290 and 295 X2, so we had to use older 15.4 and 15.6 drivers for those cards. Also, AMD dropped a new driver on us at the eleventh hour, Catalyst 15.7, that is actually a little fresher than the Cat 15.15 drivers used for the 390/X and Fury X. (Cat 15.7’s internal AMD revision number is 15.20.) We weren’t able to re-test all of the Radeons with the new driver, but we did test the R9 Fury with it.

I know some of you folks with OCD are having eye twitches right now, but if you’ll look at our results, I think you’ll find that the driver differences don’t add up to much in the grand scheme, especially since we re-tested the two newest games that have been the subject of recent optimizations.

Also, because some of you expressed a desire to see more testing at lower resolutions, we tested both The Witcher 3 and Project Cars at 2560×1440. Doing so made sense because we had trouble getting smooth, playable performance from all the cards in 4K, anyhow.

Our testing methods

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
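
For those curious about the mechanics, the filtering step amounts to something like this (a minimal sketch of a three-frame moving average, not the exact script we use):

```python
def smooth_frame_times(frame_times_ms, window=3):
    # Simple trailing moving average over per-frame render times, the
    # same basic idea as the three-frame filter described above.
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A lone 45-ms spike gets spread across three samples, roughly the way the
# three-frame submission queue in Direct3D would spread its impact on
# frame delivery.
print(smooth_frame_times([16.7, 16.7, 45.0, 16.7, 16.7]))
```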

We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Processor       | Core i7-5960X                                                          |
| Motherboard     | Gigabyte X99-UD5 WiFi                                                  |
| Chipset         | Intel X99                                                              |
| Memory size     | 16GB (4 DIMMs)                                                         |
| Memory type     | Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s                          |
| Memory timings  | 15-15-15-36 2T                                                         |
| Chipset drivers | INF update 10.0.20.0, Rapid Storage Technology Enterprise 13.1.0.1058  |
| Audio           | Integrated X99/ALC898 with Realtek 6.0.1.7246 drivers                  |
| Hard drive      | Kingston SSDNow 310 960GB SATA                                         |
| Power supply    | Corsair AX850                                                          |
| OS              | Windows 8.1 Pro                                                        |

|                            | Driver revision          | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|----------------------------|--------------------------|---------------------------|-----------------------|--------------------|------------------|
| Asus Radeon R9 290X        | Catalyst 15.4/15.6 betas | —                         | 1050                  | 1350               | 4096             |
| Radeon R9 295 X2           | Catalyst 15.4/15.6 betas | —                         | 1018                  | 1250               | 8192             |
| XFX Radeon R9 390          | Catalyst 15.15 beta      | —                         | 1015                  | 1500               | 8192             |
| Asus Strix R9 390X         | Catalyst 15.15 beta      | —                         | 1070                  | 1500               | 8192             |
| Asus Strix R9 Fury         | Catalyst 15.7 beta       | —                         | 1000                  | 500                | 4096             |
| Radeon R9 Fury X           | Catalyst 15.15           | —                         | 1050                  | 500                | 4096             |
| GeForce GTX 780 Ti         | GeForce 352.90/353.30    | 876                       | 928                   | 1750               | 3072             |
| Asus Strix GTX 970         | GeForce 353.30           | 1114                      | 1253                  | 1753               | 4096             |
| Gigabyte GTX 980 G1 Gaming | GeForce 352.90/353.30    | 1228                      | 1329                  | 1753               | 4096             |
| GeForce GTX 980 Ti         | GeForce 352.90/353.30    | 1002                      | 1076                  | 1753               | 6144             |
| GeForce Titan X            | GeForce 352.90/353.30    | 1002                      | 1076                  | 1753               | 12288            |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Project Cars

Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained. In fact, that’s pretty much what I did in order to test these graphics cards.


| Frame time (ms) | FPS rate |
|-----------------|----------|
| 8.3             | 120      |
| 16.7            | 60       |
| 20              | 50       |
| 25              | 40       |
| 33.3            | 30       |
| 50              | 20       |

Click the buttons above to cycle through the plots. You’ll see frame times from one of the three test runs we conducted for each card. Notice that PC graphics cards don’t always produce smoothly flowing progressions of succeeding frames of animation, as the term “frames per second” would seem to suggest. Instead, the frame time distribution is a hairy, fickle beast that may vary widely. That’s why we capture rendering times for every frame of animation—so we can better understand the experience offered by each solution.

If you click through the plots above, you’ll probably gather that this game generally runs better on GeForce cards than on Radeons, for whatever reason. The plots from the Nvidia cards are generally smoother, with lower frame times and more frames produced overall. However, you’ll also notice that the Radeons run this game quite well, with almost no frame times stretching beyond about 33 milliseconds—so no single frame is slower than 30 FPS, basically.

The FPS averages and our more helpful 99th percentile frame time metric are mostly the inverse of one another here. When these two metrics align like this, that’s generally an indicator that we’re getting smooth, consistent frame times out of the cards. The one exception here is the Radeon R9 295 X2. Dual-GPU cards are a bit weird, performance-wise, and in this case, the 295 X2 simply has a pronounced slowdown in one part of the test run that contributes to its higher frame times at the 99th percentile. I noticed some visual artifacts on the 295 X2 during testing, as well.
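
For anyone new to the 99th-percentile metric, the computation is roughly this simple (a sketch, not our exact analysis scripts):

```python
def percentile_frame_time(frame_times_ms, pct=99):
    # Sort the per-frame render times and take the value that pct percent
    # of frames come in at or under.
    ordered = sorted(frame_times_ms)
    index = max(int(len(ordered) * pct / 100) - 1, 0)
    return ordered[index]

# Hypothetical captures: both runs average roughly 16-17 ms per frame, but
# the occasional 50-ms hitch in the second one barely moves the FPS average
# while it dominates the 99th percentile.
steady = [16.7] * 100
hitchy = [15.0] * 97 + [50.0] * 3
print(percentile_frame_time(steady))  # 16.7
print(percentile_frame_time(hitchy))  # 50.0
```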

We can understand in-game animation fluidity even better by looking at the entire “tail” of the frame time distribution for each card, which illustrates what happens with the most difficult frames.


The Radeons’ curves aren’t quite as low and flat as the GeForces’, but they’re largely excellent anyhow, with a peak under 30 milliseconds.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
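
In code terms, the "badness" accounting works out to something like the following (my own sketch; assume the real scripts handle the details differently):

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    # Sum the portion of each frame time that exceeds the cutoff, so a
    # single 55-ms frame adds 5 ms to the 50-ms bucket. (A sketch; the
    # exact accounting behind the graphs may differ slightly.)
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

capture_ms = [16.0, 17.0, 35.0, 16.5, 55.0, 16.8]
for cutoff in (50.0, 33.3, 16.7):
    print(cutoff, time_spent_beyond(capture_ms, cutoff))
```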

In this case, generally good performance means all of the cards ace the first two thresholds, without a single frame beyond the mark. The Fury spends a little more time above 16.7 ms—or 60 FPS—than the Fury X, so it’s not quite as glassy smooth, but it’s close.

Again, this game just runs better on the GeForce cards, but not every game is that way. Let’s move on.

GTA V

Forgive me for the massive number of screenshots below, but GTA V has a ton of image quality settings. I more or less cranked them all up in order to stress these high-end video cards. Truth be told, most or all of these cards can run GTA V quite fluidly at lower settings in 4K—and it still looks quite nice. You don’t need a $500+ graphics card to get solid performance from this game in 4K, not unless you push all the quality sliders to the right.


GTA V is a more even playing field between the GeForce and Radeon camps, and it’s another game where we see very nice behavior in terms of smooth, consistently quick frame delivery. The R9 Fury mixes it up with the GeForce GTX 980 here, tying in the FPS average and trailing by less than two milliseconds at the 99th percentile. The R9 390X isn’t far behind.

The most intriguing match-up here may be the Radeon R9 390 versus the GeForce GTX 970. At roughly the same price, the R9 390 has an edge in both of our general performance metrics.



Since all of the cards perform well, there’s not much drama in our “badness” metric. Very little time is spent beyond the 33-ms threshold, and none beyond 50 ms. At 16.7 ms, the R9 Fury almost exactly matches the GTX 980.

The Witcher 3

We tested with version 1.06 of The Witcher 3 this time around, at 2560×1440, and we switched to SSAO instead of HBAO+ for ambient occlusion.

These frame time plots have quite a few more spikes in them than we saw in the two previous games. On the Radeons, some of those spikes stretch to 50 or 60 ms, and those slowdowns are easy to notice while playing. The Radeons aren’t alone in this regard, though. The Kepler-based GeForce GTX 780 Ti also suffers later in the test run, in the portion where we enter the woods and encounter lots of dense vegetation.


The Fury trails the GTX 980 by just a few frames per second in the FPS average, but those slowdowns cost it in terms of the 99th-percentile frame time. There, the Fury drops behind the GTX 970.


The curves for the Maxwell-based GeForces are lower and flatter than those for other GPUs.


Our “time beyond 33 ms” results pretty much tell the story. The Maxwell-based GeForces run this game smoothly, while the Radeons and the GTX 780 Ti struggle.

Far Cry 4


The FPS average and 99th percentile scores don’t agree here. Click through the frame time plots above, and you’ll see why: intermittent, pronounced spikes from the R9 290-series and Fury cards. Conspicuously, though, the R9 390 and 390X don’t have this problem. My crackpot insta-theory is that this may be a case where the 8GB of RAM on the 390/X cards pays off. The rest of the Radeons have 4GB per GPU, and they all suffer occasional slowdowns.

That said, even if this is a memory issue, it could probably be managed in driver software. None of the GeForces have this problem, not even the GTX 780 Ti with 3GB.



At the 50-ms threshold, our “badness” metric picks up the frame time spikes on the Furies and the 290X. You can feel these slowdowns as they happen, interrupting the flow of animation.

Alien: Isolation




Every single metric points to generally good performance from all of the cards here. Without any spikes or slowdowns to worry over, we can simply say that the Fury essentially ties the GTX 980, while the R9 390X trails them both. Meanwhile, the Radeon R9 390 slightly outperforms the GeForce GTX 970.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.




This automated test has a bit of a hitch in it, just as it starts, that’s most pronounced on the GeForce cards. The Mantle-powered Radeons avoid the worst of that problem. Beyond that strange little issue, though, the story is much the same as what we saw on the last page. The Fury matches up closely with the GTX 980, and the R9 390 outperforms the GTX 970.

Battlefield 4

Initially, I tested BF4 on the Radeons using the Mantle API, since it was available. Oddly enough, the Fury X’s performance was kind of lackluster with Mantle, so I tried switching over to Direct3D for that card. Doing so boosted performance from about 32 FPS to 40 FPS. The results below for the Fury X come from D3D, as do the results for the Fury.




Man, here’s another close match-up between the GTX 980 and the R9 Fury. The surprise this time around: the GTX 970 outperforms the R9 390 for once.

Crysis 3


Notice those big, 100-ms-plus spikes in the frame times for the R9 390/X and Fury cards. They’re big, and they’re hard to ignore. Let’s see what they do to our metrics.



Although the Radeon cards’ FPS averages look good, they suffer in our other metrics as a result of those pronounced slowdowns. The hiccups happen at a specific point in the test run, when I blow up a bunch of bad guys at once. Many of the Radeon cards suffer at that point, and the effect is not subtle while playing. The GeForces deliver fluid animation in this same spot, which makes for quite a contrast.

Notice something, though, in the beyond-50-ms results above: The R9 290X and 295 X2 suffer the least. You can see it in the raw frame time plots, too. My best guess about why is that AMD must have introduced some kind of regression in its newer drivers that exacerbates this problem. Perhaps they can fix it going forward.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

I’m a little surprised to see the R9 Fury system drawing 50W less than the same test rig equipped with a Fury X. The R9 Fury doesn’t require any more power under load than a GM200-based GTX 980 Ti. Then again, the Fury’s performance is generally closer to the GeForce GTX 980’s, and with a GTX 980 installed, our test system only draws 282W under load.

The story isn’t so good for the R9 390 and 390X. With the Asus Strix 390X, our test system pulls a whopping 449W while running Crysis 3. Yes, the 390X is clearly faster than the 290X before it, but it also requires an additional 82W to make it happen. And these Hawaii-based cards weren’t terribly efficient to start. The gap between the GTX 970 and the R9 390 under load is 121W. Yikes.

Noise levels and GPU temperatures

Our power and noise results are all-new for this review. I’ve managed to lower the noise floor on our GPU test rig by about 6 dBA by swapping in a quiet, PWM-controlled Cooler Master fan on the CPU cooler, so I re-tested everything in the past week. Our new setup lets us detect finer differences than we could before.

One unexpected side effect of our quieter test environment is that our sound level meter now clearly picks up on the pump whine from our Fury X. Interesting. I think the problem is more about the pitch of the noise than its volume, but we can put a dBA number to it now.

Asus’ DirectCU III cooler is incredibly effective at keeping the Strix R9 Fury cool without making much noise. Heck, it’s nearly as quiet as the Fury X’s water cooler. Asus hasn’t tuned the Strix R9 Fury to keep GPU temperatures as unusually low as on the Fury X, but 72°C is still pretty cool as these things go.

That same big DirectCU III cooler is pretty effective aboard the R9 390X, as well. Even when asked to move quite a bit more heat from this power-hungry card, it does the job while producing less noise than the Titan X. XFX’s cooler for the R9 390 is also quietly effective, though it can’t match the Asus Strix GTX 970, far and away the quietest card of the bunch.

Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this layout work. These overall numbers are produced using a geometric mean of the results from all of the games tested. The use of a geomean should limit the impact of outliers on the overall score, but we’ve also excluded Project Cars since those results are dramatically different enough to skew the average.
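
Mechanically, assembling that overall index looks something like the following (hypothetical per-game numbers, shown only to illustrate the geomean and the frame-time-to-FPS conversion):

```python
from math import prod

def geomean(values):
    # Geometric mean: the nth root of the product of n values. Outliers
    # tug on it far less than they would on a plain arithmetic mean.
    return prod(values) ** (1.0 / len(values))

def frame_time_to_fps(ms):
    # Convert a 99th-percentile frame time into an FPS-style number so
    # that higher is better on both scatter plots.
    return 1000.0 / ms

# Hypothetical 99th-percentile frame times for one card across seven games.
per_game_99th_ms = [22.1, 18.4, 30.2, 17.9, 16.9, 25.0, 28.3]
overall_99th_fps = geomean([frame_time_to_fps(t) for t in per_game_99th_ms])
print(round(overall_99th_fps, 1))
```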

The R9 Fury outperforms the GeForce GTX 980 by 1.1 frames per second overall. That’s essentially a tie, folks. Since we’re talking about an FPS average here, this result is more about potential than it is about delivered performance. In that respect, these two cards are evenly matched. Most folks would stop there and draw conclusions, but we can go deeper by using a better metric of smooth animation, the 99th-percentile frame time, as we’ve demonstrated on the preceding pages. When we consider this method of measuring performance, the picture changes:

We saw substantially smoother gaming across our suite of eight test scenarios out of the GeForce GTX 980 than we did from the Radeon R9 Fury. This outcome will be nothing new for those folks who have been paying attention. Nvidia has led in this department for some time now. I continue to believe this gap is probably not etched into the silicon of today’s Radeons. Most likely, AMD could improve its performance with a round of targeted driver software optimizations focused on consistent frame delivery. Will it happen? Tough to say, but hope springs eternal.

The rest of the pieces are there for the R9 Fury to be a reasonably compelling product. The Fiji GPU continues to impress with its power efficiency compared to the Hawaii chip before it. Although the Fury X’s water cooler is nice, it apparently isn’t strictly necessary. Asus has engineered a startlingly effective air cooler for the Strix R9 Fury; it’s quieter than any of the GeForce cards we tested with Nvidia’s reference cooler.

Also, as you may have noticed, the cuts AMD has made to the Fiji GPU aboard the R9 Fury just don’t hurt very much compared to the Fury X. The gap in performance is minimal and, as I’ve mentioned, the Fury still has a 7.2-teraflop shader array and 512 GB/s of memory bandwidth lying in wait. Games don’t yet seem to be taking full advantage of the Fiji GPU’s strengths. Should games begin using Fiji’s prodigious shader power and memory bandwidth to good effect, the Fury seems well positioned to benefit from that shift. Unless you really want that water cooler, I can’t see paying the premium for the Fury X.

So the R9 Fury has its virtues. The difficult reality at present, though, is that this card is based on a bigger GPU with a larger appetite for power than the competing GeForce GTX 980. The Fury has more than double the memory bandwidth via HBM. It costs about 50 bucks more than the 980. Yet the Fury isn’t much faster across our suite of games, in terms of FPS averages, than the GTX 980—and it has lower delivered performance when you look at advanced metrics.

We should say a word about the R9 390 and 390X, as well. The XFX R9 390 we tested is something of a bright spot for AMD in this whole contest. This card’s price currently undercuts that of the Asus Strix GTX 970, yet it offers very similar performance. The value proposition is solid in those terms. The downside is that you’re looking at an additional 120W or so of system power with the R9 390 installed instead of the GTX 970. That translates into more noise on the decibel meter and more heat in the surrounding PC case—and, heck, in the surrounding room, too. Buyers will have to decide how much that difference in power, heat, and noise matters to them. We can say that the XFX card’s cooler is pretty quiet, even if it’s not as wondrous as the Strix GTX 970’s.

The R9 390X, meanwhile, offers somewhat higher performance at the cost of, well, more money—but also substantially higher power consumption. Asus has done heroic work creating a cooler that will keep this thing relatively quiet while gaming, but it’s hard to imagine a smart PC builder deciding to opt for the 390X’s additional 167W to 199W of system power draw compared to a GeForce GTX 970 or 980. Perhaps the right user, who wants maximum performance across a triple-4K display setup, would find the 390X’s 8GB of memory compelling.


Comments closed
    • tbone8ty
    • 4 years ago

    “Interestingly, Tom’s Hardware managed to improve the power efficiency of the AMD Radeon R9 Fury X by 50% just by reducing the power limit of the card. Surprisingly the only tradeoff was a 10% reduction in the average frames per second. Average FPS dropped from 39.7 to 36 while power consumption dropped by nearly 100W from 267W to 170W. It’s quite clear that AMD’s Fiji GPU and the GCN architecture in general was designed to run at slightly lower frequencies and as a result would gain a massive chunk of power efficiency through the slightest clock speed reductions. So realistically speaking it should prove to be quite easy for AMD to deliver on its R9 Nano promises.”

      • HisDivineOrder
      • 4 years ago

      I always felt like the GCN architecture as far back as the 7970GE and continuing with the R9 290 was being pushed harder than it should have been to eke out just a hair more performance to try and keep up with/beat nVidia’s best offering of the time. Look at the R9 290X in particular that was obviously combined with a cooler not up to the task of handling its excess heat. It seemed obvious that it was meant to run cooler with lower clockspeeds and someone late in the game decided it needed to be ramped up even though the cooler chosen long ago wouldn’t keep up with those plans. Plans were plans.

      If AMD could have been happy being second place and not challenging it (ie., the 4xxx and 5xxx series) with performance but performance per dollar, they could have suffered a lot less from leakage/waste from going too hard and running cards too hot. That said, they don’t have that option, do they? They aren’t pricing their $650 card to be inferior to the other $650 card, are they? They’re pricing it as though it were equal to it or better.

      Except it’s not. I think AMD would have been better off releasing the R9 Fury X that Toms Hardware made there with a lower price point and no water cooler. It would not have been top end, but it would have been a great competitor for the $500 980 at $550. Slot the R9 Fury in at $450 and you’ve got the 980 non-Ti surrounded by better value.

      But something about the R9 Fury X and Fury non-X are too expensive to do that. Probably the HBM. Too bad. It’s wasted on this card because it’s clearly not helping, but I’m sure AMD used it mostly as a testbed for hammering out HBM for next year’s offerings.

        • Freon
        • 4 years ago

        Could they really make money selling a Hawaii chip board for Tonga prices by saving a few bucks on the heatsink and VRM to account for the lower power? It’d still be a large die to print and a 512bit bus board…

    • Krogoth
    • 4 years ago

    970 is the only game changing GPU so far.

    The other pieces of silicon are either underwhelming or just overpriced for their performance. It is a sad state of affairs when Kepler and Tahiti-based parts are still viable for majority of games out there at 2 to 4Megapixel gaming unless you want loads of AA/AF or trying to run a poor coded POS (see AC: Unity and Batman: Arkham Knight)

      • Melvar
      • 4 years ago

      Loads of AF?

      • JustAnEngineer
      • 4 years ago

      $270 -30MIR Radeon R9-290 is a much better value than $310 GeForce GTX970. What makes you believe that GeForce GTX970 is “game changing”?

        • Ninjitsu
        • 4 years ago

        When it came out, he meant.

          • JustAnEngineer
          • 4 years ago

          Radeon R9-290 launched on November 5, 2013. GeForce GTX970 launched 10½ months later, performs the same and costs more than the Radeon. What makes that “game changing?”

            • f0d
            • 4 years ago

            actually the 970 performs in between the 290 and 290x
            https://techreport.com/review/27067/nvidia-geforce-gtx-980-and-970-graphics-cards-reviewed/13

            and it was way cheaper than the amd cards when it came out

            maybe it was game changing because it made amd massively reduce their prices? 290x was around $550 before the 970 came out

            • JustAnEngineer
            • 4 years ago

            With current drivers and games and currently-available clock speeds, Radeon R9-290 performance is nearly indistinguishable from GeForce GTX970. The R9-290 (without the “X”) has always been a better value than the full Radeon R9-290X.

            • f0d
            • 4 years ago

            still doesnt stop the 970 being game changing making amd massively reduce their prices

            if it wasnt for the 970 you would be paying high prices for those gpu’s still

            • JustAnEngineer
            • 4 years ago

            The huge price run-up in early 2014 for AMD’s Hawaii GPUs was driven by insatiable demand from *coin miners and the excess profits were made by the e-tailers like Newegg rather than any markups from AMD.

            • f0d
            • 4 years ago

            none of that had anything to do with the low price of the 970 when it was released, coin mining with gpu’s was long finished by then

            there was no markups in the price when the 970 released – that was amd’s rrp for the gpu, the markups were when they sold ABOVE amd’s price in late 2013

        • geekl33tgamer
        • 4 years ago

        It’s at a price point here that’s right on that sweet spot between price, performance and efficiency. You can buy them from £240 – Cheaper than a 390/X.

        Icing on the cake is that most 970’s overclock to well past 1450 Mhz with no effort or voltage bump. At that speed, they are faster than stock 980’s by a few FPS, and a long way clear of anything AMD have at that price. Heck, it’s on the toes of the Fury at £500 when aggressively overclocked to 1500+ Mhz (Gigabyte G1 and EVGA SCC cards easily sail past 1500 Mhz for about £290).

        It’s a great card at that price point for the here and now, so why wouldn’t you look at it?

    • JJAP
    • 4 years ago

    The 99th performance/price of the 390X is way better than the Fury. A lot of the $100+ difference is going to better noise, heat, and TDP, no?

    So, TR prefers the fury over the fury x. TR’s chart shows poor p/p for the fury over the 390x. HardOCP says ignore the fury, get 390x (TR prefers better heat/power draw?). TR says ignore the power hungry 390x and get the 970/390…

    I can only conclude that the 390 is the highest AMD’s card you should consider! 😎

    [url<]http://www.hardocp.com/article/2015/07/13/msi_r9_390x_gaming_vs_asus_strix_fury_review/8[/url<]

      • JustAnEngineer
      • 4 years ago

      The Radeon R9-290 is the very clear leader in price/performance. The $270-30MIR Radeon R9-290 4GB performs on par with the $310 GeForce GTX970 for $40 to $70 less money.

    • Damage
    • 4 years ago

    FYI, I’ve made a tweak to the two value scatter plots on the last page of this review, as explained here:

    https://techreport.com/blog/28624/reconsidering-the-overall-index-in-our-radeon-r9-fury-review

    Scott

      • chuckula
      • 4 years ago

      That is helpful in clearing the air. However, there’s a caveat: Nothing is ever going to be good enough until you copy-n-paste AMD’s own pre-launch press kit and then spend 25,000 words on conspiracy theories about how AMD’s own press kit is a huge, evil anti-AMD lie and that their products are really better than what AMD says they are.

      • Meadows
      • 4 years ago

      Good one, you were right about the difference being miniscule.

      The power usage graph is hilarious. NVidia cards stop where AMD ones only just get started.

        • NoOne ButMe
        • 4 years ago

        I will think it funny as in the next months that difference will grow wider. In a positive for AMD! 😉

          • Meadows
          • 4 years ago

          True, but the performance of the cut-down Fiji models won’t get any better.

            • NoOne ButMe
            • 4 years ago

            Huh? I think you misinterpreted my statement.

    • maxxcool
    • 4 years ago

    Thank you Damage!

    • Jigar
    • 4 years ago

    I am confused, need help in understanding this review –

    Fury was basically competing with Gigabyte GTX 980 G1 Gaming (Intro pricing was $629 but now its priced at $520)

    Anyway, here is a question that i have, why do the power consumption figurs of GTX 980 G1 Gaming don’t match with the previous review.

    In the previous Review of Gigabyte GTX 980 G1 card is consuming – 333 Watts, but in this review it shows 282 Watts (BTW, Fury consumes – 330 watts) –
    Link – https://techreport.com/review/27176/geforce-gtx-980-cards-from-gigabyte-and-zotac-reviewed/4

    Also, did the new driver reduce the power consumption of GTX 980 substantially ? Or was the review showing performance figure of Gigabyte 980 G1 but power consumption of reference GTX 980 ???

      • chuckula
      • 4 years ago

      The power consumption levels are not isolated to only measuring the GPUs but measure the power draw of the entire system. You’ll note that the R9-390, which is just a rebadge of the 290 and is overclocked so the GPU itself actually draws more power, also scores a lower power consumption level in this review (371 watts vs. 400 watts for the R9-290 in the earlier review).

      • Ninjitsu
      • 4 years ago

      That’s system power consumption, and the two systems were different. That review dates to October 2014, and yes, Nvidia has released drivers that modified the power regulation algorithms since.

        • Jigar
        • 4 years ago

        50 Watts reduction is quite an achievement, amazing work actually by Nvidia driver team.

          • Ninjitsu
          • 4 years ago

          Again, may not just be the GPU power coming down – this is a different system with DDR4…I managed to save 40w idle power going from a 9600GT/4x1GB DDR2/G35 chipset (mATX board) to a GTX560/P43 (ATX)/2x4GB DDR3.

          Going from a CRT to an LCD helped cut another 30-40w while the computer was in use.

          Oh and – primary drive being an SSD helps too – HDDs go to sleep.

          EDIT: I have 3 SSDs and a HDD vs just a single HDD before on the older motherboard.

      • Damage
      • 4 years ago

      One factor in the 980 G1’s change in power draw is probably the switch from running Crysis 3 at 2560×1440 with one set of IQ settings to 3840×2160 with another set. That will utilize the GPU differently and, perhaps more importantly, slow the frame rate enough to leave the CPU less busy. Since this is system-level power draw, everything is a factor.

      Edit: Oh, the system change is a huge factor, too.

        • Jigar
        • 4 years ago

        I was completely stumped, thanks for clarifying that Damage.

    • Krogoth
    • 4 years ago

    Wow

    Such underwhelming performance

    Little change in the pricing scene

    2015 is a poor year for GPUs

      • bfar
      • 4 years ago

      If you pay for your hardware in Euros, prices are up roughly 20% on last gen. It’s a Greek tragedy 🙁

        • Meadows
        • 4 years ago

        Has it escaped your notice that he’s impressed?

    • Shobai
    • 4 years ago

    Sorry, am a bit late to the party. Thoroughly enjoyed the review, Damage. One thing I spotted, though, on the power consumption page;

    "I'm a little surprised to see the R9 Fury system drawing 50W less than the same test rig equipped with a Fury X."

    I think it might only be 40W difference under load, with the Fury X @ 370W and Fury @ 330W.

    [edit: didn't get the quote tags right, not to worry]

    • tbone8ty
    • 4 years ago

    so when is AMD including a “free”sync monitor with these Fury’s cards? that would be frametastic!

    • shank15217
    • 4 years ago

    Despite all this I still wanted to buy into a fury or a fury x, too bad nothing is actually in stock, so I went with 980ti.

      • Prestige Worldwide
      • 4 years ago

      You’re much better off with the 980ti.

    • remon
    • 4 years ago

    So, how about excluding Project Cars from the FPS/$ graphs? Like how you did with Dirt Showdown in the GTX 660 review?

    https://techreport.com/review/23527/review-nvidia-geforce-gtx-660-graphics-card/11

    "(We chose to exclude DiRT Showdown, since the results skewed the average pretty badly and since AMD worked very closely with the developers on the lighting path tested.)"

      • Ninjitsu
      • 4 years ago

      I haven’t been able to find evidence that Nvidia has done anything special with the PCars engine. It’s not a TWIMTBP title, nor a GameWorks title.

      And…
      "We've added the latest entry in the DiRT series to our test suite at the suggestion of AMD, who has been working with Codemasters for years on optimizations for Eyefinity and DirectX 11. Although Showdown is based on the same game engine as its predecessors, it adds an advanced lighting path that uses DirectCompute to allow fully dynamic lighting. In addition, the game has an optional global illumination feature that approximates the diffusion of light off of surfaces in the scene. We enabled the new lighting path, but global illumination is a little too intensive for at least some of these cards. ... Uh oh. The latency plots for all of the GeForces are squiggly messes. None of them fare well with this game's advanced lighting path. ..."

      https://techreport.com/review/23527/review-nvidia-geforce-gtx-660-graphics-card/8

      Ironically...

      "Based on their in-house EGO engine, GRID Autosport includes a DirectCompute based advanced lighting system in its highest quality settings, which incurs a significant performance penalty on lower-end cards but does a good job of emulating more realistic lighting within the game world. In our R9 Fury X review, we pointed out how AMD is GPU [sic, probably meant CPU] limited in this game below 4K, and while the R9 Fury's lower performance essentially mitigates that to a certain extent, it doesn't change the fact that AMD is still CPU limited here. The end result is that at 4K the R9 Fury is only 2% ahead of the GTX 980 – less than it needs to be to justify the price premium – and at 1440p it's fully CPU-limited and trailing the GTX 980 by 14%. On an absolute basis AMD isn't faring too poorly here, but AMD will need to continue dealing with and resolving CPU bottlenecks on DX11 titles if they want the R9 Fury to stay ahead of NVIDIA, as DX11 games are not going away quite yet."

      http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/13

      • Damage
      • 4 years ago

      I looked at excluding Project Cars and The Witcher 3 from the overall to see what it did, but removing the games from the mix didn’t have much effect. The geomean we use to compute the overall already avoids weighing outliers too heavily. In this case, it worked. Back with the GTX 660 review, it didn’t, so I had to filter. Different circumstances, so I acted differently. I was just trying to be fair in both cases, and I was open about what I did and why.

    • smog
    • 4 years ago

    I wish I hadn’t cheated on my msi 480 from 2010 for a skanky 770 a year or so ago. It was a poorly judged sidegrade and the 770 shook my faith in nvidia for a bit with its wind tunnel racket and shit performance. It was factory oc and I had to underclock the gimp on afterburner to not have it randomly turn my screen black after 10 minutes gaming too, with a solid 750w psu (which ran a stellar 480 for years)
    I need a new msi 480 style card to last me an age but I don’t feel now is the right time.
    The 980 ti is tempting and the 1440p 144mhz solution with it but why bother with so much incomming shiz right now?
    I had hoped for the Fury X to rape benchmarks. I really did want a reason to see if the grass was greener.

      • Ninjitsu
      • 4 years ago

      -1 for using “rape” that frivolously.

        • snook
        • 4 years ago

        -3 for SJW

        • DoomGuy64
        • 4 years ago

        [url<]https://www.youtube.com/watch?v=k391eMiMBRE[/url<]

      • f0d
      • 4 years ago

      "I wish I hadn't cheated on my msi 480 from 2010 for a skanky 770 a year or so ago. It was a poorly judged sidegrade"

      in no way was that a sidegrade as the 770 would kick the crap out of a 480 and maybe even 2 of them in sli

      "the 770 shook my faith in nvidia for a bit with its wind tunnel racket and **** performance."

      the performance of the 770 was good (see above) and all the ones i have heard were whisper quiet - sounds like you had a dodgy card, ever think about returning it as defective?

      "It was factory oc and I had to underclock the gimp on afterburner to not have it randomly turn my screen black after 10 minutes gaming too, with a solid 750w psu (which ran a stellar 480 for years)"

      definitely sounds more like you had a dodgy card than anything else if you had to underclock it - why didn't you return it as defective? its not like all 770's were like that

        • auxy
        • 4 years ago

        480 to 770 really isn’t that much of an upgrade, especially if this person had their 480 overclocked. GF100 was a powerful chip.

          • f0d
          • 4 years ago

          checking the reviews says otherwise

          the best i could get was a bunch of similar comparisons over at anadtechs bench (i cant find any direct comparisons on TR of the 570/480 and 770/680)

          770 vs 570
          570 was pretty much the same as 480, 570 had the same number of shaders as 480 but 570 had a little faster clock (732mhz vs 700mhz) and a little less memory bandwidth (152 GB/s vs 177 GB/s)
          http://www.anandtech.com/bench/product/829?vs=831

          and a slower 680 VS 570
          http://www.anandtech.com/bench/product/772?vs=831

          you can see the 770 destroys the 570, thats a MASSIVE difference

          heres TR's review of the 570 just to show that they are similar to 480's (570 wins some and 480 wins some)
          https://techreport.com/review/20088/nvidia-geforce-gtx-570-graphics-processor/8

          770 vs 590 (2X GF110 GPU's)
          http://www.anandtech.com/bench/product/829?vs=975

          very similar performance as TWO GF110 GPU's

          if thats not a MASSIVE performance difference then i dont know what is

          yes the GF100 was a powerful GPU but so was GT200 and G92 - that never stopped them from being massively slower then the next generation of GPU's

          yes you could have overclocked the gtx480 but the gtx770 was an even bigger overclocking champ

          • Waco
          • 4 years ago

          I guess 50-100% faster in damn near everything isn’t much of an upgrade for you.

            • auxy
            • 4 years ago

            Hey, Waco. (´・ω・`)

            You mad? (*´ω`*)

            • Waco
            • 4 years ago

            No, I just see you trolling and spewing out incorrect “facts” as usual.

    • TheSeekingOne
    • 4 years ago

    Looking at the Fury review on techpowerup, it seems that a wider selection of games makes the card look way better than it looks here.
    Even the FuryX there is beating the 980TI in most of the tested games at both 2K and 4K resolution.

      • chuckula
      • 4 years ago

      Really? I read their review and they didn’t provide frame time information for a single game.
      TechReport: 8 games reviewed.
      TechPowerUp: 0 games reviewed.

      Conclusion: TR wins and you are a shill.

        • f0d
        • 4 years ago

        i dont get how people care about fps so much as they do

        frame time information is way more important
        id rather 70 fps and good frame times than 150fps and frame times all over the place

        when everyone moves over to frame time from fps will be a glorious day for gpu benchmarking

          • anotherengineer
          • 4 years ago

          Well fps is still important. Especially to those running 120hz and 144hz 1920×1080 screens. My 120hz screen is actually 1680×1050.

          Now I would rather have 70 smooth fps or 150 varied, well on a 60hz screen I think it would look fairly close since you will still lose 10 frames @ 70fps and lose 90 frames @ 150fps.

          Now if you said 40 smooth frames over 60 erratically timed frames, on a 60hz screen, then yes for sure.

          Now TPU does not do frame latency testing, however one nice thing they do do is a large array of different resolutions. One nice thing about that, is sometimes you can tell whether a game is gpu or cpu limited, however it does not tell the whole story due to lack of frame latency data.

          However again, as the resolution drops, I would imagine the frame latency would also drop, making it less important for lower resolutions.

          Also I think we can all agree here that a quad i7-4790K @ 4.2ghz would generally perform better at most games than an i7-5960X @ 3 GHz

          With steam hardware survey and TR survey putting 1080p as the most common resolution, I think it would be great if Damage started including 1080p testing in his card reviews. A 4.2 ghz i7 Devil would probably give people a better idea what to expect vs. the Core i7-5960X 8 core also.

          At the end of the day, data from site to site is not an apples to apples comparison, and there really isn’t no simple way around it.

            • f0d
            • 4 years ago

            i have a samsung s27 950d
            http://www.samsung.com/sg/consumer/it/monitor/led-monitor/LS27A950DS/XS

            so yes i do understand that having high framerates do matter to people like us but i have seen how erratic frames at around 100fps looks even on a 120hz screen and its not as good as a smoother (as in not as much variance) framerate around 70fps

            i agree that different resolutions will show if a game is gpu or cpu limited but it doesnt show if the frame time variance is high or low which is the main thing

            we really need to dump the standard fps metric completely and just go with frame times (which also shows fps)

            • anotherengineer
            • 4 years ago

            ” but i have seen how erratic frames at around 100fps looks even on a 120hz screen and its not as good as a smoother (as in not as much variance) framerate around 70fps”

            Of course, because it’s at 120Hz refresh screen. You did not mention that in your OP. I said on a 60Hz screen the difference of your example would probably be negligible.

            Hence the importance for being specific.

            • f0d
            • 4 years ago

            but not knowing how much variance in framerates is the problem not what refresh your monitor runs at

            look at project cars on the 290 – it can vary anywhere from around 15ms to 30ms which is around 60-30fps which i find horrible as you would notice 30fps on any screen, then you look at the frametimes the nvidia cards are and its a clear night and day difference

            frame time variance (minimum to maximum) “inside the second” is what matters – not how many frames you get – no matter the refresh rate of your monitor

            • anotherengineer
            • 4 years ago

            Of course and I agree.

            I just don’t agree with your first statement (“70 smooth fps vs 150 non-smooth fps) if you were running on a 60hz monitor.

            That’s all

            • f0d
            • 4 years ago

            well i never said HOW varied it could be when i mentioned 150 varied
            what if the frame times varied between 5 and 50 milliseconds? which has happened (but not with these cards)
            [url<]https://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/9[/url<]

            • anotherengineer
            • 4 years ago

            I know, you weren’t being specific 😉

        • Meadows
        • 4 years ago

        You’re a tad fanatic there, aren’t you.

        I’ve checked their latest review as well, here’s my two pence. To me, the first one seems to be the most important.

        1. Alien: Isolation: TR’s review shows the Fury cards perform relatively poorly compared to the GTX 980. A cursory investigation reveals that TR used an 8-core intel processor at 3 GHz, while TechPowerUp instead opted for a 4-core intel processor at a much higher 4.2 GHz. The single-thread performance delta may be significant and it might have removed a lot of driver-side disadvantage for the Fury cards, resulting in a much wider gap between AMD and NVidia at the highest resolutions. No advantage is seen at the lower resolutions.

        2. Assassin’s Creed: Unity: the top AMD cards again perform relatively well in this title, but it’s worth noting that none of the cards could give a playable framerate above 1080p using 4x AA.

        3. Battlefield 4: I can’t tell what settings TechPowerUp used for this one, because all the cards tested worse than they did for TR despite the fact that TPU used no AA at all in their testing (and TR used 4x). I question the results altogether.

        4. Crysis 3: Same as above. TPU’s results are almost 40% worse than TR’s, despite using an identical resolution and AA setting. They haven’t disclosed the rest of the settings.

        5. Far Cry 4: results track closer here, but again there’s a wider delta between the 980 and the Fury than what TR measured, and I suspect it’s because of the same thing I wrote about on the first title.

        6. Witcher 3: extremely large difference between TR’s and TPU’s results. The 980 clearly won in TR’s testing, while the complete reverse is true for TPU. Again, however, TPU have not disclosed their game settings and their overall numbers are almost 40% lower when looking at the same 4K resolution. (Both sites tested with HairWorks turned off.)

        Conclusion 1: TechPowerUp are cheeky bastards for keeping their testing settings secret. This makes a portion of their results questionable at best.
        Conclusion 2: The CPU used for testing is highly suspect. TR might want to run a couple tests using a CPU more suited to gaming, rather than one made for small business servers and multithreading lunatics.

        Edit: if TR’s testing CPU does not in fact run at the factory 3 GHz, then they should note so.

          • JustAnEngineer
          • 4 years ago

          Meadows: "The CPU used for testing is highly suspect. TR might want to run a couple tests using a CPU more suited to gaming, rather than one made for small business servers and multithreading lunatics."

          https://techreport.com/review/28213/the-making-of-damagebox-2015

          Damage: "CPU: Intel Core i7-4790K — My favorite desktop processor right now is the Core i7-4790K, a desktop quad-core with eight threads and a modest 88W power envelope. The 4790K has a base clock of 4GHz and a Turbo peak of 4.4GHz, the highest clock speeds of any Haswell-derived processor on the market. As a K-series part, it's unlocked for easy overclocking, and we've taken ours to 4.7 and 4.8GHz without much trouble. Thanks to its high clock speeds, the 4790K is the fastest CPU you can buy for PC gaming. The Haswell-E-based Core i7-5960X isn't bad, but don't pony up the extra dollars for it expecting bragging rights. In real games, the 4790K outperforms it. Anyhow, that's the story of why I selected a quad-core processor for the new Damagebox. If years of testing desktop CPUs in practical scenarios has taught me anything, it's that one should never ignore Amdahl's Law. Per-thread performance matters more than you might think, and the Core i7-4790K is the current champ in that department. Choosing it for my build was only natural."

            • Den
            • 4 years ago

            https://techreport.com/review/26977/intel-core-i7-5960x-processor-reviewed/15

            Has Damage explained why he used a CPU much fewer people have that he already showed was inferior?

            • Damage
            • 4 years ago

            Please note the actual results in that link. The difference in gaming performance between the 5960X and 4790K is exceedingly small. The 5960X is a fantastic CPU for gaming with excellent per-thread performance. There is no significant downside of using a 5960X for gaming, even if it is not literally the very fastest CPU in every test.

            Furthermore, I use the X99 platform for GPU testing in part because of its unparalleled PCIe bandwidth. That factor may matter going forward as resolutions and refresh rates rise, especially since Radeons now use PCIe for multi-GPU transport. And if DX12 delivers as promised, we may soon find that having more than four cores on hand actually helps, too.

            Guys, seriously, there is no nefarious agenda at work here in the choices I’ve made.

            • anotherengineer
            • 4 years ago

            Totally agree with what you are saying Damage.

            I think the issue people have is that a lot of them own an i7-47++ whatever and can easily relate to how it would perform on their system vs. the 5960X.

            Doesn’t really matter to me since I’m using an old AMD 955BE. I just divide the fps by 2 to guess-timate what I could get lol

            Any thoughts on including 1920×1080 in the future??

            • Damage
            • 4 years ago

            For a Fury? Seriously, get a new monitor before buying a $550+ graphics card.

            I know testing at low resolution is a persistent request, but expanding what we do when GPU makers close the review windows to less than 24 hours just isn’t feasible.

            Maybe I can hire someone to do that testing later? I honestly don’t have time for it myself in the near future.

            • Meadows
            • 4 years ago

            Keep in mind that the competing site only has time to test 4 real-world resolutions (instead of 1) because their tests are neither as thorough nor as well documented.

            Your persistent choice of going for 4K is best for purely highlighting relative differences between GPUs.

            With that said, calling 1080p a “low” resolution is a bit harsh and the only reason why people keep requesting it is that your reviews don’t answer their question of “but how fast will this game run [b<][i<]for me[/i<][/b<]"?

            • JustAnEngineer
            • 4 years ago

            Damage: "1080p for a Fury? Seriously, get a new monitor before buying a $550+ graphics card."

            +3 for that.

            Meadows: "Calling 1080p a 'low' resolution is a bit harsh."

            The truth hurts. 1920x1080 (2.07 megapixels) may be the best resolution that you can get from a cheap game console (and they frequently can't even manage that at 30 Hz), but a 60+ Hz 3.7 MP or larger display is one of the things that a >$1½K gaming PC should have. 25" or 27" 2560x1440 (WQHD) non-TN LCD monitors can be purchased for under $400.

            I agree completely with Damage that testing high-end graphics cards at low resolutions is a waste of time if you are trying to gather information on what sort of components should go into your next gaming PC. If you're stuck at 2.07 megapixels or less, you don't need a top-of-the-line graphics card.

            • Meadows
            • 4 years ago

            "If you're stuck at 2.07 megapixels or less, you don't need a top-of-the-line graphics card."

            So all of you say, but my answer to that is "ah, but you do." You see, many of the latest games at the highest detail levels using customary 4x AA (which I personally abhor for latency reasons) will stress high-end cards no problem even at 1080p. And by "stress" I don't mean bog down but utilise fully.

            It's indeed true that you may get more than 80, 100, or even 120 frames per second at 1080p, but that's neither a waste nor a sign that you need more pixels. It's merely what any self-respecting gamer should go for first, rather than a paltry 35-45 fps at 2160p. You need to be able to exceed the refresh rate of your monitor -- whether it's constant or variable -- for a gaming setup to give the best feel.

            I consider 1440p/1600p an awkward middle ground, but TR should definitely keep testing them if nothing else.

            • Ninjitsu
            • 4 years ago

Exactly all of this. That “frames above 16.7 ms” count needs to be zero or very low at 1080p (maybe less than 100 or 200?) at the highest settings before I’d consider a card overkill for that resolution.

            • JustAnEngineer
            • 4 years ago

            As I quoted from the DamageBox 2015 article, Amdahl’s law applies. If the bottleneck is the single-threaded performance of your CPU, buying more and more expensive graphics cards may not be fixing your problem.

            • sweatshopking
            • 4 years ago

I have a 4790K and a 290. My RAM is decent, 2133. My system isn’t fast enough at 1080p to play games as I’d like to.
I certainly get the time constraints, and I’m not saying he should bother testing it, but I don’t need a better monitor until my current one is fully utilized, and it ain’t.

            • Westbrook348
            • 4 years ago

I’m assuming you like to play games with all settings maxed and with higher amounts of traditional AA. I like to play on Ultra too, but not if it keeps me stuck at 1080p. If my choice is between 1080p Ultra or 1440p High, I would take 1440p. The extra resolution makes a big difference.

I wouldn’t drop down to Medium or Low settings, because that really reduces IQ. But I’ve noticed that “Ultra” doesn’t necessarily look that much better than “High” (occasionally it does, but usually the improvements are minor). And I would definitely trade 4x or 8x MSAA for a 1440p upgrade. 2x MSAA becomes sufficient as PPI increases. An alternative is SMAA. I hate post-processing AA that makes all the textures super blurry, but SMAA does an awesome job and doesn’t increase frame times by much.

            • Ninjitsu
            • 4 years ago

            I have yet to find substantial evidence that a CPU bottleneck with a current gen i5 @ 3.5 GHz+ exists and limits the GPU performance in a majority of games at 1080p.

            Some games, sure, but that’s usually after the 60 fps barrier has been crossed. Exceptions include [url=http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-9.html<] Arma 3[/url<] and [url=http://www.anandtech.com/show/8568/the-geforce-gtx-970-review-feat-evga/10<]Rome II[/url<], but even they require a good amount of VRAM and a fairly decent graphics card to push more than 30 fps with settings turned to "very high". Heck, Nvidia themselves recommend a 980 for 1080p if the player wishes to turn everything up while playing The Witcher 3, for example. They know it can't push 4K. Again, if we actually [i<]had[/i<] 1080p benchmarks for these cards I could actually point at evidence, but I merely have these results for some cards: Company of Heroes 2 - no card tested was able to cross 45 fps min [url<]http://www.anandtech.com/show/8568/the-geforce-gtx-970-review-feat-evga/5[/url<] Civ Beyond Earth - need a Titan or better for 60 fps [url<]http://www.anandtech.com/bench/GPU15/1258[/url<] Crysis 3 - need a 680 or better for 60 fps, "high quality" and these are only averages [url<]http://www.anandtech.com/bench/GPU15/1223[/url<] Far Cry 4 - need a Titan or better for 60 fps averages [url<]http://www.anandtech.com/bench/GPU15/1269[/url<] GTVA4 - Fury and 980 just about cross 60 fps on an average [url<]http://www.anandtech.com/bench/GPU15/1370[/url<] Shadow of Mordor - Only the GTX 980 holds 60 fps mins (strange results to be frank) [url<]http://www.anandtech.com/bench/GPU15/1252[/url<] Total War Atila - Need Fury X or better to do 60 fps averages [url<]http://www.anandtech.com/bench/GPU15/1273[/url<] Finally: 1) I'd rather play a gorgeous looking game at 60 fps at 1080p than a okay-ish looking game at higher resolutions with the frame rate all over the place. 2) Productivity determines my monitor upgrades, not gaming. 3) It's cheaper to get a new GPU that can push 50+ fps at 1080p than get a variable refresh monitor (1080p or otherwise) and then another GPU that can actually support such a monitor.

            • anotherengineer
            • 4 years ago

While I agree 1080p isn’t much for a Fury, 1080p @ 120Hz is a fairly popular gaming monitor. A Fury could probably push 60 fps no problem, but 120 fps??

Also agreed: it’s very difficult to do a detailed, quality review of anything in 24 hours. That window is a bit ridiculous.

Also, I would love a 2560×1440 or 2560×1600 screen, maybe even a 4K screen, since I occasionally do some CAD work at home, but unfortunately it’s just not in the budget right now (especially with the low Canadian dollar). The old HD 6850 will probably be going another year before an upgrade, and when that happens, the budget for it will be about $250. As for the 1080p screen, I can’t see wasting cash on replacing it for a long while since it’s only a year and a half old.

            • JustAnEngineer
            • 4 years ago

            Sub-$400 WQHD (2560×1440) IPS LCD monitors have been available for more than three years.
            [url<]https://techreport.com/review/23291/those-27-inch-ips-displays-from-korea-are-for-real[/url<]

            • anotherengineer
            • 4 years ago

I will not ever buy a PC monitor without a 3-yr warranty or OSD controls for RGB, etc.

Also, in Canada, I’m looking at this guy but waiting for a sale (or an equal like the BenQ BL2420PT):
[url<]http://accessories.dell.com/sna/products/Gaming_Accessories/productdetail.aspx?c=ca&l=en&s=dhs&cs=cadhs1&sku=860-BBFK[/url<]

Hopefully a future revision will have FreeSync. $400 is the high end for me for a monitor. I am in Canada, so prices are quite a bit higher now due to the value of our currency; then add recycling fees and 13% sales tax, and a half-decent monitor isn't cheap.

            • Aquilino
            • 4 years ago

Sorry, but 4K is a frivolity in 2015. Video cards get eaten alive on top settings (i.e. soft shadows; and I’m going to guess that the most demanding AA modes aren’t even necessary, because…), and I don’t even want to touch the 120Hz/144Hz issue.
It’s just unfeasible, unless you don’t mind 30 FPS, of course, or ugly graphics – but that defeats the purpose of a 4K monitor, IMHO.

            BTW: I love the last table, the ’99th percentile FPS per dollar’ one. Just exquisite. Thanks a lot.

            • Den
            • 4 years ago

4K makes sense for older or less demanding games, or if you have a dual 980 Ti setup, and he did use 1440p for Witcher 3 where it made sense to, as anyone using a 980 Ti on a 4K monitor should. Using 4K for all tests would allow SLI/XF tests to be done on the same setup. And AMD’s 300-series cards get shown in their best light with this setup, since they benefit relatively from higher resolutions (even if the result is a worse gaming experience than at lower resolutions, where the averages climb above the 30s).

            • Ninjitsu
            • 4 years ago

            Yeah but older games won’t really look good at 4K, because they’re textured for 720p…heck, CoD:MW2 looks fairly dated at 1080p, and I realised that they were using low-quality “cut-out” pictures as backgrounds, similar to old films.

            • Lazier_Said
            • 4 years ago

            “For a Fury? Seriously, get a new monitor before buying a $550+ graphics card.”

I’m reading an underlying assumption here that because 1080p is a low-end feature in 2015, any 1080p display must be a $199 piece of crap that you should upgrade first.

            I don’t game much anymore, but when I do it’s on a 60 inch plasma TV, against which $550 was and remains chump change.

            4K isn’t the be all and end all, yet.

            • K-L-Waster
            • 4 years ago

Hrrm… here’s the thing: as others have said, 4K is a little much for current video cards (we’re probably about a generation away from a single card being comfortable at 4K without compromising).

            At the same time, 4K monitors are still pricey, and still have compromises in refresh rates and what have you. Now, you could go for a 2560, but I’m hesitant. My current monitors (1920x1200s) are 8 years old, so I have to figure the next ones will have a similar duty cycle. Do I want to commit to 2560 for the next 6+ years?

            My current plan is to soldier on with what I have until either a) one goes belly up, or b) 4K monitors get less expensive and better performance. That’s just me, of course, but I suspect I’m not the only one.

            • Freon
            • 4 years ago

While I generally agree with your first point, the second is moot given you’re not doing much CF/SLI testing. It’s been a while since I’ve seen TR actually do a CF or SLI test. And IIRC, two x8 slots are sufficient for the most part.

            The 4 core CPU is a much better analog to what most readers are using. Especially since many may even be overclocking. It just seems questionable to lean in the 6+ core direction when there is any doubt.

Not a huge deal either way, but the 4790K would certainly be more appropriate. I just can’t see anyone having concerns over you picking the 4790K over an X99 setup.

            • Damage
            • 4 years ago

            I want to be doing more multi-GPU testing. It’s just a matter of having time. Thus, I want a platform that is ready for it. And between 4790K and 5960X, it really, really is not a huge deal either way.

            • wierdo
            • 4 years ago

            I’m not sure how many people really use SLI in the overall “hardcore gaming” market, which itself is a niche of overall PC gaming, so if time is a constraint then I’d vote for more single card testing over SLI/Crossfire results any day of the week.

IMHO SLI/Crossfire is too unstable in many situations to be a desirable option. I doubt many people miss seeing those results, myself included – outside of passing interest for academic curiosity’s sake rather than buying decisions.

            • Meadows
            • 4 years ago

            [quote<]"The difference in gaming performance between the 5960X and 4790K is exceedingly small."[/quote<] Is it? I only wonder if that assessment still holds. I never claimed you had an agenda and, in fact, I support what you do. But with Fury's strange resource allocation and AMD's persistent driver woes, a 40% difference in CPU clock speed between two review sites is going to raise eyebrows and suspicions regardless of how good that new 8-core wonder might be.

            • Damage
            • 4 years ago

            40%? By what math? You are aware the 5960X’s Turbo peak is 3.5GHz?

            It also has a massive L3 cache and measurably strong per-thread performance in all cases.

            If the use of that CPU is naturally “going to raise eyebrows and suspicions” then the Internet is a strange place.

            Ergo status quo, I suppose.

            • Meadows
            • 4 years ago

            No, I was not aware. I compared the base 3 GHz to the 4.2 GHz the other site used.

            I lost track of processor models a few years back, so I don’t even know which generation we’re at right now. Can’t remember Turbo speeds either.

            I revised my maths and it appears the actual difference is more like 15%, but that’s before the mild OC the other site used. If I account for the OC, it’s a 20% difference even if we graciously assume that the 5960X stays at its Turbo peak at all times.

            I don’t know if that’s enough to explain all or part of the anomalies I saw in the competing review. Maybe not. I don’t doubt the real-world validity of your results either. With that said, a constant 4 GHz (regardless of the architecture in question) seems like a sweet spot for benchmarking and single-threaded performance testing, the latter of which AMD’s drivers heavily rely upon. Just a suggestion.
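For reference, the clock-speed percentages being debated above work out as follows. This is just a quick sketch: the 4.0GHz stock figure for the 4790K is my assumption, while the 3.0GHz base, 3.5GHz Turbo peak, and 4.2GHz overclock come from the thread.

```python
# Clock-speed deltas discussed above; the 4790K's 4.0 GHz figure is assumed.
i5960x_base, i5960x_turbo = 3.0, 3.5     # GHz, from the thread
i4790k_stock, other_site_oc = 4.0, 4.2   # GHz; 4.0 assumed, 4.2 is the other site's mild OC

print(f"{other_site_oc / i5960x_base - 1:.0%}")    # 40%: the original comparison vs. the 3.0 GHz base
print(f"{i4790k_stock / i5960x_turbo - 1:.0%}")    # ~14%: stock vs. Turbo peak (the revised "more like 15%")
print(f"{other_site_oc / i5960x_turbo - 1:.0%}")   # 20%: with the other site's overclock included
```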

            • Den
            • 4 years ago

            “Guys, seriously, there is no nefarious agenda at work here in the choices I’ve made.”

Sorry if my comment made it seem like I was implying that. I was just curious and suspected PCIe bandwidth may be part of the reason – I’ve been curious lately to see whether x8 2.0 or x8 3.0 significantly limits the 980 Ti or Fury X, especially now that you mention XF using PCIe bandwidth.

            • anotherengineer
            • 4 years ago

            PCIe scaling right here for ya!!!!!!

[url<]http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/[/url<]

The TL;DR: high fps at low res seems to consume more bandwidth than high res at low fps. So basically, if you are running 4K, the PCIe version isn't very critical.

            • Den
            • 4 years ago

I’ve seen that review, which does start to show 2.0 x8’s limits with a single card. But I’d like to see the frame times and 99th percentiles to see if the limitations could be causing stuttering – especially for something like a 980 Ti.

And as Damage brought up, AMD cards use the PCIe lanes for XF, so it would be neat to see how much an XF Fury X setup would be limited by x8 2.0.

          • gecko575
          • 4 years ago

[quote<]Conclusion 1: TechPowerUp are cheeky bastards for keeping their testing settings secret.[/quote<]

From TPU's test setup page:

[quote<]All video card results are obtained on this exact system with exactly the same configuration. All games are set to their highest quality setting unless indicated otherwise. AA and AF are applied via in-game settings, not via the driver's control panel.[/quote<]

            • Meadows
            • 4 years ago

            Yeah, no. Good luck figuring out what “highest quality” means for special settings, as well as AA and AF levels.

        • mno
        • 4 years ago

Well, the Fury came out looking a lot better in [url=http://www.pcper.com/reviews/Graphics-Cards/Sapphire-Radeon-R9-Fury-4GB-Review-CrossFire-Results<]PCPer's review[/url<], which did do frame-time measurements. Honestly, the main takeaway I got is that Project Cars is really bad for AMD, and it really drags AMD down in TR's reviews.

          • Meadows
          • 4 years ago

          TR’s results don’t depend on that game.

          • Den
          • 4 years ago

TR uses a geomean, so an outlier like that wouldn’t really affect it much.

          PCPer seems to be using a reference 980 compared to the G1 TR uses, so that makes the Fury look better relative to the 980.
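A quick illustration of the geomean point above, with made-up numbers: a geometric mean weights each game's performance ratio equally, so a single title with an unusually large absolute gap (a Project Cars-style outlier) can't dominate the overall index the way it can with a plain arithmetic average.

```python
# Made-up FPS results: two cards roughly tied in three games, plus one outlier
# game where card A wins by a huge absolute margin.
from statistics import mean, geometric_mean

card_a = [60, 55, 80, 200]
card_b = [58, 57, 78, 100]

print(f"{mean(card_a) / mean(card_b):.2f}")                      # ~1.35: the outlier dominates the average
print(f"{geometric_mean(card_a) / geometric_mean(card_b):.2f}")  # ~1.20: each game's ratio counts equally
```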

            • Meadows
            • 4 years ago

            Nice catch.

    • wierdo
    • 4 years ago

    Thanks for the review Scott, always a pleasure to read.

I’m thinking we may be in an Athlon-vs-P4 scenario for GPUs. I wonder if, to reach a similar image-quality target, one card may be better at using a specific antialiasing/tessellation/etc. balance than the other. GPUs are much more sensitive to optimization – versus general-purpose CPUs – so that could also amplify the effect of fiddling with those knobs.

I was reading David Kanter’s Fury-related threads at RealWorldTech last week, and there was an interesting discussion in there about how the balance of resources may make it difficult to have a “universal” ideal configuration when playing current games. I assume future DX12 titles may make this even more complicated.

    Might be a fun idea for an article, exploring the pros and cons of having a top-heavy GPU implementation like this one, one with a ton of shaders but limited ROPs. I’d love to read one if you guys ever consider doing something like that for science.

      • auxy
      • 4 years ago

      You should read the rest of the comments here, particularly from Damage and guardianl. (*‘∀‘)

        • wierdo
        • 4 years ago

        Thanks. His post was related to what I was getting at yes.

        I think an article testing the architecture of these cards from an academic angle would be fun. Fiddling for the sake of exploration often provides interesting reading and creates material that other sites may not have explored.

        Kinda like what was done in the SSD endurance test for example. I find those unusual kinds of articles quite enjoyable personally.

          • Ninjitsu
          • 4 years ago

          I’d like to see more academic articles too! RWT used to be great for that, shame Kanter doesn’t write much there anymore.

    • Mystiq
    • 4 years ago

    Reddit’s AMD community is a little up in arms about this review. Most of the complaints stem from the GTX 980’s factory OC and lack of 15.7. For the record, I don’t agree; the 10% OC doesn’t explain some of the differences.

    I’d just be curious to see a re-review once Windows 10 is released with 15.7 (assuming all the cards work with it). There also seem to be conflicting reports about the performance boost with 15.7 and I can’t seem to find clear info anywhere. 🙁

      • Damage
      • 4 years ago

      Doh. I *did* test the Strix R9 Fury with Catalyst 15.7, as the text of the article said, but my testing methods table said I used 15.15 (had it flipped with the Fury X entry).

      My bad. I’ve fixed the table.

        • Mystiq
        • 4 years ago

        Timing and everything, I know, but I would like to see the Fury X tested on 15.7 as well. I really can’t seem to find consistent information about how 15.7 affects framerates. If what I’m reading adds up, it wouldn’t affect your results much because of the processor used. (I’m still on an FX-8350 so it should be helping me.)

It doesn’t look like a complete apples-to-apples comparison.

        I’m sorta pissed at AMD at this point and would like to know for sure if the Fury cards really do have a memory problem (the spikes in some of the frame time graphs), if the drivers are just really immature and if 15.7 on Windows 10 is really the panacea people are making it out to be. Help me, Ellen Pao!

        I’m kind of an AMD fanboy but I can’t help but feel like I’d be making a bad decision buying the Fury or Fury X. My brain is telling me 980 Ti but my heart says Fury X.

          • Ninjitsu
          • 4 years ago

          Wait a bit, then, till the 15.7 results are out? What’s the rush? (Otherwise: Go with the Ti!)

          EDIT: AnandTech tested both Furies with 15.7, and they and THG both note that 15.7 provided negligible performance improvements over 15.15.

            • ImSpartacus
            • 4 years ago

            Yeah, if you need to spend that much money, at least spend it on something tried & true.

            The Fury X’s water cooler is very tempting and we all are rooting for AMD, but Nvidia has this one. Maybe next time.

    • ImSpartacus
    • 4 years ago

    Is it normal to use a factory-overclocked 980 & 970 in a review like this?

Based on the bottom of [url=https://techreport.com/review/28612/asus-strix-radeon-r9-fury-graphics-card-reviewed/2<]this page[/url<], it looks like the 980 has a ~5% core overclock over the [url=https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_900_Series<]stock configuration[/url<].

It looks like the Fury has some pretty crippling issues of its own, but that's not helping things, and the 980 isn't called "980 OC" in the graphs and such. I may have missed someone else commenting on this, as the comments are quite long on this article.

      • interbet
      • 4 years ago

I don’t think most noticed that; the 970 is overclocked as well.

      • snook
      • 4 years ago

      Incoming, unless I’m special.

      • maxxcool
      • 4 years ago

While I agree a 5% OC is not ideal, the real-world effect of 5% is a wash.

        • ImSpartacus
        • 4 years ago

Apparently it’s more like 10% in actual performance at 4K compared to a reference 980.

[url<]http://www.techpowerup.com/reviews/Gigabyte/GeForce_GTX_980_G1_Gaming/27.html[/url<]

It seems like the performance difference is minimal at low resolutions, but it gets pretty hefty as you get closer to 4K.

      • Damage
      • 4 years ago

      The 290X and 390X are also faster than stock. The Fury likely would be in an aftermarket card like the Strix if there were any clock speed headroom. The reality is that what you buy at Newegg generally will not be a reference card with reference clocks.

        • ImSpartacus
        • 4 years ago

        If that’s the reality, then TR reviews need to embrace that reality and recognize that a particular graph isn’t about a “390, 390X, 970 and 980,” but a “XFX 390, ASUS Strix 390X, ASUS Strix 970 and Gigabyte GTX 980 G1 Gaming.”

        Yeah, it’s on your methodology page, but your methodology page won’t be linked when some random reader posts a graph of your review on a random forum. I know I’ve embedded countless review graphs on all sorts of forums, so it’s important that each graph can stand on its own (within reason).

        In addition to the graphs, the verbiage of the article needs to constantly reiterate that we’re comparing an “ASUS Strix Fury” to a “Gigabyte GTX 980 G1 Gaming” (or other GPU). It’s also helpful to mention in the conclusion & supporting paragraphs that one (or potentially both) of the cards have a factory overclock.

        It might sound overly redundant or laborious, but it’s necessary. I know I, personally, missed that detail until a friend pointed it out to me. Communication is really tough and it only gets harder when you have to communicate a technical topic.

        As a side note, there seems to be a decent variety of Fury cards out in the wild and [url=http://anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/2<]Anandtech got at least one that got a factory overclock[/url<]. It doesn't seem like much, but it puts the overclocked Fury noticeably above the unoverclocked ASUS Fury (which you reviewed). The voltage anomaly on the two cards was also interesting. Their review is a really good read, in general.

          • Damage
          • 4 years ago

          Our review assets are not intended to be ripped and shared out of context in random forums, it’s true. I refuse to accept culpability for misunderstandings in that context. I will consider even more labeling work for those who don’t pay attention to details already presented, but such things have their limits. Communication is difficult, and people need to read the detailed testing specs and analysis we provide in order to understand what they are seeing.

          • Den
          • 4 years ago

I wouldn’t say it’s necessary, but it would be nice. I’d be happy if all reviews mentioned what models and clock speeds they were using somewhere in the testing setup. It’s annoying when you have to look at a site’s old reviews to guess what model they might be using, assuming they still have the same card as back then.

            • Damage
            • 4 years ago

            In case you missed it, the models and clockspeeds of the cards we tested are provided on this page of the review:

            [url<]https://techreport.com/review/28612/asus-strix-radeon-r9-fury-graphics-card-reviewed/2[/url<]

            • Den
            • 4 years ago

            I did see it. Wish everyone would make reviews as good as yours. *disables adblock*

            • NoOne ButMe
            • 4 years ago

And the actual clock speeds the cards hit? I don’t understand why this isn’t published. As stated in another post, across at least 3 sites, the Titan X ranged from an average of 1100 to 1150MHz.

Unless with Nvidia’s cards you’re capping them at their promised boost clocks, which would be unfair to Nvidia, as I imagine only the worst of the worst bins top out there.

I should add: if this data requires additional work to a degree that noticeably increases the time to do the review, I can accept not having it. Otherwise, it is data that allows us to put the data points your testing gives us in a larger context, especially as “little” performance swings of 3%+ in either direction for Nvidia’s cards can completely change what the published numbers mean. Most reviews seem to end up between 1100-1130MHz for the Titan X.

If your Titan X sample hits 1190MHz (which appears to be the max boost clock at stock), you would have Nvidia’s card getting 5%+ “more value” in the review. Of course, at this time, if you overclock, Maxwell is beastly and Fiji is just sad, so that doesn’t matter much, and these reviews typically don’t feature user-overclocked cards. I consider a 2% overclock a non-issue when determining the overall performance of a card (subtract 1-2% performance), but not knowing what clock speeds Nvidia’s card(s) average is a HUGE detriment when comparing Nvidia’s and AMD’s cards in your (otherwise excellent) reviews.

        • travbrad
        • 4 years ago

[quote<]The reality is that what you buy at Newegg generally will not be a reference card with reference clocks.[/quote<]

Yep. I just looked up the GTX 980 on Newegg, and of the 20 or so 980s they are selling, 3 use reference clock speeds. The 970s on Newegg are like that too: 4 of the 27 cards use stock clocks.

Of the 6 R9 390s in stock on Newegg, only 1 of them has a significant overclock (a whopping 60MHz). Of the other 5 cards, 2 are stock-clocked and the other 3 have 10MHz, 15MHz, and 20MHz overclocks. There isn't a single R9 Fury for sale on Newegg yet, so I can't look those up.

    • anotherengineer
    • 4 years ago

    ” I’m mainlining an I.V. drip consisting of Brazilian coffee, vitamin B, and methamphetamine”

    Real men just use adrenaline 😉

      • Meadows
      • 4 years ago

      So did Mr Wasson. It’s just that adrenaline doesn’t make itself.

      • derFunkenstein
      • 4 years ago

      What is a man? [url=https://www.youtube.com/watch?v=OMTizJemHO8<]A miserable little pile of secrets![/url<]

        • anotherengineer
        • 4 years ago

        What is a Mega Man?

        [url<]https://www.youtube.com/watch?v=_73VuV_VFhM&list=PL8B78DBAB74CE66CD[/url<]

    • ronch
    • 4 years ago

It’s clear AMD bet the farm on 20nm to realize better efficiency gains. When it didn’t happen, they were left out in the cold while Nvidia laughed all the way to the bank. AMD is also left with a $33M hole in its pocket for expenses spent on its 20nm efforts. But even if 20nm had been successful, it’s not like Nvidia wouldn’t have had access to it. There you go, AMD.

      • NoOne ButMe
      • 4 years ago

No one bet on 20nm for GPUs. The only process to bet on would have been IBM’s 22nm, and that was pretty much a closed process.

To clarify, the $33 million 20nm charge likely has to do with their canceled 20nm Skybridge x86/ARM-compatibility project (with most of the work ported to 14nm?).

        • AnotherReader
        • 4 years ago

        I wonder if AMD would have been better off switching their GPUs to their own process when they still had fabs. SOI has higher wafer costs, but that might have been balanced by not paying a foundry to fab their chips. They would also have been able to charge more for their GPUs as SOI would have allowed them about 10% higher clocks at the time.

          • NoOne ButMe
          • 4 years ago

By the time AMD had any GPUs designed for SOI, 28nm was coming up, and the most advanced commercially available SOI node for non-IBM designs was/is GloFo’s shitty 32nm.

SOI’s higher wafer costs are in theory offset by the fact that you can get better yields.

Still, SOI’s performance increase is quite large, and so is the clock-speed increase it gives. The best example (a CPU, sadly) is that going from Trinity on 32nm SOI to 28nm bulk, AMD had to give up a little bit of clock speed. And that was with PD-SOI.

FD-SOI should in theory give even better performance. I expect that once 14nm is fully mature (fully 14nm FEOL and BEOL), SOI will become the cheap way to increase performance and lower power compared to shrinking. Of course, being technologically superior doesn’t matter much in the foundry world, mostly because Intel’s yields on a mature process are insane (and you pay a lot more for that) and TSMC, Samsung, and GlobalFoundries are all super sensitive on wafer cost. Even if SOI gets you an overall cheaper chip, it’s still a harder sell.

[b<]Of course, that is theory. All the evidence I've seen backs it up, however. The highest-performance process that is public and in mass production is 20nm planar FD-SOI.[/b<]

    • Ninjitsu
    • 4 years ago

    Well, as expected I suppose. Thanks Scott! Especially for the lower resolution testing!

    So yeah, I had said this almost a month ago after Scott and David’s first podcast – for current games, at 1080p, go with Nvidia. At 1440p, probably still buy Nvidia. For 4K…well, buy a GPU when you have a 4K monitor. Not now.

    And it’s funny, a couple of days ago, someone asked “hey when will you guys test the 390/X?!”, and I jokingly linked to TR’s 290X review.

The response I received was “OH BUT THEY’RE FASTER DID YOU EVEN BOTHER TO CHECK”…there’s like a 3 fps difference between the 290X and 390X, and a bunch of ms in frame times. Doesn’t look like a ~$150 difference to me…and it’d probably be less pronounced at lower resolutions, or after overclocking the 290X.

    Oh, GM204 [i<]is[/i<] faster than GK110, [url=https://techreport.com/news/28600/wednesday-shortbread?post=920625<]Krogoth[/url<]. [i<]Have you seen the chart?[/i<] EDIT: Forgot to slip in a Floyd reference. 😀

    • f0d
    • 4 years ago

I have been sitting here trying to think of something positive to say about the new AMD Fury cards, and I can’t.

The Fury cards losing out to the Nvidia (980 and 980 Ti) cards on both 99th-percentile-per-dollar and average-fps-per-dollar is a major letdown.

The R9 390 seemed like a winner until I got to the 99th-percentile-per-dollar chart at the end.
It’s still a great card, pretty much on par with a stock 970, but I suspect a 970 would overclock much better (and I’m an overclocker, so it’s something I look out for).
Still, the 390 is a nice card - the 970 and 390 are both great options.

The R9 295×2 might be the best AMD card you can buy, with great 99th-percentile-per-dollar and average-fps-per-dollar, besting both the Fury cards and the Nvidia cards.

      • NoOne ButMe
      • 4 years ago

Depends on what games you play. The only games where AMD’s R9 390/290/390X/290X take big losses in the 99th-percentile frame times are TW3 and pCars. Otherwise they’re close to equal or ahead.

So, tentatively, they are equal outside of GW titles.

        • YukaKun
        • 4 years ago

        I wonder if changing the ID of the card would work like it did for the original Batman game where the performance difference was abysmal, but after changing the ID, the performance gap wasn’t so big.

        Cheers!

          • DoomGuy64
          • 4 years ago

That was an issue with anti-aliasing. AMD cards were partially running AA, but couldn’t do the resolve step unless you changed the ID to an Nvidia card. All the performance loss, none of the IQ benefits.

        • DoomGuy64
        • 4 years ago

PCars clearly has software/driver issues, since running the game in W10 gives AMD an additional 20+ fps.
[url<]https://www.youtube.com/watch?v=XzFe5OOHZko[/url<]
[url<]https://www.youtube.com/watch?v=IpATnpx45BI#t=79[/url<]

The game is broken on AMD cards running W7/8, so the numbers are not really representative of what the cards are actually capable of, only how they perform in a worst-case scenario.

          • NoOne ButMe
          • 4 years ago

          Does that change the frame times? No clue.

      • snook
      • 4 years ago

One positive: the new memory architecture.

It’s competitive, but not the barn-burner AMD led people to believe it would be.
That is the real WTF moment for me.
They must kick someone in marketing/testing dead in the butt. Hard!!

My hope is the Nano produces numbers along the lines of the Fury. It will be a great card
for an SFF build. I’m just tired of the mid-tower build I have now.

    • Arclight
    • 4 years ago

Average-fps-wise it should be slightly higher than the GTX 980 in most games, but it was on par or one or two frames under in games where I didn’t expect it. Dang it AMD, you only had to clock the core 20-30 MHz higher to make it a home run.

Frame times are still worse; it will be a long wait for a fix, and I’m wondering how average fps will change when it arrives (it could actually be lower).

    • Demetri
    • 4 years ago

    [quote<]AMD says it has conducted a "complete re-write" of the PowerTune algorithm that manages power consumption on these cards. Absolute peak power draw doesn't change, but the company expects lower power use when running a game than on the older Hawaii-based cards. [/quote<] [quote<]Yes, the 390X is clearly faster than the 290X before it, but it also requires an additional 82W to make it happen. [/quote<] AMD please...

      • YukaKun
      • 4 years ago

      Not saying you’re entirely wrong poking AMD, but…

      [quote<]The Catalyst 15.15 drivers we used to test the R9 390, 390X, and Fury X wouldn't install on the R9 290 and 295 X2, so [b<]we had to use older 15.4 and 15.6 drivers for those cards[/b<]. Also, AMD dropped a new driver on us at the eleventh hour, Catalyst 15.7, that is actually a little fresher than the Cat 15.15 drivers used for the 390/X and Fury X. (Cat 15.7's internal AMD revision number is 15.20.) We weren't able to re-test all of the Radeons with the new driver, but we did test the R9 Fury with it.[/quote<] Emphasis mine. Cheers!

        • Demetri
        • 4 years ago

        So older drivers made the 290X more efficient? That would be another indictment against AMD. I just don’t see how they can come out and say the card doesn’t draw any more peak power than the old one, and should be more efficient overall, and then it gets tested and it uses 82 more watts. That 390X power consumption is so out of whack it looks like a misprint; makes me think it’s something ASUS messed up when they were tweaking it, unless things are even worse off at AMD than I imagined.

          • Voldenuit
          • 4 years ago

[quote<] That 390X power consumption is so out of whack it looks like a misprint; makes me think it's something ASUS messed up when they were tweaking it, unless things are even worse off at AMD than I imagined.[/quote<]

While it is possible that ASUS used a custom PCB and VRM design (they are one of the AIBs that like to do that), it is unlikely for that to push power consumption more than a few percent either way. The 390 (and 390X) are old chips on an established process that are simply being pushed as far as they can go, so no surprise at the jump in wattage, really.

            • sweatshopking
            • 4 years ago

Plus an extra 4GB of RAM.

      • auxy
      • 4 years ago

      I’m willing to blame the much higher-clocked RAM, plus there’s twice as much of it. (*‘∀‘)

        • anotherengineer
        • 4 years ago

        Nvidia uses 7GHz effective ram on the 970 and 980.

        GPU clocks and voltages are the big power hogs.

[url<]http://www.techpowerup.com/reviews/AMD/R9_Fury_X/35.html[/url<]
[url<]http://www.techpowerup.com/reviews/MSI/R9_390X_Gaming/34.html[/url<]
[url<]http://www.techpowerup.com/reviews/MSI/GTX_980_Gaming/29.html[/url<]

I think AMD should see what max speed they can run their GPUs at 1.0V (maybe 975 MHz, like the 980??) and adjust from there. I think that's what is really sucking the watts at full bore.

          • NoOne ButMe
          • 4 years ago

GCN’s ideal clocks are in the 800-900MHz range for best perf/watt.

The better question is whether cutting units or slowing the clock speed is what will get it to utilize its shaders better.

            • anotherengineer
            • 4 years ago

            Indeed. With all the negative press about load power consumption, you would think they would put 7GHz (effective) GDDR5 on all their mid to high end cards and try to lower GPU clocks and voltages to reduce power while trying to keep the performance somewhat equal.

            • NoOne ButMe
            • 4 years ago

Naw. Hawaii only uses 60W-ish or so for the GDDR5 (at the stock 5GHz effective). AMD has a really good controller for 5-6Gbps GDDR5. When you push higher, power gets higher.

GPUs are all about going wide and slow; the problem is GCN wants to go a bit too slow on 28nm.

The crazy thing is, on the 980 Ti, the GPU itself is probably only drawing 150-160W given that the card hits 250W. If Nvidia could make a lower-power GDDR5 controller, they could really show off Maxwell. AMD can do 8Gbps over a 512-bit bus at under 90W; Nvidia needs 85-100W for only 7.x Gbps over 384 bits. And AMD gets a higher percentage of real-world usage before compression.

The 980’s silicon is probably using under 100W at stock in most games!!
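For what it's worth, the back-of-the-envelope arithmetic behind that estimate looks roughly like this; every figure is the commenter's guess, not a measurement.

```python
# Rough restatement of the power guesswork above; all numbers are estimates.
board_power_w = 250                  # 980 Ti board power cited in the comment
gddr5_interface_w_range = (85, 100)  # commenter's estimate for the 7.x Gbps, 384-bit memory interface

gpu_only_w = [board_power_w - w for w in gddr5_interface_w_range]
print(f"Implied GPU-only draw: ~{min(gpu_only_w)}-{max(gpu_only_w)} W")  # close to the 150-160 W claim
```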

    • ultima_trev
    • 4 years ago

Epic opening paragraph and an all-around excellent review, as always. I wonder why, with all the fuss AMD made about the frame-pacing drivers during the HD 7970 days, they seemingly decided to neglect frame pacing altogether with their newer products. Or perhaps GCN is at its limits in terms of performance and a new architecture is required?

    • NoOne ButMe
    • 4 years ago

    Well, at least anyone who loves AMD can now say that a 390 is solidly beating a 970 for the same price… Yeah. XD

    I still don’t understand why P:Cars is included in the final frametimes/fps. Throw it out and have a huge notice telling everyone that AMD is f-cking terrible at racing games. Not even worth considering.

Also, clockspeeds… I’m really doubtful AMD is hitting their precious 1000MHz at 75C. I’m doubtful of that power usage. And, of course, I always want to know what clocks Nvidia’s cards hit, since boost is variable at the top. =]

      • AnotherReader
      • 4 years ago

      The one bright spot for prospective AMD users is that GCN cards seem to age more gracefully than the competition. The R9 290X started out slower than the 780 Ti and is now marginally faster than it.

Coming to the Fury, I think it is all but confirmed that the front end can’t keep the back end fed in many games. I don’t think power consumption can be reliably measured with just one sample; of course, we can’t expect Scott to buy 30 different Furies and 30 different 980s. The best we can do is look at other reliable reviews and get an average. Overall, the Fury seems to hit its clock speeds, as shown in the AnandTech review, and the Asus sample they had was pretty efficient for an AMD card. They also tested a Sapphire card, which had higher power consumption but was extremely quiet.

        • NoOne ButMe
        • 4 years ago

        Good for AMD and purchasers of this card in that case 😀

Doesn’t solve my issue of what clock speeds Nvidia is hitting. For example, in the AnandTech review it appears to hit an average of 1100MHz; for HardOCP, 1150MHz; for TechPowerUp, 1127MHz, I believe.

That’s about a 4 percent difference in performance from AnandTech to HardOCP, and it can completely change the tone of a review.

Just like how a 290X at stock clocks with an AIB cooler is such a better card than with the stock cooler. If their Titan X is hitting 1200MHz, then they’re running 4-9% faster than the generalized speeds I’ve seen on it.

        As someone who loves numbers and spreadsheets it is frustrating. Yes, I’m that girl/guy who loves spreadsheets 😉

          • AnotherReader
          • 4 years ago

          I love spreadsheets and numbers too. I have a spreadsheet with performance over time for various high-end cards from the VLIW 5/Fermi generation onwards.

You are right; Nvidia’s boost mechanism means more variability than AMD’s turbo. Unfortunately, again, we can only get a rough idea of that variability by looking at 30 or so different reviews, and I don’t know if there are 30 independent reviews measuring the clock speed of the 980 Ti and Titan X.

          I think that Fiji would have been a Titan X competitor if they had dropped 4 CUs and increased the front-end resources by 50%. That would have also come in at about the same size as the current Fiji. Anandtech’s review of the Fury X indicates that AMD had an upper limit of 4 Shader engines (SEs) for Fiji. Why would that be? Could it be that a single command processor can’t handle more than 4 SEs?

            • Mystiq
            • 4 years ago

Can you explain for someone a little less graphically inclined what this means? I’ve seen some other people say that the Fury’s low number of render output units doesn’t mean much, but there was speculation [b<]before[/b<] that (a few days ago) that the ROPs were what was holding Fury back. What's in the front end, exactly? Someone earlier in here said it can do stuff like dropping pixels if they are occluded or breaking up triangles into better parts to render.

            • AnotherReader
            • 4 years ago

            The ROPs are the back-end. The front-end consists of:

            Vertex Assembler
            Tessellator: increases number of triangles(polygons)/vertices
            Geometry Assembler: takes lines and triangles, adds additional information and transforms them from a 3D space to the 2D screen space
            Rasterizer: converts triangles to pixels

            A nice representation from the 5870 days is at [url<]http://www.beyond3d.com/images/reviews/cypress-arch/setup-big.png[/url<]
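To make the last stage in that list concrete, here is a minimal, purely illustrative sketch of rasterization (a textbook edge-function approach, not any vendor's hardware): it walks a triangle's bounding box and reports which pixel centers the triangle covers.

```python
# Toy rasterizer: converts one screen-space triangle into covered pixels.
def edge(ax, ay, bx, by, px, py):
    # Signed, doubled area of triangle (a, b, p); its sign says which side of edge a->b the point is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2):
    """Yield (x, y) pixel coordinates whose centers lie inside triangle v0-v1-v2."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    # Scan only the bounding box instead of the whole screen.
    for y in range(int(min(ys)), int(max(ys)) + 1):
        for x in range(int(min(xs)), int(max(xs)) + 1):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(v1[0], v1[1], v2[0], v2[1], px, py)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], px, py)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], px, py)
            # Inside if all edge tests agree in sign (handles either winding).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                yield x, y

# A tiny triangle covers a handful of pixels; a screen-filling one covers millions,
# which is why the front end's per-triangle workload varies so wildly.
print(len(list(rasterize((0, 0), (8, 0), (0, 8)))))
```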

            • NoOne ButMe
            • 4 years ago

Just a general overview of why the ROPs are not holding it back: the ROPs basically limit scaling to higher resolutions. If the 64 ROPs were holding the Fury or Fury X back against the 96 ROPs of the Titan X or 980 Ti, then we would see the performance gap grow as resolution increases, since the ROPs on the AMD cards would come under more and more pressure.

      • Ninjitsu
      • 4 years ago

      [quote<]Also, clockspeeds... I'm really doubtful AMD is hitting their precious 1000Mhz at 75C. I'm doubtful of that power usage.[/quote<] It is, except in FurMark. [url<]http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/17[/url<]

    • TopHatKiller
    • 4 years ago

    Keerfufllehh. Ugh. Fiji is fine, it’s okay – there’s nothing wrong with it. It does not set my heart pulsing. Sure, if you’re in the market for a 980, then [subject to the actual price] you would probably be silly buying Nv. But not that silly. I do resent the note in the review that the oem cooler here was great – it ain’t – it’s just okay. [Hopefully Arctic and Prolimatech will provide genuinely high quality after market coolers at some point soon.]
    I stopped working [intermittent with playing games] to bother writing this?
    Bring on 14/16nm. Wake me up if drivers change anything.
    PS. Sorry, that was boring.
    PPS. Worse, I still can’t think of anything funny to say. at this point: I blame AMD –
    and don’t bother downvoting me – just shoot me.

      • Meadows
      • 4 years ago

      What if, instead of badmouthing the coolers in every GPU review, you instead explained what exactly “isn’t great” about these coolers?

      And please try to do it in a way that doesn’t make my eyes bleed.

        • NoOne ButMe
        • 4 years ago

        It isn’t a waterblock? I think I’m trolling here, maybe that is his actual answer though.

          • TopHatKiller
          • 4 years ago

Too dear, sadly. It’s a shame TR has never reviewed replacement GPU air coolers.

        • Ninjitsu
        • 4 years ago

        [quote<] And please try to do it in a way that doesn't make my eyes bleed. [/quote<] This. XD

        • TopHatKiller
        • 4 years ago

        It/They is/are NOT: acelero extreme or prolimatech. Is, therefore excessive pretend marketing effort to pretend a cheap cooler, not just dirt cheap cooler, is in fact a dear one. DID THAT MAKE YOUR EYES BLEED? My voice is horse. From screeching. Yikes.

        Emendations:
This is a case of incorrect baseline comparison. Comparing an awful cooler to a competent one: in this instance the ‘competent’ cooler appears vastly more effective than it actually is. One realizes that AIBs have severe cost restraints [particularly Nvidia ones] but I’m still irritated by excessive praising of merely competent coolers.
        also — i don’t like the colour

        Boring. To remedy;
        [1] If you think my prose is ugly : you should see my face.
        [2]If you think my prose is hard to read you should try ‘Finnagin’s Wake’ or rather
        … It/They is/are NOT {wraparound text}

          • Chrispy_
          • 4 years ago

          I think you put too much praise on Arctic/Prolimatech. The quality of their heatsinks is high but the design is generic.

A cooler like [url=http://images.anandtech.com/doci/9421/SapphireAMDDeck.jpg<]Sapphire's Fury[/url<] exceeds the surface area of either Arctic's or Prolimatech's, takes up less space, includes better VRM and ancillary component cooling, and acts as a mechanical brace fixed to the PCB in many more places than the Arctic/Prolimatech solutions. [url=http://www.techpowerup.com/forums/attachments/cooler2-jpg.66393/<]Asus' Strix[/url<] seems to be a smaller and cheaper construction but still gets the basics right: multiple, giant, direct-contact heatpipes out to a huge fin array that covers as much of the card area as possible.

Sure, if you want to sacrifice all practicality for better air cooling, you could just make a cooler that uses five slots, runs three 140mm fans and uses unobtainium heatpipes, but the OEMs are making solutions that fit in regular-joe cases, allow for regular-joe mishandling, and conform to PCI and ATX specifications. Nothing from Arctic/Prolimatech fits any of those criteria.

Edit: let's have a few links....

      • Melvar
      • 4 years ago

      I tried shooting you but now there’s something wrong with my monitor.

        • TopHatKiller
        • 4 years ago

        At least that was funny, dear. +1

      • TwoEars
      • 4 years ago

      “Keerfufllehh. Ugh.” indeed.

    • Bensam123
    • 4 years ago

Lots of untapped potential there. DX12 is probably going to change things up immensely, as will where games go in the next couple of years in terms of eye candy. Interesting, to say the least. I don’t imagine AMD’s positioning among the current cards getting worse either, so the only way it can go is up; this is essentially a worst-case scenario.

    Driver optimizations, DX12, and new games with more eye candy…

    • ImSpartacus
    • 4 years ago

    [quote<]it's hard to imagine a smart PC builder deciding to opt for the 390X's additional 167W to 199W of system power draw compared to a GeForce GTX 970 or 980. Perhaps the right user, who wants maximum performance across a triple-4K display setup, would find the 390X's 8GB of memory compelling.[/quote<] I think you're really grasping for that one. If you can afford 3 4K displays, I would hope that you could afford a Titan X or two to power em. And the sad thing is that it means that the 390X really doesn't make much sense for anyone...

    • Geonerd
    • 4 years ago

    What’s the word regarding overclocking with the -X cards?

      • ImSpartacus
      • 4 years ago

      Voltage is locked, so it’s a big question mark right now.

      • Ninjitsu
      • 4 years ago

      Poor OC according to AnandTech. They managed to do well with the Fury though.

    • DragonDaddyBear
    • 4 years ago

What I am gathering is they need more ROP units. Is it die area, engineering, or some other factor that caused them not to put more units into this chip?

      • homerdog
      • 4 years ago

      Doesn’t seem like the chip is often limited by rasterization rates. Rather the front end seems to need some help.

      Though the biggest problem seems to be software (driver and compiler need major help).

        • DragonDaddyBear
        • 4 years ago

        What exactly, then, is the “front end?” I keep reading comments about the “front end” but it’s not really defined.

          • guardianl
          • 4 years ago

[url<]http://www.beyond3d.com/content/reviews/1/6[/url<]

"The front end of the chip concerns itself with keeping the shader core and ROP hardware as busy as possible with meaningful, non-wasteful work, sorting, organising, generating and presenting data to be worked on, and implementing pre-shading optimisations to save burning processing power on work that'll never contribute to the final result being presented"

That's a little old, but in general the front end basically takes lists of primitives (triangles, lines) and figures out the best way to render them. It sounds simple, but it's actually very complex, because triangles can range from tiny (one pixel) to large (larger than the screen, so millions of pixels), and from fully occluded (covered by other triangles) to completely visible, etc. If the front end breaks them up, orders them right, etc., it can save a ton of rendering effort. There's more to it, but that's the core concept.
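As a rough illustration of the "pre-shading optimisations" described above, here is a minimal sketch (my own simplification, not AMD's or Nvidia's actual logic) that throws away back-facing, zero-area, and fully off-screen triangles before any shading effort is spent on them.

```python
def signed_area(v0, v1, v2):
    # Twice the signed area of the triangle in screen space.
    return (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v1[1] - v0[1]) * (v2[0] - v0[0])

def cull(triangles, screen_w, screen_h):
    """Keep only triangles that might produce visible pixels."""
    survivors = []
    for v0, v1, v2 in triangles:
        if signed_area(v0, v1, v2) <= 0:
            continue  # back-facing (under a counter-clockwise front-face convention) or zero-area
        xs = (v0[0], v1[0], v2[0])
        ys = (v0[1], v1[1], v2[1])
        if max(xs) < 0 or min(xs) > screen_w or max(ys) < 0 or min(ys) > screen_h:
            continue  # entirely off-screen: nothing to rasterize or shade
        survivors.append((v0, v1, v2))
    return survivors

tris = [((0, 0), (100, 0), (0, 100)),                 # visible
        ((0, 0), (0, 100), (100, 0)),                 # same triangle, back-facing winding
        ((-500, -500), (-400, -500), (-500, -400))]   # off-screen
print(len(cull(tris, 1920, 1080)))  # -> 1: two of the three never reach the shaders
```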

            • DragonDaddyBear
            • 4 years ago

            I could really use a Visio diagram of this. There are so many parts that affect the GPU.

So changing the front end would constitute a major architecture change. Maybe they are working on this in GCN 1.3?

            • AnotherReader
            • 4 years ago

The front end, like the back end, is parallel. AMD’s GCN GPUs are organized into shader engines: each shader engine (SE) has one tessellator, rasterizer, etc. On the other hand, the number of ROPs and compute units per SE is variable. Fiji, Hawaii, and Tonga have 4 SEs, while Pitcairn and Bonaire have 2 SEs. Each SE of Fiji has 16 CUs, while each SE of Hawaii has 11 CUs; each SE of Fiji and Hawaii has 16 ROPs, while each SE of Bonaire has 8 ROPs. Just increasing the number of SEs would increase the throughput of Fiji’s successor.
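To make the arithmetic in that comment explicit, here is a tiny sketch. The per-SE figures are the ones stated above (Bonaire's CUs-per-SE is my own addition, based on its 14 total CUs), so treat it as an illustration rather than a spec sheet.

```python
# Shader-engine arithmetic from the comment above (Bonaire CU count assumed).
gpus = {
    #          (shader engines, CUs per SE, ROPs per SE)
    "Fiji":    (4, 16, 16),
    "Hawaii":  (4, 11, 16),
    "Bonaire": (2, 7, 8),
}

for name, (ses, cus_per_se, rops_per_se) in gpus.items():
    print(f"{name}: {ses} SEs -> {ses * cus_per_se} CUs, {ses * rops_per_se} ROPs")
```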

    • Westbrook348
    • 4 years ago

    Awesome review Scott. Thanks for all your hard work. Looks like Fiji can’t quite compete with Maxwell at AMD’s desired prices. I continue to be ecstatic with my 980Ti purchase. Less than $1 a day if I keep it two years. I can’t say enough good things about 3D Vision.

    • tbone8ty
    • 4 years ago

    295×2 still top dog! Awesome!

      • geekl33tgamer
      • 4 years ago

Not sure why you’re downvoted, as you’re technically correct. That dual-GPU card was always going to do that, though, if CrossFireX is supported by the game. 😉

        • tbone8ty
        • 4 years ago

Now that frame times have been cleaned up in the majority of games for the 295×2, it’s still the best card for the money, period.

    • TwoEars
    • 4 years ago

    Sad to say it but as it stands this card is at the wrong price point, especially when you consider overclocking performance where the GeForce cards shine.

    I hope they have some margins on this card because they’re going to need to bring the price down to 980 levels. Either that or make some magic drivers or firmware.

    In comparison Nvidia is looking like a well-oiled machine. They have their line-up, pricing, drivers and roadmap sorted. I don’t think I’ve ever seen a more impressive GPU line-up.

    • snook
    • 4 years ago

    thanks for the review scott.

    • HisDivineOrder
    • 4 years ago

    The problem is AMD bet the farm on HBM and it’s going to be great… next year. You know? When everyone has access to HBM2 (and greater amounts of memory will be nice) and dieshrinks to help squeeze more GPU into the limited space one has with today’s interposers.

    Basically, nVidia had the good sense to know that at current fabrication limits, GDDR5 was good enough for this year and by the time they’ll need the superior HBM, they’ll also have superior dieshrinks to look forward to.

    But AMD was obsessed with being the first on HBM even if it netted them no real benefit and actually limited how large they could make their GPU to compensate. This is probably why they didn’t improve their front and back ends very much. They just didn’t have room on the die. Meanwhile, nVidia focused all their 28nm space on making the most of what they already had.

    Consider how much nVidia is doing with far less in terms of pure bandwidth and how well tuned nVidia has made Maxwell 2 to current games. AMD should have the sheer brute strength advantage, especially in a battle between the Fury non-X and 980 non-Ti, and they just don’t. They’re even. I don’t think it’s reasonable to expect AMD to resolve driver problems that both give them the frame rate advantage AND deliver those frames when they should, too. They’ll choose one and still be merely matching nVidia in the long run.

    By the time AMD gets around to sorting those drivers, nVidia will probably price drop their existing products and slot in a 980 Ti except with ALL of its GPU uncut…

    …or if AMD delays in driver development of old are any indication, Pascal may be upon us with HBM2.

    No matter what, the most important thing AMD could do right now is price drop before it’s too late and the narrative is established that their product is “almost good enough, but not quite there.” Already, they’re heavily leaning that way. If they could undercut nVidia, then suddenly the value proposition can swing their way. They’re really close.

    But if I’m looking at 1440p to 4K, I’m not looking at $580 for a Fury non-X. I’m thinking of the extra $70 to get a 980 Ti and really blow it up. If I’m thinking about sub-1440p (like 1080p or less), then I’m probably leaning toward the superior experience of the $450-500 980 non-Ti.

    If the Fury X were $550 and the Fury non-X were $450 (with drops on the R9 390 and 390X), then suddenly AMD would be back in the discussion even with their blemishes.

      • NoOne ButMe
      • 4 years ago

      So, NVidia dropped the price (or all the AIB randomly decided to, you choose) for the 980 and 980ti by $20.

      Their cards obviously are “not fine” due to NVidia adjusting the price. They are very good at getting maximum profit and outside of rare products like the 970 they usually don’t undercut the competition on equal performance.

      Of course, that isn’t to say they never have, done or will. =]

        • Ninjitsu
        • 4 years ago

Well, Nvidia makes a lot of profit on the 980 (don't know about the 980 Ti), and has been for months. The problem for AMD was always that if they released something that performed at parity with Maxwell for the same price or more, Nvidia could just slash prices and [i<]still[/i<] make a profit.

EDIT: The 980 prices look the same... heck, there are many selling for $530 and above, and a ~$1000 Asus model. Didn't check the Ti.

    • Dr_Gigolo
    • 4 years ago

So this is the third review I’ve read of the vanilla Fury, and I’m rather surprised at the results. In almost all the benchmarks, the Fury loses. These are settings that, in my eyes, aren’t playable. What would the results be if you aimed for an average of 60 fps? 8x MSAA is way overkill for BF4.

    Did you go out of your way to fatigue the fury? I am just trying to understand the performance discrepancy…

      • geekl33tgamer
      • 4 years ago

      You’re not serious? The settings used in the games are the same across all the cards, and are designed to place them under heavy load.

If the settings were different from one card to the next, you would be on point. They are not, and the credibility of the site’s owner (Scott/Damage) and his staff with the readership is built up entirely around the fact that the site is still independent and impartial.

Perhaps the question you should be asking is which of the sites you read were paid to give a favourable review?

      • Damage
      • 4 years ago

      I can’t answer for how others tested or why or what results they got. I do offer extensive documentation about how I tested, and you should be able to reproduce the results if you have the right hardware and software. Do remember that every test scenario is different, so results will naturally vary. I do try to pick test areas that represent something common in each game.

      As for BF4 and 8x MSAA, I simply used the Ultra preset because it was simple and thus easier for me not to flub across different cards–and because BF4 is more playable than you might think at those settings in 4K. (The 99th % frame time is 37 ms, so 99% of all frames were rendered at almost 30 FPS.) Also, I prefer to go with presets at times simply because it’s what a gamer might do. I happen to think DICE chose pretty well with BF4’s Ultra.

      I had no sense when I defined these test parameters–some of which I first did for the 980 Ti review–how Fiji’s architecture would respond to different AA levels. If I were guessing beforehand, I would have picked the card with HBM to do best at 8x MSAA. This choice wasn’t something I gamed by testing six different combinations of IQ and then picking the one that hurts AMD the most. You seem to be asking if that’s the case, so I’ll straight up say “nope.” Not at all.

        • guardianl
        • 4 years ago

        “As for BF4 and 8x MSAA, I simply used the Ultra preset because it was simple and thus easier for me not to flub across different cards”

        The fact that other reviewers (and AMD’s own benchmarks from the press slides) are basically all heavily customized (i.e. who the hell plays a game with 0xAA with an enthusiast GPU?!) should leave a sour taste in everyone’s mouth.

        “Also, I prefer to go with presets at times simply because it’s what a gamer might do. ”

        Yup. I’ll add that game presets are best because many games are now automatically selecting a preset based on your graphics card so it’s very representative.

          • Dr_Gigolo
          • 4 years ago

          I actually almost never play AAA games with AA on if the framerate is too low. I play at 1440p and I prefer to kick up textures and HBAO+ before AA. On witcher 3 I used the high preset, then I changed textures and geometry to ultra and set it to HBAO+. No AA and no Hairworks. Which worked out to about 45-50fps in most settings. I would prefer 60fps, but the image quality I got from it, made me compromise. I’m seriously considering getting another GPU (290X) just to get 60+fps.

          That said, I wouldn’t expect that this is the general consensus. It’s just that jaggies don’t bother me at resolutions like that.

          Doing 0xAF is just crazy.

          • YukaKun
          • 4 years ago

          I never use AA. In fact, I have never used it, except in some recent racing games (GRID series and DiRT).

          I don’t like the resulting images from AA. And I’m willing to believe I am not alone in that boat.

          Cheers!

          • Arbiter Odie
          • 4 years ago

          I also don’t use AA. It makes me think of heavily compressed h264 video. Bring on the higher resolutions, so the image detail is higher!

            • Wonders
            • 4 years ago

            Spatial AA helps with temporal artifacts (“dancing jaggies” and visual noise present only in animated images), which could otherwise only be resolved by DPIs nearly an order of magnitude higher than today’s displays. Modest use of Spatial AA is a good alternative to the overuse of Temporal AA (aka Motion Blur), if realism is the goal.

        • Dr_Gigolo
        • 4 years ago

        Yeah, I was a little quick to jump the gun there. Of course it makes sense that you would use the same settings and benchmarks for all your reviews.

        That said, your review has made me curious as to whether there is a certain sweet spot where the Fury gets overtaken by the GTX 980 and loses its edge at demanding settings, or whether the other reviews simply chose sections of games that would make the Fury look more favorable.

          • Damage
          • 4 years ago

          I haven’t tested all of this myself, but we know certain general principles based on the public review data and what AMD did when producing its benchmarks showing the Fury X beating the GTX 980 Ti. (Remember that?)

          First, we know that Fiji-based cards usually tend to perform better relative to the competition as the resolution rises. So testing primarily in 4K like I did is friendlier to Fiji than testing at lower resolutions.

          Second, in order to produce numbers where Fury X beat GTX 980 Ti like in the reviewer’s guide, AMD had a specific formula used in most games:

          0x MSAA
          0x anisotropic filtering
          Crank all IQ settings using shader effects, like shadowing and such
          Use shader-based post-process AA like FXAA/SMAA

          In effect, they were biasing against leaning on the ROP subsystem and the texturing hardware and entirely toward their 8.6 teraflop shader array. And it worked. No one who cares about image quality would play games that way, so the reviews didn’t track with those numbers at all. But that’s how they got a favorable playing field.

          So sure, it’s possible to make Fiji look better by choosing your resolution or IQ settings specifically for its strengths, but I try to remain grounded in “how would I play this game?” when I choose test settings. At the end of the day, the usage model for these things is gamers playing games, not some bizarre mix of IQ settings based on architectural considerations.

            • Dr_Gigolo
            • 4 years ago

            I guess I never saw the benchmarks where the Fury X beat the 980Ti. But those reviewer guide settings sound crazy. 0x AF is much much more unplayable for me than no AA is.

            And of course I understand what you are saying. It just piqued my interest is all.

            • tay
            • 4 years ago

            This reply is better than most entire reviews. Props.

      • Meadows
      • 4 years ago

      It’s 4x AA, not 8.

      If the Fury had more ROPs it wouldn’t be that much of a problem.

    • 7c0
    • 4 years ago

    [quote<] Should games begin using Fiji's prodigious shader power and memory bandwidth to good effect, the Fury seems well positioned to benefit from that shift. [/quote<]

    Back in the days when Windows Vista was still in beta, the Athlon 64 FX-74 was marketed as the main star of the Quad FX platform. Performance wasn't that good, and neither was power consumption, but the hope was that Vista would bring better NUMA support to the table and the platform as a whole would see some substantial benefit from the well-optimized software that was sure to arrive sooner or later. That never happened, and the Quad FX concept is a thing of the past that very few even remember. Intel, for their part, had some similar attempts to sell products or whole platforms that were ill suited to the state of the market they were targeting at the time (I remember Skulltrail, someone please correct me if I'm wrong).

    The point is, software that may or may not ever materialize hardly justifies buying a product that's in more than one way inferior to others on the market today. It rarely works out, and when it does, the next generation of hardware has usually already arrived by then. If there's one thing to be learned by Intel, AMD, or whoever, it's that hitting the sweet spot upon introduction of a product is what counts, not promises of driver updates or some properly coded software that might come in the future.

      • I.S.T.
      • 4 years ago

      Oh man, Skulltrail was Intel’s biggest joke on the industry since the first Atom CPUs…

        • the
        • 4 years ago

        Apparently you missed Itanium and Larrabee.

          • ImSpartacus
          • 4 years ago

          At least Larrabee meant that Intel was going to care about GPU performance.

          It was a look into the future even if it was a failure.

          • I.S.T.
          • 4 years ago

          Different industry for the first, and the second never came out.

            • the
            • 4 years ago

            Larrabee was eventually released but not as a video card but rather the Xeon Phi coprocessor card for compute.

        • jihadjoe
        • 4 years ago

        It was basically a Xeon board with some skull stickers right?

      • K-L-Waster
      • 4 years ago

      Yes – buying based on some capability that may be enabled through software at some indeterminate future time is a high risk proposition. Always better to either buy what is available now based on what it can do now, or hold off on buying altogether if you don’t have an immediate need.

      • the
      • 4 years ago

      By the time the dual core chips were out, it was clear that large core count CPUs would displace multi-socket systems in everything except servers. In hindsight, AMD should have pushed a dual socket enthusiast platform alongside the launch of the Athlon 64. In fact, it would have been trivial as the original Athlon FX chips used socket 940, the same as the Opterons at the time. AMD probably wanted to keep the markets segmented as the Opteron had higher margins.

      • Meadows
      • 4 years ago

      Up you go.

    • chuckula
    • 4 years ago

    While most of the top billing is about the Fury, I do note that at least the R9-390 looks like a solid competitor for the GTX-970.

      • Leader952
      • 4 years ago

      As long as you don’t mind the higher electric bill and the heat.

      [quote<]The gap between the GTX 970 and the R9 390 under load is 121W. Yikes.[/quote<]

        • absinthexl
        • 4 years ago

        The $200 difference between very similar adaptive-sync IPS monitors is why I just decided on a 290X. Would have gone with a 970 otherwise.

          • Westbrook348
          • 4 years ago

          3D Vision and the complete lack of investment by AMD in the 3D market are why I paid a little extra for Nvidia and the ROG Swift.

            • DoomGuy64
            • 4 years ago

            Not exactly. NV has native support which works really well, but only for their hardware. AMD has tridef which is 3rd party, but it is more versatile and works with anything.

          • Kretschmer
          • 4 years ago

          I did the same thing. If you’re going with AMD mid-range, the 290/290X blows everything else out of the water from a price/performance perspective. 290X for $270 after MIR? Not too shabby.

          Part of me is still kicking myself for volunteering for AMD’s driver shenanigans, but the combo took adaptive sync from “too expensive” to “reasonable.”

            • NoOne ButMe
            • 4 years ago

            The only shenanigans they pull over and over is not having optimized drivers for the cards they launch. If it has been out 3-6 months their drivers are a pretty safe bet.

            • Kretschmer
            • 4 years ago

            Judging by my limited ownership, there are stability and “quirkiness” issues that I never saw on my 660Ti. Things like turning Freesync on/off in game, changing resolutions, etc. can involve a force quit. It’s hard to tell what part of the platform is the weak link (GPU/drivers/monitor), as Freesync is such a new feature.

            That aside, I had to reinstall my drivers once in my first week (day?) of ownership, because every DX game was displaying huge corruption.

            AMD offers a workable platform, but it doesn’t feel as bulletproof as Nvidia. Is that worth $200 more for your monitor? That’s up to each buyer to decide.

        • killadark
        • 4 years ago

        I doubt a mere 100W is gonna add up to anything substantial over 2 years, so who cares, but heat might be a thing.

        And I don't know why people still cry about some 100-200W extra; it's not much more on the electricity bill anyways.

          • NoOne ButMe
          • 4 years ago

          It all depends on where you live. I am lucky to have dirt-cheap electricity, so, no pressure to upgrade my Fermi… Well. No pressure from power consumption 🙂

          • JustAnEngineer
          • 4 years ago

            U.S. national average residential electricity price: [url=http://www.eia.gov/electricity/monthly/epm_table_grapher.cfm?t=epmt_5_6_a<]12.6¢ per kWh[/url<]. For 500 hours of gaming per year, an extra 121 watts costs you $7.65/year.

            With the NVidia-based GeForce GTX 970 costing $70 more at Newegg than a similarly-performing AMD-based Radeon R9-290, the annual electricity savings will make up the difference after you have been using the same graphics card for [b<]more than nine years[/b<]. If you live in the tropical paradise of Hawaii, where electricity prices are 2½ times the national average, it would still take nearly four years of gaming to get back the price difference between the cards.
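
            For anyone who wants to plug in their own rate and hours, here's a minimal sketch of that arithmetic in Python. The payback_years helper is just mine for illustration; the 121 W gap, the $70 price difference, 500 hours/year, and the two electricity rates are the figures assumed above, not measurements:

            # Back-of-the-envelope payback period for the lower-power card.
            def payback_years(extra_watts, hours_per_year, price_per_kwh, price_gap):
                """Years of gaming before the electricity savings cover the price gap."""
                extra_kwh_per_year = extra_watts / 1000 * hours_per_year
                extra_cost_per_year = extra_kwh_per_year * price_per_kwh
                return price_gap / extra_cost_per_year

            # U.S. average: 12.6 cents/kWh, 500 hours of gaming per year, $70 price gap
            print(payback_years(121, 500, 0.126, 70))   # ~9.2 years
            # Hawaii, at roughly 2.5x the national average rate
            print(payback_years(121, 500, 0.315, 70))   # ~3.7 years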

            • Leader952
            • 4 years ago

            Do your figures include the costs of Air Conditioning?

            Where did you get the annual 500 hours of gaming use figure from?

            • NeelyCam
            • 4 years ago

            If we consider increased air conditioning costs during the summer, we need to also consider reduced heating costs during the winter.

            • Den
            • 4 years ago

            The overall heating costs would not be reduced – you would just have a second device you use as a heater.

            • NeelyCam
            • 4 years ago

            Yes, but if you had a more efficient GPU, you would need to use that heater more. So, the extra electricity cost associated with a power-hungry GPU is partly compensated for by the fact that it also heats your house.

            • Den
            • 4 years ago

            You’re right.

            • Freon
            • 4 years ago

            Only if you use resistive heat, which is unlikely for most people.

            • Den
            • 4 years ago

            TL;DR: Even ignoring cost offsets from reduced heating costs, AC costs won’t add more than 20% in even the most extreme case (using an inefficient AC all year round during the hours you play games). Adding 10% to the electricity cost would likely be on the high side for most people, since a lot of people play games at night when their AC is more likely to be off.

            Using the data from this window unit (less efficient than a central unit, I would imagine) because it has the energy guide data ($50 a year for 3 months, 8 hours a day at 12 cents per kWh): [url<]http://www.kenmore.com/kenmore-6-000-btu-room-air-conditioner/p-04270062000P?sid=IDxKMDFx20140801x001&KPID=04270062000&kpid=04270062000[/url<]

            Also assuming cost scales linearly, such that adding 300 BTU would increase the cost by 300/6000: if you assume 2 hours of gaming a day (730 hours a year) during the time you use AC, and that the extra 100W generates 300 BTU, then gaming 2 hours a day for 3 months would be 1/80th of $50, or $0.63 a year. Meanwhile, the extra 73 kWh would cost about $8.76 a year at $0.12 per kWh. So a total of $9.39 a year. That would really only make a difference when the two cards cost nearly the same amount for nearly the same performance. So if you were to believe the 390 and 970 performed the same, then you should purchase the 970 if they cost the same.

            If you live in a hot place like Hawaii where electricity costs something like 36 cents per kWh (the extreme case), and you use AC all 12 months, then it could be 50 * 300BTU/6000BTU * 2hr/8hr * 12months/3months * $.36/$.12 = $7.50 a year. Meanwhile, the extra 73 kWh would cost about $26.28 a year at $0.36 per kWh. So a total of $33.78 a year. Under these conditions, something like the 390X would cost more than a 980 after about a year and a half if the 390X only drew 100W more (the load testing in this review showed it using 167W more).

            Either way, it seems ignoring electricity costs for AC and just adding 0-15% to the cost could be a decent way of accounting for AC costs.
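
            To make that scaling explicit, here's a minimal Python sketch of the AC-overhead estimate. The ac_overhead helper is mine; every input is one of the assumptions stated above (the $50 energy-guide figure for a 6,000 BTU unit over 3 months at 8 h/day and 12 cents/kWh, and ~300 BTU/hr for the extra ~100 W of heat), not a measurement:

            # Extra annual AC cost from dumping extra GPU heat into the room while gaming.
            def ac_overhead(extra_btu_per_hr, unit_btu=6000, guide_cost=50.0,
                            gaming_hours=2, ac_hours=8, ac_months=3,
                            rate=0.12, guide_rate=0.12):
                return (guide_cost
                        * extra_btu_per_hr / unit_btu   # fraction of the unit's capacity
                        * gaming_hours / ac_hours       # fraction of AC runtime spent gaming
                        * ac_months / 3                 # scale from the 3-month guide figure
                        * rate / guide_rate)            # scale to the local electricity rate

            print(ac_overhead(300))                               # ~$0.63/yr, typical case
            print(ac_overhead(300, ac_months=12, rate=0.36))      # ~$7.50/yr, Hawaii-style extreme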

            • f0d
            • 4 years ago

            500 hours a year isnt much for a gamer – thats only 1.3 ish hours per day
            some of us play WAY more than that

            i play around 4 hours per day weekdays and 6+ hours a day weekends and i know of people that play way more than that
            also in Australia we pay MUCH more per kWh of electricity than Americans (about 24c/kWh here)

            when you look at it differently than your own usage in your own country then you will realize that for some people the power usage does matter

            (ill let you do the math but im guessing around a year of usage would make up the difference)
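
            (For what it's worth, plugging those stated habits into the payback sketch a few comments up, with the same assumed 121 W gap and $70 price difference: roughly 32 hours a week at 24c/kWh works out to a bit under a year and a half before the savings cover the price gap, so that guess is in the right ballpark, if slightly optimistic.)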

            • anotherengineer
            • 4 years ago

            It’s all relative, and what you consider a ‘gamer’

            I am a gamer, but a casual one, I probably log 300 hrs/yr. max. and I think that is high for a casual gamer myself.

            A normal gamer I would put about 300-600 hrs/yr.

            Over 600 hrs./yr. I would personally call that a hardcore gamer or someone with no family, no job or a part-time job, or lots of holidays/vacation time off work, a student perhaps, no life, or other hobbies except gaming.

            Just got my last hydro bill here in Ontario, Canada; after delivery, the regulatory charge, 13% tax, and obviously the electricity itself, it came to 21.83 cents/kWh!!!

          • travbrad
          • 4 years ago

          It would mainly be an issue of heat for me, not power costs. If I close the door to my home office and fire up a game it gets noticeably warmer in that room, which is kind of nice in the winter but in the summer not so much. That’s with only a GTX660 too, which has roughly the same power consumption as a GTX970.

          That alone wouldn’t prevent me from buying a card with much higher power consumption, but when 2 cards have roughly equal performance and price I’d go for the one that generates less heat.

          • Bensam123
          • 4 years ago

          Glad people are noticing this now. Efficiency doesn’t matter outside of mobile and high-capacity computing. An argument definitely could be made for laptops based on Maxwell as far as efficiency goes.

        • anotherengineer
        • 4 years ago

        My electric dryer uses about 5200W and with 2 little ones we do about 50 loads of laundry a month!!! Now that’s a lot of electricity!!

        Or the 4000W electric sauna, or the welder in the garage.

        When it comes to electronics 100+W is lots, however that is only during gaming. For most working adults, that’s probably not too many hours/week.

        And for a kid, they don’t care because they are not paying the electric bill lol

          • NeelyCam
          • 4 years ago

          [quote<]Or the 4000W electric sauna, or the welder in the garage.[/quote<] Yep - heating up the sauna costs me $1 a pop. But at least I have a heatpump-based water heater right next to it, so some of the heat transfers to the water, saving me a little bit.

            • chuckula
            • 4 years ago

            My sauna costs some money every time I fire it up too.
            But it’s also my lawn mower and the sauna part is provided by Mother Nature.

            • anotherengineer
            • 4 years ago

            Really?? You have a sauna?!?!?! I would have never guessed that 😉 😉

            No heat pump here. Forced-air natural gas furnace for the winter, and an old beater R12 central AC unit that probably has a SEER of 10 or less lol, but if it runs more than 10 days a year, then it’s a good summer for us, and that’s rare! ;(

            • oldog
            • 4 years ago

            [url<]https://en.wikipedia.org/wiki/Finnish_sauna[/url<]

            • NeelyCam
            • 4 years ago

            I think his comment was sarcasm.. but good link.

            You know – I think back in the day, academic elites were laughing at Wikipedia for good reasons, but these days it’s solid on a variety of topics to the point that ridicule is pretty much unwarranted, and is starting to amount to elitism.

            I for one hail Wikipedia as the easy-to-access source of information for the masses. Sucks for the elites who memorized all those books. Now everyone can find [i<]better[/i<] quality information with just a few keyboard/mouse clicks, without ever having to open a book. It's the enabler of the New World Order.

        • pyromaniac2749
        • 4 years ago

        I’m pretty sure that the extra $7-8 for electricity each year is affordable if you can afford a mid/high-range card in the first place. Where it really makes a difference is for mobile gamers. That extra 121W is probably a good extra hour of battery life that you are losing. 🙂

          • auxy
          • 4 years ago

          Not that Hawaii/Grenada or a real GM204 will ever appear in a laptop…

    • Milo Burke
    • 4 years ago

    Maybe you’ll give us a nice link from the comments page back to the original article?

    Preferably a blue one. I like those ones the most.

      • UberGerbil
      • 4 years ago

      As always, clicking on the image thumbnail takes you to the article. It’s worked that way forever.

        • Milo Burke
        • 4 years ago

        Yes, but he normally puts a link in there as well that is more immediately obvious.

        Exhibit A: [url<]https://techreport.com/discussion/28513/amd-radeon-r9-fury-x-graphics-card-reviewed#metal[/url<]
        Exhibit B: [url<]https://techreport.com/discussion/28499/amd-radeon-fury-x-architecture-revealed#metal[/url<]
        Exhibit C: [url<]https://techreport.com/discussion/28356/nvidia-geforce-gtx-980-ti-graphics-card-reviewed#metal[/url<]

        • cobalt
        • 4 years ago

        I actually just figured that out today, and wondered if it was always like that! (I don’t remember ever noticing a comments page without a link.)

          • UberGerbil
          • 4 years ago

          A comment page without an explicit link used to be common, maybe even the norm; I remember figuring out the picture link thing… back in 2006 maybe? Pre-2010, anyway.

        • NeelyCam
        • 4 years ago

        Oh, you’re right! That’s fantastic – thanks for the tip!

        I’ve been quietly grumbling about those links missing in some of the recent articles, having to go to the TR main page to find the link.

    • southrncomfortjm
    • 4 years ago

    I think that at this very moment, if I had a $1500 budget (which seems reasonable for a solid gaming machine), I’d go with a GTX 980 and then tack on a solid ultra-wide 1440p G-Sync monitor.

    Really wish AMD could find the resources to definitively pull ahead, or at least pull even, at a better price point. Would love to go back to Team Red. I’m even willing to put up with a bit more power usage and heat to go with AMD.

    Also, nice work Damage. You are a titan of the tech world.

      • travbrad
      • 4 years ago

      [quote<]Really wish AMD could find the resources to definitely pull ahead, or at least even, at a better price point.[/quote<] I wonder how much of the cost is because of the memory on these cards, with HBM on the Fury cards, and the 8GB of memory on the 390 cards. A slightly cheaper 390 with 4GB of memory would probably be a clear win for AMD, and I doubt there would be any difference at all in performance compared to the 8GB version.

    • jra101
    • 4 years ago

    No runs of the Beyond3D GPU architecture suite?

      • Milo Burke
      • 4 years ago

      And no benchmarks for Sim Tower!!

        • TwoEars
        • 4 years ago

        Minecraft is where it’s at.

          • the
          • 4 years ago

          As weird as it sounds, I’ve seen some Minecraft tools (not the main game) bring a 12 core system to its knees with every CPU loaded to 100%. Despite its simplistic look, that game can produce massive amounts of data for its world.

        • NoOne ButMe
        • 4 years ago

        Lol! I still play that from time to time lol!

    • geekl33tgamer
    • 4 years ago

    Damage, great article as always.

    I’m having difficulty getting my head around the performance numbers, though. At face value, AMD have not cut too many resources from the chip relative to the Fury X, but its TDP is practically the same.

    That’s all well and good when the Fury X’s 275W is going against the 250W GTX 980Ti, but this card is competing below a 170W GTX 980 for the most part.

    It’s a poor value, both in terms of raw frame rates and in terms of the frame times that AMD have always had a more noticeable issue with. Are there underlying issues with the architecture and/or drivers that need to be ironed out? With all that resource on board, even with its modest trimmings, it does not perform very well on all fronts.

    Edit: Corrected a spelling mistake.

      • Damage
      • 4 years ago

      One possible answer to the question of why Fiji (in both Fury and X forms) seems to underperform is something one of the other commenters has alluded to. The front end of Fiji looks very much like that of Tonga or Hawaii; it just has more CUs per cluster than Hawaii. It’s possible the front end of the GPU is a bottleneck in many games, which could explain Fiji’s similar performance to Hawaii, despite all the extra resources elsewhere.

      If that’s the case, and if the issue is just not “feeding the beast” quickly enough (and not just rasterization rates), then it’s possible that the coming shift to DX12 and Vulkan could be a big boon to AMD’s GCN-based GPUs. They may then be able to use their eight ACE engines to schedule lots of work in parallel and keep those big shader arrays active. Doing so could lead to a surprising turnaround in relative GPU performance.

      I also expect DX12 and Vulkan to lead to much lower frame times from games generally thanks mostly to a reduction in serialization and single-thread CPU overhead. This development could help AMD more than Nvidia–in part because AMD needs the help more, and in part because of GCN’s dormant ACE engines. Also, these “thin” APIs will move a lot of control back to game developers, taking the ability to optimize things behind the scenes out of the hands of the GPU driver guys–at least in theory. Fascinated to see how that plays out.

        • BobbinThreadbare
        • 4 years ago

        Great answer Damage.

        Interesting that the mantle version of Battlefield 4 is not doing that for them so far.

          • Damage
          • 4 years ago

          Well, something in the BF4 app and/or drivers isn’t optimized for Fiji, since D3D is outright faster than Mantle on the Fury cards. But Mantle is a legacy product now.

            • NoOne ButMe
            • 4 years ago

            Mantle isn’t legacy. It’s more of a “Mantle is dead, long live Mantle” as they’re using it as the base to push whatever they feel like they should push.

            LiquidVR is an extension of Mantle, for example.

          • geekl33tgamer
          • 4 years ago

          I remember reading that Mantle was no longer being worked on? I suspect there’s no optimizations in the code for Fiji.

            • derFunkenstein
            • 4 years ago

            [url<]http://www.pcgamer.com/amd-halts-mantle-optimizations-for-current-and-future-graphics-cards/[/url<] Was reading this yesterday, and kinda thought "well we already knew that".

        • Melvar
        • 4 years ago

        [quote<]Also, these "thin" APIs will move a lot of control back to game developers, taking the ability to optimize things behind the scenes out of the hands of the GPU driver guys--at least in theory.[/quote<] I remember the days before GPU drivers were optimized for individual games. I don't recall game developers having done more optimization before then, so much as a lot of games were just never being optimized by anybody. I think DX12 will increase the quality gap between good and crappy games.

          • HisDivineOrder
          • 4 years ago

          Hopefully, the “crappy games” developer will stick with DX11 that is still meant to be used and supported while the “good games” developer will move on over to DX12 where they should have more latitude to really use the hardware.

          • Ninjitsu
          • 4 years ago

          DX12 will still retain the “traditional” DX abstraction layer for compatibility, so I guess smaller games will continue using Feature Level 11 etc.

          • UberGerbil
          • 4 years ago

          But in those days there was less use of 3rd party engines and libraries, which tend to be written by the better developers and tend to be optimized (to the extent they can be with earlier APIs). Just a handful of engines are used in a lot of games, so if those are optimized a lot of games will be also.

        • geekl33tgamer
        • 4 years ago

        Thank you for such an insightful reply. The bottlenecking would certainly make a lot of sense; I just refreshed my memory with some die shot comparisons. Somehow I missed that the ROP counts are identical.

        There’s no denying it’s a very forward-thinking architecture, but isn’t that approach a bit risky? I applaud AMD for the move, but it’s reliant on something we don’t have yet. They did this with their FX CPUs too. On paper, the CMT architecture could really shine with the right workload. Unfortunately, everyone wrote code that leaned predominantly on heavy floating-point calculations (especially games), because that’s what had always been done before it. AMD’s efforts were never enough to force the industry’s hand, and because that CPU architecture lacked dedicated FPUs per core like Intel’s, they lost out. Shared resources attached to deep pipelines didn’t work.

        I digress slightly, but this GPU architecture just feels the same somehow. There’s a lot of assumption that the way software is written will change so it can fully leverage all that performance. I worry that the changes needed may never happen, and that DirectX 12 alone may not be the silver bullet that saves the day.

          • Milo Burke
          • 4 years ago

          Windows 8 didn’t save the day for Billdozer. We’re jaded because we’ve heard this before.

            • geekl33tgamer
            • 4 years ago

            True, but it wasn’t the fault of Windows anyway. Go take a peek at how CMT works and why the shared FPU’s were a bad idea in the Bulldozer design.

            But yes, it’s classic AMD and things we have sadly all heard before. 🙁

            • NoOne ButMe
            • 4 years ago

            Windows 8 also didn’t have a massive API change for CPUs, because, well. That doesn’t make much sense, does it.

            It also didn’t technically make sense as having a reason to be able to save the day… Windows 10 with DX12 has technical merit for “saving the day”.

        • TheSeekingOne
        • 4 years ago

        It’s simple! Pretty much all the games in your test suite are games that AMD haven’t optimized their drivers for. I think they’re busy working on their DX12 and Vulkan drivers, or maybe something else. In FarCry 4, which is arguably the best looking game in your suite, the FuryX does pretty well, so I don’t think it’s an issue related to the engineering of the chip.

          • f0d
          • 4 years ago

          most of the games are actually amd sponsored games

          -alien isolation
          -battlefield 4
          -crysis 3
          -civilization beyond earth
          are all amd sponsored games and 2 of them are mantle games which gives amd an even higher advantage

          if they cant optimize their drivers for their own sponsored games then thats a problem with amd not anyone else

          (in no way am i saying TR is biased by choosing mostly amd games – im just pointing out that amd’s drivers problem is amd problem – not anybody elses)

        • Behrouz
        • 4 years ago

        May I ask why you tested with Catalyst 15.15 and not 15.7?

          • Ninjitsu
          • 4 years ago

          He’s explained in the review, go read.

          • NoOne ButMe
          • 4 years ago

          Explained on the first page, bub.

          • Damage
          • 4 years ago

          Ack, I *did* test the R9 Fury with Cat 15.7. The Fury X and 390/X cards were tested with 15.15.

          I just realized the table in the testing methods page said 15.15 for the Strix R9 Fury. My bad. I’ve corrected it.

        • Ninjitsu
        • 4 years ago

        Well, there won’t be that much of a turn-around in GPU performance – Pascal will obviously be designed with DX12 and Vulkan in mind, so it’s pretty much going to be neck and neck (I hope, otherwise AMD’s in trouble).

        Of course, existing cards may see the turn-around you speak of, though AMD won’t get fresh sales from that (because this event is about a year away).

        • kuttan
        • 4 years ago

        @ damage Yes just what I thought. +1

        • nanoflower
        • 4 years ago

        If it happens I still don’t believe it will make a difference for these products. That’s because we are probably looking at two years before the majority of new games use DX12. In the interim we will see some support it but the majority will still focus on DX9/10/11 because of the installed base of OS (not everyone is going to move to Win10 ) and hardware (that may get support for DX12 but won’t be able to use it effectively because the hardware was designed for DX9/10/11.) That gives Nvidia plenty of time to react to whatever changes DX12 brings in the interaction between games and GPUs.

        • GrimDanfango
        • 4 years ago

        A big ‘boon?
        [url<]http://c2.staticflickr.com/6/5005/5340405056_e634ccb48a_b.jpg[/url<]

      • ronch
      • 4 years ago

      “Are there underlying issues with the architecture and/or drivers that need to be ironed out”

      AMD’s always had to strap on two more cylinders, a variable geometry turbo, and run 2,000RPM higher to match Nvidia’s performance.

      Drivers? Of course there are driver issues. There are always driver issues with AMD. 😛

    • chuckula
    • 4 years ago

    [quote<]Also, because some of you expressed a desire to see more testing at lower resolutions, we tested both The Witcher 3 and Project Cars at 2560x1440. Doing so made sense because we had trouble getting smooth, playable performance from all the cards in 4K, anyhow.[/quote<] Thanks Damage! Additionally, all facetiousness about your chemical intake aside, THANK YOU for putting in the hard work to get this review done.

      • Prestige Worldwide
      • 4 years ago

      Thank you very much for testing at lower res for the 99% of us who do not play at 4K.

        • TwoEars
        • 4 years ago

        Yes – 2560×1440 testing in Witcher 3. That’s all I need right there.

        • Ninjitsu
        • 4 years ago

        Yup! Thanks a lot!

        • wimpishsundew
        • 4 years ago

        According to Steam, it’s 99.9%

        • anotherengineer
        • 4 years ago

        And for the 98.89% who don’t game at 2560×1440 also!!

        Well according to steam anyway……………………………

    • DPete27
    • 4 years ago

    [quote<]Asus has done heroic work creating a cooler [b<]than[/b<] will keep this thing relatively quiet[/quote<] Last paragraph of the conclusion. Doh! Maybe the meth I.V. ran dry.

      • Damage
      • 4 years ago

      Doh. Fixed.

        • pranav0091
        • 4 years ago

        Hey Scott, I’ve had a little complaint for some time now. Can you make those coloured lines in the chart index a little thicker? They’re just about a pixel wide on my 1080p display, and I can hardly make out the colours between the 970 and the 780 Ti (in the index/legend, not in the plots themselves).

        Thanks 🙂

    • chuckula
    • 4 years ago

    [quote<]We saw substantially smoother gaming across our suite of eight test scenarios out of the GeForce GTX 980 than we did from the Radeon R9 Fury. This outcome will be nothing new for those folks who have been paying attention. [/quote<]

    That's AMD's biggest issue right there. Raw FPS aren't really an issue, especially when we remove the GTX-980Ti from the comparison, but it's the frametime percentiles that are the kicker.

    In a broader sense, AMD is probably not too pleased that we are having a conversation about a slightly cut-down version of their flagship chip vs. the 2014-era GM204 parts. Just remember, Fury was [i<]supposed[/i<] to be the $650 part that would fight against the GTX-980Ti while Fury-X was supposed to be an $800 part that would embarrass the Titan X.

      • Prestige Worldwide
      • 4 years ago

      Indeed. I think that’s why AMD named these cards the Fury line.

      They are furious that their top of the line chips are just keeping up with their competition’s mid-range chips.

    • I.S.T.
    • 4 years ago

    [quote<]Hallucinations are possible and may involve [b<]large, red tentacles reaching forth[/b<] from a GPU cooler.[/quote<] Sounds like my idea of a good time. ( ͡° ͜ʖ ͡°)

    • chuckula
    • 4 years ago

    [quote<] I'm mainlining an I.V. drip consisting of Brazilian coffee, vitamin B, and methamphetamine just to bring you these words. [/quote<] [url=https://www.youtube.com/watch?v=zL8wjDftejc<]Looks like Damage picked the wrong week to quit![/url<]

      • Gyromancer
      • 4 years ago

      I loved that running joke from Airplane!

    • geekl33tgamer
    • 4 years ago

    All that extra resource, and it can only tie a GTX 980. It’s also barely faster than the 390X in most of the game tests.

    How did that happen?

      • guardianl
      • 4 years ago

      I wrote a comment for realworldtech about this, the summary bit is:

      “Basically we’re looking at an ~20% reduction in compute/texturing between the two cards, but at 4k resolutions the performance difference is consistently close to the 5% clockspeed difference. Given that the performance of Fury scales better with higher resolutions I think we can still discount ROPs as the primary bottleneck. Pretty much confirms that the front end is the bottleneck and that the design is highly unbalanced for current games.”

      [url<]http://www.realworldtech.com/forum/?threadid=151205&curpostid=151205[/url<]
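
      For what it's worth, that ratio is easy to sanity-check from the published specs (3584 vs. 4096 shaders, 1000 vs. 1050 MHz, identical ROP counts). A quick Python sketch, with the ~20% figure rounding up a little; the dictionaries and variable names are just mine for illustration:

      # Throughput ratios between the Fury and Fury X, from published specs.
      fury   = {"shaders": 3584, "clock_mhz": 1000, "rops": 64}
      fury_x = {"shaders": 4096, "clock_mhz": 1050, "rops": 64}

      shader_ratio = (fury["shaders"] * fury["clock_mhz"]) / (fury_x["shaders"] * fury_x["clock_mhz"])
      rop_ratio    = (fury["rops"] * fury["clock_mhz"]) / (fury_x["rops"] * fury_x["clock_mhz"])

      print(f"shader/texture throughput deficit: {1 - shader_ratio:.1%}")  # ~16.7%
      print(f"ROP/front-end rate deficit:        {1 - rop_ratio:.1%}")     # ~4.8% (clock only)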

        • jihadjoe
        • 4 years ago

        I recall David Kanter himself quoted your explanation in the last podcast and said you were probably right. Thumbs up!

        • travbrad
        • 4 years ago

        [quote<]the design is highly unbalanced for current games[/quote<] Sounds a lot like Bulldozer when it was first released. At least in that case you could consider that there are other workloads for a CPU than gaming though, whereas with a GPU gaming is what it's all about. Interesting read though, thanks for posting.

      • Freon
      • 4 years ago

      [url<]https://dl.dropboxusercontent.com/u/27624835/misc/aliens.jpg[/url<] ??

      • torquer
      • 4 years ago

      One of the great things about AMD is that they are very forward thinking with their chip designs. One of the terrible things about AMD is that they have often looked forward and been wrong. The lifespan of a GPU design isn’t long enough, I don’t think, for AMD to be vindicated on these design decisions.

      Props to them for bringing HBM to market first. By luck or calculation, Nvidia has an opportunity to capitalize on HBM 2.0 and a die shrink to realize all of the performance benefits AMD is now missing out on. Time will tell.

        • NoOne ButMe
        • 4 years ago

        AMD hasn’t been wrong with graphics for quite a while. I don’t think they’ll be proven wrong here, but, well. Consumers should buy on what it is NOW. Not what the future holds.

          • torquer
          • 4 years ago

            I suppose it depends on your definition of “wrong.” AMD has pretty competitive products, but it’s been rare in the last several years for them to really jump ahead of Nvidia from an engineering perspective. Their cards often end up being hotter, louder, and much more power hungry to match far more efficient Nvidia chip designs. When I say “wrong,” I simply mean that their approaches have seemed extremely promising on paper, with very forward-looking designs, but that those generational leaps in performance have not really materialized the same way that they have for Nvidia. And speaking in broader terms, AMD’s CPU designs have followed a similar model ever since the end of the Phenom architecture.

            • NoOne ButMe
            • 4 years ago

            Huh? An engineering perspective? They match Nvidia in games with the same or better perf/mm^2 while having a vastly better architecture outside of games. AMD has made better hardware than Nvidia for years. Maxwell is the first architecture since before Tesla that really beats AMD’s engineering, and even that will be debatable depending on whether AMD gets a larger jump from DX12 than Nvidia. Stripping out all the compute, like Kepler did, is something AMD could have done in place of GCN. They didn’t. Maxwell is the marvel, as it raised performance/clock and lowered power at the same time. :D On top of what Kepler did.

            Nvidia’s better at software and marketing/branding/all of that. Engineering? The final products are rarely as impressive as AMD’s.

            AMD also has a chance to take the perf/watt crown back soonish. Now, you can say it is all due to HBM, but AMD measures its 512b x 8GBps bus at over 80 watts of power. The 980 Ti uses over 80W for 384 x 7GBps. Same for the Titan Black. Want to take a guess at who has spent more on GDDR5 memory bus technology? I’m guessing Nvidia. Now, total on GDDR5 including the invention… Surely AMD.

            • torquer
            • 4 years ago

            You should try to separate emotional attachment to AMD from fact. AMD doesn’t make bad products, but to match Nvidia’s performance for the last couple of generations they have had to make some very large and hot cards. Why is that bad? Large GPUs are expensive and difficult to manufacture, cutting into profits. Less profits means less investment in R&D. Less investment in R&D means less engineering resources. You can take it from there.

            The very points you make just underscore what I said. AMD has bet on certain things and while in their own right and in an academic sense it is cool and interesting technology, it has not panned out for them in the only ways that matter – profit and market share. Nvidia is just better at the business of making GPUs and their bottom line shows it. As is typical of ANY comment thread on ANY story involving AMD or Nvidia, people conveniently forget that this is a business and the rules of business apply.

            Unfortunately for AMD, making cool designs that just bring parity with the competition doesn’t pay as well as making the right architecture calls at the right time.

            • AnotherReader
            • 4 years ago

            I’m not sure the story is as simple as NVidia making the right choices in each generation. Maxwell is a technical marvel, but for most of the time, NVidia’s superior software engineering and developer relations have balanced their inferior physical design. At times, they have also had the superior architecture. Let’s examine the gaming GPUs at the last two nodes: 40 nm and 28 nm.

            40 nm Node:
            1) Cypress (VLIW 5) vs no show: winner VLIW 5
            2) Cypress vs early Fermi: Fermi faster with 100 W more power and lower yields. Die size in square mm is 334 for VLIW 5 and 529 for GTX 480
            3) Cayman (VLIW 4) vs refined Fermi: Fermi is 15% faster with 50W more power consumed. Considering the power consumption difference, it is a tie. Die size is 389 vs 520

            40 nm verdict: AMD early leader, NVidia catches up by the time 40 nm is mature

            28 nm node:
            1) Tahiti vs Kepler: GTX 680 is faster and consumes less power by about 30 W. Die size is 352 vs 294 so Kepler is better from that perspective as well. Today Tahiti > Kepler
            2) Hawaii vs Kepler: approximately the same speed and power consumption. Die size is 438 vs 561 so Hawaii is the more profitable GPU
            3) Hawaii vs Maxwell (GTX 980): Maxwell is the winner in all ways. Performance is 10 to 15% higher with 100 W less power consumption and a smaller die to boot: 438 vs 398. Maxwell is the winner
            4) Fiji vs Maxwell (GTX Titan X and 980 Ti): Maxwell wins, but the margin is as close as Fermi vs Cayman. It seems clear that the ratio of geometry to shaders is optimal for Maxwell, but not so for Fiji. Maxwell still leads on power consumption, but the gap isn’t as great as it was.

            28 nm verdict: NVidia and AMD close at start with NVidia pulling away at the end. Will there be a twist in the tale with DX12? Only time will tell.

            • NoOne ButMe
            • 4 years ago

            Who brought business into it? I sure didn’t. AMD makes better hardware when looking at the GPU itself. They’ve failed in practically every other area in making everything you need for a good GPU compared to Nvidia.

            I would never sanely argue that AMD is better at business than Nvidia.

            I’m saying that GCN is 100% a better architecture than Kepler, and about on par with Maxwell, if not slightly ahead, looking at the actual silicon itself.

            And, AMD has a pretty long history of being ahead (in silicon) given how long “true” GPUs have been around.

      • K-L-Waster
      • 4 years ago

      My guess – Damage is right, and a lot of it is inefficient drivers.

        • geekl33tgamer
        • 4 years ago

        I suspect that’s a big part of it too, but then why release it? We’re in a position (again) where an AMD product that’s great on paper isn’t looking as good as it could at launch because of software.

        It makes my head hurt to think that, after all this time, AMD still have not realised that it’s equally important to get it right on day one.

          • K-L-Waster
          • 4 years ago

          Oh I agree – as I’ve said before: Hardware + Software = Product. They seem to do well on the hardware part of the equation, but the software is an after thought.

          On the bright side, the software at least is more flexible when it comes to fixing the problems.

            • geekl33tgamer
            • 4 years ago

            Totally agree, but then how long do you wait? I mean, I have *that* thread that’s famous on this site for the mega “Piledriver Beast” system I built.

            I ended up selling its R9 290Xs 18 months into their product cycle, because even now they are still fixing software issues for that architecture, for things promised easily a year ago. It pained me to do it, but I switched to GeForces.

            Sad to say, they honestly have not given me any issues from a software standpoint. Gamers know this – look at the market share split on Steam between AMD and Nvidia – The AMD piece of that pie chart is getting smaller and smaller.

            These cards today won’t help that anymore than the 290X did 2 years ago. Props for HBM, but my gut feeling is their competition will come along with something using it sooner or later and best them at their own game (again) in every…possible…way. 🙁

            • Milo Burke
            • 4 years ago

            I think it’s less a matter of AMD waiting a week to release a product and more about which department AMD needs to double their staffing in.

            • geekl33tgamer
            • 4 years ago

            Yeah – I bet their driver guy would like a colleague to join his, um, team.

            • HisDivineOrder
            • 4 years ago

            The poor guy. He rarely gets any vacation time nowadays.

            • the
            • 4 years ago

            Funny, he should be on summer vacation now that school is out.

          • the
          • 4 years ago

          The weird thing is that this review used three different drivers across different AMD cards. I think the guy who is responsible for card naming schema is pulling double duty by assigning driver version numbers. Everything is a complete mess requiring a secret decoder ring to figure everything out.
