AMD’s Radeon R9 Fury X graphics card reviewed

The Fury X is here. At long last, after lots of hype, we can show you how AMD’s new high-end GPU performs aboard the firm’s snazzy new liquid-cooled graphics card. We’ve tested in a range of games using our famous frame-time-based metrics, and we have a full set of results to share with you. Let’s get to it.

A brief stop in Fiji

Over the past several weeks, almost everything most folks would want to know about the new Radeon GPU has become public knowledge—except for how it performs. If you’ve somehow missed out on this info, let’s take a moment to summarize. At the heart of the Radeon R9 Fury X are two new core technologies: the Fiji graphics processor and a new type of memory known as High Bandwidth Memory (HBM).

The Fiji GPU is AMD’s first new top-end GPU in nearly two years, and it’s the largest chip in a family of products based on the GCN architecture that stretches back to 2011. Even the Xbox One and PS4 are based on GCN, although Fiji is an evolved version of that technology built on a whole heck of a lot larger scale. Here are its vitals compared to the biggest PC GPUs, including the Hawaii chip from the Radeon R9 290X and the GM200 from the GeForce GTX 980 Ti.

| | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GM200 | 96 | 192/192 | 3072 | 6 | 384 | 8000 | 601 | 28 nm |
| Hawaii | 64 | 176/88 | 2816 | 4 | 512 | 6200 | 438 | 28 nm |
| Fiji | 64 | 256/128 | 4096 | 4 | 4096 | 8900 | 596 | 28 nm |

Yep, Fiji’s shader array has a massive 4096 ALU lanes or “shader processors,” more than any other GPU to date. To give you some context for these numbers, once you factor in clock speeds, the Radeon R9 Fury X has seven times the shader processing power of the Xbox One and over seven times the memory bandwidth. Even a block diagram of Fiji looks daunting.
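If you want to check the arithmetic on those shader numbers yourself, here is a quick back-of-the-envelope sketch in Python. It assumes the Fury X’s 1050MHz clock (listed later in this review) and the usual two flops per ALU lane per clock for a fused multiply-add; it’s illustrative only, not anything we used for testing.

```python
# Back-of-the-envelope peak shader throughput for Fiji aboard the Fury X.
# Each GCN ALU lane can retire one fused multiply-add (2 flops) per clock.
shader_lanes = 4096
boost_clock_hz = 1050e6

peak_tflops = shader_lanes * 2 * boost_clock_hz / 1e12
print(f"Peak shader arithmetic: {peak_tflops:.1f} TFLOPS")  # ~8.6
```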


A simplified block diagram of a Fiji GPU. Source: AMD.

In many respects, Fiji is just what you see above: a larger implementation of the same GCN architecture that we’ve known for several years. AMD has made some important improvements under the covers, though. Notably, Fiji inherits a delta-based color compression facility from last year’s Tonga chip. This feature should allow the GPU to use its memory bandwidth and capacity more efficiently than older GPUs like Hawaii. Many of the other changes in Fiji are meant to reduce power consumption. A feature called voltage-adaptive operation, first used in AMD’s Kaveri and Carrizo APUs, should allow the chip to run at lower voltages, reducing power draw. New methods for selecting voltage and clock speed combinations and switching between those different modes should make Fiji more efficient than older GCN graphics chips, too. (For more info on Fiji’s graphics architecture, be sure to read my separate article on the subject.)

This combination of increased scale and reduced power consumption allows Fiji to cram about 45% more processing power into roughly the same power envelope as Hawaii before it. Yet even that fact isn’t Fiji’s most notable attribute. Instead, Fiji’s signature innovation is HBM, the first new type of high-end graphics memory introduced in seven years. HBM takes advantage of a chip-building technique known as stacking, in which multiple silicon dies are piled on top of one another to improve bit density. We’ve seen stacking deployed in the flash memory used in SSDs, but HBM is perhaps even more ambitious. And Fiji is the first commercial implementation of this tech.


A simplified illustration of an HBM solution. Source: AMD.

The Fiji GPU sits atop a piece of silicon, known as an interposer, along with four stacks of HBM memory. The individual memory chips run at a relatively sedate speed of 500MHz in order to save power, but each stack has an extra-wide 1024-bit connection to the outside world in order to provide lots of bandwidth. This “wide and slow” setup works because the GPU and memory get to talk to one another over the silicon interposer, which is the next best thing to having memory integrated on-chip.

With four HBM stacks, Fiji effectively has a 4096-bit-wide path to memory. That memory transfers data at a rate of 1 Gbps per pin, yielding a heart-stopping total of 512 GB/s of bandwidth. The Fury X’s closest competitor, the GeForce GTX 980 Ti, tops out at 336 GB/s, so the new Radeon represents a substantial advance.
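The math behind that total is simple enough. Here’s a quick sketch, just to show where the 512 GB/s figure comes from:

```python
# Aggregate HBM bandwidth: four stacks, each with a 1024-bit interface
# transferring 1 Gbps per pin.
stacks = 4
bits_per_stack = 1024
gbps_per_pin = 1.0

bandwidth_gbs = stacks * bits_per_stack * gbps_per_pin / 8  # bits -> bytes
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # 512
```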

HBM also saves power, both on the DRAM side and in the GPU’s memory control logic, and it enables an entire GPU-plus-memory solution to fit into a much smaller physical footprint. Fiji with HBM requires about one-third the space of Hawaii and its GDDR5, as illustrated above.

This very first implementation of HBM does come with one potential drawback: it’s limited to a total of 4GB of memory capacity. Today’s high-end cards, including the R9 Fury X, are marketed heavily for use with 4K displays. That 4GB capacity limit could perhaps lead to performance issues, especially at very high resolutions. AMD doesn’t seem to think it will be a problem, though, and, well, you’ll see our first round of performance numbers shortly.

The Radeon R9 Fury X card

Frankly, I think most discussions of the physical aspects of a graphics card are horribly boring compared to the GPU architecture stuff. I’ll make an exception for the Fury X, though, because this card truly is different from the usual fare in some pretty dramatic ways.

| | GPU boost clock | Shader processors | Memory config | PCIe aux power | Peak power draw | E-tail price |
| --- | --- | --- | --- | --- | --- | --- |
| Radeon R9 Fury X | 1050 MHz | 4096 | 4 GB HBM | 2 x 8-pin | 275W | $649.99 |

The differences start with the card itself, which is a stubby 7.7″ long and has an umbilical cord running out of its belly toward an external water cooler. You can expect this distinctive layout from all Fury X cards, because AMD has imposed tight controls for this product. Board makers won’t be free to tweak clock speeds or to supply custom cooling for the Fury X.

Instead, custom cards will be the domain of the vanilla Radeon R9 Fury, due in mid-July at prices starting around $550. The R9 Fury’s GPU resources will be trimmed somewhat compared to the Fury X, and customized boards and cooling will be the norm for it. AMD tells us to expect some liquid-cooled versions of the Fury and others with conventional air coolers.

Few of those cards are likely to outshine the Fury X, though, because video card components don’t get much more premium than these. The cooling shroud’s frame is encased in nickel-plated chrome, and the black surfaces are aluminum plates with a soft-touch coating. The largest of these plates, covering the top of the card in the picture above, can be removed with the extraction of four small hex screws. AMD hopes end-users will experiment with creating custom tops via 3D printing.

I’m now wondering if that liquid cooler could also keep a beer chilled if I printed a cup-holder attachment. Hmm.

The Fury X’s array of outputs is relatively spartan, with three DisplayPort 1.2 outputs and only a single HDMI 1.4 port. HDMI 2.0 support is absent, which means the Fury X won’t be able to drive most cheap 4K TVs at 60Hz. You’re stuck with DisplayPort if you want to do proper 4K gaming. Also missing, though perhaps less notable, is a DVI port. That omission may sting a little for folks who own big DVI displays, but DisplayPort-to-DVI adapters are pretty widely available. AMD is sending a message with this choice of outputs: the Fury X is about gaming in 4K, with FreeSync at high refresh rates, and on multiple monitors. In fact, this card can drive as many as six displays with the help of a DisplayPort hub.

Here’s a look beneath the shroud. The Fury X’s liquid cooler is made by Cooler Master, as the logo atop the water block proclaims. This block sits above the GPU and the HBM stacks, pulling heat from all of the chips.

AMD’s decision to make liquid cooling the stock solution on the Fury X is intriguing. According to Graphics CTO Raja Koduri, the firm found that consumers want liquid cooling, as evidenced by the fact that they often wind up paying extra for aftermarket kits. This cooler does seem like a nice inclusion, something that enhances the Fury X’s value, provided that the end user has an open emplacement in his or her case for a 120-mm fan and radiator. Sadly, I don’t think the new Damagebox has room for another radiator, since I already have one installed for the CPU.

The cooler in the Fury X is tuned to keep the GPU at a frosty 52°C, well below the 80-90°C range we’re used to seeing from stock coolers. The card is still very quiet in active use despite the aggressive temperature tuning, probably because the cooler is rated to remove up to 500W of heat. Those chilly temps aren’t just for fun, though. At this lower operating temperature, the Fiji GPU’s transistors shouldn’t be as leaky. The chip should convert less power into heat, thus improving the card’s overall efficiency. The liquid cooler probably also helps alleviate power density issues, which may have been the cause of the R9 290X’s teething problems with AMD’s reference air coolers.

That beefy cooler should help with overclocking, of course, and the Fury X’s power delivery circuitry has plenty of built-in headroom, too. The card’s six-phase power can supply up to 400 amps, well above the 200-250 amps that the firm says is needed for regular operation. The hard limit in the BIOS for GPU power is 300W, which adds up to 375W of total board power draw. That’s 100W beyond the Fury X’s default limit of 275W.
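For reference, one way to arrive at that same 375W ceiling is from the power connectors. A quick sketch, assuming the standard PCIe budgets of 75W from the slot and 150W per 8-pin plug:

```python
# Maximum board power available to the Fury X from its power connectors.
slot_power_w = 75        # PCIe x16 slot
eight_pin_w = 150        # per 8-pin auxiliary connector

max_board_power_w = slot_power_w + 2 * eight_pin_w
print(max_board_power_w)  # 375W, or 100W above the card's 275W default limit
```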

To better facilitate overclocking, the Catalyst Control Center now exposes separate sliders for the GPU’s clock speed, power limit, temperature, and maximum fan speed. Users can direct AMD’s PowerTune algorithm to seek the mix of acoustics and performance they prefer.

Despite its many virtues, our Fury X review unit does have one rather obvious drawback. Whenever it’s powered on, whether busy or idle, the card emits a constant, high-pitched whine. It’s not the usual burble of pump noise, the whoosh of a fan, or the irregular chatter of coil whine—just an unceasing squeal like an old CRT display might emit. The noise isn’t loud enough to register on our sound level meter, but it is easy enough to hear. The sound comes from the card proper, not from the radiator or fan. An informal survey of other reviewers suggests our card may not be alone in emitting this noise. I asked AMD about this matter, and they issued this statement:

AMD received feedback that during open bench testing some cards emit a mild “whining” noise. This is normal for most high speed liquid cooling pumps; Usually the end user cannot hear the noise as the pumps are installed in the chassis, and the radiator fan is louder than the pump. Since the AMD Radeon R9 FuryX radiator fan is near silent, this pump noise is more noticeable.

The issue is limited to a very small batch of initial production samples and we have worked with the manufacturer to improve the acoustic profile of the pump. This problem has been resolved and a fix added to production parts and is not an issue.

That’s reassuring—I think. I’ve asked AMD to send us a production sample so we can verify that retail units don’t generate this noise.

Fury X cards have one more bit of bling that’s not apparent in the pictures above: die blinkenlights. Specifically, the Radeon logo atop the cooler glows deep red. (The picture above lies. It’s stoplight red, honest.) Also, a row of LEDs next to the power plugs serves as a GPU tachometer, indicating how busy the GPU happens to be.

These lights are red by default, but they can be adjusted via a pair of teeny-tiny DIP switches on the back of the card. The options are: red tach lights, blue tach lights, red and blue tach lights, and tach lights disabled. There’s also a green LED that indicates when the card has dropped into ZeroCore power mode, the power-saving mode activated when the display goes to sleep.

Speaking of going to sleep, that’s what I’m gonna do if we don’t move on to the performance results. Let’s do it.

Our testing methods

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
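For the curious, the smoothing step is nothing exotic. Here’s a minimal sketch of a three-frame moving average applied to Fraps-style frame times; it’s illustrative only, not the exact script we use for our analysis:

```python
# Smooth per-frame render times (in milliseconds) with a trailing
# three-frame moving average, rounded for readability.
def smooth_frame_times(frame_times_ms, window=3):
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(round(sum(chunk) / len(chunk), 1))
    return smoothed

print(smooth_frame_times([16.7, 16.7, 50.0, 16.7]))
# [16.7, 16.7, 27.8, 27.8] -- a single spike gets spread across the window
```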

We didn’t use Fraps with Civ: Beyond Earth or Battlefield 4. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor: Core i7-5960X
Motherboard: Gigabyte X99-UD5 WiFi
Chipset: Intel X99
Memory size: 16GB (4 DIMMs)
Memory type: Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings: 15-15-15-36 2T
Chipset drivers: INF update 10.0.20.0, Rapid Storage Technology Enterprise 13.1.0.1058
Audio: Integrated X79/ALC898 with Realtek 6.0.1.7246 drivers
Hard drive: Kingston SSDNow 310 960GB SATA
Power supply: Corsair AX850
OS: Windows 8.1 Pro

| Graphics card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| Asus Radeon R9 290X | Catalyst 15.4/15.5 betas | — | 1050 | 1350 | 4096 |
| Radeon R9 295 X2 | Catalyst 15.4/15.5 betas | — | 1018 | 1250 | 8192 |
| Radeon R9 Fury X | Catalyst 15.15 beta | — | 1050 | 500 | 4096 |
| GeForce GTX 780 Ti | GeForce 352.90 | 876 | 928 | 1750 | 3072 |
| Gigabyte GeForce GTX 980 | GeForce 352.90 | 1228 | 1329 | 1753 | 4096 |
| GeForce GTX 980 Ti | GeForce 352.90 | 1002 | 1076 | 1753 | 6144 |
| GeForce Titan X | GeForce 352.90 | 1002 | 1076 | 1753 | 12288 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Sizing ’em up

Do the math involving the clock speeds and per-clock potency of the latest high-end graphics cards, and you’ll end up with a comparative table that looks something like this:

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- |
| Asus R9 290X | 67 | 185/92 | 4.2 | 5.9 | 346 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 | 512 |
| GeForce GTX 780 Ti | 37 | 223/223 | 4.6 | 5.3 | 336 |
| Gigabyte GTX 980 | 85 | 170/170 | 5.3 | 5.4 | 224 |
| GeForce GTX 980 Ti | 95 | 189/189 | 6.5 | 6.1 | 336 |
| GeForce Titan X | 103 | 206/206 | 6.5 | 6.6 | 336 |

Those are the peak capabilities of each of these cards in theory. As I noted in my article on the Fiji GPU architecture, the Fury X is particularly strong in several departments, including memory bandwidth and shader rates, where it substantially outstrips both the R9 290X and the competing GeForce GTX 980 Ti. In other areas, the Fury X’s theoretical graphics rates haven’t budged compared to the 290X, including the pixel fill rate and rasterization. Those are also precisely the areas where the Fury X looks weakest compared to the competition. We are looking at a bit of asymmetrical warfare this time around, with AMD and Nvidia fielding vastly different mixes of GPU resources in similarly priced products.
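As a sanity check, the entries in the table above are just unit counts multiplied by clock speeds. Here’s a quick sketch reproducing the Fury X row from the Fiji specs listed earlier (64 ROPs, 256 texels filtered per clock, four triangles per clock, all at 1050MHz):

```python
# Fury X peak rates from per-clock unit counts and the 1050MHz clock.
clock_ghz = 1.05

print(f"Pixel fill:    {64 * clock_ghz:.0f} Gpixels/s")                         # 67
print(f"Texel filter:  {256 * clock_ghz:.0f}/{128 * clock_ghz:.0f} Gtexels/s")  # 269/134 int8/fp16
print(f"Rasterization: {4 * clock_ghz:.1f} Gtris/s")                            # 4.2
```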

Of course, those are just theoretical peak rates. Our fancy Beyond3D GPU architecture suite measures true delivered performance using a series of directed tests.

The Fiji GPU has the same 64 pixels per clock of ROP throughput as Hawaii before it, so these results shouldn’t come as a surprise. These numbers illustrate something noteworthy, though. Nvidia has grown the ROP counts substantially in its Maxwell-based GPUs, taking even the mid-range GM204 aboard the GTX 980 beyond what Hawaii and Fiji offer. Truth be told, both of the Radeons probably offer more than enough raw pixel fill rate. However, these results are a sort of proxy for other types of ROP power, like blending for multisampled anti-aliasing and Z/stencil work for shadowing, that can tax a GPU.

This bandwidth test measures GPU throughput using two different textures: an all-black surface that’s easily compressed and a random-colored texture that’s essentially incompressible. The Fury X’s results demonstrate several things of note.

The 16% delta between the black and random textures shows us that Fiji’s delta-based color compression does it some good, although evidently not as much good as the color compression does on the Maxwell-based GeForces.

Also, our understanding from past reviews was that the R9 290X was limited by ROP throughput in this test. Somehow, the Fury X speeds past the 290X despite having the same ROP count on paper. Hmm. Perhaps we were wrong about what limited the 290X. If so, then the 290X may have been bandwidth limited, after all—and Hawaii apparently has no texture compression of note. The question then becomes whether the Fury X is also bandwidth limited in this test, or if its performance is limited by the render back-end. Whatever the case, the Fury X “only” achieves 387 GB/s of throughput here, well below the 512 GB/s theoretical max of its HBM-infused memory subsystem. Ominously, the Fury X only leads the GTX 980 Ti by the slimmest of margins with the compressible black texture.

Fiji has a ton of texture filtering capacity on tap, especially for simpler formats. The Fury X falls behind the GTX 980 Ti when filtering texture formats that are 16 bits per color channel, though. That fact will matter more or less depending on the texture formats used by the game being run.

The Fury X achieves something close to its maximum theoretical rate in our polygon throughput test, at least when the polygons are presented in a list format. However, it still trails even the Kepler-based GeForce GTX 780 Ti, let alone the newer GeForces. Adding tessellation to the mix doesn’t help matters. The Fury X still manages just over half the throughput of the GTX 980 Ti in TessMark.

Fiji’s massive shader array is not to be denied. The Fury X crunches through its full 8.6 teraflops of theoretical peak performance in our ALU throughput test.

At the end of the day, the results from these directed tests largely confirm the major contrasts between the Fury X and the GeForce GTX 980 Ti. These two solutions have sharply divergent mixes of resources on tap, not just on paper but in terms of measurable throughput.

Project Cars
Project Cars is beautiful. I could race around Road America in a Formula C car for hours and be thoroughly entertained, too. In fact, that’s pretty much what I did in order to test these graphics cards.


Click the buttons above to cycle through the plots. What you’ll see are frame times from one of the three test runs we conducted for each card. You’ll notice that PC graphics cards don’t always produce smoothly flowing progressions of succeeding frames of animation, as the term “frames per second” would seem to suggest. Instead, the frame time distribution is a hairy, fickle beast that may vary widely. That’s why we capture rendering times for every frame of animation—so we can better understand the experience offered by each solution.

The Fury X’s bright red plot indicates consistently lower frame times than the R9 290X’s purple plot. The dual-GPU R9 295 X2 often produces even lower frame times than the Fury X, but there’s a nasty spike near the middle of the test. That’s a slowdown that you can feel while gaming in the form of a stutter. The Fury X avoids that fate, and playing Project Cars on it generally feels smooth as a result.

Unfortunately for the red team, the Fury X doesn’t crank out frames as quickly as the GeForce GTX 980 Ti. The 980 Ti produces more frames over the course of the test run, so naturally, its FPS average is higher.

| Frame time in milliseconds | FPS rate |
| --- | --- |
| 8.3 | 120 |
| 16.7 | 60 |
| 20 | 50 |
| 25 | 40 |
| 33.3 | 30 |
| 50 | 20 |

Higher averages aren’t always an indicator of smoother overall animation, though. Remember, we saw a big spike in the 295 X2’s plot. Even though its FPS average is higher than the Fury X’s, gameplay on the 295 X2 isn’t as consistently smooth. That’s why we prefer to supplement average FPS with another metric: 99th percentile frame time. This metric simply says “99% of all frames in this test run were produced in X milliseconds or less.” The lower that threshold, the better the general performance. In this frame-time-focused metric, the Fury X just matches the 295 X2, despite a lower FPS average.
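If you’re wondering how we boil a run’s worth of frame times down to those two numbers, here’s a minimal sketch. It uses a simple percentile estimate; our actual analysis scripts differ in the details.

```python
# Average FPS and 99th-percentile frame time from a list of frame times (ms).
def summarize(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    ordered = sorted(frame_times_ms)
    rank = max(round(0.99 * len(ordered)) - 1, 0)  # simple percentile estimate
    return avg_fps, ordered[rank]

# 97 smooth frames plus three stutters: the FPS average barely flinches,
# but the 99th-percentile frame time gives the stutters away.
avg, p99 = summarize([16.7] * 97 + [60.0, 70.0, 80.0])
print(f"{avg:.0f} FPS average, {p99:.0f} ms 99th percentile")  # 55 FPS, 70 ms
```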

Almost all of the cards handle this challenge pretty well, considering that we’re asking them to render in 4K at fairly high image quality settings. All but one of them manage a 99th percentile frame time of below 33 milliseconds. That means, on a per-frame basis, they perform at or above 30 FPS 99% of the time.

We can understand in-game animation fluidity even better by looking at the entire “tail” of the frame time distribution for each card, which illustrates what happens with the most difficult frames.


These curves show generally consistent performance from nearly all of the cards, with the lone exception of the Radeon R9 295 X2. That card struggles with the toughest three percent of frames, and as a result, the line for this dual-Hawaii card curves up to meet the one for the single-Hawaii 290X. These are the dangers of multi-GPU solutions.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
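If you want to reproduce this metric from a frame-time capture, here’s a minimal sketch that counts only the portion of each frame past the threshold:

```python
# Tally "time spent beyond X": only the excess past the threshold counts,
# so a single 70-ms frame adds 20 ms to the 50-ms bucket.
def time_beyond(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16.7, 55.0, 33.0, 70.0]
for limit in (50, 33.3, 16.7):
    print(f"beyond {limit} ms: {time_beyond(frames, limit):.1f} ms")
```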

The frame time plots have few big spikes in them, and the FPS averages here are well above 20. As a result, none of the cards spend any time beyond our 50-ms threshold. Even the 295 X2, which has one spike beyond 50 ms in the frame time plots, doesn’t repeat a slowdown of the same magnitude in the other two test runs. (These results are a median of three runs.)

The Fury X essentially spends no time beyond our 33-ms threshold, either. Like I said, it generally feels pretty good to play this game on it in 4K. Trouble is, the new Radeon falls behind three different GeForces, including the cheaper GTX 980, across a range of metrics. Perhaps the next game will be a different story.

The Witcher 3

Performance in this game has been the subject of some contention, so I tried to be judicious in selecting my test settings. I tested the older Radeons with the Catalyst 15.5 beta drivers here (and 15.15 on the Fury X), and all cards were tested with the latest 1.04 patch for the game. Following AMD’s recommendations for achieving good CrossFire performance, I set “EnableTemporalAA=false” in the game’s config file when testing the Radeon R9 295 X2. As you’ll see below, I also disabled Nvidia’s HairWorks entirely in order to avoid the associated performance pitfalls.


You can tell by the “fuzzy” frame-time plots that the Radeons struggle in this game. That’s particularly an issue when the frame time spikes get to be fairly high—into the 40-60-ms range. The Fury X trails the GTX 980 Ti in the FPS average results, but it falls even further behind in the 99th-percentile frame time metric. This outcome quantifies something you can feel during the test run: the animation hiccups and sputters much more than it should, especially in the early part of the test sequence.

The GeForce GTX 780 Ti struggles here, too. Since we tested, Nvidia has released new drivers that may improve the performance of Kepler-based cards like this one. My limited time with the Radeon Fury X has been very busy, however, so I wasn’t able to re-test the GTX 780 Ti with new drivers for this review.



In our “badness” metric, both the Fury X and the R9 290X spend about the same amount of time beyond the 50-ms threshold—not a ton, but enough that one can feel it. The fact that these two cards perform similarly here suggests the problem may be a software issue gated by CPU execution speed.

Despite those hiccups, the Fury X generally improves on the 290X’s performance, which is a reminder of the Fiji GPU’s tremendous potential.

GTA V

Forgive me for the massive number of screenshots below, but GTA V has a ton of image quality settings. I more or less cranked them all up in order to stress these high-end video cards. Truth be told, most or all of these cards can run GTA V quite fluidly at lower settings in 4K—and it still looks quite nice. You don’t need a $500+ graphics card to get solid performance from this game in 4K, not unless you push all the quality sliders to the right.




No matter how you slice it, the Fury X handles GTA V in 4K quite nicely. The 99th-percentile results track with the FPS results, which is what happens when the frame time plots are generally nice and flat. Again, though, the GTX 980 Ti proves to be measurably faster than the Fury X.

Far Cry 4


At last, we have a game where the Fury X beats the GTX 980 Ti in terms of average FPS. Frustratingly, though, a small collection of high frame times means the Fury X falls behind the GeForce in our 99th-percentile metric.



The similarities between the Fury X and the 290X in our “badness” metric might suggest some common limitation in that handful of most difficult frames.

Whatever the case, playing on the GeForce is smoother, although the Fury X’s higher FPS average suggests it has more potential.

Alien: Isolation




In every metric we have, the Fury X is situated just between the GTX 980 and the 980 Ti in this game. All of these cards are very much competent to play Alien: Isolation fluidly in 4K.

Civilization: Beyond Earth

Since this game’s built-in benchmark simply spits out frame times, we were able to give it a full workup without having to resort to manual testing. That’s nice, since manual benchmarking of an RTS with zoom is kind of a nightmare.

Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it.




This is a tight one, but the GTX 980 Ti manages to come out on top overall by a whisker. For all intents, though, the Fury X and 980 Ti perform equivalently here.

Battlefield 4

Initially, I tested BF4 on the Radeons using the Mantle API, since it was available. Oddly enough, the Fury X’s performance was kind of lackluster with Mantle, so I tried switching over to Direct3D for that card. Doing so boosted performance from about 32 FPS to 40 FPS. The results below for the Fury X come from D3D.




The Fury X trades blows with the GeForce GTX 980 in BF4. The new Radeon’s performance is fairly solid, but it’s just not as quick as the GTX 980 Ti.

Crysis 3


Ack. The Fury X looks competitive with the GTX 980 Ti in the FPS sweeps, but it drops down the rankings in our 99th-percentile frame time measure. Why?

Take a look at the frame time plot, and that one spot in particular where frame times jump to over 120 milliseconds. This slowdown happens at the point in our test run where there’s an explosion with lots of particles on the screen. There are smaller spikes on the older Radeons, but nothing like we see from the Fury X. This problem is consistent across multiple test runs, and it’s not subtle. Here’s hoping AMD can fix this issue in its drivers.



Our “badness” metric at 50 ms picks up those slowdowns on the Fury X. This problem mars what would otherwise be a very competitive showing versus the 980 Ti.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

In the Fury X, AMD has managed to deliver a substantial upgrade in GPU performance over the R9 290X with lower power draw while gaming. That’s impressive, especially since the two GPUs are made on the same 28-nm process technology. The Fury X still requires about 50W more power than the GTX 980 Ti, but since its liquid cooler expels heat directly out of the PC case, I’m not especially hung up on that fact. GCN-based GPUs still aren’t as power-efficient as Nvidia’s Maxwell chips, but AMD has just made a big stride in the right direction.

Noise levels and GPU temperatures

These video card coolers are so good, they’re causing us testing problems. You see, the noise floor in Damage Labs is about 35-36 dBA. It varies depending on things I can’t quite pinpoint, but one notable contributor is the noise produced by the lone cooling fan always spinning on our test rig, the 120-mm fan on the CPU cooler. Anyhow, what you need to know is that any of the noise results that range below 36 dBA are running into the limits of what we can test accurately. Don’t make too much of differences below that level.

The Fury X’s liquid cooler lives up to its billing with a performance that’s unquestionably superior to anything else we tested. You will have to find room for the radiator in your case, though. In return, you will get incredibly effective cooling at whisper-quiet noise levels.

Conclusions

As usual, we’ll sum up our test results with a couple of value scatter plots. The best values tend toward the upper left corner of each plot, where performance is highest and prices are lowest. We’ve converted our 99th-percentile frame time results into FPS, so that higher is better, in order to make this layout work.
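That conversion is just the reciprocal of the frame time, for instance:

```python
# A 99th-percentile frame time of 25 ms plots as 1000 / 25 = 40 FPS.
p99_ms = 25.0
print(f"{1000 / p99_ms:.0f} FPS")  # 40
```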


If you’ve been paying attention over the preceding pages, you pretty much know the story told by our FPS value scatter. The Radeon R9 Fury X is a big advance over the last-gen R9 290X, and it’s a close match overall for the GeForce GTX 980 Ti. However, the GeForce generally outperforms the Fury X across our suite of games—by under 10%, or four FPS, on average. That’s massive progress from the red team, and it’s a shame the Fury X’s measurably superior shader array and prodigious memory bandwidth don’t have a bigger payoff in today’s games.

Speaking of which, if you dig deeper using our frame-time-focused performance metrics—or just flip over to the 99th-percentile scatter plot above—you’ll find that the Fury X struggles to live up to its considerable potential. Unfortunate slowdowns in games like The Witcher 3 and Far Cry 4 drag the Fury X’s overall score below that of the less expensive GeForce GTX 980. What’s important to note in this context is that these scores aren’t just numbers. They mean that you’ll generally experience smoother gameplay in 4K with a $499 GeForce GTX 980 than with a $649 Fury X. Our seat-of-the-pants impressions while play-testing confirm it. The good news is that we’ve seen AMD fix problems like these in the past with driver updates, and I don’t doubt that’s a possibility in this case. There’s much work to be done, though.

Assuming AMD can fix the problems we’ve identified with a driver update, and assuming it really has ironed out the issue with the whiny water pumps, there’s much to like about the Fury X. The GPU has the potential to enable truly smooth gaming in 4K. AMD has managed to keep power consumption in check. The card’s cooling and physical design are excellent; they’ve raised the standard for products of this class. Now that I’ve used the Fury X, I would have a hard time forking over nearly 700 bucks for a card with a standard air cooler. At this price, decent liquid cooling at least ought to be an option.

Also, although we have yet to perform tests intended to tease out any snags, we’ve seen no clear evidence that the Fury X’s 4GB memory capacity creates problems in typical use. We will have to push a little and see what happens, but our experience so far suggests this worry may be a non-issue.

The question now is whether AMD has done enough to win back the sort of customers who are willing to pony up $650 for a graphics card. My sense is that a lot of folks will find the Fury X’s basic proposition and design attractive, but right now, this product probably needs some TLC in the driver department before it becomes truly compelling. AMD doesn’t have to beat Nvidia outright in order to recover some market share, but it does need to make sure its customers can enjoy their games without unnecessary hiccups.


Comments closed
    • Geonerd
    • 4 years ago

    Interesting CPU compute benchmarks at [url<]http://www.computerbase.de/2015-06/amd-radeon-r9-fury-x-test/8/#diagramm-folding-at-home-fahbench[/url<] 'Tis a shame the x-Coin mining window has come and gone; delivered a year ago, the Fury would have been a huge grand slam for AMD. I know TR is largely about gaming, but might it be possible to add an OpenCL benchmark or two to the GPU testing suite? Folding, coin mining, pulsar searching, SETI, photochop, video encoding, etc. are all interesting real-world applications that might well justify the extra time and effort spent in testing. Thanks.

      • chuckula
      • 4 years ago

      The coining thing is hilarious… in early 2014 AMD fans were very loud* about how great the R9 290/x was for litecoin (it was NEVER bitcoin, BTW, always litecoin) mining and how all the cards were sold out, AMD would make a fortune, etc.

      Fast forward a year and you have AMD’s executive team moaning and griping that the continuing losses in the graphics division were heavily influenced by the litecoin mining bubble where legitimate gamers couldn’t actually buy the hardware and then after a few months all the miners dumped their cards on the secondary market and killed demand for first-run sales.

      * Pun very much intended

        • Klimax
        • 4 years ago

        What’s more funny, CUDAminer and CCMiner often showed better performance in various x-coins. (And in Litecoin they closed the gap considerably)

    • DragonDaddyBear
    • 4 years ago

    Annnd post 666

      • Meadows
      • 4 years ago

      You cheated with the last 3 comments.

      • NeelyCam
      • 4 years ago

      Crap.

    • DragonDaddyBear
    • 4 years ago

    I wonder what the air-cooled version will be like in the benchmarks. I think it will be a very good value if it’s close to the X and significantly cheaper.

    • ish718
    • 4 years ago

    AMD should sell the Fury X without the water cooler for $550. Only die hard AMD fans are buying this card for $650 over the faster and less power hungry 980ti.

    • Klimax
    • 4 years ago

    Just small reminder that HBM is not the only thing out there. There’s also HMC. (Micron/Intel?)

      • raddude9
      • 4 years ago

      There’s also Wide I/O as well, but none of them are really “out there” at the moment, Wide I/O and HMC are still in the labs. HBM has won the race to the market and with Nvidia jumping on board as well,it looks like it’s going to hold its place in the GPU market for the immediate future.

    • anotherengineer
    • 4 years ago

    POST 610!!!!!!!!!!!!!!!!!!

      • NeelyCam
      • 4 years ago

      I’m trying to get the post 666

        • anotherengineer
        • 4 years ago

        Wait for it……………….. Wait for it……………..

        😉

        • anotherengineer
        • 4 years ago

        Post 600 Neely………………wait for it…………..get ready to button mosh your mouse!!

        • DragonDaddyBear
        • 4 years ago

        You and a whole lot of other people I bet.

    • rootheday3
    • 4 years ago

    Re post count … I am hoping we can keep this going till we reach [url=https://m.youtube.com/watch?v=SiMHTK15Pik<]epic levels[/url<].

      • chuckula
      • 4 years ago

      900… maybe.

    • BIF
    • 4 years ago

    What, no folding benchmarks?

    And…doesn’t matter anyway; not to me, because it requires access to the outside for the radiator (which introduces a real cap on how many cards I can get), and doesn’t do CUDA and therefore won’t help complete 3D rendering work using engines needing CUDA.

    At least to make up for all those cons, it’s more expensive. <eyeroll>

    (sigh)

      • geekl33tgamer
      • 4 years ago

      Anything else you want to complain about?

        • anotherengineer
        • 4 years ago

        Yes Actually.

        My wife
        the economy
        the weather

        edit – thanks for asking 🙂

      • Prestige Worldwide
      • 4 years ago

      There will be an air cooled version released at some point as well.

    • itachi
    • 4 years ago

    584 comments that’s gotta be a record for this website :). congratz mates.

      • raddude9
      • 4 years ago

      what do you mean 584… I saw 585… what gives?

      • geekl33tgamer
      • 4 years ago

      No, there’s some articles from way back in the site’s early days that eclipse that.

    • Klimax
    • 4 years ago

    For interested posters, RTW discussion on Fuji:
    [url<]http://www.realworldtech.com/forum/?threadid=150872&curpostid=150872[/url<]

      • willmore
      • 4 years ago

      RWT.

        • Klimax
        • 4 years ago

        Happens… 😉

    • Ninjitsu
    • 4 years ago

    Tom’s Hardware tested at 1440p as well, min/max/avg FPS, FPS over time, and frame time variance (avg/75th/95th). Game suite is slightly different too.

    Fury X <= 980 Ti overall, and frame time variance at 1440p isn’t bad either.

    On drivers:
    [quote<] AMD’s cards are measured using the 14.12 Omega update posted in December of 2014. The numbers for Grand Theft Auto V come from the 15.4 Beta posted in April. We then re-tested The Witcher 3: Wild Hunt with Catalyst 15.5. [b<]The Radeon R9 Fury X is benchmarked exclusively under Catalyst 15.15 Beta[/b<]. We had hoped to retest the other AMD cards using this driver as well, but it does not support the Radeon R9 290X or 295X2. Unfortunately, we still have no explanation as to why the dual-Hawaii board fares so poorly under The Witcher 3. [/quote<] [url<]http://www.tomshardware.com/reviews/amd-radeon-r9-fury-x,4196-4.html[/url<]

      • Mr Bill
      • 4 years ago

      Tom’s also tested them all at 3840×2160 and [b<]under their choice of settings[/b<] Fury X >= 980 Ti. However, Tom's does not emphasize the 'in-the-millisecond' variation at which Tech Report excels. So, Tom's noted there was some stuttering but had no quantitative way to show it. Tom's frame variance over time graphs are very poorly plotted. What Tom's report does show, is that the Fury X is indeed targeted at the 4K market just as Damage had noted in the Tech Report review. If I had seen only the Tom's review I might conclude that the Fury X was pretty good for 4K. But the Tech Report review shows that there are in the millisecond problems, that possibly can be delt with in the drivers over time. If AMD gives stuttering the at-att-attention it deserves.

        • tomc100
        • 4 years ago

        Nobody has tested the card using mantle which I’m curious to see if that gets rid of stuttering. This also might be indicative of how dx 12 performs as well.

          • ermo
          • 4 years ago

          Did you not read the present review? Scott tested using Mantle:

          “[url=https://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/10<]Oh, and the Radeons were tested with the Mantle API instead of Direct3D. Only seemed fair, since the game supports it. [/url<]" Scott also tested BF4 using Mantle, but for some reason the D3D renderer performed better, so he used that instead.

            • jihadjoe
            • 4 years ago

            That’s pretty much the bell tolling for Mantle. How can anyone expect any sort of longevity when just one GPU after release AMD’s DX11 drivers are already better than their close to the metal API?

            • ermo
            • 4 years ago

            For Mantle maybe. But not for Vulcan — or DX12 for that matter. And Vulcan is more or less Mantle, isn’t it?

            • Klimax
            • 4 years ago

            Still same problems for DX 12 and Vulcan. Those are inherent to low level APIs, not just Mantle.

    • Krogoth
    • 4 years ago

    The people who are complaining that HBM is pointless and it doesn’t make any sense for Fury platform are not seeing the whole picture.

    AMD is trying out a new memory standard on a matured architecture (GCN) and process type (28nm) so their engineers can get a hold on HBM before using on a whole new architecture and process type.

    Nvidia is going to do the same thing with one of their upcoming Maxwell refreshes (Probably a mobile variant) before implementing HBM on Pascal.

      • Mr Bill
      • 4 years ago

      this +++

        • sweatshopking
        • 4 years ago

        THREE PLUSES BILL?! DOESN’T THAT SEEM EXCESSIVE?!

          • Chrispy_
          • 4 years ago

          Thirty-eight uppercase letters SSK?! Doesn’t that seem ex-

          Oh, never mind….

          • Mr Bill
          • 4 years ago

          It is succinct, and he said it better than I did.

      • VincentHanna
      • 4 years ago

      People who complain that HBM is pointless…

      Don’t understand what words mean.

      • chuckula
      • 4 years ago

      Ahh what a difference a week makes.

      Remember when this post by some supposed AMD-hater Chuckula was downthumbed and viciously attacked for being some anti-AMD diatribe?

      [quote<]HBM is a nice technology but these Fury parts are effectively a for-sale beta test of HBM. The other thing to remember is that RAM is needed to let the GPU actually work at full capacity, but RAM by itself can't increase the inherent power of a GPU.[/quote<] [url<]https://techreport.com/discussion/28476/live-blog-from-amd-new-era-of-pc-gaming-event?post=914367[/url<] Isn't it funny how the AMD fanbase now agrees with everything I said and actually uses it as an [s<]excuse[/s<] [u<]defense[/u<] of AMD? Facts are stubborn things aren't they.

        • sweatshopking
        • 4 years ago

        QQ MOAR NERD

        • raddude9
        • 4 years ago

        I down-thumbed you for your obvious double standards. If AMD implements a radically new memory technology then it’s a “paid-for-beta” but if some other company were to do it, it would probably be a “wonderful innovation”.

          • chuckula
          • 4 years ago

          I’m the one who corrected some of the HBM haters around here. You won’t find a single post of mine where I call HBM useless or stupid. You *will* see accurate posts of mine where I point out what HBM can and can’t do and where I note the design tradeoffs that AMD obviously had to make to get HBM running in 2015 as opposed to 2016. Every single one of those posts has proven accurate now that HBM is on the market.

          I was also the one who was relatively favorable to AMD by saying that the Fury X would pretty much match a GTX-980Ti…. there’s actually an argument to be made that I was overly optimistic about its performance, especially when you look at the frame time variation graphs in TR’s review. So if you want to attack me for being wrong, then please do it right: Attack me for overestimating how Fury X would perform.

          Now, there were a bunch of koolaiders who took my realism and relative favorability to AMD as being anti-AMD because they wanted to live in fairyland where a GPU that comes out months after the competition with gobs of new expensive technology and is priced exactly the same as the competitor’s product will annihilate that competitor. Maybe to people who don’t like reality I have “double standards” but I have been [b<][i<]more[/i<][/b<] than fair to AMD and reality has proven me correct big-time.

            • derFunkenstein
            • 4 years ago

            Like I said last night: I WANT TO BELIEVE

            • raddude9
            • 4 years ago

            Wow, what’s with the rant, did I touch a nerve. I think you are reading way too much in what I wrote. Anyway, how about keeping to the point?

            I stand by my post, calling Fury a “paid-for-beta” I think is unfair, it works, seemingly without crashing, and it’s an improvement on the previous generation. That’s not beta in the common usage of the term, that a version 1.0. Beta versions by contrast are usually unreliable, buggy and have poor performance. So calling a product a “beta” when it’s not is disparaging.

    • wingless
    • 4 years ago

    With all the Bitcoins I can mine with this thing, I’ll be able to afford a 980 Ti in no time!

      • tipoo
      • 4 years ago

        Mining on even top end GPUs has become futile with dedicated mining hardware like ASICs and FPGAs out there.

        • Klimax
        • 4 years ago

        And even with ASICS it’s mostly fool’s errand. Unless you buy used devices for very cheap. (I got mine, although mostly as curiosity and HW collection)

          • Bensam123
          • 4 years ago

          Altcoins bros. You’re doing it wrong if you’re mining sha256 on GPUs.

    • sweatshopking
    • 4 years ago

    i’ve seen a bunch of places have given this card “recommended”. Why?

      • ImSpartacus
      • 4 years ago

      The water cooler?

      I mean, it’s pretty compact and quiet for a card that can crank out 4K with the best of em.

      • Meadows
      • 4 years ago

      Because if you factor in the cost of water cooling and the associated low noise, it’s not a bad card. It’s not the best performer by any stretch of the imagination but it’s a good product nonetheless.

      Problem is, everyone expected a game-changing, earth-shattering success.

        • Milo Burke
        • 4 years ago

        Only because they advertised that it would be a game-changing, earth-shattering success.

        Remember Bulldozer? They need to stop doing that.

      • JumpingJack
      • 4 years ago

      Regardless of why, we should all be happy regardless of what camp you reside in… the 980 Ti exists solely because Fury was on the horizon and now it is here.

      I wonder how much the Ti coming out at $650 on the 31st of May had to do with AMD’s launch price of 650$…

        • jihadjoe
        • 4 years ago

        [quote<]I wonder how much the Ti coming out at $650 on the 31st of May had to do with AMD's launch price of 650$...[/quote<] From opinions I've read on other sites: Likely not much, and AMD probably also knew that Fury X wouldn't fly if they released at $800. It seems like Nvidia just took a guess (educated or otherwise) at what Fury X performance might be like, cut down the Titan appropriately and picked an arbitrary price point. $650 is pretty close to the $700 that 780Ti launched at, and both the 970 and 980Ti have been cheaper than the previous generation 670 and 780Ti that I see no reason to suspect the $650 price point.

      • Klimax
      • 4 years ago

      AMD being underdog, no other sane reason.

      • Silus
      • 4 years ago

      Because they’re biased ? Are on AMD’s payroll ?

      Still, the card isn’t all that bad, it’s just not priced correctly. It needs to cost less than the 980 Ti

        • anotherengineer
        • 4 years ago

        Based solely on performance yes.

        However it does have water cooling and is pretty silent, there should be a small price premium for that. Just like there is in large non-reference cards/coolers also.

        If they want to push more cards then yes again, $25 cheaper then the 980 Ti would probably be good enough.

        • tomc100
        • 4 years ago

        Price is right. Actually, they should jack up the price.

          • chuckula
          • 4 years ago

          [quote<]they should jack up the price.[/quote<] THANK YOU THANK YOU THANK YOU THANK YOU!! -- Jen Hsun

            • tomc100
            • 4 years ago

            No problem. Is my check in the mail?

      • HisDivineOrder
      • 4 years ago

      It’s recommended at this point because AMD needs to make some money to stay in business and the future of PC gaming without AMD in it is a murky, dire one full of despair. One where the same two companies ream us with rebrands and tweaks that give us 5-10% gain each generation through clock increases and every five or so years when they are feeling extremely generous, a dieshrink.

      One where drivers deliberately out of malice or accidentally out of coincidental incompetence start to make new cards look better than old cards for lack of interest in ensuring they maintain their performance.

      I wish AMD would figure out the incredibly easy things they could do to right their ship. I’d be more understanding if they didn’t keep making the same easy mistakes again and again and again, but when they fail so spectacularly in the same way over and over, it makes it hard for me not to say they deserve what they get.

      It’s just we don’t deserve what we’ll get after they get what they deserve.

      • chuckula
      • 4 years ago

      In fairness to AMD: The Fury cards aren’t terrible and the “recommended” can mean: You want a high-end AMD GPU? We’d recommend this one!

      • MathMan
      • 4 years ago

      Because the FuryX is only a few percent slower than a 980Ti, which is irrelevant in the grand scheme of things.

      The FuryX is a major disappointment compared to the drummed up expectations that were created by AMD. They over promised and under delivered.

      • willyolio
      • 4 years ago

      look at the noise and temperature page of the review.

    • anotherengineer
    • 4 years ago

    hmmmm I still don’t know why AMD runs high clocks with blurays
    [url<]http://www.techpowerup.com/reviews/AMD/R9_Fury_X/35.html[/url<] those voltages don't help with power consumption. Anyone see any reviews with underclocking/undervolting?

      • ImSpartacus
      • 4 years ago

      I don’t think you can change the voltage on the Fury X yet, so results are limited.

      Eventually, I think it’d be cool to lower the memory clocks since it probably wouldn’t completely murder performance.

    • Deanjo
    • 4 years ago

    HBM on the Fury is equivalent to slapping Micky Thompson slicks on a Prius.

      • chuckula
      • 4 years ago

      That’s cute & all but I’ma sue for infringement of my Hyundia vs. Kia reference.

    • ermo
    • 4 years ago

    [b<]@Damage:[/b<] Do you have a Win10 box? There is apparently a Project CARS performance discrepancy for AMD drivers between using Win 8.1 and Win 10 when keeping the hardware the same between tests. I've seen reports of up to 20% better FPS averages as well as measurably smoother frametimes in Win 10. YMMV. There's also been some recent optimization of the Ultra settings in Project CARS, so if you begin to test using a later patch, the Ultra settings of earlier tests may no longer be comparable. Project CARS patch 2.0 (which follows v1.4) is currently in testing.

    • cldmstrsn
    • 4 years ago

    I am more excited about the Fury because it is a hundred dollars less which will hopefully make Nvidia sweat a little on the price for the 980 Ti.

    • geekl33tgamer
    • 4 years ago

    Everyone, relax. Their driver [s<]team[/s<] guy will be back from his vacation Monday and I'm sure Fury optimized drivers are his top priority between homework. I'm confident we will have performance parity via software advances within 12 months.

      • Terra_Nocuus
      • 4 years ago

      Unless finals and/or lab hours eat up too much of his time.

        • geekl33tgamer
        • 4 years ago

        Lab hours? You know he’s blatantly in Minecraft after school.

      • Milo Burke
      • 4 years ago

      Don’t get your hopes up. I hear the driver team’s mom grounded him from the computer until mid-July.

      • tipoo
      • 4 years ago

      Still waiting for my 2900XTX to be updated to match the 8800GTX through drivers

      • itachi
      • 4 years ago

      Well I haven’t seen optimization drivers for GTA V since their beta release and the performance on both fx 8000 and my r9 290x windforce are terrible in this game..gotta be the only game I can’t run maxed with arma 3, and perhaps Starcraft 2 aswell if I want smooth during 4v4

        • geekl33tgamer
        • 4 years ago

        Don’t worry, things are not rosy in that title on the Nvidia side neither if you have Maxwell + SLI – It’s epilepsy inducing.

        Their 350.12 drivers fixed it, but the 3 driver releases that followed it that get us to today don’t carry the fixes.

      • geekl33tgamer
      • 4 years ago

      Made the top comments section – cool! 😉

    • maxxcool
    • 4 years ago

    524 comments … wowzers

    • NeelyCam
    • 4 years ago

    [quote<]Assuming AMD can fix the problems we've identified with a driver update[/quote<] This reminds me of a plot I saw a while back, comparing AMD and NVidia GPU performance on games over time with driver updates. I remember that when a new generation of GPUs came out, AMD's performance was quite a bit behind NVidias, but after a year or so, the same GPUs had performance parity. The difference was that each driver update improved NVidia's performance just a little bit, while AMD's driver update brought big performance boosts. So, it seemed that NVidia's drivers were more optimized at release than AMD's, but AMD's driver team (of one?) got them back into the game given enough time to tweak and optimize. I wish I could find that plot, but my quick search didn't yield anything, and I don't have time to search more right now

      • squngy
      • 4 years ago

      You don’t particularly need some obscure plot.

      All you have to do is look at a benchmark for an older game and a benchmark for a newer game.
      Reviewers will throw in older cards for comparison and they always use the newest drivers.

        • NeelyCam
        • 4 years ago

        You know what would be great? An article about mid-range gaming:

        The article could test various GPUs at 1080p and/or 1440p resolutions, and instead of keeping the settings the same, set the 99th percentile frame rate targets (or 98th or 97th… whatever seems like a decent threshold) the same (e.g., 16.7ms) and modify the quality settings for each GPU until they meet that target.

        Then compare the GPUs in a price vs. settings scatter plot, showing which GPUs give you the kind of experience you’re looking for, and how much would that experience cost you. You could even throw in APUs/IGPs, and use their prices in the scatter plot vs. CPU+GPU prices for discrete solutions.

        I realize that this would obviously be a huge amount of work, and scoring the “quality” of the graphics is subjective… but still, that kind of information would be most interesting to me.

          • Ninjitsu
          • 4 years ago

          YES THIS PLEASE

          • Meadows
          • 4 years ago

          That would probably be the second best thing since sliced bread, but with TR the way it is right now, even if Scott started off right away you wouldn’t see a single review using that technique for the rest of the year.

            • NeelyCam
            • 4 years ago

            What’s the best thing since sliced bread?

          • Kretschmer
          • 4 years ago

          HardOCP does something a bit like this.

            • travbrad
            • 4 years ago

            Except HardOCP seems to think 25FPS is “playable” They use $500-1000 GPUs and manage to run them at Playstation 1 framerates.

            • Krogoth
            • 4 years ago

            It depends on the game in question.

            There are plenty of games where 25-30FPS framerate is quite playable. Not everything is a twitch shooter.

            • Airmantharp
            • 4 years ago

            Especially if those frames are delivered consistently.

      • K-L-Waster
      • 4 years ago

      “Buy now! In 3 months it’ll be awesum!! Wait, where are you going..?”

      • l33t-g4m3r
      • 4 years ago

      That, or TR didn’t use the right driver from the get-go.
      https://www.reddit.com/r/pcmasterrace/comments/3b2ep8/fury_x_possibly_reviewed_with_incorrect_drivers/
      http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-300-Series.aspx

      300 / Fury series driver version: 15.15. TR benchmarked with version: 15.5.

      I don’t know all the details on this driver, but I’m betting it performs better than 15.5. Either way, I’m thinking driver updates for this card will fix a lot, as it’s too early to call these numbers a final indication of Fury’s potential.

        • K-L-Waster
        • 4 years ago

        See Scott’s response to the other post above — it seems to be more of a labelling issue. It was actually the same driver you’re referring to.

          • l33t-g4m3r
          • 4 years ago

          How? He specifically said 15.5, not 15.15, and explained why they didn’t use 15.6. Sounds like they used 15.5 to me.

          Either way, something’s off on these numbers, because other places are getting better results.
          http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,15.html

          So, TR either used the wrong driver, or cherry-picked too many pro-NV games like Project Cars, because the numbers don’t exactly match.

          Edit: Looking through it, some of the games do somewhat line up, so it looks more like TR Project Car’d their benchmarks, whereas Guru3D picked more neutral games.

            • Damage
            • 4 years ago

            I did not use the wrong driver. I used the one AMD provided.

            I think we’re all weary of your accusations about our testing methods being somehow biased by now. You are welcome to stop. 🙂

            • chuckula
            • 4 years ago

            C’mon Damage! Don’t you know that you have a responsibility to write better drivers (and benchmarks) to show AMD in the best possible light!

            • Meadows
            • 4 years ago

            Did I just catch the bright glint of a hammer behind that smile?

            • l33t-g4m3r
            • 4 years ago

            Great, then you should change the description, because calling it the wrong version is misleading.

            Also in retrospect, you’re still not being clear on it.
            “I used the one AMD provided.”

            What if AMD provided you 15.5 instead of 15.15? Is the FTP driver called 15.5, but with the same DLLs as 15.15? Dunno. There is no real information here, other than you “used the ones AMD provided.” AMD certainly provided us with a Rage hotfix too, didn’t they? This explanation kinda sounds like, “I was only taking orders.”

            Fine, W/E. You “used the ones AMD provided,” and I’ll leave it at that. Mostly because it seems like you’re egging me on with that answer. But that’s not going to clarify anything for people who have questions, and being indirect about specifics only causes more questions to be asked.

            Further edit:

            “Edit: Doh! It is 15.15 for the Fury X press driver. That’s what I used. Changed the table in the review.”

            So it was 15.15 after all. Thanks for clearing that up, and no thanks for being non-specific with me. If you wonder why I question things of this nature, this would be exactly why. It just seems like there’s sometimes this veil of impartiality with things. I can’t tell what’s going on, and that makes me question it.

            • chuckula
            • 4 years ago

            Do you honestly think that this protracted conspiracy campaign is doing your pals at AMD any favors?

            Do you realize that to a normal person your constant protests about drivers seem to indicate that AMD can’t even get the right driver copied to a server for people to download? Is that really conveying a “pro-AMD” message? Even the majority of the Nvidia shills around here don’t put AMD down that much!

            • K-L-Waster
            • 4 years ago

            Well y’know, reality has a known pro-Nvidia bias….

            • l33t-g4m3r
            • 4 years ago

            Quite honestly, I don’t think it would have been AMD’s fault if TR had used the wrong driver. 15.15 is clearly labeled as the Fury driver. However, it seems that TR just called it by the wrong version number, and picked a bunch of games which bench poorly. So, W/E.

            Games like Project Cars are obvious driver problems, and not hardware issues, so it’s pretty clear regardless of the benchmarks that driver updates would fix things. That, and Project Cars is a unique snowflake that doesn’t represent other games, especially when it uses PhysX and has Nvidia banners plastered everywhere in the game. Anyone who plays it would notice the developer bias, simply from watching the scenery.

            Not only that, but the Project Cars guys are ignoring known bugs in the current game, and are instead focusing on making Project Cars 2 for a cash grab. Their community is kinda upset about this, so I can’t see Project Cars being a viable benchmark for any protracted period of time, simply because the players are upset with the developer and will most likely abandon the series. It’ll end up more of an ignorable benchmark than it already is.

            Kinda like Batman: Arkham Knight, which I think was only excluded because of the 30 fps cap and because it got pulled from Steam. It may well have been included if it hadn’t had the cap and had been slightly less buggy.

            • chuckula
            • 4 years ago

            Oh, l33t, I’m sure that in your world NOTHING is ever AMD’s fault.

            • l33t-g4m3r
            • 4 years ago

            No, I don’t think AMD really had viable drivers until Hawaii, and GCN 1.0 was too finicky with games to bother with. Hawaii and later perform much better on average.

            Here’s the kicker, though: GCN 1.0 is still usable with today’s games after all the driver improvements, while NV, on the other hand, quit optimizing for anything pre-Maxwell. Not cool for anyone who shelled out for a Titan or 780.

            • K-L-Waster
            • 4 years ago

            “Here’s the kicker, though: GCN 1.0 is still usable with today’s games after all the driver improvements, while NV, on the other hand, quit optimizing for anything pre-Maxwell. Not cool for anyone who shelled out for a Titan or 780.”

            OK, you have two games where older-generation cards underperform (i.e. Project Cars and Witcher 3). Somehow you have turned that into a vast conspiracy where no NV card will ever work acceptably ever ever ever beyond the first 9 months from release.

            Gotta say my experience has been vastly different from that.

            • l33t-g4m3r
            • 4 years ago

            So, you’re saying it’s wrong to cherry-pick two particularly bad games to generalize the overall performance of Kepler cards?

            How is this any different from using those games with Fury to generalize its performance? Same thing.

            It’s like benchmarking Batman: Arkham Knight. The only thing you get out of it is that there are problems. You don’t get a picture of the true performance, just that there are issues. Not that pointing out these problems isn’t important and shouldn’t be done, because it certainly should. But that’s not what a review should be centered on, and I don’t see problem areas with NV cards being treated in the same regard, which is more hardware crippling and not software. Kepler, on the other hand, is indeed experiencing some driver issues, but that’s not going to get reported on, because those cards are last-gen, and Kepler users need to just suck it up and buy NV’s new Maxwell cards.

            I will say the quip in the podcast about the GameWorks team not having their priorities straight on Arkham Knight was good, but then it was framed as the exception, as if NV normally helps developers. LOL. The sole purpose NV sends the GameWorks team out is to implement GameWorks. You only get help if they’re in a good mood, because those fancy effects take priority.

            • JumpingJack
            • 4 years ago

            This is pretty sad. TR’s results are no different from the majority of other sites’: Fury is on average slower than the Ti at 1440p and about the same to ever so slightly slower at 4K.

            No site shows the dominating 4K wins that AMD’s PR/marketing fluff showed, so the bigger question here is why AMD consistently tries to skew results and present false information.

            • K-L-Waster
            • 4 years ago

            Sadly, some people have a vested interest in refusing to accept those facts.

            • l33t-g4m3r
            • 4 years ago

            No, it’s not different, but the averages certainly are from the games chosen, and it reads more like a detailed analysis of what the worst case scenario is. Good info, but you don’t get a picture of what the card is really capable of either.

        • PixelArmy
        • 4 years ago

        Reddit edit…

        Edit: it would appear AMDmatt has debunked this, so you can all go home:
        http://forums.overclockers.co.uk/showthread.php?p=28230087#post28230087

          • chuckula
          • 4 years ago

          Oh, you’re in trouble now. The worst thing you can do is directly cite statements made by people who actually work at AMD.

          The fanboys HATE those guys! (and I’m not talking about the Nvidia fanboys, I mean the AMD fanboys)

          • l33t-g4m3r
          • 4 years ago

          Not quite. Both AMDmatt and the Reddit post are referring to versions of 15.15, not 15.5. There is a difference.

    • anotherengineer
    • 4 years ago

    POST 510!!!!!!!!!!!!!!!!

      • chuckula
      • 4 years ago

      Yes, but mine goes to Five ELEVEN!

        • anotherengineer
        • 4 years ago

        Just not the same as 7-11 😉

          • chuckula
          • 4 years ago

          We’ll get to 711 soon!

    • chrone
    • 4 years ago

    thanks for including the witcher 3 in the benchmark! 😀

    • Kretschmer
    • 4 years ago

    Man, the $270 290X that I purchased this month is holding up really well!

    (It would have been a $320 GTX970, but screw the G-Sync tax.)

    • DragonDaddyBear
    • 4 years ago

    500+ comments, wow! I’ll make 501.

    EDIT: make that 502.

    My $0.02: I REALLY want to support AMD, but damn, they make it hard. I’d probably still choose this over the green option just because it’s so quiet, it would probably fit in my Node 304, and it might even allow two more hard drives to fit to boot. And if it wouldn’t, I’m pretty sure the Fury Nano would.

    • vmavra
    • 4 years ago

    All in all, the future looks grim for AMD in both CPUs and GPUs. The only hope for AMD to become truly competitive is to increase its R&D budget at least twofold. For that to happen, sadly, someone needs to buy AMD, and the only company that has both the cash and the approval of the US government is Apple, which would benefit immensely from it, since depending only on Intel for CPUs would be bad for them too if AMD collapses.

    • ET3D
    • 4 years ago

    I’d love to see some OpenCL benchmarks. Haven’t yet found a review with them. Hopefully Anandtech will deliver.

    • ish718
    • 4 years ago

    Put HBM on a 290x and it will be almost as fast as the fury x.
    Put HBM on a 980ti and it will make the fury x look bad.

    lol

      • Westbrook348
      • 4 years ago

      My understanding is that the GM200 is too big to accommodate HBM. You’d have to cut off a big chunk of the 980 Ti to be able to fit the GPU and the HBM on the same interposer.

        • BryanC
        • 4 years ago

        980 Ti: 601 mm².
        Fury X: 596 mm².

        Size is not the reason why GM200 doesn’t have HBM.

      • AnotherReader
      • 4 years ago

      I doubt that any of those assertions is correct. If the 980 Ti was hobbled by memory bandwidth, overclocking results would be limited by memory clock. They aren’t. Similarly, the 290X isn’t limited by the available bandwidth either and would not be able to make much use of HBM. The only benefit would be to a reference card where the lower TDP would allow it to reach its top turbo bin all the time.
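For what it’s worth, the reasoning in the comment above is easy to sanity-check with a quick back-of-the-envelope test: overclock the memory and the core separately and see how much of each clock gain shows up as a frame-rate gain. The sketch below does that arithmetic; every number in it is an invented illustration, not measured data for any of these cards.

```python
# Rough bandwidth-vs-core bottleneck check, per the reasoning above.
# All figures here are illustrative assumptions, not measured results.
def scaling_efficiency(baseline_fps, oc_fps, baseline_clock, oc_clock):
    """Fraction of a clock increase that shows up as an FPS increase.
    Near 1.0 means that clock is the bottleneck; near 0.0 means it barely matters."""
    fps_gain = oc_fps / baseline_fps - 1.0
    clock_gain = oc_clock / baseline_clock - 1.0
    return fps_gain / clock_gain

# Hypothetical example: +10% memory clock buys only +1% FPS,
# while +10% core clock buys +8% FPS, i.e. core-bound rather than bandwidth-bound.
mem_eff = scaling_efficiency(60.0, 60.6, 7000, 7700)    # ~0.1
core_eff = scaling_efficiency(60.0, 64.8, 1000, 1100)   # ~0.8
print(f"memory scaling {mem_eff:.2f}, core scaling {core_eff:.2f}")
```

A card that were genuinely starved for bandwidth would show the opposite pattern, with the memory-clock number close to 1.0.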

    • Tristan
    • 4 years ago

    Why is the Fury X performing so much lower than in their slides? Maybe they made a mistake and benchmarked their new R400 GPU…

      • Klimax
      • 4 years ago

      Because AMD selected options and games such that the average FPS was higher. We don’t know what settings they used or what parts of the games were tested.

      Reminder: there can be very large deltas between various sections of a game (Crysis 3 with rain versus a level full of trees and grass).

    • Klimax
    • 4 years ago

    Not good.
    1) The ban on non-reference cards is bad, and AMD cannot afford it. Nvidia could: they have a good air cooler, and thanks to the standard design and size, vendors like Gigabyte could bundle the card with their own cooler. That’s unlikely here; the board is too short for a powerful cooler.
    2) Performance is nowhere near enough. In general it’s worse than a stock 980 Ti.
    3) ANOTHER piece of evidence that low-level APIs are an atrocious idea. Once again, Mantle code cannot handle a new card (the 285 previously) and kills performance. And that’s just GCN 1.2 with different ratios of resources, not a jump from Kepler to Maxwell or from VLIW to GCN.
    4) Drivers, drivers, drivers. It almost looks like there’s a race between AMD and Intel on this, and so far Intel’s winning it, judging by the size of the variance between games across different APIs; there are bigger jumps on the IGP than elsewhere. And I don’t expect AMD to have any good fixes anytime soon; they still don’t have DCL support. AMD simply needs far more resources than Nvidia to even close the gap.
    5) Temperature and noise are no surprise when you already (have to?) use water cooling. Too bad there are no tests yet of a 980 Ti with such cooling, or comparisons to coolers like Gigabyte’s. (So far only non-reference cards are available with those.)

    AMD is lucky reviewers tend not to use niche games like DCS World, or they would be in even bigger trouble. (I hear you can forget about using Radeons and AMD CPUs in that game.)

    Conclusion: unless you like Red, the best use of your money is getting a 980 Ti. You get proper performance, you can actually choose what cooler you like (air, water), and be set.

    • kilkennycat
    • 4 years ago

    In a Fury X CrossFire arrangement, where do I put the two liquid coolers in a standard case? One on the rear and one on the side panel… if a cooling port is available on the side panel? Also, the cooler is thick enough that the rear-mounted one seems very likely to collide with one or more of the quad-pair memory sticks on an X99 ATX motherboard, and I suspect it will collide with a bunch of components on other motherboards. It also obviously precludes using the rear fan position for liquid cooling of the CPU (I have very successfully liquid-cooled a 4930K that way).

    Pity that the ‘bag-on-the-side’ liquid cooler was absolutely necessary to match the GTX 980 Ti’s performance. No doubt the air-cooled version will be lacking in that respect (hopefully at least matching the factory-overclocked versions of the GTX 980), but it will be the only version of Fury that makes any ‘packaging sense’ in a CrossFire configuration.

      • Arclight
      • 4 years ago

      Someone looking to spend $1k+ on GPUs ought to have enough money for a ~$100 case with fan mountings on the HDD rack and on the bottom of the case. If they’re somehow unwilling to change the comparatively inexpensive case to accommodate the extremely expensive card setup, they can wait for the dual-GPU card, which will launch eventually.

      Personally, I think the card is not worth it over what the competition is currently offering. We’ll find out how much AMD can afford to drop the price; Nvidia is already considering price cuts to pull new customers onto their side and make the purchase decision an easy one.

    • torquer
    • 4 years ago

    There’s nothing new here. Same old song and dance. Nvidia has done better than AMD in all the ways that matter to a business (making money) for some time. AMD has to do battle on two fronts – CPU and GPU. On one, it loses almost all the time. On the other, it struggles to reach parity.

    As I’ve said many times before (to deaf ears), the world needs a strong and innovative AMD. We need AMD to make money. We need AMD to have fat profit margins to match Intel and Nvidia. We need that money to be invested in R&D resources, driver teams, etc so that AMD can BEAT Nvidia at least some of the time. Without these things, we get an iterative AMD that has a hard time getting out of its own way on both the hardware and software front. We see a valiant effort, but one that basically keeps them going to the next round rather than securing generational ‘wins’ and the aforementioned profit margins.

    Consoles are not enough – those are not GPUs that AMD is making and selling. It is IP that is being licensed, and the profits on licensing IP especially at the volume the consoles drive is not enough to make up for a lack of profit in their two main businesses. AMD’s poor performance has allowed Nvidia to start acting a lot like Intel used to before the Clawhammer days. Intel pays no attention to AMD anymore, because ARM has become a larger threat.

    As much as the commenters here would like to focus on all of these spiritual and academic discussions about the two companies, ultimately both exist for only one reason: making money. Unfortunately, one is losing that fight badly and in the end if AMD loses, we all lose.

    Disclaimer: I have not purchased an AMD product (other than indirectly through console purchases) for several years. I would like to, but I’m not going to buy an inferior product just to “support the team.”

    • the
    • 4 years ago

    While last week’s article about the architectural details tempered my expectations to the Fury X and Titan X trading performance wins on a per-game basis, these findings fell below my already lowered bar.

    The stuttering hints that things could look better after a driver revision. Still, for the flagship launch of a product that has been delayed for a few months, you’d think they’d have the drivers ready.

    From the look of it, AMD should have aimed for 3584 shaders on the die and used the rest of the die space toward more ROPs and TMUs for a better balance of resources.

    • TopHatKiller
    • 4 years ago

    Sorry, but obviously I’m just an asshole: who cares?! Yep, there are some things that are impressive: a 45% increase in transistor count and no increase in power?
    That beats Maxwell 2… and pretty much anything.
    HBM is the way of the future, for sure, but like so many things from AMD these days, they take an age to introduce new tech. Let’s be honest about it: AMD could’ve introduced HBM two years ago but didn’t (probably for cost), and now the 4GB limit is a problem for a high-end GPU.
    I’m not a fanboy: I loathe Intel and am constantly disgusted by NV. But AMD has to do better than this. Technical and incredible wins, such as power per transistor, transistor density, and very clever ‘making-do’ strategies, are not enough. 2016 has to be better.
    Good luck, AMD; be successful.

    Unless you’re desperate, wait for the 14/16nm generation.

    • NovusBogus
    • 4 years ago

    Well it’s definitely not a Maxwell killer, but it doesn’t suck either and we know AMD can play a very aggressive pricing game so I doubt those MSRPs will stick around very long. If the performance scales down it could do a lot of good for the regular cards. Fury X probably won’t win many new converts, but it’ll keep the faithful satisfied and offers hope that AMD may yet get itself out of rebadge hell.

      • Klimax
      • 4 years ago

      Problem: a price drop will continue the cycle of underfunding, which is exactly what they wanted to avoid. The 980 Ti terminated any of their hopes for that.

    • marvelous
    • 4 years ago

    Just a bad design

    • chuckula
    • 4 years ago

    If we put a fanboy filter in place, on further reflection the real monkey wrench in this launch isn’t even related to the cards or the infamous drivers. Instead, it’s related to the pricing policy on the GTX 980 Ti.

    In fairness to AMD, they did make Nvidia react and release the GTX 980 Ti instead of just leaving the Titan X as-is at $1000. If the Titan X had been Nvidia’s only available answer and the Fury had launched at $800 -- a full 20% price advantage -- the conversation we’d be having right now would be much, much different. The name “Ngreedia” has some merit, but Ngreedia doesn’t mean Nstupidia either. They knew that while the Fury might not destroy the Titan X, it would render the Titan X’s price point untenable, and Nvidia proactively responded with the GTX 980 Ti.

    Which leads us to the “advantage” that Nvidia has: the GTX 980 Ti is pretty BORING technically. Sure, it’s got a massive GPU chip, but Nvidia launched that chip months ago and has had time to do binning. Fancy silicon interposers? Nope. Fancy RAM that only one company on earth makes right now? Nope. Need for exotic cooling in the base-model parts? Nope. All those things came together to give Nvidia the ability to respond.

    AMD still deserves credit for forcing Nvidia’s hand, though. Hell, if Zen ends up trailing comparable Skylake parts by only 10%, I’m pretty sure the AMD fanbase will declare the Rapture, so by those metrics Fury is pretty strong.

      • snook
      • 4 years ago

      Ahhh, pure genius, for real. Zen doesn’t have to beat Skylake in my book, but it has to destroy my FX-8350.

      I’m not building another PC until Zen hits, no matter what. I’ve been honest about being an AMD fanboy; that isn’t changing. Long live AMD, I need them. lol

    • snook
    • 4 years ago

    I’ve looked at a few videos/articles on overclocking the Fury X: 80-125 MHz. Not sweet at all. One site thinks it’s PowerTune throttling. Maybe a driver fix?

      • fade2blac
      • 4 years ago

      Being able to tweak PowerTune parameters, which to this point have been controlled or restricted by the GPU BIOS, would likely yield better OC results. This may however result in a dramatic increase in power draw if Tahiti/Hawaii is any indication. On those older chips, above about 1050-1100 MHz there is a tipping point where exponentially more voltage is required with each bump in clock speed and leakage becomes a runaway train. This time AMD has said they are more closely tuning their power delivery to each chip so I would guess PowerTune attempts to run the GPU in a narrow performance band. Perhaps someone can get PowerTune out of the way and truly unleash the Fury to see where the ‘up to 400 amps’ really gets you.

      For example, the boost profile on my 7950 Boost edition just wanted to jack up the voltage to the “toaster oven” setting and then proceed to go into convulsions under load due to PowerTune oscillating between boost and throttle clocks. I modified the BIOS using a non-boost version to drop the voltage about 125mV and raise the power limits which allowed even higher clocks at lower heat/power and with no throttling under load. So many complaints about the Boost edition were apparently due to either AMD or the AIB screwing up the BIOS parameters or not properly binning lower leakage chips. The silicon lottery meant that most Tahiti chips seemed to be too leaky to push past 1100 MHz or so without more exotic cooling. So far, my experience with Hawaii is better but not fundamentally different. PowerTune is better behaved and less prone to fits this time around but still seems unforgiving if you push too far.
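The "runaway train" behaviour described above is roughly what the usual first-order dynamic-power relation predicts: switching power scales with clock times voltage squared, so a clock bump that also demands a voltage bump costs disproportionately more power. Here is a tiny sketch with made-up numbers (nothing in it is a measured Tahiti or Fiji figure, and the capacitance term is folded into the baseline):

```python
# First-order dynamic power estimate: P scales with f * V^2 (capacitance folded into the baseline).
# The clock and voltage figures below are invented purely for illustration.
def relative_power(f, v, f0, v0):
    """Power relative to a (f0, v0) baseline, assuming P ~ f * V^2."""
    return (f / f0) * (v / v0) ** 2

# Hypothetical overclock: a 10% clock bump that also needs ~12% more voltage.
print(relative_power(1155, 1.25, 1050, 1.12))   # ~1.37, i.e. roughly 37% more power for 10% more clock
```

Leakage only makes this worse at higher voltages, which is presumably why PowerTune tries to keep the chip inside such a narrow band.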

        • snook
        • 4 years ago

        I hear you, thanks for the reply.

    • madgun
    • 4 years ago

    So AMD gets beaten in its very own pet project, BF4.
    And not by a slim margin, either: benchmarks show a >10 fps gap at 4K versus a 980 Ti.

    What do the red team fanboys have to say about that?

    Also, have you guys considered GTA V, which is probably the most neutral title? Guess what: the Fury X got beaten heavily there too.

    So I’m not sure what to make of all the GameWorks BS. AMD is getting beaten by a superior card, the 980 Ti, as simple as that. No shame in that. But there’s no need to cloud that fact or put excuses on the board.

    • _ppi
    • 4 years ago

    AMD fan in me: /facepalm
    nVidia fan in me: BWHAHAHAHAHA

    Realistically, I can see a business case for Fury in smaller form factors, where full-sized cards are just too large to fit or get too loud. And I can see OEMs like Zotac doing interesting things with it.

    • Silus
    • 4 years ago

    Well… this was expected. Extremely late to the game, overpromising with that idiotic “new era of gaming” slogan (or whatever it was), and even resorting to those shady tactics of not providing review samples to some sites. That was enough to conclude that AMD had no winning product on their hands.

    The Fiji GPU is no slouch, but considering the competition and its price, it’s clearly not providing enough performance, or it’s simply too expensive.

    Architecturally speaking, however, the Fiji GPU is a failure of giant proportions. On paper it’s huge… massive, even… nothing comes close to it in bandwidth, theoretical processing power, fillrate, you name it. There’s barely anything it doesn’t destroy on paper.
    But in actual games, where it matters, it fails miserably. It should be handily destroying even the Titan X, and the best it can do in most scenarios is 3rd place.

    It needs to be cheaper than $650, but AMD is already bleeding millions, and now they have a giant GPU with expensive HBM to produce, plus an interposer and a water cooler. I’m sure their margins will go out the window if they lower the current price to where it should be to be competitive, e.g. $550.

      • DrDominodog51
      • 4 years ago

      The Fury has great fillrate, but it’s only as good as the 290X at handling polygons. That’s how the 980 Ti beats it.

        • auxy
        • 4 years ago

        ROPs are not used in geometry setup. (´・ω・`)

        • ronch
        • 4 years ago

        So why didn’t they think of that when they planned it?

      • tomc100
      • 4 years ago

      It’s already sold out on newegg.

        • Silus
        • 4 years ago

        Of course! AMD fanboys will buy it no matter what.

        Plus, as it has been reported numerous times, using HBM makes it a product with limited supply.

          • tomc100
          • 4 years ago

          But, but, but you just said they should price it much lower because nobody is going to buy it since it’s too expensive. Since AMD fanboys will buy it no matter what and it’s in limited supply then they should raise the price to $700.

            • Silus
            • 4 years ago

            Can you quote me where I said that no one would buy it ?

            That’s a rhetorical question, of course, because I said no such thing. I did say, though, that for its current price it underperforms, so it needs to be cheaper to be competitive.

            • tomc100
            • 4 years ago

            If it needed to be cheaper to be competitive, then it wouldn’t sell out. Also, since you just said AMD fanboys will buy anything “no matter what,” then why should it be cheaper? Your arguments make no sense. If there are customers willing to buy a product that has lower performance than its competitor, then why should they lower the price? To satisfy your rant? You also mentioned that AMD needs all the money it can get, since the card must be really expensive to produce and AMD can’t lower the price because it has financial problems, so again, why should they sell it at a lower price when fanboys are buying it? That’s like advising Apple to sell their products at a lower price since Android watches and Android phones/tablets are just as good. You seem angry that AMD is selling it at that price and fanboys are buying it. It’s a good thing you’re not a financial advisor. If it’s selling out, it doesn’t need to be cheaper to be competitive.

            • Silus
            • 4 years ago

            It makes complete sense (except for you of course)…AMD doesn’t live off their fanboys. They need to appeal to other customers too that aren’t their zealots. And those people won’t accept any price, unlike fanboys like yourself. But you are way too limited to understand basic economics and financials of a big company, so I won’t even try to explain it further.

            • tomc100
            • 4 years ago

            lol, you’re a dumbass. I already showed how dumb you are and nothing you say will change that. Appeal to other customers? A product that is sold out doesn’t need to appeal to numbskulls like yourself. Hey, maybe Apple should lower their prices to appeal to people in the third world too. So damn stupid. Something that is selling like hot cakes doesn’t need to be lowered in price. It’s called simple supply and demand. It’s also called common sense but for a complete boob it wouldn’t make much sense.

            • ronch
            • 4 years ago

            Maybe because AMD fanbois are a bunch of nobodies?

            Just kidding.

        • flip-mode
        • 4 years ago

        Limited initial availability would be my guess.

          • travbrad
          • 4 years ago

          Yep almost anything can “sell out” if the supply is low enough.

        • ronch
        • 4 years ago

        It’s not hard selling 3-5 units, which is probably all they got their hands on.

        • Klimax
        • 4 years ago

        Something about limited availability, be it due to manufacturing difficulty or an intentional limit on supply.

        • K-L-Waster
        • 4 years ago

        It’s been out for what, 3, 4 days? Of course the initial shipments are sold out.

        It’s after that initial hit that the pricing will either affect it or not.

      • psyph3r
      • 4 years ago

      Your typical cynical analysis comes far too soon. This is brand-new memory tech with immature drivers going up against highly optimized systems with way more memory. It’s within 5% most of the time while competing against much more expensive systems. AMD still needs to work on their drivers to clear the frame buffer faster. I’ll be buying a Fury XT with 8GB of HBM2. It’s going to kick ass. With the arrival of DX12, AMD’s entire ecosystem is going to kick some serious ass. Make your judgment in 6 months.

        • Silus
        • 4 years ago

        LOL, the typical AMD fanboy. This brings me back to the R600 days, when all the excuses for its poor performance were “you’ll see when DX10 arrives, it will kick ass then… it was made for DX10” or “drivers will get better, these are just immature drivers,” and those sorts of irrational arguments.

        AMD fanboys, instead of expecting proper drivers at launch, expect them to be bad at first but get insanely better in the future.
        AMD fanboys, instead of expecting proper performance now in the current crop of games, expect it to be much better with games that support the next iteration of DX.

        You know how many times that has happened? The answer is zero. Sure, drivers will get better, but AMD is notorious for their less-than-stellar drivers, and somehow you believe it’s going to be different this time? Also, HBM is what it is (be it 1 or 2), but it can’t fix an unbalanced architecture, which is clearly the case, as the paper specs and the actual benchmarks show.

        You can wait 6 months all you want and create all the fantasies you like about the Fury X kicking ass, but the reality is what the reviews have shown. You can keep your head inside AMD’s reality distortion field or simply wake up, but with that level of delusion I’m sure you prefer to stay in AMD’s RDF.

          • snook
          • 4 years ago

          Hehe, praise Allah the kind and merciful, you came in late and are finishing hard.

        • JustAnEngineer
        • 4 years ago

        Cynicism is the belief that humans are motivated only by selfishness.

        Silus is not a cynic. AMD could offer a new card that performed three times as well as the NVidia competition for one tenth the price and was bundled with free ice cream… if Silus posted on-line about it, he would still be negative towards AMD.

          • Klimax
          • 4 years ago

          I’d say that’s a bit hypothetical. For quite a few years he hasn’t had a chance to display what you attribute to him. (Maybe something after the release of the 7970?)

      • shank15217
      • 4 years ago

      Hyperbole much? It needs some work in the driver department; I’m tired of this doom-and-gloom crap.

    • geekl33tgamer
    • 4 years ago

    That’s not what I expected when I read those performance numbers, and boy that’s got to be awkward for AMD at their target price point.

    Even more so because the two largest e-tail companies here LOWERED the price of the 980 Ti today to £499.99. The Radeon? Yeah, it’s starting at £507.99.

    Heck, at times in 2 of the games, it’s within 1-2 FPS either side of a regular 980. Those are £369.99. Jeez.

    It may only be £8 at the Ti end, but that new card is also only one price cut away from being a compelling option. Today, it’s not a compelling option.

    • ish718
    • 4 years ago

    AMD tried to take the performance crown from nvidia but ended up getting beat by their own product, the ungodly R9 295 x2!

      • juzz86
      • 4 years ago

      Which makes my next decision much harder. I waited for both 980Ti and Fury to hit before I updated, but it still looks like the 295×2 is the card to grab to update my GTX 690. And that’s disappointing, because I could’ve done it months ago.

      I have no allegiance either way, but it’s a damn perplexing time to buy a new card at reasonable cost. The 980Ti is too dear, and the Fury can’t match it either. Best to save the $100 or so and stick with dual Hawaii, I guess.

      Any thoughts folks? I’m honestly lost as to what to do, but the 690 isn’t cutting the mustard anymore.

        • f0d
        • 4 years ago

        I’ve never liked dual-GPU cards – IMO, running two separate cards is the way to go, or even one really good one, because game performance with SLI/CrossFire depends on profiles.

        I guess if you play popular games like Battlefield and others that have good SLI profiles, dual 290/290Xs are the way to go, or dual 970s if you’re not running at 4K, or dual 980s if you can afford the extra money.

        I don’t like running SLI/CrossFire, though (which is funny, as I have 3 x 670s I’m about to play with in SLI), and I would prefer one fast card like the 980 Ti, because I don’t always play games that have good SLI profiles, so I’d rather have one big honkin’ fast card.

        I guess it all depends on what you play, and if the 690 works well in the games you play, then 2x 970s would be good too (unless you’re at 4K, because of the 3.5GB memory thing).

          • Westbrook348
          • 4 years ago

          Agree after trying to play games with 3D Vision on in SLI. No problems in 2D with G-sync enabled. But 3D with two cards is a mess (I can’t imagine three). I’m trading the 970s in for a 980Ti because I can’t take the headaches anymore.

          • juzz86
          • 4 years ago

          Cheers mate, I’ve been on duallies since the 4870X2 so their hiccups apparently don’t trouble me too much, I’m pretty forgiving on the odd stutter or two. I do get they’re a niche thing, but they’ve treated me very well in the past, and they’re also easier for me to put a block on and add to the rig, considering it’s mATX!

          I moved from 1600p60 back to 1080p120, and the 690 can’t quite cut the mustard in the newer stuff, so I’m back at 1600 (and will probably stay here, to be honest – I see the allure of 120 but 60 doesn’t actually bother me too much after seeing both). Everything up to FarCry 3 is fine, but anything after that is struggling a bit. It’d be nice to be able to up the pretties a bit!

            • sweatshopking
            • 4 years ago

            Wait until dx12 comes out and see if the multi gpu fixes improve the situation.

    • Meadows
    • 4 years ago

    Aaand it’s basically a dud.

    The only redeeming feature of this card may be overclocking headroom. Here’s hoping it has that, at least.

      • Meadows
      • 4 years ago

      Also, this is how you stop NVidia from looking like “the more expensive option”.

      • sweatshopking
      • 4 years ago

      yes.

      • VincentHanna
      • 4 years ago

      I can’t imagine you’d add an $80 liquid cooling rig to the card and not factory overclock it appropriately.

      • shank15217
      • 4 years ago

      You read that entire review and came up with ‘dud’?

        • ImSpartacus
        • 4 years ago

        And you didn’t?

        The Fury X that was tested was a stuttery mess.

        I think many of us were rooting for it, but it just didn’t pan out today. Maybe it’ll improve in the next few months.

          • squngy
          • 4 years ago

          As an NV-leaning guy, I was so hoping it would be a huge success.

          NV didn’t change prices for almost a year FFS.

        • Waco
        • 4 years ago

        How could you read it any other way?

      • Silus
      • 4 years ago

      I suggest you read about overclocking for Fury X…you won’t be pleased.

        • snook
        • 4 years ago

        Oops, just posted about that. It’s not a barn burner, for sure.

        • Meadows
        • 4 years ago

        Well, you’ve saved me some effort then.

      • puppetworx
      • 4 years ago

      Such a damp squib after so much time waiting for something new in the GPU market. It’s like the GPU market has now followed the CPU market into stagnation.

        • JustAnEngineer
        • 4 years ago

        AMD and NVidia have literally pushed to the boundaries of TSMC’s venerable 28 nm fabrication process. The individual transistors cannot get any faster, and with 600 mm² GPU dies, they have run out of room to add more transistors.

          • anotherengineer
          • 4 years ago

          Yep.

          AMD should have asked Intel if they could fab a few 14nm GPU engineering samples for them 😉

            • Airmantharp
            • 4 years ago

            What’s funny is that Intel would actually be a great suitor for AMD’s graphics division, but such a move would destroy AMD as a whole, and AMD’s GPUs compete to an extent with Intel’s HPC products.

            • chuckula
            • 4 years ago

            This will likely never happen in the real world but…..

            Intel doesn’t buy-out AMD’s graphics division, but Intel does a big cash-transfer over to AMD to license GCN and other assorted technology to be used in Intel’s IGPs.

            AMD gets a bunch of badly needed money without having to actually give-up any lines of business, Intel gets better integrated graphics.

            • Klimax
            • 4 years ago

            Intel has no use for that, neither in IGP nor in HPC. Just remember with whom Intel competes in HPC…

            • chuckula
            • 4 years ago

            “Just remember with whom Intel competes in HPC…”

            Nvidia? What do they have to do with this?

            • Klimax
            • 4 years ago

            Because you couldn’t say AMD. AMD is nowhere/nobody in that market.

            All Intel needed they already got through patent cross-licensing (first with AMD, and shortly before the HD series, with Nvidia). And what they lack they can’t get from AMD anyway (a good driver team).

            BTW: even Bay Trail with good drivers would be quite a capable gaming chip. But performance is very uneven…

            • Airmantharp
            • 4 years ago

            Graphics drivers are one spot where Intel could really use a bump, though they’re certainly getting there.

            Casual gaming on laptops is one thing that they’ve gotten down pat.

          • puppetworx
          • 4 years ago

          I know you’re right about the die size being the ultimate limitation but it’s really disappointing that AMD weren’t able to trade blows with NVidia when they have a bigger chip, a new memory architecture and watercooling on their side. If the problem is that AMD can no longer improve architecture efficiency (like they’ve failed to do in the CPU market) then the GPU market is in a lot of trouble.

      • anotherengineer
      • 4 years ago

      I wouldn’t call it a dud outright.

      HBM is a new tech, and they were able to bring it to production and to market; it will no doubt replace GDDR5 in the future, becoming the new de facto standard in high-performance cards.

      And it’s not a dud in the sense of core temps and noise, two very important things IMHO. Of course you can slap a liquid cooler on just about anything, but it will cost additional $$ and typically void your warranty.

      If the Nano lands somewhere between the 970 and 980 with about the same power requirements, it has the potential to be a great card. However, even if the Nano is an awesome card, the reality is that so many people have a 970 and/or 980 now that there would still be no reason to dump one for a Nano. The Nano does have the niche feature of being small, for ITX boxes and/or HTPCs, though. I think if AMD could have put out the Nano in volume before the 970 and 980, it would probably have been a great help to them.

      Have to keep on waiting until the reviews are done.

        • K-L-Waster
        • 4 years ago

        “If the Nano lands somewhere between the 970 and 980 with about the same power requirements, it has the potential to be a great card.”

        Agreed - I’m still keeping an eye on it as a potential upgrade. Now, it would need to land between the 970 and the 980 on both performance and price, of course, but I suspect both are doable.

      • jihadjoe
      • 4 years ago

      Unfortunately, reports from HardOCP (http://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/11) and TPU (http://www.techpowerup.com/reviews/AMD/R9_Fury_X/34.html) indicate the card may not have even that. Those sites got just 70-80 MHz and 100 MHz overclocks out of their Fury X samples, respectively.

    • YukaKun
    • 4 years ago

    I have a theory by looking at all the benchmarks for the Fury X.

    Can you OC your CPU and see how the Fury X scales, please? Same with the RAM.

    I’m not trying to instigate a conspiracy theory or anything, but I’d like to see what happens.

    Cheers!

    • madgun
    • 4 years ago

    Top review, Scott. The Fury X was handily beaten at launch by the venerable 980 Ti. It’s the 2900 XT all over again, except this time AMD can’t afford such a mishap.

      • Action.de.Parsnip
      • 4 years ago

      I keep seeing this comparison. The 2900 XT had zero redeeming qualities whatsoever. The comparison is therefore false.

        • Waco
        • 4 years ago

        The 2900XT wasn’t anywhere near as bad as you’re remembering it.

          • Meadows
          • 4 years ago

          Yes it was, my friend.

    • Mr Bill
    • 4 years ago

    I think AMD has got a winner here.

    1) The Fury X is keeping up with NVIDIA’s Titan X, which needs three times as much memory.

    2) There is another AMD card coming that has two of these Fury X GPUs?

    3) There is less noise and less heat in the case at this performance level.

    4) Since HBM is an open spec, I wonder how long it will be before NVIDIA releases an HBM evolution of the Titan X.

      • Milo Burke
      • 4 years ago

      Nvidia was attractive with cooler and quieter cards when they performed similarly, not worse. That’s not the case here, but I don’t think it means the Fury X is a bad product. I think AMD can still whip their drivers into shape if they choose to. (Although not doing so before the big unveil is a pretty big black mark on the card’s record.)

      I don’t think HBM is an easy mod. Nvidia’s Pascal chip next year is expected to have HBM2 on it, but I highly doubt we’ll see it on anything from Nvidia before then. Or are you just musing about when we’ll have a Pascal-based Titan?

      • Westbrook348
      • 4 years ago

      Your definition of “keeping up” is different than mine. Are you still just looking at average FPS? Because in 2011 Scott showed why that was foolish. If you check out the 99th percentile frame time graphs in his review, the Fury X is lower than the 980, and far below the 980 Ti.

      Maybe with two of them, Fury X gets closer to where it needs to be at its pricepoint. Unlikely IMO given AMD’s driver optimization history, but if they surprise us that’s great for the industry and for us as gamers. But really, you should stop focusing on average FPS. That doesn’t guarantee a smooth experience at all.

      Lastly, I don’t think it’s feasible to make a Titan X with HBM, because the GPU is already so big. HBM has to fit on the same substrate; Titan X leaves no room for anything else.

        • beck2448
        • 4 years ago

        From TweakTown: “At the end of the day, the best card to buy right now is still the GeForce GTX 980 Ti. A great card that beats the Fury X in most situations, with great custom cards from the likes of ZOTAC, EVGA, ASUS and everyone else.”

        From HardOCP: “In terms of performance there wasn’t any game where the AMD Radeon R9 Fury X was faster than the GeForce GTX 980 Ti. In some games, it did match the same gameplay experience, which was a major upgrade from the AMD Radeon R9 290X. However, in every game the GTX 980 Ti always had the framerate advantage, especially when it came to minimum framerate which is important.”

        Most people DON’T want a big water-cooling attachment they have to deal with.

        Not looking good. What they may or may not do in the future is irrelevant NOW.

      • maxxcool
      • 4 years ago

      Umm, NV is already on track with HBM2… Pascal, I think? They thanked AMD for the pioneering effort, then ran with it. Since the Fury seems on par with current NV tech, their gamble was the better one.

      Edited the last line.

        • Airmantharp
        • 4 years ago

        One has to imagine that if Nvidia hadn’t shown real interest in using HBM, the technology might not have been available to AMD in the first place.

      • maxxcool
      • 4 years ago

      corrections

      “I think Nvidia has got a winner here.

      1) The Titan X is keeping up with the Fury X, which needs 1/8th the memory bandwidth.

      2) It takes TWO Fury Xs and a rowboat’s oars to beat the Titan sufficiently.

      3) It does not require liquid cooling that may not fit in smaller cases.

      4) Since HBM is an open spec, Nvidia let AMD take revision-A0 products to market and test them on consumers, then got to revise and improve its own designs over the AMD team’s, while continuing to build on its 4-year lead in the hardware and software compression techniques AMD only recently tried in the 285.”

      • brucethemoose
      • 4 years ago

      Pass the Kool-Aid please.

        • Mr Bill
        • 4 years ago

        sure!
        http://www.anandtech.com/show/9385/amd-shows-off-dualgpu-fiji-card

          • brucethemoose
          • 4 years ago

          http://i.imgur.com/SLG6hp4.jpg

          • Mr Bill
          • 4 years ago

          So, why is this not exciting? Double the GPU and that means 8GB of HBM. It will surely wax the dual-GPU R9 295 X2 and probably the Titan X. I’ll totally guess that it will cost about 60% more than the Fury X so hmmm about a grand.

          Good times are coming for us tech nerds. I can hardly wait for August.

          Then everybody will have to find another “rare” metal or ancient God abbreviation to name these cards after. Zelazny’s ‘The Furies’ was a pretty good read.

            • JumpingJack
            • 4 years ago

            Multi-GPU setups mirror their memory, so the effective available memory is still just 4GB. The bandwidth isn’t additive either; each GPU still sees the same data rate.

            • Mr Bill
            • 4 years ago

            Pixel fill rate, shader arithmetic, bilinear filtering, rasterization, and ALU throughput will all go up. Frame times and latency will go down. Look at the improvement of the R9 295 X2 compared to the R9 290X and think along those lines for the Fury X2 vs the Fury X.

      • Silus
      • 4 years ago

      AMD’s reality distortion field sure is working on you…

        • Krogoth
        • 4 years ago

        Pot calling kettle black……

        • Mr Bill
        • 4 years ago

        I do favor AMD generally because I have been building systems since the 1980’s and I remember how expensive CPUs and motherboards were before AMD and others started bringing some competition to the CPU and chipset market.

        In this case, I am simply agreeing with Damage’s conclusions in this excellent review; I restated very broadly some of the interesting points. I hardly game at all, and only WoW at that, so I have no axe to grind about this card. I’m running a $200 XFX HD 7870, which is sufficient for WoW. I think HBM is really cool tech, and we are getting to see how a lot of theoretical engineering shakes out in fact, which is very exciting. This is just the beginning of HBM use in GPUs and CPUs, and it’s going to be really fun to see what comes out of the competition.

          • Voldenuit
          • 4 years ago

          “I do favor AMD generally because I have been building systems since the 1980’s and I remember how expensive CPUs and motherboards were before AMD and others started bringing some competition to the CPU and chipset market.”

          Competition is good for the marketplace. AMD dualies were very expensive when Intel was underperforming, too.

          As long as we’re rooting for free-market economics and not getting emotionally attached to corporations.

            • Mr Bill
            • 4 years ago

            I’m remembering the Athlon 1GHz going for a cool $300 and the Pentium suddenly dropping from $1000 to meet that price.

            • f0d
            • 4 years ago

            You mean this Athlon 1GHz?
            http://www.cpu-world.com/CPUs/K7/AMD-Athlon%201000%20-%20AMD-K7100MNR53B%20A.html

            Price at introduction: $1299. That’s even more than the 8-core Haswell-E.

            • Mr Bill
            • 4 years ago

            OK, I just checked, and what I actually bought was a 900 MHz part, for which I paid $300 in the fall of 2000.

            AMD Athlon K7900MNR53B A, 710021016681, 0.18 micron, Y2K, week 21, 1.80v core / 3.3v I/O.

            I guess the time difference from the March 2000 rollout to fall of 2000 dropped the price considerably for the 900MHz units.

            Here is a TR link from right about then…
            https://techreport.com/news/1080/amd-ships-1-1ghz-athlons

            • Mr Bill
            • 4 years ago

            Rather than a duallie, I wasted my dough on pairs of the Palomino and Barton MPs, because SMP was so cool back then. I do recall not trying to SMP the duallies, because those server chips were just too expensive.

          • f0d
          • 4 years ago

          People seem to forget that when AMD had the performance crown, they were charging $1K for their top CPUs also.

          They didn’t bring prices down at all – they just joined the competition in charging high prices.

          https://techreport.com/review/8295/amd-athlon-64-x2-processors
          Athlon 64 X2 4200+ 2.2GHz 512KB $537
          Athlon 64 X2 4400+ 2.2GHz 1024KB $581
          Athlon 64 X2 4600+ 2.4GHz 512KB $803
          Athlon 64 X2 4800+ 2.4GHz 1024KB $1001

          http://www.cpu-world.com/CPUs/K8/AMD-Athlon%2064%20FX-55%20-%20ADAFX55DEI5AS%20%28ADAFX55ASBOX%29.html
          Price at introduction: $827

          http://www.cpu-world.com/CPUs/K7/AMD-Athlon%201000%20-%20AMD-K7100MNR53B%20A.html
          Price at introduction: $1299

          (Edited for proof.)

      • tomc100
      • 4 years ago

      The Fury X was really meant to compete against the GTX 980, but then Nvidia got rumors of its performance and decided to spoil the party and magically bring out the 980 Ti. No doubt that when the Fury X2 comes out, Nvidia will also magically bring out their Platinum 980 or whatever it’s called. It seems Nvidia is holding back and is milking its current hardware until they feel threatened.

        • shank15217
        • 4 years ago

        Fury X was meant to compete with the Titan X, hence the ‘X’. It’s very likely the Fury X will reach performance parity a few months down the line with driver updates, but that may be too late and could tarnish the brand.

          • Mr Bill
          • 4 years ago

          Cue the One-X album cover, because it’s “never too late”?

          • tomc100
          • 4 years ago

          The Titan X is $1000 while the Fury X is $650, so I doubt it was meant to compete against it. Most likely it was supposed to perform between a GTX 980 and a Titan X, and it’s priced accordingly. The X is meant as an extreme version of Fiji, just like the 390X and 290X; Nvidia just started using the X to distinguish this Titan from the previous one. Had Nvidia not magically released their 980 Ti early, they would have egg on their faces and EVERYONE would be singing AMD’s praises like it’s the second coming of Jesus.

          Soon we’ll get the Fury Pro, which will compete against the 970 and probably beat it, and I’m sure Nvidia will magically release a GTX 975.

      • puppetworx
      • 4 years ago

      What about the price/performance graph makes it look like a winner?

      Unless a) there is a serious performance improvement in the next driver, b) people figure out how to overclock the bejesus out of it, or c) AMD drops the price quickly, I don’t see many people picking this over the 980 Ti. A 10% performance difference is a lot when you’re laying down $650.

        • snook
        • 4 years ago

        10% is, but not when it only amounts to about 4 frames. It sits right next to the 980 Ti on price/performance, so that’s not really a loss on that front. On other fronts? Oh yeah, it lost.

          • puppetworx
          • 4 years ago

          It’s only 4 frames at 4K; at lower resolutions with higher framerates the performance difference remains proportional. In fact, from what I’ve seen, 4K is where the Fury X is most competitive, with performance falling away from the 980 Ti at lower resolutions.

          http://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-R9-Fury-X-4GB-Review-Fiji-Finally-Tested/Grand-Theft-Auto-V
          http://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html

            • JumpingJack
            • 4 years ago

            HBM did not pan out, and there are far too many technical knocks against the Fury X to make it the choice card over the 980 Ti:

            – No HDMI 2.0 support.
            – Higher power consumption for equivalent (at 4K) to lesser (1440p and lower) performance.
            – The 4GB limit.

            For me, personally, the low-pitched whine reported (and demonstrated on YouTube) is also a negative — it can be quite quiet, but the pitch, even barely audible, would drive me nuts.

            I agree… I don’t see an objective, non-brand-dedicated person jumping for the Fury X over the 980 Ti. The fanboys will snap it up, no doubt.

            • l33t-g4m3r
            • 4 years ago

            TR mentioned retail cards would have quieter coolers than the review samples, in case you missed it.

            • JumpingJack
            • 4 years ago

            So what was reviewed was not what is sold…. didn’t they do this with the 290X as well?

            I recall AMD promising a follow-up after the follow-up … that never materialized either. Hmmmmmmmm…..

            • JumpingJack
            • 4 years ago

            https://techreport.com/news/28566/retail-fury-x-coolers-still-whine-dont-include-fix#metal

            Hmmmmmmmmm....... I thought you said AMD said this was fixed and a non-issue.

            • snook
            • 4 years ago

            First in with the tech is first to suffer the growing pains. I suspect HBM is going to be the default memory for both vendors, though.

            • snook
            • 4 years ago

            got ya, sounds logical.

    • Ninjitsu
    • 4 years ago

    I’ve been resisting pointing this out, given the amount of effort that goes into these reviews (and since I’m not a subscriber yet)…but it would be great if TR could drop 4K benchmarks for now, focusing on 1080p and 1440p instead.

    Hardly anyone in the TR community uses 4K, hardly anyone on Steam uses 4K (0.06%). Heck, IIRC less than 5% of Steam users use anything higher than 1080p. So, what’s the point? Who can use these benchmarks?

    No 2015 single GPU setup can push 60 fps min @ 1080p with the highest details in a 2015 game.

    That should be the mark – can today’s top-end single GPU render [i]today’s[/i] games at 60 fps min @ 1080p, with all details turned up, and with [i]reasonable[/i] AA, like 4x? If yes, move on to the next resolution. Otherwise, stick to 1080p and maybe 1440p (whichever resolution was second according to the TR poll).

    Sure, this will become less relevant as Adaptive Sync proliferates - but it’ll still be a year or two before we see notable adoption.

    As the owner of a 1080p monitor and someone who’s considering a new GPU in a few months, this article doesn’t help me much at all, unless I take an Excel spreadsheet and calculate approximate theoretical performance increases. (BTW, from an academic POV it doesn’t help in assessing the strengths and weaknesses of HBM, or the trade-offs that AMD/Nvidia have made.)

      • YukaKun
      • 4 years ago

      I downvoted you for this simple reason: “because I don’t have it, no one else can know about it”.

      Can you imagine that logic applied to Ferrari, Porsche or Lamborghini?

      If you want to game on 1080p and don’t want to move from there, you have the 970. No need to look at the 980ti, Titan X or the Fury family.

      Cheers!

        • Ninjitsu
        • 4 years ago

        [quote]"because I don’t have it, no one else can know about it".[/quote]

        Know about it, sure - but the last few GPU reviews have almost dropped non-4K resolutions entirely.

        Also, the car analogy doesn’t apply here. You run software on GPUs, at various resolutions. I can’t think of that translating to cars.

        Also, the 970 isn’t good for 60 fps min @ 1080p - heck, I’d bet even the Fury X isn’t. 4K is mostly academic at this point.

          • VincentHanna
          • 4 years ago

          You can run your GPU at various resolutions…

          And you can drive 25mph on a residential street using a Ferrari… Or strap a surfboard to the roof and take 1 of your 3 kids to the beach. Totally the same.

            • YukaKun
            • 4 years ago

            To your point:

            [url]http://assets.blog.hemmings.com/wp-content/uploads//2011/02/Volvo-Lambo.jpg[/url]

            I still don’t agree on taking out 4K, since it’s the way forward and we have to accept that 4K is the new 1080p (or 1920x1200) of this era.

            Cheers!

          • YukaKun
          • 4 years ago

          The car analogy stands just because of your last point (again): “4K is mostly academic at this point”.

          You can drive a car any way you want, just like you can hook your video card up to whatever display you want. If you have a 1080p monitor, sure. If you have a trio of 1080p screens, sure. If you have a 4K monitor, again, sure. Why take out information? I can understand asking for more information, but less? Especially when the cards are *aimed* at the tested resolution. Makes no sense.

          Cheers!

        • Whispre
        • 4 years ago

        I downvoted you for one simple reason: ignoring the point of his post.

        I personally game at 1440p not 1080… and skipped every page of this review that was at 4k… worthless to me today, and for at least the next year or two, maybe three.

        Even if no 1080p tests are done, at least do most in 1440p….

          • YukaKun
          • 4 years ago

          With that I can agree. 1440p/QHD, in conjunction with 4K/UHD, is actually better for testing these cards than 1080p/FHD. Still, that was not his original point. He wants 4K dropped for cards that are *aimed* at that resolution. Makes no sense.

          Cheers!

        • Anonymous Coward
        • 4 years ago

        Yeah actually reviews of Ferrari, Porsche or Lamborghini are pretty boring for me these days. Getting past those cars is a step on the road to being emotionally mature, IMO.

      • Milo Burke
      • 4 years ago

      Scott had very limited time with the Fury X before the review needed to be posted. I think it’s fair to say he had time to test wide or test deep, and he chose to test deep (regarding frame variance, Mantle comparisons, Fury’s unusual balance of assets, etc.). Which I prefer, since everybody else seems to test wide and shallow.

      Scott said in another post on this thread that he prefers 1440p at higher framerate than 2160p at a lower framerate. So you’re not the only one. But reviewing hardware is all about discovering where one product stumbles and another doesn’t, which is best handled by giving them difficult tasks.

      So yes, I think Scott did the right thing testing at 4k. And yes, I’d like it if he’d do some tests at 1440p and 1080p as well in the coming weeks.

      What’s holding you back from subscribing? Gold is nice, but silver still counts.

        • Ninjitsu
        • 4 years ago

        Yeah, I know – which is why I was hesitating to write this.

        [quote<] But reviewing hardware is all about discovering where one product stumbles and another doesn't, which is best handled by giving them difficult tasks. [/quote<] I'd add some more nuance to that - difficult tasks are fine, but [i<]only[/i<] giving it difficult tasks that it clearly can't perform well doesn't prove much. Where does it shine? etc. In this case specifically, there's no way to tell whether 4GB is holding the card back, simply because neither the VRAM usage is examined nor have the lower resolutions been tested much. Of course, while I don't mind testing 4K on its own, it seems to be dominating the reviews from TR lately, which is why I suggested dropping it - if it takes too much time to test multiple resolutions, test the most common, relevant and useful. Maybe throw in one 4K test in a game where it does very well at 1080p to showcase future-proofing type stuff. I don't earn anything regular myself, neither do I own a credit card - once both are sorted, gold it shall be.

          • Milo Burke
          • 4 years ago

          No worries, my friend. You don’t have to be a subscriber to comment. Regardless, your criticism is clearly constructive.

          I like your thinking. It’s best to test where the products do well, and where they are most likely to be used. I wonder the best way to bring this suggestion to light in an open and positive way.

        • Anonymous Coward
        • 4 years ago

        [quote]But reviewing hardware is all about discovering where one product stumbles and another doesn’t, which is best handled by giving them difficult tasks.[/quote]

        Is a weakness interesting merely because it exists, or is it only interesting when it can actually affect the user? In an ideal world, TR would cover both the "common man" at 1080p and the well-funded enthusiast.

      • Westbrook348
      • 4 years ago

      I don’t have a problem with the high end card reviews or 4K testing, but there does seem to be an excessive focus on the Titan X and 4K (especially with MSAA enabled). How many people are paying $999 for a GPU? Compared to the millions of 970s sold. I would prefer the majority of these guys’ effort focus on 1440p. Not everyone has G-sync, so maintaining constant 60 fps is still super important. If “time spent above 16.7 ms” is the entire duration of the benchmark, then maybe the pixel count should be lowered. Postpone focusing on 4K until variable refresh is more widely adopted.

      And this is coming from a guy who loves 3D Vision and would love an update on frame times in 3D. That’s even more niche than 4K, the difference being that the former doesn’t get any attention: there seem to be zero Tech Report articles on 3D Vision since Cyril’s January 2012 article. Oculus Rift is still a year away. If you guys are grasping for ways to test the 980 Ti and Fury X, run some frame-time analysis on the 1440p ROG Swift in 3D. I await the results! 🙂

      Edit: typo

      • odizzido
      • 4 years ago

      Me too. I’d like to see 1440p/1080p, and extend the “time spent beyond X ms” metric to 8.3 ms.

      • NarwhaleAu
      • 4 years ago

      Buy any of these next-gen cards and you can game at 1080p to your heart’s content.

      I game on a 30″ monitor so I appreciate the 4K testing. In my opinion, you can’t truly test something unless you push it to the edge. 4K is where I will be in a few years time, and I tend to keep graphics cards for ~5 years, so I’d like to see how these cards do. To me, 1080p isn’t as relevant – most cards can now push enough pixels to game at high settings.

        • Ninjitsu
        • 4 years ago

        People keep monitors for longer than 5 years.

          • NarwhaleAu
          • 4 years ago

          Why would you buy a Fury X to game on a 1080p screen…. that doesn’t make sense. Instead, you could buy a 27″ IPS and a 290 (or a 370 if you can stretch to it). That would be a far better gaming experience!

            • sweatshopking
            • 4 years ago

            you’d buy a fury x to game on a 1080p screen because a GPU fast enough to drive games at 1080p doesn’t yet exist, and it’s the closest you can get to a decent 1080p gpu from amd in 2015

            • anotherengineer
            • 4 years ago

            What if your 1080p screen was 120Hz or 144Hz??

      • jihadjoe
      • 4 years ago

      But how will we know if 4k is playable/unplayable if nobody tests it? Fury was supposed to be a groundbreaking product offering better than Titan X performance. I think it was definitely not an error to test in 4k.

        • Ninjitsu
        • 4 years ago

        Sure, test it at 4K, but [i]test other resolutions too[/i]...that’s all I meant. And the thing is, if it can’t do 60 fps at 1080p or 1440p, it’s not going to do that at 4K - that’s fairly obvious. To be clear, I mean staying at or below 16.7 ms throughout the benchmark run, or at least >90% of the time - I don’t just mean averages.

      • NeelyCam
      • 4 years ago

      Hmm… that brings up a larger point about GPU benchmarking… the “good enough” aspect.

      If I have a given GPU, I would use the highest settings that will still allow the game to run smoothly. I could get a faster GPU and use higher settings/resolution, but I would still aim for the same goal – smooth gameplay. If that meant buying a significantly more expensive GPU, and possibly having to deal with cooling noise, was it really worth it…?

      Maybe I’m in the minority here, but I can be happy with less eye candy and resolution as long as the game is smooth and the system is quiet. That’s why high-end GPUs don’t really interest me – Fury Nano is much more in line with what I would want.

        • raddude9
        • 4 years ago

        I can’t believe it, but I completely agree with Neely! I’m not bothered about super-high settings any more, I just want smooth gameplay without the noise/power consumption. The Nano is the first card to interest me in quite a while.

    • tipoo
    • 4 years ago

    Any chance of an addendum about video memory use with the Fury? AMD claimed they found 4GB was enough because they hadn’t optimized driver video memory use enough before now, so I assume that means they’re planning on making it more efficient in new drivers for the Fury.

    Seeing how much better Nvidia’s texture compression is, I wonder if part of that can be improved in software too?

    • albundy
    • 4 years ago

    Hopefully this new tech paves the way for affordable 4K gaming in the near future. Surely we will see more powerful, lower-cost cards follow.

    • Hübie
    • 4 years ago

    What I’ve noticed since GCN 1.0 is low triangle-setup throughput compared to the GeForce counterparts, and a relative weakness in fillrate. I think the first issue is related to the setup (maybe the arbiter?), and the fillrate weakness comes from inconsistent throughput by the TMUs.
    Am I right, or is there another significant weakness that causes this?
    Comparing Fiji to Tahiti, the front end has scaled too little for the full workload, don’t you think? What are your opinions on that, guys?

    • odizzido
    • 4 years ago

    If you do some future tests with these cards would you consider testing for people more interested in 120hz monitors at lower resolutions?

    Nice review BTW. It’s kinda sad to read but nice to know where things stand finally. If the problems with long frame times were solved I think it would look pretty good when you consider adaptive sync.

    • flip-mode
    • 4 years ago

    This thing is a price cut away from being a compelling option. It’s equivalent (at Scott’s 99th percentile frame rate) to a GTX 980 out of the gate before any driver updates. It runs cool and quiet. AMD simply overpriced the thing. Sell it for $550 and a lot of people will pay the $50 premium over the GTX 980 for the nice cooler and the hope of driver improvements.

    The only problem with this thing is the $650 price. Revise to $550 and sell cards.

      • chuckula
      • 4 years ago

      The cut-down Fury parts at lower prices might be the most interesting parts competing with the 980 & 970.

      AMD just needs to actually get them on the market.

      • Mat3
      • 4 years ago

      [quote]It’s equivalent to a GTX 980 out of the gate before any driver updates.[/quote]

      Equivalent? It easily beats the 980 out of the gate.

        • Westbrook348
        • 4 years ago

        It’s lower than the 980 on Scott’s 99th %ile frame time graphs. If you’re still focusing on average FPS you need to get “inside the second.”

      • YukaKun
      • 4 years ago

      A water-cooled card with a factory warranty that fits in 99% of the mid-towers out there, for under $700, is not a good deal? Especially when it’s not that far off in power consumption and performance (they’re roughly equivalent) from its competition at that price range?

      I’m baffled…

      Cheers!

        • flip-mode
        • 4 years ago

        There’s nothing baffling about it. $650 might work for your needs (must be small, must have factory water cooler) but your needs don’t sound like they reflect the needs of the broader market that normally shops for a card of this stature. The vast majority of the market isn’t going to be making a purchasing decision based on this card being so small (not to mention you still need room for the radiator), and most of them will probably not think the water cooler is worth an extra $150 over a GTX 980.

        Typically, people have a price in mind, then find the best performing card at that price. Then people start to differentiate based on secondary requirements – card size, power consumption, noise, brand identity (asus, msi, evga / ati, nvidia).

        I think the water cooler is worth $50 over similarly performing cards, but most people will start to lose interest as the price difference grows higher than that.

        edit: clarity

          • YukaKun
          • 4 years ago

          At that price point? On the contrary. Water-cooling is the way to go for SLI and CrossFire in monster-config systems, or even regular builds if the budget allows. That’s why it is an appealing “feature” the Fury X brings to the table at *that* price point. The water block for the 980 Ti is north of $80 USD, and you blow your warranty when swapping to it. If you want a water-cooled 980 Ti *with* a warranty, that is at Titan X price points if you look around.

          Re-phrasing this: AMD paved the way for nVidia OEMs to get creative. Water-cooling is not a little feature, IMO. It is a big feature from a price/feature perspective. I know it’s not the “core” price/performance metric, but it *is* important to take into account when the other is within striking range.

          As for regular cooling for Fiji, we’ll have to wait and see how Fury non-X performs.

          Cheers!

          EDIT: Typo.

            • flip-mode
            • 4 years ago

            Do you have some market research data on hand? I’ll bet you 3 internet beers that if AMD could offer a version of Fury X with an air cooler for $150 less it would be the higher selling product. As part of the same bet, if there are similar options available for Geforces (i.e. a version sold with a water cooler for $150 more) those water cooled Geforces don’t sell nearly as many units as the air cooled unit for $150 less.

            Re-phrasing this: Price is king; $650 is too much for Fury X when the essentially equal GTX 980 sells for $150 less.

            • YukaKun
            • 4 years ago

            Well, they actually have one coming. The Fury non-X is the Fury X sans the water cooling. And you can bet your 3 internet beers you won’t want it for the very same reasons you don’t want the Fury X. It’s the same card, same clocks and same PCB as far as I could tell, but with a regular HSF.

            Your emphasis is settled on the “core” metric and nothing else, so you’re dissing the Fury X as the loser here. Not saying that’s a bad thing, but against the cost of the card you must take into account that it is better featured than nVidia’s 980 Ti. At least the reference cards. Especially at the price points they’re asking for it.

            And no, I don’t have actual market research data, just like you don’t. I do read Tom’s forums all the time, though, and there are people looking for “reference” boards to slap a waterblock on. That’s as good as it gets for “data” 😛

            Cheers!

            • Kougar
            • 4 years ago

            You have always been able to swap coolers and waterblocks on EVGA graphics cards without voiding the warranty. It’s publicly stated in their rules and on their forums. They are very generous about it.

            I don’t fully agree with the Crossfire and waterblock comment. Any enthusiast who seriously cared about watercooling would prefer a DIY approach: tossing the 120 mm rads in favor of anything larger, using a single lower-noise pump, and running a larger-diameter, simplified tubing arrangement, for starters. A decent pump will last >5 years; the longevity of CLC systems still can’t match that.

            As an avid DIY watercooler, I especially did not like the internal photo of the Fury X. Not only is the entire PWM/VRM system not cooled by the block, but it receives zero passive air cooling because it’s covered up in a box! I know Scott said it was reportedly massively overspecc’d hardware, but running 290X levels of power consumption while insulating the power delivery hardware? And that’s before adding clocks or voltages when OCing the thing, as most Fury X owners will do. If I was in the market for a GPU this alone would be a big +1 for a vanilla Fury with aftermarket cooling.

      • Demetri
      • 4 years ago

      Yes, but it would be nice if AMD could compete without having to slash prices to the bone. The longer this goes on, the more nervous I am at the prospect of them going under and losing driver support, even if I’m pretty sure someone would buy the graphics division.

      • HisDivineOrder
      • 4 years ago

      Isn’t that the story of AMD, though? Always “a price cut away from being a compelling option?”

    • Sam125
    • 4 years ago

    For the games I play, the Fury X isn’t too bad and it’s about comparable to the 980Ti/Titan X. Kudos to AMD for going watercooling with the Fury X. That quietness is a really juicy metric.

      • Westbrook348
      • 4 years ago

      $650 for “not too bad”? If you don’t need the money, send some to charity and/or Tech Report

    • Anovoca
    • 4 years ago

    I don’t know how this happened, Scott, but you forgot to list one of your benchmarks!!!

    [url]http://s275.photobucket.com/user/anovoca/media/Mobile%20Uploads/arkham%20knight.jpg.html[/url]

      • Duct Tape Dude
      • 4 years ago

      This image was intentionally left out due to improper labeling as “average” instead of “max” fps.

        • Westbrook348
        • 4 years ago

        When average = max…

      • Milo Burke
      • 4 years ago

      I only regret that I have but three up-votes to give to this post.

      • Damage
      • 4 years ago

      +3 Awesome.

      Borrowed it:
      [url]https://twitter.com/scottwasson/status/613810818212192256[/url]

        • Anovoca
        • 4 years ago

        Woot walk off. See everyone next week.

      • derFunkenstein
      • 4 years ago

      Those frame rates are completely not believable. Way too high, despite the cap.

      • chuckula
      • 4 years ago

      This post won the Internet.

      • LoneWolf15
      • 4 years ago

      Mic drop.

      • snook
      • 4 years ago

      scott put this out on twitter, i bawahahahaha’d. you sir are funny. ty

      • maxxcool
      • 4 years ago

      LOL

      • NeelyCam
      • 4 years ago

      Insta-topcomment

      • divide_by_zero
      • 4 years ago

      Well done! Take all my elitist subscriber gold thumbs!

      • Wirko
      • 4 years ago

      The other pic, the OSI model, isn’t half bad either.

      • entropy13
      • 4 years ago

      Reminds me of [url=http://i.imgur.com/IkkH2dJ.png]this[/url] classic.

    • itachi
    • 4 years ago

    No overclocking test?? :o

    It would be nice indeed to see some tests pushing the VRAM - mod Skyrim to the max ;).

    Also, is there no loss of quality or performance from using a DP-to-DVI adapter?

    Nice card overall. Kudos for bringing new technology to the table with HBM, and for making liquid cooling a default option on cards that aren’t “ultra high end” like the R9 295X2.

    • Tirk
    • 4 years ago

    Ooh, maybe I missed it, but have you tried how the Fury X fares in Mantle, or did you just try DX11? It’d be interesting to see the Mantle numbers as a prelude to potential DX12 performance.

    Just asking because some of the games you have there support Mantle; I am NOT criticizing your use of DX11. It just seems to be an intriguing factor for an update or later review.

      • Milo Burke
      • 4 years ago

      I’m pretty sure he tested Mantle when available, except for one game where it performed notably worse than DX11. Mantle didn’t seem to help AMD’s case over Nvidia on DX11.

        • Tirk
        • 4 years ago

        Thanks, missed that the first time reading, the bar graphs lured my attention away from the text 😉

        Looks like AMD needs to get cracking on its driver optimizations.

        • Tirk
        • 4 years ago

        Holy smokes, people are quick on their thumbing; maybe I shouldn’t have admitted I missed it the first time around.

        Thanks Milo Burke for doing the polite thing and actually responding to my question.

          • Milo Burke
          • 4 years ago

          Yeah, people can be quick to down-vote. I cancelled out the negatives for you so far since you’re polite. =]

    • Mark_GB
    • 4 years ago

    [quote]"The card’s six-phase power can supply up to 400 amps, well above the 200-250 amps that the firm says is needed for regular operation. The hard limit in the BIOS for GPU power is 300W, which adds up to 375W of total board power draw. That’s 100W beyond the Fury X’s default limit of 275W."[/quote]

    275W = 22.91 amps. 375W = 31.25 amps.

    This card would explode if you ever pumped anything even remotely close to 400 amps into it. Or even half of that. Even at 100 amps it would almost immediately catch fire, if it did not explode.

    Remember... We are dealing with 12 volts DC here... Not 120 volts AC. It changes the math a lot.

    And yes, I saw the AMD-produced chart that also says 400A.

      • chuckula
      • 4 years ago

      They are talking about what the VRMs on the card can supply to the chip using voltage step-down from the 12V supply lines. Obviously the 12V supplies are not pushing 200A or anything even close to 200A.
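To put numbers on the two different "amps" figures being discussed in this sub-thread, here is a quick back-of-the-envelope sketch. It is not from the review or from AMD; the ~1.0 V core voltage is an assumed round number purely for illustration.

[code]
# Rough sketch of why "400 A" and "~23 A" can both describe the same card.
# The ~1.0 V GPU core voltage is an assumption for illustration; the real value
# varies with load and clock/voltage state.
def amps(watts, volts):
    return watts / volts

board_power_w = 275.0   # Fury X default power limit, per the quoted review text
gpu_power_w = 300.0     # BIOS hard limit for GPU power, per the quoted review text
input_rail_v = 12.0     # what the PSU supplies over the PCIe connectors and slot
core_v = 1.0            # assumed post-VRM GPU core voltage

print(f"12 V input side: {amps(board_power_w, input_rail_v):.1f} A at {board_power_w:.0f} W")
print(f"VRM output side: {amps(gpu_power_w, core_v):.0f} A at {gpu_power_w:.0f} W and ~{core_v:.1f} V")
# Roughly 23 A drawn from the 12 V rails versus roughly 300 A delivered to the
# GPU at core voltage -- comfortably inside a 400 A VRM rating.
[/code]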

      • Ninjitsu
      • 4 years ago

      Yeah I made the same mistake – Scott was talking about the chip itself and not the card.

    • shaan2585
    • 4 years ago

    “512 GB/s HBM” doesn’t matter if you can’t use the bandwidth efficiently.

    I think nVidia realized that when they started designing Maxwell, and that was 4 years ago. What’s wrong with the red guys? Are they absolutely oblivious to the importance of data caching, thinking that the 2MB SRAM cache on the 750 Ti is just for show? Do they ever learn anything at all???

      • Mat3
      • 4 years ago

      All the caches were doubled compared to the 290X.

      • Action.de.Parsnip
      • 4 years ago

        The 2MB SRAM cache has near-zero effect on gaming performance. The working set is larger than any SRAM cache. Paraphrased from a blog on the Nvidia site itself.

        • pranav0091
        • 4 years ago

        You sir, are very wrong.
        Make no mistake about it, a cache is no joke. There is a good reason for anyone to spend die area on cache.

          • Action.de.Parsnip
          • 4 years ago

          “Paraphrased from a blog on the Nvidia site itself.”

            • chuckula
            • 4 years ago

            Two golden opportunities to actually quote the blog and link to it directly.
            Two golden opportunities wasted.
            Likely reason: The blog only exists in your head.

            • Airmantharp
            • 4 years ago

            I stopped at trying to apply Nvidia architectural advances to AMD GPUs. I couldn’t give a rat’s what Nvidia techs have to say on that subject, given the inherent conflicts of interest.

            • Meadows
            • 4 years ago

            Conflicts of interest? Do they want to make slower GPUs perhaps?

            • Airmantharp
            • 4 years ago

            Just in response to A.d.P’s (and the OP’s) drivel. What Nvidia has to say about caching in no way has any bearing on AMD’s GPU architecture.

    • derFunkenstein
    • 4 years ago

    You mention the lack of HDMI 2.0 support and therefore no 4K @60Hz on 4K TVs without DisplayPort. Did they give a reason why they skipped it? For this OMG4K push, you’d think they’d try to drive it in every way possible.

      • dymelos
      • 4 years ago

      The second I read that it wasn’t HDMI 2.0, it was dead in the water for me. Knowing more and more people who pick up cheap but great 4K TVs like the LG I use, it killed any chance of me wanting this card. I would have loved to go to a smaller card, but this seems like a big wasted chance on their part to go after the 4K crowd that they were so clamoring for with the advertising on this part. Not sure why AMD dropped the ball big time with this, and sadly we will have to wait a while longer for either a respin of this chip or the next gen until we finally get that - in what, another year and a half or so? Looks like my 980s will live to see another day.

      • fade2blac
      • 4 years ago

      Huddy and crew responded to this question during a Q&A prior to a demo, and the basic answer was “it was a time-to-market decision,” followed by the mention of a band-aid solution: an active adapter that can convert to HDMI 2.0. In fairness, HDMI 2.0 has its own limitations (reduced chroma subsampling, only experimental VRR support as recently demoed by AMD). Consumers might be better served if more TVs natively supported DP.
      [url]http://www.hardocp.com/news/2015/06/23/amd_qa_session_on_upcoming_fury_cards[/url]

      It appears that HBM integration was the focus for Fiji, and nearly all the extra die space was spent on growing the GCN shader/ALU arrays rather than re-balancing the architecture and refreshing fixed-function hardware to integrate things like HDMI 2.0.

    • atari030
    • 4 years ago

    I expected more. Time will tell if things improve with driver optimization and DX12. The other Fiji variants will also have their own tales to tell, and we’ll see what value propositions those present. In the end, the market will hash things out, as it does with all things.

    Who is going to buy?

    I’m not in the market for cards of this ilk, so it matters not to me.

    • Srsly_Bro
    • 4 years ago

    The exercise gym at AMD is named Futility and all employees are required to use it regularly.

    • Wildchild
    • 4 years ago

    Reminds me of when the Radeon R600 series was introduced.

      • Airmantharp
      • 4 years ago

      Naw, this card’s competitive, and at least with the water-cooler as the stock cooling solution it runs cool and quiet without sucking excessive amounts of juice at full tilt.

      It’s actually one of AMD’s most well-rounded releases, if you can forgive the 4GB VRAM cap, drivers, and middling performance to price ratio :).

        • Wildchild
        • 4 years ago

        AMD deserves kudos for the reference cooler the Fury X comes with, but everything else was pretty underwhelming. As is always the case, I’m sure things will change with some driver maturity.

        A little off topic – but I think the reference design looks really slick as well.

      • Krogoth
      • 4 years ago

      The only similarity that Fury X shares with 2900XT is that both chips suffered from imbalanced architecture choices despite what they can do on paper.

      A more fair comparison would be that Fury X is AMD’s Fermi.

        • auxy
        • 4 years ago

        That’s sort of fair, although I think Hawaii deserves that dubious distinction more than Fiji. (*‘∀‘)

          • Krogoth
          • 4 years ago

          Hawaii was no Fermi.

          Hawaii was faster at launch than the competition. Nvidia didn’t manage to turn the tide until the 780 Ti launch (and even then only by a small margin).

          The 780 was trading blows with the regular 290, while the Titan was overpriced for gaming usage patterns despite being released six months earlier than the 290X/290.

    • AJSB
    • 4 years ago

    What I’d like to know is the performance playing…Chess.
    ….at 4K, of course!

    Please, no negative comments - you have no clue how demanding playing Chess with a 0:1 time control is! Every frame counts!

    • Jigar
    • 4 years ago

    If this were the Fury Nano and cost $450, it would have been Nvidia’s worst nightmare. The sad part is, it’s the Fury X…

    • Chrispy_
    • 4 years ago

    Perhaps the one takeaway from this review is that benchmarks provided by manufacturers themselves are complete junk.

    AMD [url=https://techreport.com/news/28501/here-a-first-look-at-the-radeon-r9-fury-x-performance]provided their own benchmarks[/url] just last week, showing the Fury X handily beating a 980 Ti at The Witcher 3 by a good 10% or more.

    In the real world, when Scott does TR’s benchmarking with detailed information about the resolution, graphics options, testing hardware and driver revisions, the Fury X is over 10% [i]slower[/i] than a 980 Ti using the somewhat useless FPS average that AMD’s own benchmark results used. As bad as that is, the 99th-percentile frame-consistency testing shows the vanilla 980 utterly annihilating the Fury X, and most likely the el-cheapo GTX 970 would too.

    Ignoring the performance disappointment of Fiji, the disparity between vendor-provided and independently-obtained test results is eye-wateringly huge.

      • jihadjoe
      • 4 years ago

      They probably configured the 980Ti with an AMD CPU! Those bastards!

    • Chrispy_
    • 4 years ago

    Perhaps the one takeaway from this review is that benchmarks provided by manufacturers themselves are complete junk.

    AMD [url=https://techreport.com/news/28501/here-a-first-look-at-the-radeon-r9-fury-x-performance]provided their own benchmarks[/url] just last week, showing the Fury X handily beating a 980 Ti at The Witcher 3 by a good 10% or more.

    In the real world, when Scott does TR’s benchmarking with detailed information about the resolution, graphics options, testing hardware and driver revisions, the Fury X is over 10% [i]slower[/i] than a 980 Ti using the somewhat useless FPS average that AMD’s own benchmark results used. As bad as that is, the 99th-percentile frame-consistency testing shows the vanilla 980 utterly annihilating the Fury X, and most likely the el-cheapo GTX 970 would too.

    Ignoring the performance disappointment of Fiji, the disparity between vendor-provided and independently-obtained test results is eye-wateringly huge.

      • maxxcool
      • 4 years ago

      Double post is double ?

        • Chrispy_
        • 4 years ago

        Yeah, happens to me on TR sometimes, no idea why. Any mod is more than welcome to delete the duplicate.

        I liked the last time when one got upvoted 50 times and the other identical post got downvoted 50 times. IIRC there was a whole theme running through the mirrored replies too until someone spoiled the fun 🙁

          • maxxcool
          • 4 years ago

          Hmm, weirdness.

      • Westbrook348
      • 4 years ago

      +100. Spot-on analysis, Chrispy. FPS are irrelevant. Props to Scott for proving what propaganda AMD’s in-house numbers are.

        • l33t-g4m3r
        • 4 years ago

        FPS are relevant in terms of adaptive sync, which is slowly becoming a standard on new monitors. Frame times mostly matter when you’re using a set refresh rate.

          • Westbrook348
          • 4 years ago

          This is totally 100% wrong. Adaptive sync makes both FPS and frame times less relevant, especially at playable numbers. But frame times ALWAYS matter more than FPS. That is by definition. FPS averages out frame-time data over one second, but it can’t tell you anything about performance over smaller time frames. And for gaming, when you’re talking about milliseconds, FPS can be far too misleading. Variable refresh can make lower FPS bearable, but still only if the frame times are also bearable.

            • l33t-g4m3r
            • 4 years ago

            It’s not wrong. The conditions you’re talking about would require sub 30ish fps, so it wouldn’t be an issue aside from severe cases. Frame times aren’t unrelated to frame rates. If one is bad, so is the other. You can’t have bad frame times at high frame rates, and adaptive sync eliminates the frame drops associated with vsync. Problem solved, aside from severe frame times that would drop the refresh rate below adaptive sync’s threshold, which would be the sole case that matters. Otherwise, it shouldn’t be a problem, as this is what adaptive sync was designed to fix.

            • Westbrook348
            • 4 years ago

            Again, you miss the point of Scott’s work over the last four years. It doesn’t matter what conditions we’re talking about: frame times and frame rates are interconnected, by definition. Frame-time data will always be enough to calculate FPS, but just knowing the average FPS tells you NOTHING about how a game is actually performing.

            “You can’t have bad frame times at high frame rates” is wrong. You certainly CAN have bad frame times at high frame rates; that’s the whole reason for going “Inside the Second.” The “sole case that matters” is potentially every case, so the problem is not solved. Variable refresh is great technology that makes variations in frame times less noticeable, and it also enables better performance with cheaper GPUs without having to look at screen tearing. But adaptive sync cannot change the fact that FPS is inherently a flawed measurement of game performance.

            IF you stipulate that the last 1% of frame times are reasonable, then sure, you can use FPS to ballpark performance; we have many people still stuck in an FPS mindset. But all that does is tell you the 50th-percentile frame time. You NEED to know more about the frame-time percentile plot.

            • Den
            • 4 years ago

            “But all that does is tell you the 50th percentile frame time”

            Average frame rate typically is the mean, not the median. So they are different. Knowing how different they are can give you some info about the skew, so comparing the median to the mean actually could be useful. If the median is lower, then you’re skewed right which would be a good thing. If the median is a lot higher, then you are skewed left which would mean lots of low frame times. Although 99th percentile, time below X ms, or just looking at the frame time graph sorted by percentile all are more useful.

            • Waco
            • 4 years ago

            This. You can have a 300 FPS average and still be a total stuttery mess.

            299 frames in the first half of the second, and one in the second half. The FPS counter says “300” while the apparent FPS is more like 2.
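Waco’s scenario is easy to put into numbers. Below is a minimal sketch with a hypothetical frame-time trace (not TR’s actual tooling) showing how the FPS average hides the hitch while frame-time-oriented metrics expose it.

[code]
# Hypothetical trace matching the example above: 299 quick frames in the first
# half of a second, then a single 500 ms frame.
frame_times_ms = [500.0 / 299] * 299 + [500.0]

total_ms = sum(frame_times_ms)
avg_fps = 1000.0 * len(frame_times_ms) / total_ms        # 300 frames in 1 s = 300 FPS

worst_ms = max(frame_times_ms)                           # the 500 ms hitch
# "Time spent beyond 16.7 ms" -- the style of metric used in the review.
beyond_16_7_ms = sum(t - 16.7 for t in frame_times_ms if t > 16.7)

print(f"average FPS:         {avg_fps:.0f}")             # 300
print(f"worst frame time:    {worst_ms:.0f} ms")         # 500
print(f"time beyond 16.7 ms: {beyond_16_7_ms:.0f} ms")   # ~483
# The average says "smooth"; the frame-time metrics say half the second was one
# frozen frame. (With only 1 bad frame in 300, even a 99th-percentile cutoff can
# miss it, which is why time-beyond-threshold is reported alongside it.)
[/code]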

            • JumpingJack
            • 4 years ago

            You are wrong, massively wrong. But I have the sneaking suspicion that any argument presented to you would simply be brushed aside as not in line with your agenda. If it takes 0.5 seconds to render a frame, adaptive sync or no, it will register as a stutter. There is a frame-time threshold below which a stutter is not perceivable, and it becomes increasingly apparent the further above it you go…. adaptive refresh will not solve that.

      • Lans
      • 4 years ago

      I would like to think it means there is some magical setting at which a Fury X is 10+% faster than a 980 Ti. However, the big spoonful of salt is how relevant that setting is…

      I totally agree that TR’s (independent) benchmarking should carry more weight for consumers than the manufacturers’. I would just like to think manufacturers’ benchmarks are still reproducible, even if they are cherry-picked to show their product in the best light possible without regard to user experience or anything else.

      • Sabresiberian
      • 4 years ago

      Are you actually suggesting AMD is doing something shady? Don’t you know they are the most honest tech company ever, a poor underdog, always trying to help the people out with their charity projects, trying to make it in a heartless world that always treats them unfairly? How dare you!

        • K-L-Waster
        • 4 years ago

        Plus those other guys kick puppies on the weekend! I seen’em!

    • joselillo_25
    • 4 years ago

    It would be nice to test this new HBM tech in other programs like Photoshop, video-editing software, or 3D software to check the importance of this new memory in those environments.

    • Unknown-Error
    • 4 years ago

    Told ya, AMD fanbois. Last time I got almost 50 thumbs-down for stating HBM won’t do squat. Your egos must be on life support now 😉 And I’ll say it again:

    Fancy HBM cannot help AMD. Their GPU architecture is like their CPU architecture, i.e. $h!t! The results of a dwindling R&D budget are quite apparent. The rumors about AMD spinning off/dumping the CPU & GPU business actually make sense now.

      • chuckula
      • 4 years ago

      HBM is good and makes more powerful GPUs in the future a possibility.

      However, there’s an important phrase in engineering that fanboys who aren’t engineers cannot understand: “necessary but not sufficient.”

      HBM is [i]necessary[/i] to prevent a sufficiently powerful GPU from choking due to lack of bandwidth. However, the mere presence of HBM is not [i]sufficient[/i] to guarantee that your GPU will clobber the competition, especially if bandwidth is not crippling the competition’s GPUs.

      It’s very clear that while there are definitely some situations where the HBM bandwidth gives AMD a leg up, the vast majority of games are not GPU memory-bandwidth limited given the current level of GPU resources that are available.

      In other words, I can put 300 MPH racing tires on my Hyundai, but it ain’t going to make it faster than a Kia with the same-sized engine, since neither one of us is going to get anywhere near 300 MPH.

        • Milo Burke
        • 4 years ago

        Which Hyundai do you have? Do you like it?

          • Chrispy_
          • 4 years ago

          He has the Hyundai with the 300MPH racing tyres.
          He likes it about as much as a similarly-sized Kia.

      • Krogoth
      • 4 years ago

      Learn GPU architectures 101 then come back.

    • f0d
    • 4 years ago

    I was expecting better after seeing AMD’s own benchmarks.

    Even if they fudged the results a little in their own benchmarks, they are nowhere near the actual reviewed benchmark results.

      • gamerk2
      • 4 years ago

      AMD’s benchmarks were run at 4K with very high AA levels (8x MSAA), which heavily favored their HBM setup. They basically created a test case where their card looks better.

        • Farting Bob
        • 4 years ago

        Which is what all hardware makers have done for decades. Never pay attention to any manufacturers pre-release benchmarks.

    • HisDivineOrder
    • 4 years ago

    I can’t help feeling like AMD innovated on the wrong parts of this card. It isn’t particularly future-proof with “only” 4GB of VRAM. It isn’t particularly faster than the cards we have today, which is what one would want to see from a card with the promise of a short lifespan.

    And its drivers aren’t particularly mature for something based on a mostly ancient architecture at this point. Ancient meaning six months or more (i.e., Tonga).

    Moreover, AMD’s lack of focus on improving their DirectX 11 drivers continues to bite them squarely in the butt. Perhaps if they’d bothered to multithread their DX11 drivers like nVidia did, or add something akin to ShaderCache, or make any sort of effort to bring their DX11 drivers up closer to their aspirations for Mantle (like nVidia did), they might not suffer so much today, when their card should be kicking ass and chewing bubblegum and being all out of the bubbles and the gums.

    Instead, today, when they should have the advantage, they don’t. Tomorrow, when they’ll have DX12 or Vulkan, they’ll have too little VRAM to matter.

    As is common for AMD, they made compromises in all the wrong places and poor choices are going to turn them into the value alternative (with a user experience price to pay to use one) once again.

    I doubt they’ll get a price drop given it came out at a certain price in the face of nVidia pricing being the way it is, but unfortunately AMD needs a price drop, pronto. How much cheaper can a card with a water cooler built into the cost get, though?

      • Milo Burke
      • 4 years ago

      It looks like AMD was gambling that games could utilize more filtering at the expense of raw pixel-pushing. And so far, it looks like it hasn’t paid off as they hoped.

      I up-voted with hesitancy: I’m sure AMD is and has been working on DX drivers. But perhaps not long enough or with enough people or with talented enough people to get to the point Nvidia has.

      • Freon
      • 4 years ago

      They took a gamble and it paid back at just shy of evens. It could’ve gone much worse.

    • Milo Burke
    • 4 years ago

    Contrary to a lot of early leaks, TechReport showed that clumsy drivers keep the Fury X from outperforming the 980 Ti.

    And contrary to a number of other reviews, TechReport showed that the Fury X vs 980 Ti matchup is a perfect example of how frame-time variance is highly important and can even determine the victor.

    If you value articles like this and haven’t subscribed yet, now’s a good time to show your appreciation. Donate what feels right to you. But anybody can do at least $5/year. That works out to $0.014 per day!

    • ronch
    • 4 years ago

    How typical of AMD.

    Comes out later, and even so it’s not as good. Seen it with Sandy Bridge vs. Bulldozer, Core 2 vs. Barcelona, Fury X vs. 980Ti, etc. I really hope it’s not gonna be the same story with Skylake vs. Zen.

      • Kretschmer
      • 4 years ago

      Zen has zero percent chance of being competitive with Skylake.

      Hopefully AMD will either cancel it or spin off their CPU division before it consumes more resources.

        • vmavra
        • 4 years ago

        Even worse, Zen will most likely go up against Kaby Lake by the time it’s released. Unfortunately, I estimate Zen will get clobbered by it.

      • Mat3
      • 4 years ago

      2900XT vs. 8800 GTX is the best example. 2900 was late and couldn’t beat Nvidia’s best which was already out. But they bounced back from that and they’ll bounce back from this.

      More similarities: ATI made a lot of noise about the 2900’s bandwidth: the first card with a 512-bit memory bus. Just like the focus on bandwidth with Fury and HBM. Also, one of the possible problems with the 2900 was a flaw with the ROPs keeping it from performing better (performance looked much better against competition when AA was not applied). In the case of Fury, a lack of ROPs may also be holding it back.

      I bet their follow-up card will be better balanced and fix the bottlenecks of Fury.

    • tipoo
    • 4 years ago

    I find it of mild interest that it has 1 bit of memory bus width per shader, and about 1MB of video memory per shader. Unusual symmetry in some of its specs.
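The symmetry is easy to verify against Fiji’s published figures (a trivial sketch using the 4096 shader processors, 4096-bit HBM interface, and 4GB of VRAM cited in the review):

[code]
# Fiji's headline figures.
shaders = 4096
bus_width_bits = 4096     # HBM interface width
vram_mib = 4 * 1024       # 4 GB expressed in MiB

print(bus_width_bits / shaders, "bit of memory bus per shader")  # 1.0
print(vram_mib / shaders, "MiB of VRAM per shader")              # 1.0
[/code]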

    • Pez
    • 4 years ago

    I still think I’ll buy one. They’ll improve more as drivers mature, and for the sake of 10% performance I’d rather have the Fury @ £509 in the UK than the 980 Ti @ £640 (comparing water-cooled cards).

    It will be interesting to see what overclocking brings to the table when AMD allows voltage unlocks.

    • Bensam123
    • 4 years ago

    “The question now is whether AMD has done enough to win back the sort of customers who are willing to pony up $650 for a graphics card.”

    I don’t think people who were buying AMD in the first place wanted a super-quiet, efficient card. They’ve always been the bargain hunters, and AMD has definitely lost that point with this card. Maybe the Fury will be better, or the 390/390X once we see numbers for those.

    Right now, there is relatively little reason to recommend this card over a 980 Ti. It requires the extra hassle of a liquid cooler (especially if you don’t have room), uses more energy, performs worse, AND it’s just as expensive.

    I’ll still be recommending 290Xs (especially when they’re on sale) until things get sorted out. This could be prototype woes of a first-gen HBM product (more time spent messing with the memory than with the GPU), but I guess this is what happens when AMD gives up on what they’re good at and chases someone else’s lunch.

      • ImSpartacus
      • 4 years ago

      Yeah, I picked up a 290 after the mining discounts. It’s even better now and the 290x is a steal for less than $300.

      But I imagine AMD wants to be more than just the value king. Fury X is a start, but they have more to do.

        • Bensam123
        • 4 years ago

        Yeah, they should’ve priced this appropriately regardless. They don’t have the other bits to command the same price as their competition who have more market share and marketing in general.

      • ronch
      • 4 years ago

      Expect prices for these cards to fall soon. Just like the FX-9590.

      They should’ve done what they did with the FX-8350 and priced accordingly relative to the competition right from the start, but you can’t blame AMD for trying to squeeze a few hundred bucks from die-hard fans. Poor fellas.

        • HisDivineOrder
        • 4 years ago

        AMD does that when they’re caught out of position on pricing after nVidia releases a new product. I’m thinking of the 7970 and 7950 pricing up till the day the Geforce 680 and 670 arrived. Suddenly, AMD found it in their hearts to give us a price drop.

        I don’t think a price drop is imminent because they already accounted for nVidia pricing beforehand.

        This IS their value pricing. 😉

      • Freon
      • 4 years ago

      I think fanboys of either side are fanboys of whatever their pride and joy seems to offer. Now it’s going to be all about ultra-low noise instead of everything else since that’s where AMD can claim a solid win.

      I agree, 290 or 290X are still decent cards for their pricing due to the crazy price drops. ~$240/$270 respectively make them good bargains.

      It will be interesting to see if 390/390X prices come back down to reality. I’m unsure anyone will believe the 8GB talking point when AMD’s own flagship is only 4GB. $329/$429 (ish?) are kinda absurd. $279/$329 are about as far as I think they can reasonably push those in a world where the 970 exists.

      • Bensam123
      • 4 years ago

      In hindsight I guess this is AMD’s ‘top tier’ product and prices will probably be readjusted after launch.

    • USAFTW
    • 4 years ago

    Underwhelming. Looks like AMD bet everything on HBM and it didn’t pay off. Memory bandwidth and ALU throughput aren’t enough to dethrone GM200, with its more balanced approach to everything and its low-overhead driver.
    I’m starting to miss the AMD GPU division that made the 4870 and 5870. They’ve been falling behind since then.
    My logic may overcome my tendency to support the underdog. I just upgraded from a Phenom II X4 to a 4690K. Looks like there’s not going to be another AMD card in my rig anytime soon.

    • AnotherReader
    • 4 years ago

    An excellent and timely review!

    Maybe, when time allows, you could retest all of the contenders at 2560×1440; it would be really appreciated. It is also interesting that the 290X seems to have aged better than the 780 Ti, which was superior at launch.

    Despite HBM and voltage-adaptive clocking, Fiji, at best, matches the 980 Ti. This doesn’t look good for next year, when the foundry FinFET processes and HBM v2 become available. Of course, it could just be a resource-balance issue, and spending the extra transistors to get, say, 128 ROPs and 8 shader engines of 12 compute units each may be enough to draw level with the green team. Resource balance may be an issue in the Civilization test, where the lead over the 290X is less than the usual 30 to 35%.

    If performance doesn’t improve with better drivers, this needs to be priced at $600 rather than $650. The one positive is that, after over 9 months of being barely visible in the rear-view mirror, AMD has caught up to Nvidia in performance.

    • maxxcool
    • 4 years ago

    OK, flame away, but here is my take: it’s HBM1, and it’s beta-ish. Retest in 3-4 months with more refined drivers.

    Let’s see how it does when HBM2-enabled NV card details and Q&As start to tickle the news journos.

      • Krogoth
      • 4 years ago

      It has nothing to do with HBM memory.

      The Fury X’s lackluster performance is more to do with how resources on the silicon were allocated.

      • Pancake
      • 4 years ago

      In 3-4 months it’ll still have 4GB of RAM. I’m memory-starved on my GTX 970 as it is, playing GTA V at 2560×1600. It’s only going to get worse with newer games.

      4GB is an epic fail. You blew it, AMD.

        • Meadows
        • 4 years ago

        You’re using the wrong settings then, son.

    • Milo Burke
    • 4 years ago

    Scott, what a fantastic review. I think you really did it justice, particularly with the time constraints you had.

    Quick question:
    I figure you test at 4K because that’s the baseline for a tough job for a high-end card. But when gaming for fun with variable refresh, assuming the same card at each resolution, do you prefer a 2160p60 or a 1440p144 monitor?

      • Damage
      • 4 years ago

      I prefer 144Hz at 2560×1440 with variable refresh, generally. Does depend on the type of game, though.

        • Milo Burke
        • 4 years ago

        Thanks!

        I hope to see you at the barbecue!

        • _Ian_
        • 4 years ago

        I know you must have been under huge time pressure to get the review out so quickly (thank you!), but is there any chance of adding some more tests at 2560×1440 / 1920×1080 at a later date?

        Is the (comparatively) low ROP count holding the Fury back at 4K? Would the increased ALUs and memory bandwidth shine at lower resolutions, or would the lack of geometry performance still put the 980 ahead in most games?

        It’d be really useful to know for those of us who, like yourself, prefer lower resolution & higher refresh rate gaming.

          • travbrad
          • 4 years ago

          A couple other sites did some 1440p and 1080p tests and the gap in FPS actually got wider (in favor of the 980ti) at lower resolutions. They only had FPS numbers though, not frametimes.

    • ultima_trev
    • 4 years ago

    I pray. PRAY to Odin, Jupiter or Satan that this under-performing product is only due to the newness of GCN 1.3 and HBM. I pray that new drivers bring this product up to speed.

    Otherwise they should have just priced it at $500 and called it a day. At this point, it competes with GM204, not GM200.

    • Dysthymia
    • 4 years ago

    Disappointing. I’m sure we’ll see increased performance with later driver releases, but that’s not what you should be hearing the day performance numbers come out.

    What was the memory usage during testing? Not that I think it capped out at all, I’m just curious.

    I’m also very curious to see what happens when you overclock this card (especially the HBM).

    How many points per day does it get Folding?

    • gecko575
    • 4 years ago

    [quote]The cooler in the Fury X is tuned to keep the CPU at a frosty 52°C[/quote]

    s/CPU/GPU

    • bfar
    • 4 years ago

    Thanks for the great review; as always, it’s one of the leading reviews out there. There are a couple of big elephants in the room that I’d love to see covered in much more detail. This review touches on the first in the conclusion…

    “but right now, this product probably needs some TLC in the driver department before it becomes truly compelling”

    Man, hasn’t PC gaming become way too dependent on vendor driver optimization? Alongside the Arkham Knight fiasco on the front pages this morning, the conclusion of this review is very disappointing. So not only are new games not getting optimized drivers on time, but the same drivers are not optimized on time for new hardware releases. For the sake of balance, I note that Nvidia couldn’t optimize drivers for their Kepler-based hardware in time for The Witcher 3, and those are two-year-old cards! Whether intentional or by accident, a very toxic relationship seems to have evolved between developers and GPU vendors vis-a-vis drivers, development and game performance, and the end users are the absolute losers. Year after year things haven’t improved at all, or have even gotten worse. An investigation into this long-standing situation would make a very interesting article all by itself, and I think for hardware reviews, it serves consumers well when vendors are held up to very deep scrutiny in this area.

    The second elephant is pricing. There are some legitimate reasons why prices have increased over the years, perhaps due to manufacturing costs, but there is also some very clear price gouging occurring, particularly in the early-adopter space. We see obscene launch prices, along with the usual tricks such as free game vouchers, aftermarket coolers, factory overclocks, and withholding launch dates to clear stock; all mostly clever marketing ploys designed to keep prices artificially high. This gen is starting at a fairly epic $650. We even know that Nvidia was planning to release the 980 Ti at €750, but seems to have magically found $100 in cost savings at the last minute. Of course, we all know we’ll be paying around €100 less than that in three months’ time, and then in less than a year these cards will be worth less than two thirds of where they began. In two years they’ll be forgotten about. GPUs depreciate very fast. In such a scenario, should we not ask how €650 can be justified as an asking price for a product where the drivers don’t work properly and new games can’t be played at launch because the vendors can’t optimize on time? I would love to read journalism that seriously challenges the virtual duopoly operating in this market, in terms of value and pricing.

      • Milo Burke
      • 4 years ago

      [i]What newspaper is this, and how can I find out if they deliver in my area?![/i]

      [quote]Alongside the Arkham Knight fiasco on the front pages this morning...[/quote]

      • liquid_mage
      • 4 years ago

      You bring up an excellent point regarding the starting price point for high-end cards. I know the 9700 Pro I purchased very near its launch was approximately $350 US. These new cards launching at $650 US are definitely pushing the envelope for all but the most extreme hardware or gaming enthusiasts.

        • bfar
        • 4 years ago

        Today’s top dog, the Titan X, has an RRP of $999.

        This is what Scott said of the 7800 GTX back in 2005…

        “There is, of course, the small matter of the $649 suggested retail price. I don’t recommend that any sane, value-oriented individual of average means fork over this amount of money for a graphics card.”

        As rational today as it was back then.

        • liquid_mage
        • 4 years ago

        Well, in fairness, the 9700 Pro was launched in 2002; man, I’m getting old. So it looks like Nvidia really started pushing the $600 MSRP for high-end cards in 2005.

          • Ushio01
          • 4 years ago

          I only went back to 2005 as that proved my point to the OP; I don’t know if older cards from both sides were also launched at that price.

            • bfar
            • 4 years ago

            And it was a point well made in fairness.

            But if we dig deeper, there’s more to it than mere numbers. We remember that most of those products came alongside a value proposition, in the form of a slightly cut-down version of the latest tech at a much cheaper price, for example the widely popular 8800GT. You didn’t need to buy the flagship to get around 80-85% of the performance. Indeed, the RRP was occasionally slashed dramatically shortly after launch, as was the case with the GTX280, which I remember picking up for a song.

            Fast forward to today, and the value proposition for the enthusiast is presented in the form of the GTX980/970, and the re-branded Hawaii cards. Basically last year’s tech. In short, if you want the latest graphics card tech right now, you’re looking at $650 just for starters. So to take your point into consideration, it’s maybe not so much an increase in price, but a decrease in value.

            Just for comparison, it’s worth noting that the US dollar is very strong at the moment, which complicates things for those of us outside the States. Back in 2006, the 8800GTX (which was a truly awesome upgrade) cost me €620. Today, a GTX980Ti would set me back €750. Far too much for a less compelling product.

    • fuicharles
    • 4 years ago

    No doubt, the 980 Ti is slightly faster than the Fury X. But I see that the Fury X still offers good value given that it uses a liquid cooler, pushing all the hot air out of the PC. Given some driver updates or DirectX 12, the Fury X may eventually match the 980 Ti.

    The 980 Ti does have another clear advantage: more memory. So it is more future-proof. But if you can spend 650 bucks on a graphics card, you are likely replacing your graphics card every year, so “future-proof” may not be that relevant.

    Anyway, I see the real gem being the Fury (vanilla) or the Nano, which should give the best performance/price ratio. It will be the GTX 970 of AMD (but without the 3.5GB memory handicap, of course).

    Until Nvidia comes up with a 970 Ti, I can see the Fury (vanilla) or Nano selling hot.

    • ronch
    • 4 years ago

    Due to the exotic cooling, HBM, and the shorter form factor, I do find this card more interesting than the 980Ti. If I had $650 to spend on a video card though, I honestly would go for the 980Ti. For the same cash, you get more consistent and higher performance, plus better idle and load efficiency. Never mind not having driver inefficiencies that always seem to plague AMD cards at launch. To be fair, Nvidia has them too, otherwise they wouldn’t be boasting of performance improvements in later driver versions (which means there WAS room for improvement), but with AMD things seem a bit worse. Is it just a matter of perception then? Either way, it tarnishes AMD’s reputation for driver quality.

    As I’ve said in a previous TR article about the Fury X, if you need water cooling, a new and much faster type of memory (which AMD says they spent five years developing!), and more power to get to where your competition got (and barely, I might add) with just air, good ol’ GDDR5, and less power, what does that mean?

    • Chrispy_
    • 4 years ago

    Ouch, we have a dud.

    AMD have shoved 45% more stream processors into a product that has no more ROPs and no more triangle throughput than the 290X. Don’t get me wrong, there are some significant memory and shader improvements, but it hasn’t worked.

    In ROP- and raster-based operation, the crazy-expensive Fiji chips will still be competing with Nvidia’s mid-range silicon, and not even the 980 variant, but the die-harvested 970s in many cases.

    Progress in the power/cooling departments seems to be mostly down to non-AMD factors like the “aftermarket” cooling, and improvements in TSMC’s 28nm process. So, HBM aside, what exactly have AMD been researching in the 3.5 years since Tahiti hit the market?

      • Terra_Nocuus
      • 4 years ago

      One of the rumors flying around had the Fiji cards sporting 128 ROPs. I’m surprised that they don’t, honestly. GM200 has 96? Gotta step up, AMD.

      • Airmantharp
      • 4 years ago

      Having recently picked up a 970 FTW (very close to base 980 performance), I was really surprised at how well the 980 compared to this monstrosity, essentially trading blows.

      The 970 isn’t on the value chart on the conclusion page, but if it were, it’d be in the best spot: above the 290X, and almost level with but to the left of the Fury X.

        • Kretschmer
        • 4 years ago

        The 290X is still a bit cheaper than the 970 (especially with G-Sync tax), but the 970 is a phenomenal card.

          • Airmantharp
          • 4 years ago

          Yeap; though I’m still in the ‘G-Sync implementations are better than FreeSync *so far*’ camp, and thus worth that premium, while the 8GB 290X cards floating for a similar price were certainly tempting.

      • w76
      • 4 years ago

      [quote<]So, HBM aside, what exactly have AMD been researching in the 3.5 years since Tahiti hit the market?[/quote<] This whole article summed up: This is what happens when you strip your GPU R&D budget to the bones to support a failed CPU business.

      • Action.de.Parsnip
      • 4 years ago

      Where do you start with this….. Have you even been reading the data? Look at the inconsistent performance, the stutters, look at the frame variance. The problem is software, not hardware. The hardware is extremely good, laudable even.

        • Chrispy_
        • 4 years ago

        I don’t even know where you’re coming from with this. Fiji is between 0% and 45% more powerful than Hawaii, depending on which specific feature you look at.

        It performs, overall, about 25% faster than Hawaii, which is exactly what you’d expect given the specs.
        For AMD to expect a 45% improvement (which is what it needs over a 290X to match a 980Ti) they would obviously need 45% more everything. It’s not rocket science, it’s bleeding obvious. In ROP-limited or triangle-limited scenarios, performance will be identical to a 290X, which is a 0% improvement.
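
        To put rough numbers on that intuition, here’s a toy back-of-the-envelope model; the spec ratios are approximate and the workload splits are made-up assumptions, not measured data, so treat it as an illustration of why a 45% shader bump doesn’t buy a 45% frame rate bump:

```python
# Toy bottleneck model: a frame only speeds up in proportion to the
# resources it actually leans on. All numbers are illustrative assumptions.
FIJI_VS_HAWAII = {
    "shaders":   1.45,   # 4096 vs 2816 ALUs
    "bandwidth": 1.60,   # HBM vs GDDR5, roughly 512 vs 320 GB/s
    "rops":      1.00,   # 64 vs 64
    "geometry":  1.00,   # 4 vs 4 triangles/clock
}

def frame_speedup(workload_share):
    """Speedup for a frame whose Hawaii frame time splits across resources.

    Each slice of the frame shrinks by the ratio for that resource
    (a serial, deliberately simplistic view of a GPU pipeline).
    """
    new_time = sum(share / FIJI_VS_HAWAII[res]
                   for res, share in workload_share.items())
    return 1.0 / new_time

# A shader-heavy frame gets close to the full uplift (~1.41x here)...
print(frame_speedup({"shaders": 0.7, "bandwidth": 0.2, "rops": 0.1}))
# ...while a ROP/geometry-bound frame barely moves (~1.07x here).
print(frame_speedup({"rops": 0.5, "geometry": 0.3, "shaders": 0.2}))
```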

    • anshu87
    • 4 years ago

    Sorry if I missed it, but where are the overclocking results? Any reason for not including them?

      • Damage
      • 4 years ago

      Just no time. Will try to OC and report on it later.

    • mad_one
    • 4 years ago

    While I’m not impressed by the product (likely going for a 970 and sticking to 1920×1200 for a while; in fact I’d have liked to stay on the 670 for another year, but Witcher says “No”), I am impressed by how memory bandwidth is keeping up.

    Since the GeForce 256, the tech press has predicted that memory bandwidth would ultimately limit video card performance, yet AFAIK the GeForce 2 was the only product truly crippled by it. More efficient memory controllers, Z compression, wider busses, and GDDR3 and 5 have kept the balance, and just as GDDR5 runs out of steam a new technology matures. HBM 2 should ensure that the 16/14nm cards will still be limited by chip size and power constraints.

    Note to Damage:

    “Notably, Fiji inherits a delta-based color compression facility from last year’s Tonga chip. This feature should allow the GPU to use its memory bandwidth and capacity more efficiently than older GPUs like Hawaii. ”

    While I don’t know the details, it’s very unlikely that compression will help with capacity. Since the card and driver can not guarantee that compression will happen, it still has to reserve the full amount of memory. Z data and MSAA colour are far more likely to compress well, yet AFAIK the full amount of memory is still reserved.

    Storing the compressed data contiguously would also be quite difficult to do in HW. The way it most likely works is that a region of pixels (say 4x8) is read/stored/compressed as a block. For 16 bit/channel * 4 channels this would be a 4x8x8 = 256 byte block. Memory is read as chunks of 4-8 transfers on a 64 bit bus. Assuming 4 transfers are possible and efficient with GDDR5, this gives us a 32 byte granularity.

    The video card will always write the colour data to the same spot, but if the compression works, it will neither write nor read the last 32 byte blocks. Compression also needs to remove at least 32 bytes or it won’t have any effect. Storing the compressed data as a contiguous block would require memory allocation and pointer tables, which I doubt is feasible, so the data are just stored in place, with some memory locations being skipped, thus saving bandwidth.
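
    To make the bandwidth-versus-capacity distinction concrete, here’s a quick sketch using the figures above (4x8 tiles, 16 bit/channel, 32-byte granularity); these are my assumed numbers following that description, not anything AMD has confirmed:

```python
# Bytes moved for one color tile under the scheme sketched above.
# All sizes are assumptions for illustration, not confirmed hardware specs.
TILE_PIXELS = 4 * 8                  # pixels per compression tile
BYTES_PER_PIXEL = 4 * 2              # 4 channels * 16 bits each
TILE_BYTES = TILE_PIXELS * BYTES_PER_PIXEL   # 256 bytes uncompressed
ACCESS_GRANULARITY = 32              # assumed minimum burst size in bytes

def bursts_transferred(compressed_bytes):
    """Memory bursts actually moved for one tile (ceiling division)."""
    return -(-compressed_bytes // ACCESS_GRANULARITY)

print(bursts_transferred(TILE_BYTES))   # uncompressed: 256 / 32 = 8 bursts
print(bursts_transferred(150))          # compressed to 150 bytes: 5 bursts
# Bandwidth drops from 8 bursts to 5, but the full 256 bytes stay
# reserved in place, so capacity doesn't improve.
```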

      • Damage
      • 4 years ago

      Yet AMD employees have asserted in my presence that compression could help with the 4GB capacity limit. I’m also skeptical for the reasons you cite, but we don’t know what the drivers do under the covers. ’tis an intriguing question.

        • steenss
        • 4 years ago

        It doesn’t appear from initial tests that AMD delta compression is that effective.

          • Cannonaire
          • 4 years ago

          Fiji’s delta compression doesn’t seem to be as effective as Maxwell’s in the initial tests, but compared to Hawaii it is an obvious and tangible improvement.

          What concerns me is just how far below its theoretical maximum bandwidth the Fury X lands. Neither team seems to have an efficiency advantage over the other in the random texture test outside of the Fury X results, which are distinctly low by comparison. For reference, the Maxwell cards all exceed their theoretical maximums in the black texture results.

        • mad_one
        • 4 years ago

        Interesting. Thanks for the update.

        I looked around and GDDR5 uses 8 transfers per command, so 64-byte blocks on a 64-bit bus. HBM actually improves on that, with 128-bit busses but only 2 transfers per address.
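
        Taking those figures at face value (they’re the numbers quoted above, not vendor-confirmed specs), the minimum access granularity works out like this:

```python
# Minimum access granularity = bus width in bytes * burst length.
def access_granularity_bytes(bus_width_bits, burst_length):
    return (bus_width_bits // 8) * burst_length

print(access_granularity_bytes(64, 8))    # GDDR5 as described above: 64 B
print(access_granularity_bytes(128, 2))   # HBM as described above:   32 B
```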

        • sweatshopking
        • 4 years ago

        WHEN DID YOU PAY FOR GOLD?!

        • mad_one
        • 4 years ago

        The driver can’t do much here, unless the hardware manages to store the compressed data contiguously (or at least in very large blocks). Utilizing 64 byte blocks of free memory in the driver is not going to work.

        Would be an interesting question if you get hold of an engineer.

        • homerdog
        • 4 years ago

        AMD has apparently worked on some things that will help their cards use memory more efficiently, but delta color compression isn’t one of those things. It only saves bandwidth for the obvious reasons mad_one gave. If anyone tells you differently then they are mistaken.

        • Klimax
        • 4 years ago

        Texture compression like S3TC (aka DXTn) is still in use, so the only way I can see them applying that is to constantly decompress them.

        • Westbrook348
        • 4 years ago

        AMD also asserted that Fury X beat the 980 Ti in 100% of games tested. Time to stop trusting them..

        • HisDivineOrder
        • 4 years ago

        Don’t you begin to think that maybe AMD employees are just saying whatever they can get away with to try and sell something that just doesn’t sound quite right?

          • K-L-Waster
          • 4 years ago

          I’m not going to fault AMD’s staff for painting their products in the best light they can — that’s their job.

          It’s Scott’s job, otoh, to impartially verify what the actual performance is (which he does admirably, it must be said).

      • erwendigo
      • 4 years ago

      And you, in your righteousness, don’t say anything about the heavy use of 4K tests in this review, which is a clear factor that favors the AMD cards: this one and the Hawaii family.

      But because I can see that TechReport did previous reviews of other cards with these settings, which incidentally favor the Fury X, I won’t make an accusation of a biased review like you do with the selection of games. Still, I dislike the heavy use of 4K resolutions when it’s useless as a reference in 99% of cases.

      Wake up and grow up: the list of games isn’t oriented toward a single “color” of cards; you can see a mix of green and red ones. If you resent that the game market has titles that perform better on the green cards, and you can only breathe with games whose performance is biased toward the red cards, then you should buy a console and leave PC gaming, because you are “suffering” in this market.

        • mad_one
        • 4 years ago

        I think you replied to the wrong comment…

    • shank15217
    • 4 years ago

    AMD will make up for the 10% deficiency with drivers; this is a given. They have done it every other time they released a new card.

      • AnotherReader
      • 4 years ago

      Ordinarily, you would be right. However, this is a bigger and more power efficient Tonga. As such, the drivers should already be optimized. However, there may be some way of using the greater shader throughput to squeeze out more performance. We saw this in the transition from the 4890 to 5870 as reported by [url=https://techreport.com/review/17618/amd-radeon-hd-5870-graphics-processor/5<]TechReport[/url<]: "the RV770's interpolation hardware had become a performance-limiting step in some texture filtering tests, and using the SIMDs for interpolation should bypass that bottleneck. "

    • ish718
    • 4 years ago

    [quote<]The same innovative technology that gives the card such high memory bandwidth also imposes a 4GB memory limit. For the vast majority of gamers the 4G limit won’t be an issue, especially at resolutions below 4K, but crank up the details at 4K and some titles will consume that entire 4GB and then some, which will drag performance down. ~HotHardware[/quote<] [quote<]Also, although we have yet to perform tests intended to tease out any snags, we've seen no clear evidence that the Fury X's 4GB memory capacity creates problems in typical use. ~TechReport[/quote<] Huh?

      • derFunkenstein
      • 4 years ago

      It’s not that hard. Both of those statements can be simultaneously true. There may be edge cases, but TR’s testing didn’t find anything in their use. In fact, TR’s statements are more “complimentary” to AMD, yet there will be people here screaming about anti-AMD bias.

      • Damage
      • 4 years ago

      Still haven’t seen evidence of a problem. We’ll try to test soon, but until then, I’ll avoid asserting that memory capacity will be a real problem for Fury X. It’s one thing to see a software/OS estimate of memory use and another to see the GPU’s performance suffer when another’s does not.

        • ish718
        • 4 years ago

        I am pretty sure 4GB of vram will cause issues at 4k resolution in some games.

        GTA 5 uses significantly more than 2GB of vram at max settings at 1280×1024 resolution on my HD 7870…

    • anotherengineer
    • 4 years ago

    I wonder if that 52°C temp has anything to do with the HBM and interposer? We know GPUs can take 85°C, but can memory ICs take that kind of heat?

    Like the temps, and the noise though. I said I would never spend over $225 on a card, but I will have to wait for the nano reviews.

      • AnotherReader
      • 4 years ago

      We will get a definitive answer to your question when the regular Fury arrives. My speculation is that the hotspot would be the GPU and a properly designed heatsink should be able to minimize the impact of the GPU on the HBM. That would still leave the interposer as a way to transfer heat to the HBM and I can’t make even an informed guess about that.

    • DPete27
    • 4 years ago

    [quote<]this product probably needs some TLC in the driver department before it becomes truly compelling[/quote<] These benchmark tests are openly available to everyone, including AMD. A smart company would run these exact tests (although game selection is obviously a shot in the dark) to verify that its product is living up to its potential and keeping up with the competition on launch day. Most readers will see the launch article and form their product opinions on that. You can't expect to "polish" your drivers up 1+ months after launch and expect many people to notice/care.

    • DrDominodog51
    • 4 years ago

    It isn’t particularly surprising that the Fury X does poorly (in comparison) in Project Cars. Polygons are the Fury X’s weakness.

      • chuckula
      • 4 years ago

      [quote<]Polygons are the Fury X's weakness.[/quote<] Good thing that Fury is targeted and marketed toward the compute segment where polygons don't matter and that AMD didn't try to represent it as a gaming card!

    • Laykun
    • 4 years ago

    [quote<]but right now, this product probably needs some TLC in the driver department before it becomes truly compelling[/quote<] That's the problem I've been having with AMD since the HD 5000 series. Performance improvements and worry-free gaming are always another driver release away, on the horizon if you will. They should have named this the 'R9 Somewhat Irate X', as Fury doesn't quite seem to fit the card's performance profile.

    • HERETIC
    • 4 years ago

    We all know this is a stop-gap while we wait for 16/14nm, but it’s a really hard sell: for the same price one could get a couple of GTX 970s, or even a couple of R9 290s and have $150 left over to go toward the power bill. The price really needs to come down.

    Let’s hope board makers manage to keep power levels down on the non-X, as I’m sure they’ll want to overclock them. The Nano looks really promising.

    • Ph.D
    • 4 years ago

    Well this is disappointing. I expected a lot more.

    • StuG
    • 4 years ago

    Can you measure how thick the radiator and fan are? I think that would be important for people who might want to fit it as the exhaust fan between the back of the case and the CPU heatsink.

      • StuG
      • 4 years ago

      Guru3d said 6.5 cm thick, for those of you (like myself) who were wondering.

    • Ninjitsu
    • 4 years ago

    [quote<] The card's six-phase power can supply up to 400 amps, well above the 200-250 amps [/quote<] Surely you're missing a decimal point somewhere? 200A is an enormous amount of current! I doubt PSUs can supply that much over 12V... EDIT: Okay, that last sentence sounds dumb now. But my main source of confusion was the IC handling so much current, and I think I simultaneously assumed Scott was talking about the entire card taking up that much current, and both thoughts kind of mixed themselves. So, I UNDERSTAND THE CALCULATION. I wasn't quite sure how the chip would handle the current. Ignore the brainfart about the PSU. 😀

      • derFunkenstein
      • 4 years ago

      I’m not an EE, but aren’t we talking about much lower voltages by that point? I guess what I’m asking is at what point is that converted to the ~1.1v that a GPU typically uses? 400 amps at 1.1v leaves plenty of headroom for the 275W max imposed in the BIOS.

      OTOH, if it’s 12v and 40 amps, that works out equally well.

        • Ninjitsu
        • 4 years ago

        We are talking about much lower voltages – and that thought did cross my mind – it still is a gigantic amount of current, I wasn’t aware that chips use so much. Given the transistor size and density, size of the interconnects, etc, that stuff should melt under 200A+! Or at least, that’s what I think. Could be wrong.

          • derFunkenstein
          • 4 years ago

          Yeah, I haven’t learned a whole lot about how that works. Does seem like a lot. How do the phases work? Are they parallel or serial? If they’re working in parallel, that’d be 400/6, approx 67 amps each, which I guess is still a lot even at low voltages. Still, if it’s hard-locked at 275W that means it’s more like 46 amps.

          Maybe an EE can help and I can quit conjecturing. :p

          • chuckula
          • 4 years ago

          The PSU is *NOT* pumping 200 amps directly. Instead, it pumps a lower amperage (e.g. 25 amps at 12V for a 300-watt load) and then the voltage converters on the GPU card step down the high-voltage feed from the PSU to lower-voltage and higher-amperage power for the GPU itself. Remember that those 25A are divided over multiple 8-pin connectors and the PCIe bus too. Oh, and there are a bunch of those voltage converters, so no single one of them is pushing out 200 amps either.

          It’s similar (although implemented differently since this is DC-to-DC) to how step-down transformers work.

          It is still a crapton of current, but it can be managed.

          [Edit: OK, to whatever idiot decided to downthumb, care to explain it better than me or show me how I’m wrong? Or are the fanboys getting so strict that any discussion other than “AMD IS GREAT” is now going to be censored?]

            • derFunkenstein
            • 4 years ago

            that’s a better explanation of what I guessed would happen. I was a music major for crying out loud.

            • Ninjitsu
            • 4 years ago

            I am an EE, a fresh one though – and not very good at this as I can see. XD

            • Ninjitsu
            • 4 years ago

            Okay, that’s what I had expected too except…I still don’t see where the 200A measurement is coming from – is it CEO math?

            • chuckula
            • 4 years ago

            CEO math is always fuzzy, but as an EE you know the basic power equation: P = E*I, where E is the chip’s operating voltage (which can get a little fuzzy in complex chips with multiple power planes, but we’ll average it) and I is the current.

            At 200 amps, you get 300 watts of consumption at 1.5 volts (I’m plugging 300 watts in there, you can solve for any wattage level you like).

            That’s probably a good bit higher than the actual voltage level for the cards, I’d guesstimate more realistic operating voltages at 1.2 to 1.3 volts, which would give you a range of about 250 – 230A at 300 watts.
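
            Spelled out with the same assumed wattage, it’s just I = P / V on both sides of the VRMs:

```python
# I = P / V for the figures being tossed around in this thread.
# The 300 W load and the voltages are assumptions for illustration.
def amps(watts, volts):
    return watts / volts

for volts in (1.5, 1.3, 1.2, 1.1):
    print(f"{amps(300, volts):6.0f} A to deliver 300 W at {volts} V")   # chip side

print(f"{amps(300, 12.0):6.0f} A drawn from the PSU's 12 V rail")       # input side
```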

            • Ninjitsu
            • 4 years ago

            Yup, I get [i<]that[/i<] part. What you're saying is, ~25A is split among different planes, voltage is stepped down to get higher currents, but no current through any one circuit is 200A+, correct? (To be clear: I understood the calculations and stuff, and I had thought of that from the outset; I couldn't imagine putting 200A through an IC, hence the confusion.)

            • chuckula
            • 4 years ago

            [quote<] but no current through any one circuit is 200A+, correct?[/quote<] Bingo. There's a whole array of VRMs that each deliver a portion of the power. In a large chip (any large chip, that includes CPUs and GPUs) there are a crapton of power-input and ground pins that are each receiving part of the power for the chip, but never a huge amount through any single pin. The total amount of current is definitely huge, but there's never a single choke-point that's transmitting huge amounts of current or else things would melt quickly.

            • Ninjitsu
            • 4 years ago

            Ah okay it’s clear now! Thanks a lot! That’s exactly what was troubling me so much. I’d give you two more green thumbs if I could! 😀

            • Srsly_Bro
            • 4 years ago

            I brought a thumbs-up to the party for you.

            • chuckula
            • 4 years ago

            I just returned the favor.

          • willmore
          • 4 years ago

          Back in time quite a bit, the Alpha 21164 processor spiked up to 43A for the clock signal alone.

      • Chrispy_
      • 4 years ago

      It’s really easy. Today’s silicon uses 1V, give or take.

      DC Power = Volts x Amps.
      200 Watts = 1 Volt x 200 Amps

      All the numbers are correct and make perfect sense.

      • ronch
      • 4 years ago

      “Surely you’re missing a decimal point somewhere”

      Yeah, must’ve dropped it somewhere. They’re pretty hard to spot given their size. Keep your eyes peeled.

      • Freon
      • 4 years ago

      275W divided by around 1.1 volts equals… what number and what unit of measure? 😉

        • cobalt
        • 4 years ago

        [url<]http://www.wolframalpha.com/input/?i=275+W+%2F+1.1+V[/url<] Short answer: 25 emus of current. Or if you need a car analogy, it's 80% of the current of an average car battery. (Amperes are now boring if you have the internet.)

    • guardianl
    • 4 years ago

    Fury X is a little underwhelming. Frame time latency outliers are usually driver issues, so there’s hope for improvement. The worst thing AMD did in the last couple of years is give up on the monthly driver update cadence. I don’t care how much the people in Markham bitched, they need to go back to it.

    Beyond that, as completely impractical as it is we probably need sample sizes of 100+ games to really characterize performance of modern GPUs these days. Oh, and across several resolutions. Even then, there are bound to be tons of outlier cases. Just as the review is done, the next GPU generation should be out 🙂

    Things were a lot easier for reviewers and GPU programmers when all the video card did was basically rasterize textured pixels…

      • chuckula
      • 4 years ago

      I’m not against a followup review at lower resolutions, but here’s the kicker: unless the game is inherently broken (*cough*Batman*cough*), the FPS rates will shoot up so high that your monitor won’t be able to display the frames anyway and it becomes a moot point (especially when there’s price parity between the products).

      Maybe 2560×1440 could be interesting.

      • swaaye
      • 4 years ago

      The monthly Catalyst drivers weren’t really monthly updates. Each month alternated between code trees. Sometimes major bugs would span several releases. It created quite a few messes.

    • Arclight
    • 4 years ago

    It’s like 2900XT all over again. Wicked cool new memory and yet performs between a “GTS” and GTX of old, between the 980 and 980 Ti currently.

    What a shame, so much ado for nothing.

      • swaaye
      • 4 years ago

      The 2900XT just had a 512-bit bus with GDDR3/4. Apparently they designed the R600 GPU for a future of FP textures (lots of bandwidth) that certainly wasn’t happening in 2007.

    • derFunkenstein
    • 4 years ago

    Great hardware that doesn’t seem to be at all held back by its insignificantly small 4GB. The perpetually unpolished game optimizations, OTOH…

    Having similar but somewhat lower overall performance isn’t terrible. Having it at the same price is. The big spikes in frame times are also terrible, and fixing those would help a lot in terms of catching up in performance.

      • Ninjitsu
      • 4 years ago

      I don’t know. It seems to perform only slightly better than the 980 at times, which also has 4GB.

      If only TR had tested more at lower (arguably more relevant) resolutions like 1080p…or examined VRAM usage…but it’s still a lot of work so I’m not sure if I should complain.

        • derFunkenstein
        • 4 years ago

        I’m interested in VRAM usage because I want to see if “throwing an engineer” at the idea of compression paid any dividends. Since the driver should know how much uncompressed space is free, I’m hopeful you can see usage in a utility like GPU-Z.

          • Ninjitsu
          • 4 years ago

          Yeah you can, though I prefer HWiNFO as it logs just about [i<]everything[/i<] and pretty conveniently.

            • derFunkenstein
            • 4 years ago

            I’m seeing that GTA5 is UNDER-reporting the VRAM usage in the game, at least according to HWInfo and GPU-Z. Typically in the 15-20% range. I’m eating into that last .5GB quite a bit with this game, although the performance is still good.
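
            For anyone who’d rather log it than eyeball GPU-Z, here’s a minimal polling sketch; it assumes an Nvidia card and the pynvml bindings (the nvidia-ml-py package), so it’s just an illustration of reading the same counter those utilities show, not how TR measured anything:

```python
# Log VRAM usage once per second until interrupted (Ctrl+C to stop).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"{mem.used / 2**20:7.0f} MB used of {mem.total / 2**20:.0f} MB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```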

    • Anovoca
    • 4 years ago

    So what I am getting out of all of this is that, for all the sludge we sling AMD’s way for doing too many rebrands and not rebuilding their architecture enough, AMD would have been better off doing a refresh of the R9 295X2.

    Bring on the R9 395X2!!!!

    • chuckula
    • 4 years ago

    OMG! TR TESTED USING AN INTEL CPU!!

    OMG CONSPIRACY!

    ONLY TRUE AMD CPU TECHNOLOGY LETS THE FURY OUT!
    THESE BENCHMARKS ARE ALL AN ANTI-AMD LIE!
    ONLY AMD CPUS CAN BE TRUSTED TO RUN AMD VIDEO CARDS!

      • derFunkenstein
      • 4 years ago

      The more I think about it, the more I have a feeling that the AMD project computer they showed off at E3 also had an Intel CPU.

        • chuckula
        • 4 years ago

        [quote<]the more I have a feeling that the AMD project computer they showed off at E3 also had an Intel CPU.[/quote<] Oh, it did (4790K) and there are pictures directly from the press event: [url<]http://hexus.net/tech/news/systems/84236-amd-explains-put-intel-inside-project-quantum-pc/[/url<]

          • derFunkenstein
          • 4 years ago

          The biggest surprise from that article:

          “AMD said that the key reason that it used the Devil’s Canyon chip in the Project Quantum machine at the event is that [b<]it is listens to what customers want[/b<]."

            • maxxcool
            • 4 years ago

            Reallllly… and when partners and home users were screaming for ‘big core’ architecture roll back they …. ??

            • derFunkenstein
            • 4 years ago

            Like I said: surprising.

        • maxxcool
        • 4 years ago

        It did 🙂

      • snook
      • 4 years ago

      is there any point where you don’t try to ruin a decent discussion?

        • chuckula
        • 4 years ago

        Non-apology for your personal attacks from last week not accepted.

        P.S. –> And you were wrong too. But I’m not even attacking you for that.

        P.P.S. –> Since you love to have "decent discussions", where are your intelligent responses to any of my several non-intentionally satirical posts below? Where are your condemnations of the downthumbs that I received for this completely fair and actually pro-AMD post right here: [url<]https://techreport.com/discussion/28513/amd-radeon-r9-fury-x-graphics-card-reviewed?post=916853[/url<]

          • snook
          • 4 years ago

          you won’t get an apology. ever, keep that in mind.

          p.s. nope, I was correct. and attack me? hehe, please do.

          p.p.s. perhaps letting go is an issue for you?

            • Action.de.Parsnip
            • 4 years ago

            Would be nice if he wasn’t spamming every article

        • maxxcool
        • 4 years ago

        SSK .. I mean chuckie, adds flavor and value with no MSG

      • ronch
      • 4 years ago

      If AMD themselves use Intel CPUs on their top projects, so could TR.

      [url=http://www.tomshardware.com/news/amd-project-quantum-intel-cpu,29430.html<]AMD Clarifies Why It Uses Intel Core i7 In Its Project Quantum Gaming PC[/url<]

      The way they explained it though, you'd think Rory was still secretly working for AMD and he's the one who wrote the script. Such broad brush strokes. Honestly, they could've just said something like, "Well, our FX chips were awesome (read: somewhat competitive) when they came out almost 3 years ago but they kinda suck compared to today's Haswells and Broadwells, never mind Skylake, and we're taking our sweet time with Zen, and we couldn't use our NEW FX-9590 CPUs because we were too busy debating over the shroud design to have enough time to look for an enclosure that can hold two radiators. Oh and we're also using Intel CPUs in our gaming rigs at home and they simply smoke our company's FX chips. But Zen's gonna fix all that next year, I think."

    • maroon1
    • 4 years ago

    LOL, all of these claims about how revolutionary Fury would be, and this is all we get?! LOL

    Bulldozer was the most overhyped CPU in history (Zen might beat that).
    Fury is the most overhyped GPU in history.

      • DPete27
      • 4 years ago

      AMD has a rich history of not living up to the market hype it’s created for various products since Bulldozer. These are things that a sinking company (unfortunately) does.

      That said, while I was reading the article, I noticed in the frametime graphs that Nvidia’s plots were all very “thin” and AMD’s plots were “thick” with lots of frametime variance, and was thinking it must be poor driver optimization. This shows well in the conclusion graph when you switch between average and 99th percentile. Happy to see Scott seems to agree with that notion. Unfortunate.

    • Rza79
    • 4 years ago

    Well a lot depends on the games tested. Here the Fury X looks unimpressive but in other reviews it looks a lot better. Like in this review, where it matches the Titan X at 4K. Tested in 18 games.

    [url<]http://www.computerbase.de/2015-06/amd-radeon-r9-fury-x-test/5/[/url<]

    While I'm still very impressed with your testing methods, I don't think your selection of 8 games can be described as good. Two of the three 2015 titles are GameWorks titles (which are placed conveniently first) and two more are 2013 titles. If I compare this to the 18 tested games in the Computerbase review, it shows the Fury X in a totally different light.

    Also, your game settings are very inconsistent and sometimes make no sense. You test all the games at 4K except The Witcher 3 (which for some reason is tested at 2560x1440). Then you go on to apply 8x MSAA (@4K!) with Civilization and 4x MSAA with BF4. Compare this to the CB review, where all games are tested at 4K with some low form of AA (makes sense to me since you're already testing at 4K). CB then goes on to also test the 2560 and 1920 resolutions, on an i5 2500K, and GPU computing.

    Please take this as constructive criticism. I'm a long-time reader who wants the review quality to stay high, but this review pales in comparison to the CB review.

      • BlackStar
      • 4 years ago

      I’ve been reading reviews around the net and they appear to be much more positive than TechReport.

      Another one here: [url<]http://www.guru3d.com/articles-pages/amd-radeon-r9-fury-x-review,1.html[/url<] This also includes FCAT tests for more accurate frame times.

      Edit: The interesting part in Guru3D is the performance scaling with resolution. The Fury X performs worse at lower and better at higher resolutions, compared to the competition. This is indicative of driver overhead. It also hints that 4GB of memory is not the limiting factor for 4K resolutions in the current crop of games.

      I'd be extremely interested in seeing DX12 results. I suspect that Fury X will prove a more future-proof design than Maxwell.

        • Rza79
        • 4 years ago

        Yeah, I noticed the same behavior. It catches up as the resolution goes up.

        It seems that the TR fanboys don’t take criticism lightly.

          • BlackStar
          • 4 years ago

          Nvidia fanboys seem more likely.

          • Milo Burke
          • 4 years ago

          Haven’t down-voted, just discussing:

          But didn’t TR test at the highest resolution? 4k?

            • the
            • 4 years ago

            5K monitors are out there, but let’s not get crazy.

          • Jeff Kampman
          • 4 years ago

          You’re presenting purely FPS-based testing as some kind of coup de grace for the Fury X, where any TR reader who’s paid the slightest attention to our testing methods over the past few years knows that average frame rates are only one side of the graphics card testing coin. Without frame time analysis, those numbers don’t mean as much as you seem to think they do. Simple as that.

            • Rza79
            • 4 years ago

            If you had bothered to read the reviews BlackStar and I presented, then you might have noticed this:

            [url<]http://www.computerbase.de/2015-06/amd-radeon-r9-fury-x-test/11/#abschnitt_frametimemessungen[/url<]
            [url<]http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,28.html[/url<]

            TR isn't the only website testing frame times.

            • cobalt
            • 4 years ago

            Having looked at your links, I’m having a hard time digesting the punchline. Or at least I can’t figure out how you’re interpreting your links in a way which contradicts Jeffrey’s point.

            Specifically:

            ComputerBase shows raw FuryX vs 980Ti frame times. They don’t have a quantitative summary, but to my eyes, the FuryX looks like it has some prominent spikes in every tested game where the 980Ti doesn’t, at least in 4K. (1440p is more evenly matched.) That seems to corroborate TR’s conclusions.

            Guru3D shows the raw frame times for a bunch of games, but I didn’t see any other cards for comparison, so it’s hard to draw objective conclusions based on only the charts. I might simply have overlooked something here.

            If you’re interpreting those differently than I am, I’d be interested to hear your view.

        • Freon
        • 4 years ago

        Yes, there seems to be a lot of spin in the conclusions of some reviews.

        If noise is your #1 priority and the pump whine issues truly are resolved, then that’s fine to conclude it’s a great card. Otherwise it’s a tough sell based on performance.

    • raddude9
    • 4 years ago

    [quote<]we've seen no clear evidence that the Fury X's 4GB memory capacity creates problems in typical use.[/quote<] I don't know about the 4GB of memory creating problems, but it does seem to impact performance slightly when it comes to gaming at 4K. Looking at a few other reviews on the web: [url<]http://ie.ign.com/articles/2015/06/24/amd-radeon-r9-fury-x-review[/url<] & [url<]http://hothardware.com/reviews/amd-radeon-r9-fury-x-review-fiji-and-hbm-put-to-the-test?page=14[/url<] The Fury X seems to perform better than the 980ti at 1440p resolutions, but loses that advantage at 4K.

    • Zizy
    • 4 years ago

    Whoa, this was brutal. So, let’s look at this from the bright side.
    1.) At least there will be no question what to buy for your next GPU.
    2.) At least AMD’s CPU part won’t underperform compared to their GPU division.
    3.) At least we will stop contributing to global warming by ending AMD purchases.
    4.) At least this will be an end to the fanboy wars.

      • chuckula
      • 4 years ago

      [quote<]1.) At least there will be no question what to buy for your next GPU.[/quote<] Yeah there is! I'm not interested in either AMD or Nvidia at 28nm. Never forget: There doesn't have to be a winner and a loser. It's possible for both sides to be "Meh".

      • EndlessWaves
      • 4 years ago

      Except adaptive sync

        • sweatshopking
        • 4 years ago

        you mean the only one that will survive, G-sync?

      • AnotherReader
      • 4 years ago

      Brutal is GTX 980 Ti vs R9 290X or 5870 vs GTX 285. This is close to a draw.

        • Westbrook348
        • 4 years ago

        How can you directly compare 980Ti to 290X? It’s literally more than twice the price.

          • AnotherReader
          • 4 years ago

          You are right about there being no basis for comparison if price is taken into account, and it should be. I was just trying to illustrate that brutal is a misused term. In reality, this isn’t bad as at least there is the potential of parity in some games. This is much better than 290x vs 980. However, I think the downthumbs are due to the hype not matching the reality. Let’s see what happens with finfet and HBM 2. Given the struggles of the foundries, it would be a pleasant surprise if we get finfet next year with enough yields for big GPUs rather than svelte mobile SOCs.

            • Westbrook348
            • 4 years ago

            I too am super interested to see what a mature HBM and node shrink combo gets us in the next couple years, even though I won’t be upgrading for a long time since I just bought a 980Ti.

            Fury X may have “potential” for parity with 980Ti in some games, but that requires some very wishful thinking unless you are only paying attention to average FPS, which we know is irrelevant. Looking at frame times, Fury X often lags behind even the 980. The 99th percentile plot on Scott’s Conclusion page IS brutal, especially when you take price into account.

            • AnotherReader
            • 4 years ago

            High fps is an indicator that the frame time can be better too. Of course, that may never happen. AMD’s slowdowns in Crysis 3’s most demanding frames have persisted for a long while and seem to be permanent. You can see that Maxwell’s frame time ratios are the same as fps ratios across the various cards. However, for AMD, in many cases, the fps ratio is much higher than the 99th percentile frametime ratio. This can be seen in Witcher 3. This might be a driver optimization issue; pessimistically, it might be a resource balance issue.

            On a sidenote, I wish both Nvidia and AMD would release developer documentation about their older architectures that is good enough to make a solid open source driver. Covering VLIW5 and VLIW4 would be enough. Comparing GPUs to CPUs, the documentation is awful.

    • steenss
    • 4 years ago

    Scott, why did you use Cat 15.5b instead of the Cat 15.15 Fury driver?

      • Damage
      • 4 years ago

      I did use the Fury-specific Cat 15.5 with the Fury X. Sorry, AMD’s labeling is a little shaky there.

      Edit: Doh! It is 15.15 for the Fury X press driver. That’s what I used. Changed the table in the review.

        • steenss
        • 4 years ago

        Ok, thanks. 😉

        • pranav0091
        • 4 years ago

        [s<]Scott, is this the same as the 15.6 beta mentioned at [url<]https://techreport.com/news/28517/nvidia-amd-update-drivers-in-the-shadow-of-the-big-black-bat[/url<]?[/s<] If not, any specific reasons for sticking to 15.5? And 352.90 for Nvidia? I guess it's the timing of the driver releases?

          • Damage
          • 4 years ago

          Yes, indeed, and those drivers are specifically meant to help with Arkham Knight and nothing more. I didn’t test with Arkham Knight. 🙂

            • derFunkenstein
            • 4 years ago

            WHY NOT??? Average frame rates in the 10s and frame times north of 100ms aren’t interesting to you?

            😉

            • steenss
            • 4 years ago

            One further driver-related followup, Scott. Has Cat 15.20 been made available?

            • Damage
            • 4 years ago

            Not until the 20th month of 2015, as I understand the naming convention. 🙂 What are you trying to ask here?

            • steenss
            • 4 years ago

            IIRC, 15.20 is the WDDM 2.0 variant. Might be interesting to see if anyone tried Win10pr/DX12. I’m kinda interested to see whether the much vaunted command processor is really any better than GM200. I’m not entirely convinced it is…

            • Damage
            • 4 years ago

            Ah, Win10 and DX12. I don’t have a Fury driver for that yet, but I’m sure AMD will release something before long. They’ve said they’re working hard on Win10.

            • sweatshopking
            • 4 years ago

            HARD (actually hard) or AMD [i<] hard? [/i<]

            • terminalrecluse
            • 4 years ago

            @damage Don’t mind these leeches. You put in the work and gave us another solid, well-written review. People will always try to nitpick the dumbest things. I thought your jab at his misspelling was funny. Keep up the good work!

            • Wesmo
            • 4 years ago

            Thank you for clarifying that, Scott. I was wondering why there was a dearth of benchmarks focusing on DX12 performance; I didn’t realise it was driver-related.

            The only benchmark i’ve seen so far that gives some indication of dx12 performance is the 3dmark device overhead test (http://www.hardwareluxx.com/index.php/reviews/hardware/vgacards/35798-reviewed-amd-r9-fury-x-4gb.html?start=21)

            Curiously, there is a clear omission of Mantle-related benchmarks such as CivBE and Star Swarm… TechPowerUp tested CivBE with DX11; I guess AMD hasn’t bothered to update Mantle to support Fury.

            • Damage
            • 4 years ago

            How clear is the omission of Mantle testing? I tested CivBE and BF4, if you read the article, and tried Mantle with both. Mantle was faster in Civ but slower in BF4 than DX11. It’s all in there.

            • Wesmo
            • 4 years ago

            I stand corrected 🙂 I was skimming through the graphs and expected a (Mantle) tag next to the GPU labels of any Mantle results.

            Are you planning on adding any Unreal Engine 4 games to the test suite?

            • JumpingJack
            • 4 years ago

            Not only is it all in there, these results are backed up by other review sites who did similar mantle testing, in some cases it made Fury X significantly slower.

      • Damage
      • 4 years ago

      Doh, got the labeling wrong. My lys-dexic brain saw 15.5 where 15.15 was, probably because 15.15 breaks the YY.MM numbering convention.

      Anyhow, updated the table. I did use the Fury X press driver, Cat 15.15 (pretty sure it’s a beta).

        • Meadows
        • 4 years ago

        I’ve been wondering about the same thing since that guy started accusing you of doing it wrong. Why 15.15? Is this the announcement of a new calendar? Is it the same as the new 15.6 beta? Does it even make a difference? Do pigs fly?

        I’m pretty sure that even if you had used the wrong driver, performance differences would’ve been in the single percentage range.

          • l33t-g4m3r
          • 4 years ago

          I think 15.15 is based on the Win10 driver, which has been claimed to be faster in several cases. Also, considering how dependent GCN is on driver optimization, yes, it would absolutely matter if you used the wrong one. Would it make a difference pre rage fix? Pre borderlands fix? Pre bf3? Pre frame metering? Yeah.

          What if, hypothetically speaking, the new driver had included a fix for project cars? It would have absolutely made a difference if that was the case.

          It didn’t though, so if there was anything I really took away, it was just driver shaming AMD into fixing a few broken games at the expense of the overall review, which will most likely get fixed now, because every single time this happens, AMD has stepped up and fixed it.

            • Meadows
            • 4 years ago

            You really do hate that one game, don’t you. I hear it’s actually pretty good.

            • l33t-g4m3r
            • 4 years ago

            Not really. I bought it a while back, and it’s not that great unless you really like arcade sim racing. I could take it or leave it, it being a bit of a novelty game, and I haven’t done multiplayer. Plenty of bugs have been reported with it, and the developers have dropped support to make PCars 2.

            To me, stuff like this is similar to DayZ, including performance issues. There are people who are a fan, but that’s not my cup of tea. It’s a niche game, more so than DayZ is.

            • ermo
            • 4 years ago

            Dropped support? Get your facts straight, man! SMS have 40 people working on post-release patches, features and content to the tune of around $400k a month. I’m not sure in which kind of universe that qualifies as ‘dropped support’?

            And “arcade sim racing”? Do you have any clue at all about the amount of simulation that goes on in Project CARS? You might not like how the cars handle (and some of them do need work), but I keep seeing real drivers saying that the over-the-limit behaviour mirrors what they’d do with their steering wheels in real life and that this stands in contrast to other well known sim titles.

            I’m okay with whatever opinion you have formed on Project CARS, but frankly your (willful?) ignorance pertaining to the facts surrounding the title is getting a little tiresome.

            • f0d
            • 4 years ago

            100% agree
            new cars each month
            DLC packs
            still supporting and patching the game

            how the **** is that “dropped support”
            it’s pretty clear l33t-g4m3r hates the game

            also “arcade” made me laugh – he has no clue

    • deruberhanyok
    • 4 years ago

    Early DX12 tests showed substantial improvements for AMD cards, and I remember when they first came out there was even more talk than usual about the state of AMD’s drivers in DX11.

    For me, the question was whether or not AMD could see performance improvement in current operating systems through driver optimization, rather than waiting for Windows 10 / DX12 / Vulkan to become widespread. I haven’t seen a lot of improvement on that front – the only glimmer of hope I’ve seen is the new open source “AMDGPU” driver core that will be used for R9 285, Radeon Fury and other newer GPUs based on “GCN 1.2” or whatever it’s called now.

    Looking at these results I’m conflicted.

    Here is a video card with +40% the bandwidth and shader count of R9 290X/390X, and additional technology from Tonga, that seems to average roughly +40% the performance of their previous flagship, has lower frame time latencies and manages to do it while pulling slightly less power at load. That’s good!

    But at the same time, here is a video card with more than double the bandwidth of a GeForce 980, selling for $100 more than a 980, and a reported almost double the compute power of a 980, and yet it only slightly outperforms a 980. That’s bad.

    So I’m wondering the same thing I did when I first saw those DX12 benchmarks: should AMD maybe be spending a whole lot of time optimizing their drivers, and trying to provide a performance increase for current operating systems / graphics APIs? I think the answer is YES – because if I were considering spending $650 on a video card right now, I haven’t seen a single R9 Fury X review that would make me seriously consider one over a 980ti. In fact, I’d have a hard time convincing myself not to get the 980 instead and save $100.

    I certainly wouldn’t buy a card based on potential performance in a future operating system and unreleased games. And that’s going to be a big problem for AMD over the next year.

    • RdVi
    • 4 years ago

    Didn’t AMD already release new beta drivers almost a week ago? I understand the time limitations, but I’d honestly like to see tests with those drivers. It could have at least been mentioned that newer drivers were already released rather than talking about newer drivers like something that people would have to wait for.

    • TwoEars
    • 4 years ago

    I think AMD made a mistake launching with the Fury X first, seeing as it can’t beat the 980 Ti.

    It would have been better if they had opened with the R9 Nano or R9 Fury (air cooled non-X version).

    • Krogoth
    • 4 years ago

    I blame an imbalanced architecture more than anything else. Drivers don’t explain everything here.

    Something is holding back Fiji despite it having far more resources at its disposal than a fully blown GM200 chip.

    It is almost like the 2900XT launch, except it isn’t as hot or power hungry. The Fury X only consumes slightly more power than the 980 Ti and Titan X at load, unlike the 2900XT, which consumed quite a bit more power at load than the 8800GTX back in the day.

      • chuckula
      • 4 years ago

      [quote<]Fury X is only slightly less power hungry than 980Ti and Titan X at load.[/quote<] s/less/more. Fury X draws about 30 watts more at load. Fury X is less power hungry than the 290X though. Oh, and the people need to know: can we get a Meh/Not Meh decision here?

        • bfar
        • 4 years ago

        “but right now, this product probably needs some TLC in the driver department before it becomes truly compelling”

        There it is. I’d add a price drop to that myself, but I could say the same about Nvidia’s high end GPUs too.

        • Krogoth
        • 4 years ago

        I meant that the Fury X eats a little more power than the 980 Ti and Titan X at load.

        The 2900XT consumed a lot more power at load than the 8800GTX back in the day.

      • ImSpartacus
      • 4 years ago

      The 2900xt has some frightening similarities…

        • Waco
        • 4 years ago

        I wish I still had mine…1 GB of GDDR4 and all.

      • ronch
      • 4 years ago

      AMD always seems to have had to slap on two more cylinders and a turbocharger to match Nvidia’s performance. And the turbo lag kills them off the line too.

    • chuckula
    • 4 years ago

    You know what’s REALLY interesting.
    Go back to the Titan X or 980Ti review.

    You’ll see plenty of people criticizing Nvidia for different design decisions, and frankly that’s fair since the Titan X and 980Ti are by no means perfect products. You’ll see Krogoth giving them a solid “Meh” rating.

    You know what you won’t see?
    Not even once?

    You won’t see a single post accusing TR of being in some sort of Machiavellian conspiracy against NVidia for testing game X or not testing game Y or using some setting A or not using some other setting B.

    I find the lack of conspiracy theories to be pretty believable in widely read reviews that both had at least a couple of hundred comments. Here’s why: TR is extremely professional and transparent in how it runs tests and rates products. I find it — shall we say “slightly” — [i<][b<]unbelievable[/i<][/b<] that in the review of AMD's equivalent product there would actually be all these "honest" posters who complain about TR's testing methodology when it comes to AMD. After all, the exact same tests were apparently perfectly fine when being used to expose problems with Nvidia products, but now all of the sudden they are evil and half the comments need to be directed at accusing TR of bias? Really guize?

      • killadark
      • 4 years ago

      It MUST be that "throw pity at the underdog AMD" effect, I tell you!

      • maxxcool
      • 4 years ago

      heh, like HEAT fans, you root for your team even when you’re getting pummeled repeatedly..

      • Damage
      • 4 years ago

      “I know *exactly* what you mean. Let me tell you why you’re here. You’re here because you know something. What you know you can’t explain, but you feel it. You’ve felt it your entire life, that there’s something wrong with the world. You don’t know what it is, but it’s there, like a splinter in your mind, driving you mad. It is this feeling that has brought you to me. Do you know what I’m talking about?”

        • madgun
        • 4 years ago

        Would you take the red pill or the blue pill 😛

          • Kougar
          • 4 years ago

          This isn’t 1999 anymore, everyone chooses the blue pill these days. Ya should be asking if we would take the red pill or the green pill.

            • NeelyCam
            • 4 years ago

            Yes – taking multiple pills is great. Take a blue pill, add a red or green pill, a glass of vodka and finish with a few funny brownies. The games will look [i<]awesome[/i<] - even Arkham Knight!

            • Ninjitsu
            • 4 years ago

            What everyone needs is a [i<]chill pill[/i<].

        • wimpishsundew
        • 4 years ago

        Last time someone told me something like this, it turned out they were gay…

        • auxy
        • 4 years ago

        I just watched this again for the first time since, like, 2000, the other day. Truly a masterpiece of action cinema. It’s easy to forget! (*‘∀‘)

        • the
        • 4 years ago

        [url=https://www.youtube.com/watch?v=V-dMXLUiOyU<]How deep does the rabbit hole go?[/url<]

        • Arclight
        • 4 years ago

        What is frame time?

        • maxxcool
        • 4 years ago

        win!

      • w76
      • 4 years ago

      It’s well known that companies (and lots of governments) actively employ internet “trolls” for marketing/propaganda purposes. If AMD has lost its moral compass as it spirals down the drain, well… it’s possible. It’d be nice for them to refute it, at least.

        • Klimax
        • 4 years ago

        IIRC both were found to employ them at one time or another. We've already been there. Which company forced an update to the rules here?

      • madgun
      • 4 years ago

      I have been following Scott for 12 years and not many people can match his honesty and professionalism.

      But your observation is true in the sense that whenever AMD reviews happen and their products don’t do well, red team fans come out crying foul. Could this be due to a visit to AMD headquarters by the red team just before launch? :/

      • flip-mode
      • 4 years ago

      [quote<]You won't see a single post accusing TR of being in some sort of Machiavellian conspiracy against NVidia[/quote<]
      If Nvidia's products had done poorly in those reviews, you'd have a point. You're comparing Nvidia winning to AMD losing.

      It's not that you're wrong, but the thing is you probably won't see claims like "TR is conspiring against AMD" in a review where AMD wins, and you won't see claims that "TR is conspiring against Nvidia" in a review where Nvidia wins.

        • chuckula
        • 4 years ago

        Oh but Nvidia got completely destroyed in that review.
        Didn’t you see the R9 295X2?

          • flip-mode
          • 4 years ago

          Hmm… you should double check that:
          [url<]https://techreport.com/review/27969/nvidia-geforce-gtx-titan-x-graphics-card-reviewed/12[/url<]

          Regardless, Nvidians aren't going to feel the need to resort to claims of conspiracy when it takes two of AMD's fastest GPUs to compete with just one of Nvidia's fastest. Heck, that's just bragging rights for Nvidia fans.

          Seriously, conspiracy theorists come out of the woodwork when their team isn't winning. Democrat in office = conservative conspiracy theorists. Same situation here. AMD's product isn't winning = AMD conspiracy theorists. If the tables manage to turn ever again, I imagine we'll see some fretful Nvidians questioning the choice of games. It's happened in the past.

            • madgun
            • 4 years ago

            More conspiracies.

            • Unknown-Error
            • 4 years ago

            Very nicely put.

            • K-L-Waster
            • 4 years ago

            Yep. Easier to believe something nefarious is going on than to accept that an under-resourced, stretched-thin company didn’t hit the home run with their new card that the fanboys were counting on.

            And it’s not like it’s a bad card – it isn’t a home run, but it’s a solid double. AMD’s problem is they are running out of innings to play small ball in.

      • l33t-g4m3r
      • 4 years ago

      You know why? AMD doesn’t have Gameworks, and Project Cars has hard coded PhysX into the game engine for road handling. That’s why.

      Nobody is going to accuse AMD of conspiring against NV, because:
      1: They don’t use Gameworks, or non-open source effects.
      2: They don’t have the budget to bribe game developers or review sites.
      3: NV does all these things.

      There is no justification for ignoring it just because you want to live in denial. There is a real pro-NV bias throughout the entire ecosystem, and at this point it’s hard to tell who isn’t biased, just because that’s how far NV has sunk its claws into PC gaming.

      So, in summary, you want everyone to just unconditionally accept NV as the winner, and after AMD sinks into the dust we can all enjoy paying $1000 for crippled GPUs that become obsolete in a year’s time. NO-THANK-YOU.

      I think it’s better to be skeptical, because there clearly are issues with the PC gaming ecosystem. AMD isn’t helping matters by releasing their high end cards with sub-par drivers, however that doesn’t mean the NV issues aren’t there, because they are. It’s pervasive. NV goes around and gets every possible dev to code pro-NV effects into their games. It’s not a conspiracy, it’s out in the open. We know they’re doing it, because they admit doing it.

        • madgun
        • 4 years ago

        What’s stopping AMD from doing the same? If Nvidia thinks its hardware can provide, for example, better tessellation through a certain API and it exposes it via GameWorks, what is the problem with that?

          • l33t-g4m3r
          • 4 years ago

          Because that’s stupid, for one. AMD isn’t trying to actively sabotage the competition, and NV’s GameWorks also sabotages its own older generation of cards, so they’re double-crippling. GameWorks helps no one but NV, and the devs that take the freebies associated with implementing GameWorks. It doesn’t help gamers, unless you’re using Titan SLI with a G-Sync monitor and actually like NV’s smeary and bloated effects.

        • K-L-Waster
        • 4 years ago

        All this post needs to be complete is a reference to the Illuminati, Area 51, and the reptilian overlords.

          • l33t-g4m3r
          • 4 years ago

          Or I could just point you to NV’s Gameworks website, where they list all the games they’ve infected.

            • madgun
            • 4 years ago

            Doesn’t matter when AMD also loses out in non-Gameworks titles like BF4 and GTA 5.

            • l33t-g4m3r
            • 4 years ago

            I didn’t say anything about those titles. Seems like more of a rushed driver to me, since AMD has always been slow to optimize for games and new architectures. I’d say it’s good enough just that the card runs on day one without any major issues and shows an increase in performance over the 290. The real gains will happen later.

            You people are being too hard on the card for being a day-one release. If the card performs like this a month later, then I’d agree that there are issues. Either way, it’s still an improvement over what they’ve had, and it actually does compete with the Maxwell cards. Hawaii wasn’t cutting it, so yeah, it’s good enough, and it forced NV to lower prices.

            FuryX is a Halo product anyway. I wouldn’t buy or recommend any $650 card, NV or not. The real card to look at would be the 8GB 390 series, or the cheaper Fury series.

            • madgun
            • 4 years ago

            This architecture is based on Tonga. So they had plenty of time to optimize the drivers.

            Also listen to yourself. If it’s an NVidia sponsored title, you blame it on NVidia. And if it’s a neutral title, you blame it on drivers.

            Why would I want to buy a product from a company that is so full of excuses?

            • l33t-g4m3r
            • 4 years ago

            Tonga doesn’t have 4 thousand shaders. I think the Mantle numbers pretty much show the driver isn’t fully optimized for the architecture. Also, driver performance doesn’t discount Gameworks sabotage. Right now, Fury isn’t performing well. That’s not an excuse. However, after a few driver updates, I wonder what YOUR excuse will be?

            • madgun
            • 4 years ago

            We can say the same thing about Maxwell. I will be waiting to see if an O/C 980 Ti will ever be surpassed by an O/C Fury X 😉

            As it stands, a G1 Gaming 980 Ti has a 20% performance advantage over a regular 980 Ti. Now you can use your imagination.

            • K-L-Waster
            • 4 years ago

            Why is it with AMD it’s always “wait until X then it will be awesome”?

            Just wait until Mantle is released.

            Just wait until the game devs fix their Mantle implementation.

            Just wait until the new drivers are out.

            I’m not interested in what it will do some day: if I can’t see that it works on the day I’m looking at buying it, I’m not spending money on it. Full stop.

        • Meadows
        • 4 years ago

        Project Cars uses custom physics for the cars. PhysX isn’t used anywhere except for cosmetic effects (debris and smoke).

        You don’t get a free pass to be wrong just because you don’t like NVidia.

          • f0d
          • 4 years ago

          yep
          I have tested physx in pcars also and it's just as fast with cpu physx as it is with gpu physx.
          I'll post a video of that one day soon because I have heard the physx excuse many times.

          • l33t-g4m3r
          • 4 years ago

          Actually, it’s the opposite. The dev has been quoted saying the particle effects do NOT use physx, and collision detection is what does use it.
          [quote<]The Madness engine uses PhysX for collision detection and dynamic objects[/quote<]
          [quote<]Project CARS does not use nVidia specific particle technology – the system we use is a modified version of the same technology we used on the Need for Speed : Shift and Shift Unleashed games, and was entirely developed in-house.[/quote<]

            • Meadows
            • 4 years ago

            Here you said “road handling”. Which is entirely wrong.

            • l33t-g4m3r
            • 4 years ago

            And you said debris and smoke. The point is that physx is in the game and you can’t turn it off, not what it’s used for.

            • Meadows
            • 4 years ago

            But why would you turn it off? I don’t see you complaining about Source engine games that you “can’t turn off the Havok”.

            Same thing here. If it’s part of the game, then it’s part of the game.

            • l33t-g4m3r
            • 4 years ago

            It’s not the same in benchmarks. Nvidia will run the game faster with GPU acceleration.

            Other than that, it isn’t stopping people from playing the game, but it’s not a valid benchmark.

            • f0d
            • 4 years ago

            no, it doesn't run faster with gpu acceleration
            that's flat out wrong

            • Meadows
            • 4 years ago

            GPU acceleration is not an issue, because it only works to reduce CPU load [i<]as long as the GPU has idle resources[/i<].

            1. If your GPU has idle resources, then the game is not an issue for either company anyway.
            2. If your GPU runs at full tilt, and you have PhysX set to "auto" in the driver (default), then it's on the CPU. Meaning, if you have a proper CPU, then the game is not an issue anyway.

            I see an AMD handicap there... but it's not on the GPU side.

            • l33t-g4m3r
            • 4 years ago

            If that was in any way remotely true, you could run physx in any game with an amd card, and it would be playable. Total bunk.

            • JumpingJack
            • 4 years ago

            PhysX will execute through the GPU if available and falls back to a CPU code path when not. PhysX games run on AMD hardware just fine, not faster than the competition, but fine nonetheless. This is a competitive advantage Nvidia markets and works with the industry to have… this is not AMD’s fault, but they suffer from the competitive disadvantage. Doing nothing to respond to this disadvantage (other than whining) is AMD’s fault.

            • Meadows
            • 4 years ago

            Ah, but you can.

            I could go over explaining the difference between PhysX itself and the use of “Enhanced PhysX effects” for the umpteenth time but I already know you’re a stubborn nutcase who refuses to take in any information that disagrees with your agenda.

        • torquer
        • 4 years ago

        Or AMD could just make better stuff.

          • l33t-g4m3r
          • 4 years ago

          I think they did make better stuff, but pulled a Batman on the driver. The Fury isn’t showing an increase in performance commensurate with the number of shaders, so that’s what I think the problem is here.

          The card does perform well enough, but well enough doesn’t translate into beating the 980 Ti either.

          As for price, neither card is worth the premium, so anyone who does buy either card would have to be some sort of fanboy, since the mid-range options this time around are pretty capable.

            • Krogoth
            • 4 years ago

            The problem is that Fiji silicon is imbalanced in its resource allocation and it does show.

            It is almost like what happened to GF100 and R600 at launch: silicon that had far more aggregate resources than the competition, but failed to deliver at launch despite what it could do on paper.

            They got fixed with an overhaul that proved the merits of their underlying design (GF110, R680/RV770).

            • torquer
            • 4 years ago

            I agree, and I’m sure that Fiji will improve with (eventual) driver updates and future iterations. Unfortunately for AMD, Nvidia’s implementation of HBM is likely to come out on a smaller process node within the next 12 months, and AMD’s Fiji refresh will likely take longer. It’s entirely possible that Nvidia will have a miss with Pascal, but given recent history that seems unlikely. Once again AMD may end up struggling to reach parity and cannibalizing their own profit margins to do so.

            It’s too bad so many enthusiasts tend to forget the simple economics of the business. AMD shows all the signs of a company that isn’t doing well – slow driver updates, reduced R&D, slower time to market, the inability to flesh out a full lineup without significant rebadging, etc.

            • swaaye
            • 4 years ago

            G80 actually had quite a few advantages over R600 from both the paper and practical standpoints, and Fermi 1.0 had problems but it was also leading edge GPU compute in a time when AMD was nigh useless for that.

            Cypress certainly won on time to market for gamers though. That was a big win for perception.

            Fiji just seems like a product of the unfortunate realities of 28nm and interposer limits, and of limited gains in GCN efficiency.

            • Wonders
            • 4 years ago

            [quote<]As for price, neither card is worth the premium, so anyone who does buy either card would have to be some sort of fanboy[/quote<]
            Not true. The only GPUs I have ever bought are $500-$700 (USD) dual-slot cards which I later sell used on eBay for anywhere from 40-70% of the original price. I don't do any folding or anything like that, I just enjoy having state-of-the-art technology and running custom texture packs with the highest possible settings.

            Computers are a really cheap hobby compared to golf, cars, watersports, etc., and having a top-of-the-line GPU which I then overclock just feels awesome. The bang for the buck completely rules compared to other hobbies.

            • torquer
            • 4 years ago

            Ditto this. I have a ROG Swift and aim for 60+ fps at 1440p and max settings. That’s a lot easier to do with this grade of card. I actually notice the difference coming from my 980 to my current 980 Ti.

      • kamikaziechameleon
      • 4 years ago

      Fanboy butt hurt is an amazing thing.

    • CaptTomato
    • 4 years ago

    Junk….waiting 4 16nm

      • raddude9
      • 4 years ago

      What? Some computer product released next year might be better than something released this year? I’ve never heard of such a concept in technology…… not

        • CaptTomato
        • 4 years ago

        Buy it if u love it so much, me i want a reason to upgrade and fury ain’t it.

          • raddude9
          • 4 years ago

          I’m not going to buy the Fury X, but the R9 Nano might be just the ticket for a small PC I have. And yes, I know there will be something better next year, but that’s always true; sometimes you just have to dive in.

            • CaptTomato
            • 4 years ago

            Why are u doing my thinking?
            if the product makes sense, buy it, but I’m disappointed. ….

      • ronch
      • 4 years ago

      Nah. AMD will probably just match current Maxwells built on 28nm in terms of energy efficiency when they (AMD) move down to 16nm. Then Nvidia also gets 16nm and it’s the same story all over again.

        • CaptTomato
        • 4 years ago

        That’s not necessarily a problem given the price differential we often see….
        And it looks like Nvidia has the upper hand by virtue of dollars, so they outspend on R&D, same as Intel vs AMD. Sad, sad days ahead.

    • BlackStar
    • 4 years ago

    Just a heads up, the Witcher 3 benchmark has HBAO+ enabled. This is another GameWorks feature that unfairly penalizes AMD cards.

    Switch to regular SSAO and results will likely be quite different.

      • derFunkenstein
      • 4 years ago

      HBAO+ does terrible things to both sets of GPUs, if 1440p performance on my GTX 970 is any indication.

    • kuttan
    • 4 years ago

    A bit of a disappointment to me. I actually expected the Fury X to be close to the Titan X. Things will improve a little after driver optimizations, but it's still a bit lower than what I expected. It's still a good GPU with liquid cooling and good performance, all at a competitive price.

      • Billstevens
      • 4 years ago

      I like my 290, but it is annoying that at launch for new games Nvidia gets a driver patch and I have to wait weeks for a fix. I’m done with the game sometimes before AMD even sends a notice that they are working on it.

    • windwalker
    • 4 years ago

    The thing I don’t understand here is the pricing.
    It’s perfectly fine to release a product that performs 10% below the competition, but why price it at the same level?
    It makes even less sense at the high end where there should be plenty of profit margin wiggle room.

      • chuckula
      • 4 years ago

      There’s one (and about only one) thing about a complex multi-billion transistor product that you can change easily after launch: The price.

      Wait a while.

      • raddude9
      • 4 years ago

      If it were priced about 10% less, say $600 it would be a much easier sell with respect to the 980ti (which performs up to 10% better). I have a feeling the price will drop after a few months.

      • Action.de.Parsnip
      • 4 years ago

      1) It just launched
      2) Water cooler
      3) Limited supply

      That's why it's $650 now. It'll change soon enough.

    • JosiahBradley
    • 4 years ago

    Well, that kinda threw me a curveball. Was planning on buying Fury today. You know what, I’ll vote with my wallet against the proprietary tech from nVidia. Hello, Fury.

    • southrncomfortjm
    • 4 years ago

    So, if you took the HBM away from this card and just slapped on GDDR5, what would we see? Basically, how much of this performance is directly attributed to HBM?

      • ImSpartacus
      • 4 years ago

      It’s hard to say. You could lower the HBM clocks to get an equivalent bandwidth to something like the 390X, but you wouldn’t be able to match its capacity.

      In other words, if this card had GDDR5, it wouldn’t have had only 4GB of VRAM.

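For a rough sense of the bandwidth side of that trade-off, here is a back-of-the-envelope sketch. It assumes the commonly quoted figures of a 4096-bit HBM interface at 500 MHz (1 Gbps per pin) for the Fury X and a 512-bit GDDR5 interface at 6 Gbps per pin for the R9 390X; treat the numbers as ballpark assumptions rather than vendor-verified specs.

```python
# Rough peak-bandwidth arithmetic (assumed figures, not vendor-verified).

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s from bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gbps_per_pin / 8  # bits -> bytes

fury_x = bandwidth_gb_s(4096, 1.0)   # HBM1: 500 MHz DDR = 1 Gbps/pin -> 512 GB/s
r9_390x = bandwidth_gb_s(512, 6.0)   # GDDR5 at 6 Gbps/pin -> 384 GB/s

# HBM clock that would roughly match the 390X's bandwidth on a 4096-bit bus
matching_hbm_mhz = 500 * r9_390x / fury_x   # ~375 MHz

print(f"Fury X:  {fury_x:.0f} GB/s")
print(f"R9 390X: {r9_390x:.0f} GB/s")
print(f"HBM clock needed to match the 390X's bandwidth: ~{matching_hbm_mhz:.0f} MHz")
```
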
        • southrncomfortjm
        • 4 years ago

        Right, well then I guess my question is: if everything but the type/amount of VRAM were the same, what would we have? Replace the 4GB of HBM with 12GB of GDDR5… would performance even be in the same ballpark? Or would we be looking at performance only slightly better than a 290X?

    • TheSeekingOne
    • 4 years ago

    Nice selection of games, BTW! Witcher 3, Project Cars, and Crysis 3. Why not test Dying Light, Shadow of Mordor, Dead Rising 3, Metro: Last Light, Tomb Raider, etc.? The test sample you chose is not well spread out and it didn’t produce an accurate representation of Fury’s relative performance, in my humble opinion.

      • chuckula
      • 4 years ago

      Considering TR tested two major titles with direct AMD involvement — BF4 and Civ5 — and tested Far Cry 4, that’s a pretty good sampling of games that do very well with AMD parts.

      Interesting how you conveniently pretended that TR never tested those games.

        • TheSeekingOne
        • 4 years ago

        BF4 and Civ5 have no direct AMD involvement. At least, I don’t think they do.

          • chuckula
          • 4 years ago

          Oh Rlly? Mantle-enabled titles have no AMD involvement? I’m sure you say the exact same thing about games that use Gameworks since that’s not direct Nvidia involvement!

            • sweatshopking
            • 4 years ago

            bro, civ 5 isn’t mantle. civ BE was.

          • Billstevens
          • 4 years ago

          BF4 has a Mantle setting and AMD’s name plastered all over its engine. The R9 290 smoked that game.

          • Prestige Worldwide
          • 4 years ago

          Oh boy, you are extremely ill-informed.

          • madgun
          • 4 years ago

          So you’re saying that if AMD doesn’t work with the game developers, then Nvidia shouldn’t either?

          What kind of an argument is that?

      • Anovoca
      • 4 years ago

      IKR! We want more practical game reviews, like: Age of Empires, X-Wing vs Tie Fighter, Oregon Trail, and Where in the USA is Carmen Sandiego!

      Get with the times Scott!!!!!!

        • chuckula
        • 4 years ago

        ZORK!!!
        I WANT 4K TEXT ADVENTURES!

        (and by “4K” I mean: You can run the game with a system that has 4K of RAM).

          • Anovoca
          • 4 years ago

          And what about FPS in DOS?

          THESE ARE THINGS WE NEED TO KNOW SCOTT!!!

        • derFunkenstein
        • 4 years ago

        I want to play Diablo II in 4K with the resolution hacks that show more of the area around you. Of course, you won’t be able to see anything since it’s all 32×32 pixels…

          • Anovoca
          • 4 years ago

          And then once you are done hacking the resolution you need to hack your character armor to have +1000000 light radius!

            • derFunkenstein
            • 4 years ago

            still might have a little bit of a vignette effect around the edges of the display, but that’s a good point.

          • jihadjoe
          • 4 years ago

          Speak for yourself my eyes r teh l337!

        • HisDivineOrder
        • 4 years ago

        Pong 4K or nothing, friends.

          • Anovoca
          • 4 years ago

          in full chroma!!

      • Damage
      • 4 years ago

      Scott here. I try to pick games people want to play that look nice and use the GPU well. I don’t have a policy of parity when it comes to whose dev-rel/co-marketing program latched on to the game. I’m limited somewhat because sometimes one side or the other goes through a dry spell. We’ve seen fewer AAA titles from AMD’s side recently, for instance.

      But in this case, here’s the tally.

      AMD: Battlefield 4, Crysis 3, Alien Isolation, Civ: Beyond Earth
      Nvidia: Project Cars, The Witcher 3, Far Cry 4
      Status murky: GTA V

      I did not test Borderlands 2, nor did I test “Broderlands 2.” Brah.

      Your complaint doesn’t seem to fit with the reality here. I have to wonder why you even made it.

        • chuckula
        • 4 years ago

        [quote<]Your complaint doesn't seem to fit with the reality here. I have to wonder why you even made it.[/quote<] He said rhetorically...

        • ImSpartacus
        • 4 years ago

        Did you seriously just insult someone’s spelling?

        Are we in grade school or something?

        Have a little more class.

          • Damage
          • 4 years ago

          I sometimes forget how different the rules are for me and for everyone who gets to tear down my careful work that I provide for free online when they haven’t even read the article to see the selection of games. I have to be nice and calm at all times, with no snark, while they can publicly accuse me of bias while getting basic facts wrong (Borderlands 2 wasn’t tested; Crysis 3 is Gaming Evolved), and you’ll be there to defend them if I drop a line about a typo in an incorrect statement. I am now reminded.

            • Anovoca
            • 4 years ago

            So long as you remember now Scott. :p

            • sweatshopking
            • 4 years ago

            scott. why do you even bother?

            • snook
            • 4 years ago

            free? the gold dot next to my name (and others) says differently.

            I read three “tech” writers, you, ryan and josh. only josh hasn’t pulled this “woe” card.

            be mindful that the guys on your side also have zero time with the product you just reviewed and you should dismiss their opinions with the same consistency.

            truth is AMD instituted a new memory tech and it’s competitive with a very mature process. this is a good thing. your review was spot on, good on ya.

            lastly, I can’t stand when someone I look up to whines. put your big boy pants on and man up.

            thanks, now, dismiss this post 😛

            • Anovoca
            • 4 years ago

            And that gold dot was required to read this article?

            On the other hand I give you 3 little turds. These turds were not free. They cost me $30 ($10 a pop) so I hope you use them wisely.

            • snook
            • 4 years ago

            that (reading) isn’t the issue, the issue is “free” and it wasn’t, simple.
            I turned them into three gold stars for you and at a higher personal cost.
            use them to buy a new dress.

            • ImSpartacus
            • 4 years ago

            Yes, that’s correct. You’re the example for everyone else.

            Having “class” is about knowing when to ignore people when they make unproductive criticisms. You have to ask yourself, “Do I write for the Tech Report so I can needlessly argue with random people on the internet?”

            And going another level past that, if some argument might occur, you have to decide, “Do I write for the Tech Report so I can personally insult the spelling of random people on the internet?”

            I hope you write for the Tech Report because you just love tech. I know I read the Tech Report because I love tech.

            • f0d
            • 4 years ago

            concerning the spelling correction:
            looking at the context and what was said i think it was just a joke – can't you take a joke, brah?

            or is scott not allowed to have a little fun? it wasn’t insulting in any way

            if i misspelled borderlands as BROderlands I'd laugh at a bro or brah joke even if it was pointed at me by scott

            the rest is just scott communicating with his readers which is one of the reasons i actually come here and not some other random site that just has fps averages

            • Westbrook348
            • 4 years ago

            Great work Scott. I appreciate what you do. Anyone can post FPS numbers. It takes a lot more time and effort to create and analyze frame times. But that’s the data that actually matters; average FPS is totally irrelevant, and it’s been FOUR years since your article that proves why. It’s disappointing that the industry (including other review sites like those mentioned by the troll who started the discussion) continues to compare hardware in terms of seconds instead of milliseconds. Even if Fury X were to win an FPS average, I couldn’t care less if 1 in 100 frames causes stutter. This is 2015, not 2011; fanboys need to wake up to your innovations.

            So thank you for your reviews, this site, and all your hard work. I visit your site daily. I wish you had the time, resources, and staff to create even more content. I’d love to see more cards reviewed (e.g., 970s in SLI really should get some frame-time analysis) and more games tested (Tomb Raider and Metro are common, but two years old; the only one you really need to add from the troll’s list is Shadow of Mordor). Overall a great review, and indisputably more accurate and relevant than any review site that focuses on FPS.

            • Damage
            • 4 years ago

            Thanks!

            • Meadows
            • 4 years ago

            Don’t let these people get you down, Mr Wasson. You do great work and we swear by it.

            • madgun
            • 4 years ago

            Scott, there are idiots throughout the internet. It’s just hard to please all of them. You’re doing great work and you have recognition and respect across the different forums that I visit.

            Keep up the great effort!

        • TheSeekingOne
        • 4 years ago

        Edited: Borderlands 2 isn’t in your test suite… My bad. I think I was looking at the 285 review charts and mistakenly thought the game was in Fury’s review.

        Well… I think the Fury X deserves a slightly different treatment, and the choice of games does matter quite a bit. One game, such as Project Cars, can swing the performance charts in one direction quite a bit.
        Shadow of Mordor, for example, is a more widely played game, in my opinion, than Project Cars or Crysis 3.

        BTW, I noticed that the Hawaii cards are performing quite a bit better in the tessellation test in this review compared to the numbers in the 285’s review.
        What does that tell us?

          • Damage
          • 4 years ago

          One game won’t swing the totals as dramatically as you say since our overall score is produced using a geometric mean instead of a simple average. Outliers are less influential to the overall score with a geomean. That’s how we deal with that issue.

          I’m not about to omit Project Cars given its popularity, amazing looks, and everything else. I always wish we could test more games, but what we do requires careful time and attention. It’s not like running a script.

          AMD does seem to have improved tessellation performance in Hawaii with driver tweaks, which suggests the changes to Tonga’s tessellation units were less consequential than we first thought. That said, Tonga’s still faster in TessMark, and Fiji is faster yet, so the hardware changes do yield something.

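To illustrate the geometric-mean point, here is a minimal sketch with purely hypothetical per-game FPS numbers (not TR's data): a single high-FPS outlier dominates an arithmetic average but barely moves a geomean.

```python
from math import prod

def geomean(values):
    """Geometric mean: the nth root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical per-game average FPS for two cards; the last game is a
# high-FPS outlier where card A wins by a huge absolute margin.
card_a = [60, 55, 65, 300]
card_b = [65, 60, 70, 200]

for name, fps in (("A", card_a), ("B", card_b)):
    print(f"Card {name}: arithmetic mean {sum(fps) / len(fps):6.1f}, "
          f"geometric mean {geomean(fps):6.1f}")

# Arithmetic means: A ~120.0 vs B ~98.8 -- the outlier game dominates.
# Geometric means:  A ~89.6  vs B ~86.0 -- a much smaller gap, even though
# card B wins three of the four games.
```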
            • Anovoca
            • 4 years ago

            If I am interpreting the message between the lines here, I will happily accept your open position of new hardware game tester. Feel free to send me a fury x, some games, and a paycheck and I will get back to you within a week.

            • Klimax
            • 4 years ago

            Sounds like they finally used unused shaders to do tessellation (valid approach). Why they didn’t do it when Nvidia started to put even bigger emphasis on tessellation is beyond me.

        • Silus
        • 4 years ago

        So more games that favor AMD hardware than NVIDIA hardware…

        Definitely a conspiracy against AMD having so many AMD sponsored games…that’s why AMD loses in mos….oh wait

      • maxxcool
      • 4 years ago

      hmmmm

      USER STATISTICS
      Joined: Thu Apr 16, 2015 6:23 pm
      Last visited: Wed Jun 24, 2015 9:12 am
      Total posts: 6 (0.00% of all posts / 0.09 posts per day)
      Most active forum: Graphics (5 posts / 83.33% of user’s posts)
      Most active topic: AMD Radeon 300 Series Arriving To Retailers Today – R9 390X (4 posts / 66.67% of user’s posts)

        • snook
        • 4 years ago

        and?

          • maxxcool
          • 4 years ago

          Horse, water. /wait/.

            • snook
            • 4 years ago

            im not thirsty/poor horse trainer./wait/.

            • maxxcool
            • 4 years ago

            LOL … 🙂 +1 from me at least 🙂

            • snook
            • 4 years ago

            I did get what you meant, he is just AMDing the joint up and seems to be here for that alone.
            I'm not logged in for well over half of my views; basically I only log in when I want to post. good exchange, funny too. peace.

            there are likely similar nvidia fan accounts.

      • chuckula
      • 4 years ago

      [quote<]Why not test Dying light, [/quote<]
      HardOCP sure did. And it didn't end well for AMD:
      [url<]http://www.hardocp.com/article/2015/06/24/amd_radeon_r9_fury_x_video_card_review/6#.VYrVM3XYJhE[/url<]

      Choice quote:
      [quote<]Before we begin looking at this game it should be noted that this game definitely can exceed 4GB of VRAM at 1440p. We saw up to 5GB of usage when the VRAM capacity was there to support it. This game is bottlenecked on every 4GB video card at 1440p with maximum in-game settings.[/quote<]

      Are you ready to cry uncle and stop asking TR to run tests that will make the Fury look [i<]worse[/i<]?

      • chuckula
      • 4 years ago

      Oh and here’s Shadow of Mordor:
      [url<]http://www.bit-tech.net/hardware/graphics/2015/06/24/amd-radeon-r9-fury-x-review/8[/url<]

      Good news! At 4K resolution the average FPS of the Fury beats the GTX 980 Ti! By 1 FPS. Oh, and the minimum FPS for both cards is the same. Please ignore the other resolutions where the GTX 980 Ti is actually ahead by larger margins.

      So how does that result contradict TR's overall take on the Fury, that it's about on par with the GTX 980 Ti, exactly? Where's the big evil conspiracy? I'm looking for it!

        • l33t-g4m3r
        • 4 years ago

        I’d say Project Cars shouldn’t be on the list for various reasons, mostly due to PhysX being hard-coded into the engine for road handling. But that’s only ONE game.

        Looking at the overall performance though, it looks more like AMD botched the release driver than anything else. Considering how AMD has historically handled their drivers, and adding the comment about Mantle performance, I’d say that’s what the problem is. The driver works, but it hasn’t been optimized enough for the new architecture. Performance will probably improve by the time the Nano is released.

          • Klimax
          • 4 years ago

          PhysX is far from being the primary problem for AMD in this game. Their biggest problem there is the absence of DCL (deferred command list) support in the drivers. DCLs murder AMD’s cards, not PhysX. That's something that hasn't changed since the 7970 launch and Civilization V.

      • chuckula
      • 4 years ago

      Oh, let’s not forget Tomb “TressFX” Raider: Fury loses at 4K.

      [url<]http://www.pcgamer.com/amd-radeon-r9-fury-x-tested-not-quite-a-980-ti-killer/[/url<]

      Specific graph:
      [url<]http://e5c351ecddc2f880ef72-57d6ff1fc59ab172ec418789d348b0c1.r69.cf1.rackcdn.com/images/VNW11FEy09Rx.878x0.Z-Z96KYq.jpg[/url<]

      You should have listed "Hitman Absolution" instead, where Fury did register a win. However, going 1-for-2, even in Square Enix games that are generally more favorable to AMD in the first place, is not exactly showing destruction of the GTX 980 Ti. In fact, I'd say it shows... dare I say it... that the two cards are about on par. Where's that conspiracy??!?!?!?!?

        • maxxcool
        • 4 years ago

        Dutch-Bros …

      • Khali
      • 4 years ago

      TheSeekingOne, it seems to me that AMD let you down and now you’re lashing out at anyone who had the job of delivering the details on how they failed. Looks to me like Scott picked the most currently popular games and added a few older titles to the mix. Nothing sinister about it.

      Somehow I find this video to probably be how most diehard AMD fans are reacting today.
      (Warning: some cursing in the subtitles.)

      [url<]https://www.youtube.com/watch?v=xhVo7yPjQvE[/url<]

      • Silus
      • 4 years ago

      Sure, sure… but when you take the AMD glasses off, you might realize that Crysis 3 is an AMD-sponsored game:

      [url<]http://www.amd.com/en-us/markets/game/featured/all-games/crysis-3[/url<]

      But I guess that for an AMD fanboy, even then it's NVIDIA's fault.

    • Pantsu
    • 4 years ago

    Color me unimpressed. The only positive side I can see is the better stock cooler, but in everything else the 980 Ti beats the Fury X. Hopefully some driver updates and the move to Win 10 will benefit AMD, but otherwise the Fury X certainly doesn’t look like the card AMD needs to beat Nvidia. Oh well, better luck next time.

      • chuckula
      • 4 years ago

      AMD definitely wins the cooler war, as long as you have a case that can accept the radiator.
      In some compute workloads the huge shader counts should help AMD, but those will have to be single-precision workloads since the DP performance of these parts, while much better than the Titan X, is still less than half of the Tesla compute products that intentionally target double-precision workloads.

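For scale, a back-of-the-envelope sketch of those single- and double-precision figures, assuming the commonly quoted FP64 ratios (roughly 1/16 of FP32 for Fiji, 1/32 for GM200) and a Tesla K40 at about 1.43 TFLOPS FP64. These are ballpark assumptions, not measured results.

```python
# Back-of-the-envelope FLOPS arithmetic (assumed clocks and FP64 ratios).

def tflops_sp(shaders, clock_ghz):
    """Peak single-precision TFLOPS: shaders * 2 FLOPs per clock (FMA) * clock."""
    return shaders * 2 * clock_ghz / 1000

fury_x_sp = tflops_sp(4096, 1.05)    # ~8.6 TFLOPS FP32
titan_x_sp = tflops_sp(3072, 1.00)   # ~6.1 TFLOPS FP32

fury_x_dp = fury_x_sp / 16           # Fiji at ~1/16 rate -> ~0.54 TFLOPS FP64
titan_x_dp = titan_x_sp / 32         # GM200 at ~1/32 rate -> ~0.19 TFLOPS FP64
tesla_k40_dp = 1.43                  # commonly quoted FP64 figure for Tesla K40

print(f"Fury X    FP64: {fury_x_dp:.2f} TFLOPS")
print(f"Titan X   FP64: {titan_x_dp:.2f} TFLOPS")
print(f"Tesla K40 FP64: {tesla_k40_dp:.2f} TFLOPS")
```
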
    • Kretschmer
    • 4 years ago

    Someday AMD will realize that silicon R&D is pointless without top-notch drivers.

    Today is not that day.

      • chuckula
      • 4 years ago

      Is it tomorrow? Considering the GCN units on these cards are basically identical to the R9-290X, you’d think that AMD would have had the time to get those perfect drivers written by now.

        • Kretschmer
        • 4 years ago

        Improving drivers should be the best “R&D” bang for your buck. Driver optimization is cheaper than silicon R&D and improves perception of your entire platform instead of one or two halo products.

        Then again, you’d expect the Mantle BF4 to be “fully optimized”, and that’s not looking good for AMD, either.

        If only AMD took the $MM spent on Mantle and used it to forge better DX11/DX12 drivers…

          • swaaye
          • 4 years ago

          Yes well like chuckula said, Fury isn’t exactly a new architecture. There may not be room for improvement.

          • nanoflower
          • 4 years ago

          You would think so. Yet I’ve seen people suggest that AMD should just spend all of their time working on DX12 drivers for Windows 10 and not worry about improving their current drivers. Of course, that neglects the fact that every game currently on the market, and many coming out over the next year, can be impacted if they improve their DX9-11 drivers versus DX12. Sure, we might get a few games out this year that use DX12 and a few more next year, but unless some magic happens the majority of computers next year (those already in customer hands) will still be running XP/Vista/Win7/Win8/Win8.1.

        • DPete27
        • 4 years ago

        oh snap!

        • Action.de.Parsnip
        • 4 years ago

        You can do better than that. Drivers are complicated enough that hardware changes need disproportionately large software reworks. Throwing out the time immemorial assumption of a fast main bus and moving to a collection of slow access nodes has massive consequences for the driver. Like cascading all the way down through the stack massive. Access patterns, assumptions of latency and algorithms built around those need changing or throwing out. If you wanted to use that base to fork out a compute driver then it can’t be rushed through development either. Graphics drivers are about controlling the flow of data through programmable resources. That flow is now quite, quite different.

        • Klimax
        • 4 years ago

        Drivers need to account for different resource ratios (shaders/ROP/TU/bandwidth/…) and various features like color compression.

        Also, AMD has vast technical debt in those drivers, so I wouldn’t be surprised to see that their shader compiler is still emitting a suboptimal instruction stream, which won’t help even a powerful scheduler. (Unlike CPU workloads, GPU workloads are far more difficult to optimize in hardware.)

        ETA: Also still missing things like DCL. And the drivers still seem to have too much overhead compared to Nvidia’s.

      • steenss
      • 4 years ago

      It might help if Scott used Cat 15.15?

      • Anovoca
      • 4 years ago

      But that isn’t the AMD way. The AMD way would be to buy out an R&D firm, lay off all the employees in that branch, then lose money and foreclose on the assets.

      • omf
      • 4 years ago

      If they haven’t come to that conclusion after all these years, I can’t see how they ever will.

      • BlackStar
      • 4 years ago

      Hey at least my AMD drivers are not crashing on Witcher 3 every 30 minutes.

      In fact I’ve been playing for the last 3 weeks and I haven’t had a single crash. That’s more than can be said for Nvidia…

        • chuckula
        • 4 years ago

        Oh really. And I’m SURE you own several Nvidia GPUs that you use for testing Witcher 3 to give us this completely scientifically sound and unbiased conclusion.

          • Anovoca
          • 4 years ago

          Yes, in fact he was running both the Titan and the Fury X at the same time for a totally accurate side by side comparison, and let me tell you did those nVidia drivers keep throwing errors left and right!!!!!

            • chuckula
            • 4 years ago

            Well in that case, I’m sold!
            I’ll take 10 Furys [Furies? How do you pluralize that?] and an order of fries. TO GO!

          • BlackStar
          • 4 years ago

          [url<]http://www.gamefaqs.com/boards/699808-the-witcher-3-wild-hunt/72051464[/url<]
          There you go, knock yourself out.

          • homerdog
          • 4 years ago

          The game was crashing on me like crazy on my GTX970 until I turned off GeForce Experience.

        • rahulahl
        • 4 years ago

        I play Witcher 3 on a GTX 980. I only had crashes when trying to load a game, for a very brief period. That was later fixed in a Witcher 3 patch.

          • Airmantharp
          • 4 years ago

          Had some weird crashes when moving between zones early on, which triggers a map load, and one right before an in-game cutscene, but otherwise the game was stable and smooth on my 970 FTW.

          And I still want to play the game again to get a different ending (so many choices!), learn how to play that card game, and race horses!

        • Kretschmer
        • 4 years ago

        I recently purchased a 290X and have had to hard-reboot a few times. A driver reinstall resolved most of the issues, but there are still a few that I can reproduce quite easily (e.g. turning FreeSync on/off while a game is running “squishes” the screen into half size).

        I’d say it makes me nostalgic for enthusiast computing ten years ago, but these days I prefer to spend more time playing and less time tinkering.

          • Airmantharp
          • 4 years ago

          One of the main reasons I’m reluctant to pick up AMD hardware- which by itself is usually excellent- is dealing with their drivers. I’ve done that dance too many times.

          And here’s a tip: with AMD drivers, clear out everything periodically. It’s almost as if they ‘gunk up’ over time or something.

            • l33t-g4m3r
            • 4 years ago

            Funny, I had to do that with my 780, and it literally required a windows reinstall. Neither side is exempt from these issues.

            • Klimax
            • 4 years ago

            If you somehow end up there, why not DDU?

            • l33t-g4m3r
            • 4 years ago

            Because it completely fubar’d windows. That’s why. DDU is not a magic bullet. Uninstalling the previous driver would either bluescreen or cause windows to permanently not detect a video card, requiring a restore to the last good user state. I don’t remember all the details, but there wasn’t anything that would fix it, so I pretty much had to reinstall. Be kind of interesting how these scenarios would play out on a w10 install. $300 per activation?

            • Klimax
            • 4 years ago

            Pretty sure that was the effect of something else. You just don’t get those things with W7+. No BSOD on uninstall, nor can you cause Windows to permanently stop detecting the card. It simply won’t happen unless the system is already damaged beyond repair by something else (not sure by what).

            Uninstalling a driver simply won’t cause it. None of those things. (Well, at least not with NVidia drivers…)

            Note: DDU will disable some options, but it will warn you, so maybe it might have been operator error… (it also has an option to skip that)

      • fade2blac
      • 4 years ago

      So many comments appear to place a disproportionate amount of faith/blame in drivers. Based on Scott’s analysis, the greater concern should be the ‘MOAR COARS’ mentality of the Fiji architecture. I don’t think drivers can help a fundamentally disproportionate distribution of GPU resources. The thing has gobs of potential shader throughput and memory bandwidth… but are those the primary limiting factors? If a game is designed to stress things like the geometry/tessellation engine, Fiji may be no better than Hawaii. Add to that settings that stress ROPs and pixel throughput and you have another potential bottleneck. Even the texture format (8 vs 16 bit) can disproportionately affect throughput for AMD.

      I would really appreciate a more intuitive look at how each functional area relates to key performance metrics in games and settings. That way we can better quantify the asymmetrical warfare going on in this generation of GPUs.

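A rough sketch of what such a comparison might look like, built from widely published unit counts and approximate reference clocks. The figures are illustrative assumptions, not measurements, and real boost clocks will shift them.

```python
# Rough theoretical-throughput comparison (assumed reference clocks).

cards = {
    # name:      (shaders, ROPs, texture units, clock GHz, bandwidth GB/s)
    "Fury X":     (4096,   64,   256,           1.05,      512),
    "GTX 980 Ti": (2816,   96,   176,           1.00,      336),
}

print(f"{'Card':<12}{'TFLOPS':>8}{'Gpix/s':>8}{'Gtex/s':>8}{'GB/s':>7}")
for name, (shaders, rops, tmus, clk, bw) in cards.items():
    tflops = shaders * 2 * clk / 1000   # FMA counts as 2 FLOPs per lane per clock
    gpix = rops * clk                   # pixels written per second (billions)
    gtex = tmus * clk                   # texels filtered per second (billions)
    print(f"{name:<12}{tflops:>8.1f}{gpix:>8.0f}{gtex:>8.0f}{bw:>7.0f}")

# Fury X leads heavily in shader math and memory bandwidth but trails in
# pixel throughput, which is one plausible reason the extra shaders don't
# always show up in game benchmarks.
```
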
        • Westbrook348
        • 4 years ago

        I would love to see this type of analysis, too. I don’t doubt that AMD’s drivers need some optimizing. But I don’t know if that will be enough to overcome potential resource bottlenecks.

        • nanoflower
        • 4 years ago

        I have to wonder what AMD would have done if they weren’t so close to the size limits of the interposer. Would they have emphasized ROPs in addition to shaders, or do they feel that’s not as important? Clearly they’ve been thinking that emphasizing shaders is more important, but maybe we will see a more balanced product next year when they can fit more transistors in their GPU with the move to 16/14 nm.

      • kamikaziechameleon
      • 4 years ago

      I remember like 6 or so years ago when we started celebrating for a quick minute, “AMD makes good drivers now!” I bought a 4870… :^/

      I remember not too long ago hearing/reading, “Gosh, it’s good to see AMD on top of their driver game again; look at all the cool new features they’ve added!” I bought a 7950 GPU.

      NEVER AGAIN! I’ve been tricked twice!

      Never had darn driver issues on this scale with my Nvidia GPUs.

        • Westbrook348
        • 4 years ago

        Same here. 7950 was my last AMD card for a long long time.

    • TwoEars
    • 4 years ago

    On the bright side the market for $650 graphics cards is pretty small.

    The R9 Nano might be the real gem here.

      • raddude9
      • 4 years ago

      I’m looking forward to seeing a few reviews of the Nano, there’s a lot of people out there with small cases (and PSUs) that would appreciate a powerful GPU option.

    • chuckula
    • 4 years ago

    [quote<]If you've been paying attention over the preceding pages, you pretty much know the story told by our FPS value scatter. The Radeon R9 Fury X is a big advance over the last-gen R9 290X, [b<][i<]and it's a close match overall for the GeForce GTX 980 Ti.[/i<][/b<] However, the GeForce generally outperforms the Fury X across our suite of games—by under 10%, or four FPS, on average. That's massive progress from the red team, and it's a shame the Fury X's measurably superior shader array and prodigious memory bandwidth don't have a bigger payoff in today's games.[/quote<]

    People who hurled insults at me last week while I was dead-on right and they were full of it can post their apologies below. I won't publicly shame you... just yet.

      • sweatshopking
      • 4 years ago

      Chuck, stop taking it so seriously. Even if you are right and agreeing with the page we are going to insult you. IT’S CALLED LOVE.

      • techguy
      • 4 years ago

      I knew something was up when I saw the configurations used to achieve the leaked AMD internal results and the absolute silence about official performance during and after the reveal.

      Strange that the card is priced identically to the 980 Ti now that we have the facts.
