AMD’s Radeon RX 470 graphics card reviewed

If you’re reading this review for the first time, you may not be aware that our initial impressions and conclusions about the Radeon RX 470 were based on incorrect data in a number of the titles we tested, thanks to a problem with our test system. We investigated and explained those issues in a separate blog post. Be sure to read that post before continuing.

The short version of the story is that we had to retest Grand Theft Auto V, Hitman, and Rise of the Tomb Raider for this review, thanks to excessive DPC latency on our testing PC during our initial round of benching. As consolation, we added benches of the DirectX 12 versions of Hitman and Rise of the Tomb Raider to our results, and we also benched the OpenGL and Vulkan renderers available with Doom. This review has changed significantly from its initial form, and some parts of it won’t make sense if you’re not up to speed on the issues we noted and the steps we took to resolve them. The revised review continues below.

As I write these words, it’s just about dawn outside. Yesterday morning, a Radeon RX 470 showed up on my doorstep, and after endless technical difficulties, I finally got around to collecting data on the thing and four competing graphics cards at about 9 PM last night. Since that time, I’ve forgone sleep and consumed more caffeine than is probably healthy for the average horse, never mind the average sedentary hardware reviewer. That’s alright, however, since we have a Radeon RX 470 review to share with you for the trouble.

| | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock (int8/fp16) | SP TFLOPs | Stream processors | Memory path (bits) | Memory transfer rate (Gbps) | Memory bandwidth (GB/s) | Peak power draw |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R9 280X | — | 1000 | 32 | 128/64 | 4.1 | 2048 | 384 | 6 | 288 | 250W |
| RX 470 | 926 | 1206 | 32 | 128/64 | 4.9 | 2048 | 256 | 6.6 | 211 | 120W |
| RX 480 | 1120 | 1266 | 32 | 144/72 | 5.8 | 2304 | 256 | 7 | 224 | 150W |
| R9 290 | — | 947 | 64 | 160/80 | 4.8 | 2560 | 512 | 5 | 320 | 290W? |
| GTX 950 | 1024 | 1188 | 24 | 48/48 | 1.8 | 768 | 128 | 6.6 | 106 | 90W |
| GTX 960 | 1126 | 1178 | 32 | 64/64 | 2.4 | 1024 | 128 | 7.0 | 112 | 120W |
| GTX 970 | 1050 | 1178 | 56 | 104/104 | 3.9 | 2048 | 256 | 7.0 | 224 | 145W |

As a refresher, AMD revealed the full specs of the Radeon RX 470 a couple of days back. This card is the middle child between the e-sports-oriented RX 460 and the VR-ready RX 480. The RX 470’s Polaris 10 GPU has 2048 stream processors and 128 texture units, down slightly from the 2304 SPs and 144 texture units on the RX 480. AMD also hobbled the RX 470 a bit by clocking its 4GB of GDDR5 RAM at 6.6 GT/s, slightly slower than the RX 480’s guaranteed 7 GT/s speed floor and typical 8 GT/s pairing. The RX 470 maintains the full 32 ROPs and 256-bit path to memory of its slightly better-endowed sibling.
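Those headline spec-table numbers fall straight out of the unit counts and clocks. As a back-of-the-envelope sketch (our simplification, not AMD's official math), peak single-precision throughput is stream processors × 2 FLOPs (one fused multiply-add) per boost-clock cycle, and memory bandwidth is bus width in bytes × per-pin transfer rate:

```python
def peak_sp_tflops(stream_processors: int, boost_clock_mhz: int) -> float:
    """Each SP retires one fused multiply-add (2 FLOPs) per boost-clock cycle."""
    return stream_processors * 2 * boost_clock_mhz * 1e6 / 1e12

def memory_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Bus width in bytes times the per-pin transfer rate in GT/s."""
    return bus_width_bits / 8 * transfer_rate_gtps

print(round(peak_sp_tflops(2048, 1206), 1))   # RX 470: 4.9 TFLOPs
print(round(memory_bandwidth_gbs(256, 6.6)))  # RX 470: 211 GB/s
```

Running the RX 480's figures (2304 SPs at 1266 MHz, 256 bits at 7 GT/s) through the same arithmetic reproduces its 5.8 TFLOPs and 224 GB/s, too.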

If you’re thinking those changes aren’t that drastic compared to the RX 480, it may not surprise you that the Radeon RX 470 is priced at $179.99 and up—just $20 less than the RX 480 4GB reference card. As we’ll soon see, there’s just not that much air between this card and the 4GB Radeon RX 480 at $200 (assuming you can find one at that price). The RX 470 will need to perform quite well to keep buyers from simply jumping ship to the more powerful Polaris card.

The hot-rodded Radeon RX 470 graphics card we have on hand for testing comes courtesy of XFX. This “RS Black Edition” RX 470 claws back most of the cut-down Polaris chip’s clock-speed deficit with a 1256-MHz boost clock. XFX also used GDDR5 running at 7 GT/s instead of the reference 6.6 GT/s spec.

We didn’t have time to tear down this card to the bare PCB before publishing, but the RS cooler is certainly beefier than the reference blower AMD introduced on the RX 480. I count at least three copper heatpipes in there. XFX also outfits this card with a sturdy-feeling backplate and a handy warning LED above the PCIe power connector that glows red if the proper power cable isn’t plugged in. In normal operation, this LED burns blue. Should one of the RX 470 RS’ fans fail, XFX uses a clever installation system that lets users pop out the dead fan and replace it at home instead of sending the whole card in for service. 

The one roadblock this card might face is its price tag. We’re still waiting on official confirmation from AMD on this point, but Newegg already has a listing up for the RX 470 RS for an eyebrow-raising $220. At that price, we’re kind of baffled why someone would choose this card over the perfectly serviceable Radeon RX 480 reference card and the extra computing resources it offers. We’ve already got a reference RX 480 on hand for testing (albeit in 8GB, not 4GB, form), so let’s find out how the RX 470 stacks up.

 

Our testing methods

As always, we did our best to deliver clean numbers. We ran our graphics card tests on the following test system and with the following graphics cards:

| Processor | Intel Core i7-6700K |
| --- | --- |
| Motherboard | ASRock Z170 Extreme7+ |
| Chipset | Intel Z170 |
| Memory size | 16GB (2 DIMMs) |
| Memory type | Corsair Vengeance LPX DDR4 SDRAM at 3200 MT/s |
| Memory timings | 16-18-18-36 |
| Chipset drivers | Intel Management Engine 11.0.0.1155, Intel Rapid Storage Technology V 14.5.0.1081 |
| Audio | Integrated Z170/Realtek ALC1150 with Realtek 6.0.1.7525 drivers |
| Hard drive | OCZ Vector 180 480GB SATA 6Gbps |
| Power supply | Corsair RM850 |
| OS | Windows 10 Pro with Anniversary Update |

 

| | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| XFX Radeon RX 470 RS | Radeon Software 16.8.1 beta | — | 1256 | 1750 | 4096 |
| Radeon RX 480 | Radeon Software 16.8.1 beta | 1120 | 1266 | 2000 | 8192 |
| Sapphire Nitro Radeon R9 380X | Radeon Software 16.8.1 beta | 1228 | 1329 | 1753 | 4096 |
| MSI GeForce GTX 970 Gaming 4G | GeForce 368.81 | 1114 | 1253 | 1753 | 4096 |
| Gigabyte Windforce GTX 960 4GB | GeForce 368.81 | 1216 | 1279 | 1753 | 4096 |

For our “Inside the Second” benchmarking techniques, we now use a software utility called PresentMon to collect frame-time data from DirectX 11, DirectX 12, OpenGL, and Vulkan games alike. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups.
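For readers curious about the mechanics, the summary statistics on the pages ahead can be derived from a PresentMon-style list of per-frame render times. Here's a simplified sketch of the idea (not our actual analysis pipeline):

```python
import statistics

def summarize_run(frame_times_ms: list[float]) -> tuple[float, float]:
    """Return (average FPS, 99th-percentile frame time in ms) for one test run."""
    avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
    # The 99th-percentile frame time: 99% of frames rendered at least this quickly.
    p99_ms = statistics.quantiles(frame_times_ms, n=100)[98]
    return avg_fps, p99_ms

# A steady 16.7-ms cadence with a single 50-ms hitch barely dents the average...
times = [16.7] * 99 + [50.0]
avg_fps, p99_ms = summarize_run(times)
```

...which is exactly why the 99th-percentile number catches stutter that an FPS average glosses over.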

You’ll note that aside from the Radeon RX 480, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, so we think they provide the most relatable performance numbers for our reader base. To make things simple, when you see “GTX 970,” “GTX 960,” or “Radeon RX 470” in our results, just remember that we’re talking about custom cards, not reference designs.

With that exposition out of the way, let’s talk results.

 

Grand Theft Auto V
Grand Theft Auto V kicks off our newly revised tests at 1920×1080. We cranked almost every setting we could to “Very High,” and we even dialed in 2x MSAA plus FXAA to make Los Santos and San Andreas that much more free of jaggies. You can ignore the Vsync and full-screen settings in the screenshots below—they’re artifacts of the way we had to capture those clips. We’ve included buttons beneath some of the graphs to allow readers to get an impression of the changes in our testing data since our initial RX 470 review went live.




If you read our first iteration of this review, the numbers above represent a major change in performance for the RX 480. Originally, both cards were able to push about 60 FPS on our test system. Now that our test system isn’t suffering from insidious DPC latency issues, however, the fully-enabled Polaris chip distances itself considerably from its lesser cousin. Our 99th-percentile frame times are all much lower, too.

The somewhat furrier frame-time graph for the RX 470 versus the RX 480 in GTA V is one case where we might be observing the difference that 8GB of memory makes on the beefier Polaris card, as well. With GTA V‘s “extended distance scaling” maxed, the RX 470, the R9 380X, and the GTX 960 all appear to be swapping data in from main memory often, while the RX 480 appears to be able to keep all the assets it needs in its pool of GDDR5. I’m guessing that keeping that data local has a noticeable impact on smoothness and performance.



These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30-Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark we’d like to achieve (or surpass) for each and every frame.
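In code, the “time spent beyond X” idea reduces to accumulating the excess over the threshold for every slow frame. This is a rough sketch of the concept, not our actual tooling:

```python
def time_beyond_ms(frame_times_ms: list[float], threshold_ms: float) -> float:
    """Total milliseconds accumulated past the threshold across all slow frames."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Two hitchy frames in an otherwise-smooth run:
# accumulates (20 - 16.7) + (60 - 16.7) = 46.6 ms beyond the 60-FPS threshold
slow_time = time_beyond_ms([10.0, 20.0, 60.0], 16.7)
```

Counting only the excess, rather than the whole duration of each slow frame, keeps a run full of 17-ms frames from looking as bad as one with a handful of 50-ms spikes.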

In our new results, none of the cards we tested spend any time beyond the 50-ms mark working on tough frames, and only the GTX 960 spends a notable amount of time on frames that take more than 33 ms to render. At the critical 16.7-ms threshold this time around, however, the RX 480 practically never runs into tough frames. The GTX 970 is close behind, while the RX 470’s trouble with our test settings shows itself as a couple of seconds spent past 16.7 ms. We may have to dial back the “extended distance scaling” setting in GTA V for future tests.

 

Crysis 3

Though it’s an older game, Crysis 3 can still put the hurt on modern graphics cards. Aside from anti-aliasing, we cranked every setting at 1920×1080 to give the RX 470 and friends a hard workout. (Ignore the fullscreen setting in the screenshots below—that’s an artifact of the way we had to capture them.)


In this test, the RX 480 and the RX 470 are within a hair’s breadth of one another in our results, and their frame-time plots are nearly indistinguishable from one another. The GTX 970 rules the roost here, though, while the R9 380X and the GTX 960 4GB bring up the rear.


Just like peas in a pod, neither the RX 470 nor the RX 480 spends any time mired past 50 ms or 33 ms, and they spend about the same amount of time chugging away on tough frames that drop the frame rate below 60 FPS. Moving on.

 

Rise of the Tomb Raider (DX11)
Rise of the Tomb Raider is a demanding and gorgeous game. We ticked the “high” preset at 1920×1080 and ran through a portion of the “Soviet Installation” level that’s thick with flames, particles, and lighting effects to see how the RX 470 handles Lara Croft’s latest adventures.




RoTR has always favored Nvidia cards in our tests, and even in our revised results, the GTX 970 takes a decisive victory in both our 99th-percentile and average-FPS numbers. The RX 480 puts more light between itself and the RX 470 than it did in our poisoned initial set of test results, and it improves its 99th-percentile frame-time numbers, too.



The first major signs of “badness” for these cards running Rise of the Tomb Raider in DX11 mode crop up at the 16.7-ms threshold in our new set of data. In keeping with its high frame rate and low 99th-percentile frame time, the GTX 970 spends an imperceptible amount of time working on some difficult frames past the 16.7-ms mark. That’s great performance, and we can conclude the card is delivering a solid 60 FPS or better for the vast majority of our one-minute testing period.

The Radeon RX 480 spends about a second working on tough frames past this threshold, while the RX 470 needs about three. That’s still much better performance than the cards we formerly might have recommended in this price class, however. The R9 380X and GTX 960 fare much worse.

 

Rise of the Tomb Raider (DX12)

Since we already needed to go back and retest Rise of the Tomb Raider, we decided to deploy our brand-new Inside the Second methods (as seen in our Radeon RX 460 review) to gather some DX12 data with these more powerful cards for the heck of it. Aside from flipping on RoTR‘s DX12 mode, we kept settings identical to our DirectX 11 testing.




The switch to the DX12 API has practically no effect on these powerful graphics cards. The Radeon R9 380X does benefit from an improvement in its 99th-percentile frame time, but that figure itself isn’t all that impressive for the Tonga part.


In our measures of “badness,” the RX 470 spends almost five seconds of our one-minute test period working on frames that take longer than 16.7 ms to complete. That may be enough time spent churning to produce noticeable slowdowns and roughness in the course of Lara Croft’s adventures. The RX 480 slashes that figure by 80%, so perhaps the RX 470’s issues stem from the fact it’s a fresh card with immature drivers. We’ll have to see whether that’s the case with time.

 

The Witcher 3

To test the latest adventures of Geralt of Rivia, we maxed out every graphics setting in The Witcher 3‘s menus aside from Nvidia’s HairWorks tech. We then followed a winding road through a part of the Northern Kingdoms with plenty of water, vegetation, and non-player characters to spare.


Just a single frame per second separates the RX 470 and the RX 480 in this test, and the cards provide practically identical performance in our 99th-percentile frame time metric.


Our measures of “badness” show that the RX 470 has a little more trouble than the RX 480 if we use the 16.7-ms threshold as our yardstick. The GTX 970 remains the most fluid among the cards we tested, but both Polaris Radeons and the GTX 970 provide a plenty playable Witcher experience.

 

Hitman (DX11)
Hitman is one of the hardest games to run smoothly among those we’ve tested of late. Let’s see how the RX 470 handles it maxed out at 1920×1080. Just as we did with Rise of the Tomb Raider, we collected and crunched DirectX 12 data for Hitman during the course of revising our results, too.




If Rise of the Tomb Raider tends to favor Nvidia cards, here’s a win for the red team. The Radeon RX 480 handily beats out the GTX 970 in our average frame rate results, and the RX 470 takes a slight lead over the Maxwell card. Both Polaris contenders are neck-and-neck with that card in our 99th-percentile frame-time results, too.



Our measures of “badness” show that both Radeons have little trouble maintaining a solid 60 FPS or better in this game. The GTX 970 is equally capable. Pour one out for the GTX 960, though. Perhaps Nvidia has a GTX 1050 waiting in the wings to avenge it.

 

Hitman (DX12)

Here are some next-gen API results for Hitman using the same graphics settings we chose for DX11. AMD prominently features Hitman‘s DX12 numbers in its marketing materials for Polaris cards, so we were curious to see how the numbers would shake out for the Radeons when we flipped the API switch over to Microsoft’s latest.




Moving to DX12 with Hitman gives both of our Polaris cards a minor performance boost, along with an accompanying minor decrease in 99th-percentile frame times. The Radeon R9 380X also benefits from the move. Both GeForces perform worse in DX12 mode than they do in DX11, though, so it’s probably best to stick with the older API if you’re a Maxwell owner.


 


None of our cards except for the GeForce GTX 960 have any trouble running Hitman at a solid 60 FPS. Let’s see how Doom treats our test subjects.

 

Doom (OpenGL)

To get an idea of how id Software’s latest Doom runs on our test cards, we set the game to use the Ultra preset with 16x anisotropic filtering and 8x TSSAA. We then put the chainsaw to some demons guarding the yellow keycard in the early stages of Doom‘s Foundry level. Like we did with the rest of our tests, we ran Doom at 1920×1080. 


As we discovered in our Radeon RX 460 review, Doom‘s OpenGL renderer is not kind to Radeons, and the story doesn’t change with Polaris 10. The RX 470 and RX 480 both deliver admirable average frame rates, but they trail the super-fast, super-smooth GTX 970 in both average FPS and 99th-percentile frame time numbers.


No card we tested with Doom‘s OpenGL renderer spent any time past 50 ms or 33 ms, a testament to how smooth and well-optimized Doom is in general. In fact, for the GTX 970, we have to click all the way over to our 8.3-ms chart to see any truly troublesome frames. For perspective, those numbers mean that the GTX 970 spends just 11 seconds of our one-minute test run working on frames that drop its average below 120 FPS.

The Polaris Radeons aren’t quite that good with OpenGL, but they do keep the time spent past 16.7 ms to a minimum. Doom‘s Ultra settings even let the GTX 960 shine a bit, for once, but the R9 380X spends a significant amount of time working on frames that drop its FPS average below 60. That time translates into noticeable slowdowns during gameplay.

Next, let’s see what happens when we unleash the fury of Vulkan in Doom.

 

Doom (Vulkan)

As with our other next-gen API tests, we kept Doom‘s graphics settings identical to those for OpenGL. The only change we made was to turn on Vulkan.




Yow. In our average FPS measure, the Radeon RX 480 gets a 45% boost from the move to Vulkan at 1920×1080, and it comes darn close to delivering 99% of its frames at 90 FPS or better. Recall that we’re running the game at the Ultra preset with 8x TSSAA.

The RX 470 isn’t quite as fast, but it beats the GeForce GTX 970 in performance potential (as measured by average FPS) and ties it for smoothness (as measured by our 99th-percentile frame-time metric). The GeForces end up just a bit worse off than they were with OpenGL, so there’s no real advantage to enabling this next-gen API for Maxwell GPUs, at least. Seems we need to get one of them newfangled GTX 1060s for our test bench to conduct a fair fight.

After a second round of retesting, the Radeon R9 380X also got an appreciable performance boost from Vulkan. For some reason, our first test run pegged the card at 59 FPS, while a second test with the Radeon Software 16.8.2 hotfix driver raised that figure to a quite-respectable 87 FPS. The card’s 99th-percentile frame time also improved from a troubling 28.4 ms to a much happier 15.7 ms. To keep the playing field equal, we re-ran every Radeon card above using the same drivers, though neither the RX 470 nor the RX 480 demonstrated any meaningful changes in performance from that update.



After our re-testing of the Radeon cards above, none of the cards spend any time beyond the critical 50-ms or 33.3-ms thresholds, and only the GTX 960 spends a noticeable amount of time working on frames that drop its average under 60.

To see any meaningful struggles from our top three cards, we have to click over to that 8.3-ms plot again. The Radeon RX 480 8GB card comes darn close to delivering a perfect 120-FPS experience at 1080p with Doom‘s Ultra settings. It spends just three seconds of our one-minute test run on frames that take more than 8.3 ms to render, and that translates into an amazing Doom experience.

The RX 470 spends about three times as much time working on tough frames past 8.3 ms, so it’s in a close race with the GTX 970. The Radeon R9 380X does slightly worse still, while the GTX 960 brings up the rear by a wide margin.

 

Power consumption

Let’s take a look now at the effects the Radeon RX 470 has on our system power draw. Our “under load” tests aren’t conducted in an absolute peak power draw scenario. Instead, we run Crysis 3 on each card to show us power draw with a typical gaming workload.

Here’s another spot where the RX 470 isn’t much different from the RX 480. AMD specifies a board power of 120W for this card, but even without more precise testing equipment, we can see that the overclocked XFX RX 470 is drawing about as much power as the reference RX 480 8GB board. No matter how you slice it, the RX 470 lets our system draw less power than it does with an R9 380X installed, and it delivers much greater performance while doing it. Not bad. Meanwhile, our factory-boosted GeForce GTX 970 consumes the most power of the lot.

Noise levels

All of these cards save the Radeon RX 480 can shut off their fans at idle, making the roughly 32 dBA produced by the rest of my test system the lower limit to quietness in this test. Crank up Crysis 3 on the RX 470, though, and it becomes the loudest card among those we tested. Even the Radeon RX 480 reference card is quieter. The XFX card isn’t unpleasant-sounding, at least. Its twin fans mostly sound like moving air instead of an unpleasant tonal or growly noise.

GPU temperatures

The XFX cooler keeps the RX 470 the coolest of the Radeon cards we have on hand to test, so at least its aggressive fan profile has a good payoff. We did discover an interesting behavior with this card, however. Even with its 1256-MHz boost clock specification, this RX 470 seemed more content to hang out in the 1200-MHz zone after being loaded with Crysis 3 for a while, rather than boosting up to or near its on-paper maximum. Given how cool this card runs, we expected higher clock speeds.

Strangely, the RX 480 reference card seemed plenty able to run at or near its specified 1266-MHz maximum in our informal testing. We’re guessing AMD’s reference board engineers were willing to let the Polaris 10 GPU run hotter in exchange for the boostier behavior, given its 80° C load temperature. Perhaps some tuning in Radeon WattMan (née OverDrive) could let us extract more performance from XFX’s custom RX 470. We’ll have to look into this matter further.

 

Conclusions

Since we’ve completely revamped a large portion of our test data for the RX 470 using the same methods we introduced with the Radeon RX 460, we can also crunch our price-to-performance data three different ways. Our value scatters now let you see each card’s price-to-performance using DirectX 11 and OpenGL data only, DirectX 12 and Vulkan data only, or a “best API” chart that accounts for the best result each card delivered using either current-gen or next-gen APIs.


Let’s start off with average FPS per dollar. In our initial review, we concluded that the Radeon RX 470 delivered RX 480-class performance for similar or higher prices. It’s certainly not on par with the bigger Polaris chip in our results any more, though. The RX 480 8GB card turns in especially strong showings when it’s not shackled by DPC latency on our test rig, and it takes a commanding lead in our “best API” chart. That result doesn’t help the value proposition of the $230 XFX Radeon RX 470 we were sent for testing, though. For just $10 more, the RX 480 8GB reference card delivers about 10 more FPS, plus twice the RAM. We initially advised system builders to step up to the RX 480 or seek out a cheaper RX 470 if possible, and our updated test data just puts an exclamation point on that advice.

Next, let’s look at each card’s delivered performance per dollar using our advanced 99th-percentile frame time metric. To make this measure work, we’ve converted the 99th-percentile frame-time numbers for each card into FPS.
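The conversion itself is just the reciprocal of the frame time, scaled to seconds:

```python
def frame_time_to_fps(frame_time_ms: float) -> float:
    """An X-ms frame time corresponds to 1000/X frames per second."""
    return 1000.0 / frame_time_ms

print(round(frame_time_to_fps(16.7)))  # ~60 FPS
print(round(frame_time_to_fps(8.3)))   # ~120 FPS
```

Expressing the 99th-percentile figure as FPS lets it share an axis with average frame rates in the scatter plots below, where higher is better for both metrics.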


Our 99th-percentile frame-time numbers still tell us more or less the same story as they did in our initial review. The XFX RX 470 RS card is quite good at smooth frame delivery, and it would be an amazing value at AMD’s $180 suggested price. At $230, its value proposition is just so-so. The problem, as we’ve already noted, is that an extra $10 nets you a big leap in smoothness from the RX 480 8GB card, while keeping $30 in your pocket gets you an RX 480 4GB card, if you can find one at that price. Neither of those comparisons helps the XFX RX 470 RS’ value proposition much.

In isolation, the XFX card we tested is a nice product, but it’s not a perfect representative of the RX 470 breed. While the card boasts solid core and memory clock boosts from the factory, its fan profile favors performance over politeness, and it seems to have trouble keeping up its 1256 MHz boost clock all the time. Asus, Gigabyte, Sapphire, PowerColor, and MSI all have custom RX 470s in the works, so we’ll have to see how those cards measure up in due course. We’re especially curious whether any of them will hit AMD’s suggested price point, and how they’ll get there.

At the end of the day, the RX 470 is a great performer with an awkward price tag, at least going by the card we tested. It brings many of the same benefits we saw from the RX 480—including smoother frame delivery than its forebears—to a potentially wider audience. The problem for AMD, if it can be called that, is that it has too many good products huddled near similar price points right now. If the company can draw a brighter line between the RX 470 and the RX 480 with pricing as its pen, the lesser Polaris may yet find a home in the market. Our testing results still suggest buyers will find a better value in the Radeon RX 480, however, and that’s the card we’d be after were we in the market for a Radeon today.

Comments closed
    • flip-mode
    • 3 years ago

    Looking at the graph at the bottom of this page:
    [url<]https://techreport.com/review/30473/amd-radeon-rx-470-graphics-card-reviewed/6[/url<] for time beyond 50ms and time beyond 33.3ms the RX 480 does WORSE than the RX 470? Is that right?

      • Jeff Kampman
      • 3 years ago

      The RX 480 has one spike in its frame-time plot that led to it showing up in the beyond-50-ms and beyond-33-ms graphs as it did. I didn’t call it out because it’s atypical of the card’s performance in the rest of the bench run. The “beyond 16.7 ms” result on that page is much more important.

    • smilingcrow
    • 3 years ago

    Considering that the recent Steam survey showed that the majority of gamers aren’t using Windows 10 even though it had been a free upgrade for about 11 months at the time of the survey the “Best API” option in the scatter plots is invalid for the majority.
    You’d need to have Best API options for Windows 10 and for older versions.
    In practice if that’s one option too many just having two options for DX11 v DX12 with Vulcan always included would make more sense maybe!

      • Jeff Kampman
      • 3 years ago

      Most TR readers seem to have taken the plunge: [url<]https://techreport.com/news/30459/poll-did-you-take-microsoft-up-on-a-free-windows-10-upgrade[/url<]

        • smilingcrow
        • 3 years ago

        A Steam survey will address a much wider user base than a tech review/news site.

        The poll doesn’t show how many people installed 10 and then reverted back to a previous version which I have noticed quite a lot recently especially after the anniversary update which annoyed many.

        Hard core gamers are more likely to suck up the Win10 pain so they can enjoy their hobby I imagine.
        So for those expending a lot of cash on gaming the percentage with Win10 will be the majority.

          • Jeff Kampman
          • 3 years ago

          I write for our reader base 🙂

            • smilingcrow
            • 3 years ago

            Well two thirds of them anyway and hopefully you have more than 4,000 readers! 🙂
            I wonder how representative the 4,000 people who completed the poll is of your overall user base?
            Do you publish figures for how many individual readers you have per month?

            • shank15217
            • 3 years ago

            Does your point apply for Vulcan? Linux? This is a forward looking tech site, if you’re not buying expensive video cards for tomorrow’s game using tomorrow’s API then you’re not buying computer hardware correctly.

    • bwcbiz
    • 3 years ago

    Is it possible to make these value charts dynamic to reflect changing prices (or at least a dynamic version that can be selected)? Right now the RX 470 has some models available at $200. This isn’t as good as $180, but it’s a fair bit better than $230.

    Otherwise, I’d like to see an update to the value/conclusions in some form once the early-adopter penalty has been removed from the pricing on all of these new chip series.

      • MDBT
      • 3 years ago

      GTX 970 prices have fallen $50-100 below their launch price as they clear out old maxwell stock. With such variations in pricing it’s hard to generate a chart that is going to be valid for all applications.

        • SHOES
        • 3 years ago

        furthermore it is not difficult to make an educated decision based on the information shown here. Typically your looking between 2 or three cards and you have one side of the plot that wont change much. Just adjust for the price..

    • Spunjji
    • 3 years ago

    Haha. Damn. I bought an RX 470 thinking it was closer to the 480 than it is! Ah well, it was a fair bit cheaper too. So it goes.

      • Jeff Kampman
      • 3 years ago

      Very sorry to have misled you.

    • beck2448
    • 3 years ago

    Strange review including cards no longer in production. No pascal.

    • gamoniac
    • 3 years ago

    Take it easy, Jeff. The review is awesome, but we want you to stay health and continue write good reviews : )

    • SoundFX09
    • 3 years ago

    Overall, It’s a Great Card, No Doubt about it.

    Now, my guess as to why the 470 is priced higher than expected is due to two things:

    – Low Supplies and Availability of the RX 470.
    – Extreme Demand for Entry-Level and Mid-Range GPU’s.

    Give it another Month or Two, and then we will see the 470 drop in price and become an appealing option. Until then, I get why people wouldn’t go for this card, but I still think it’s worth getting nonetheless if you’re sticking to a 1080p Monitor and want 60 FPS Performance.

    • derFunkenstein
    • 3 years ago

    Not that anybody has a financial incentive to do so, but I wonder how much of this frame-smoothing goodness AMD could backport to Tonga, Hawaii, and Fiji cards. The 380X makes the graphs a mess (in Hitman, anyway) and that can’t be good for the experience, either.

      • Jeff Kampman
      • 3 years ago

      My understanding is that these improvements are at least partway architectural, so it’s not something that can be backported.

        • tipoo
        • 3 years ago

        Only the newest architecture features a Wasson Unit.

          • Meadows
          • 3 years ago

          How many Wassons per second is that?

    • Jeff Kampman
    • 3 years ago

    Another content update: added videos of our gameplay runs and screenshots of test settings to each page.

      • derFunkenstein
      • 3 years ago

      when do you sleep

    • techguy
    • 3 years ago

    Serious thumbs up for the effort, Jeff.

    Gotta say, this card makes no sense, nor does AMD’s product stack ATM.

    • anotherengineer
    • 3 years ago

    Still not enough reviews out for multiple cards. Would like to see more noise comparisons between cards and power usage between 4GB and 8GB versions.

    The MSI gaming has 2 hdmi and 2 DP, was hoping it would have 3 DP.

    • OneShotOneKill
    • 3 years ago

    This card is perfect for pre-built “Gaming” systems and will be a $300 option.

    • selfnoise
    • 3 years ago

    So this card was announced and nowinstock.net has listings of custom cards shipping now. The RX480 remains steadfastly unavailable with only a tiny trickle of reference cards reported at Amazon. If the 470 is a binned 480.. what’s going on here? Are we seeing the result of really poor 480 yields?

      • raddude9
      • 3 years ago

      By that logic, intel must have the worst process going, they have dozens of non-flagship chips that obviously didn’t make the grade.

      Seriously though, trying to speculate about the yield of a chip based on the how many cards some shops have one day after launch is a massive stretch. But then, you probably know that already.

      • smilingcrow
      • 3 years ago

      With the 470 being so close performance-wise to the 480, it does seem odd to release them so close together.
      Looking at the initial power issues, lack of overclocking headroom, very high idle power draw, and seemingly low supply, it does seem fairly likely that the GPU had at least some issues which have impacted the forms in which it has been released.
      I don’t believe that AMD intended to release the 470 and 480 in this way, but that their hand has been forced.
      It still doesn’t make sense to me how they’ve handled it, even taking that into account.

    • travbrad
    • 3 years ago

    Glad to finally see some 1080p benchmarks, since that’s realistically what you should be using a 470/480 for anyway. It also demonstrates that resolution can make a big difference. In the 480 review at 1440p it was pretty much neck-and-neck with a 970, whereas here at 1080p the 970 appears to have a decent advantage. I wonder if the 1060 would be similar?

    • Bensam123
    • 3 years ago

    Interesting card; I wonder why it’s so close to the 480 in terms of stream processors. A ~10% difference makes it hard to justify the lower cost when it’s so close to the 480 4GB (not sure why anyone would ever buy the 8GB, or even compare cards to the 8GB).

    Newegg automatically adjusts prices when demand is high; perhaps there should be a post in two weeks reassessing the price point of this card.

    Unless you can’t get a 4GB 480, this looks like a pretty good alternative at $180.

      • derFunkenstein
      • 3 years ago

      Sub-$200 I agree. This becomes a GTX 970 vs 980 comparison where the cheaper card was a much better value.

        • Bensam123
        • 3 years ago

        MSRP $20 cheaper, for a slightly worse than 12% performance loss… Pretty much the same price/performance ratio as the 4GB 480.
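        That claim checks out arithmetically; a rough sketch, assuming the ~12% performance deficit and the $180/$200 MSRPs quoted in this thread (the function name is just illustrative):

```python
# Quick check of the price/performance claim above, using the illustrative
# numbers from this thread: a $180 RX 470 at ~88% of a $200 RX 480 4GB.
def perf_per_dollar(relative_perf, price_usd):
    """Relative performance units per dollar spent."""
    return relative_perf / price_usd

rx470 = perf_per_dollar(0.88, 180)
rx480 = perf_per_dollar(1.00, 200)
# The two cards land within a couple percent of each other
print(f"RX 470: {rx470:.5f}/$, RX 480: {rx480:.5f}/$")
```

        In other words, with those numbers the two cards deliver nearly identical performance per dollar, which is the point being made here.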

          • derFunkenstein
          • 3 years ago

          Which isn’t great. If the price/performance ratio is the same and there’s a ~10% difference in price, just jump up to the faster model. Price/performance always kind of trends in a negative fashion as price goes up, because people who want the best are usually willing to pay for it. Since the RX 480 can’t go up without being a loser compared to the GTX 1060, it seems likely the RX 470 will have to come down. As soon as there’s a GTX 1050 that comes in cheaper and gives the 470 a run for its money, that price drop will happen.

            • Bensam123
            • 3 years ago

            If people only wanted the best, they would only be buying Titans. That’s not how the market works. It’s more ‘I have $200 to spend; what’s the best GPU I can get for it?’

            Not sure how a $200 graphics card that’s slightly slower in current API titles and faster in next-gen API titles is a loser compared to a $250 card… Unless you’re talking about the 8GB model, which isn’t even an argument, and no one should be buying that over the 4GB model.

            • travbrad
            • 3 years ago

            $20 is just such a small difference though. It seems like a very small niche of people who would have $180 to spend on a graphics card but not $200. I know it’s probably all about binning but IMO 470 will have to be a bit cheaper to be a compelling option.

            Either that or it will just be in limited supply so they figured they can sell the few 470s they are able to make and drop prices later when supply catches up.

    • insulin_junkie72
    • 3 years ago

    After seeing release-day pricing on the RX 470 (even that PowerColor NewEgg deal only lasted a bit before it went back up to Powercolor’s MSRP), that Sapphire Nitro RX 460 listing Amazon had for $150 actually doesn’t seem so crazy anymore.

    • Shobai
    • 3 years ago

    Jeff, I still have a couple of quibbles about your Frame Number graphs, but they are much improved. Thanks for working on that!

    On the scatter plots on the conclusions page, the GTX 970 has a red coloured mark on the first graph rather than a green one.

    Thanks for the review!

      • Shobai
      • 3 years ago

      Woohoo, the colour has been fixed! I must say, it’s nice to know that your feedback is valued

    • Jeff Kampman
    • 3 years ago

    FYI, folks, I added a page with our power, noise, and temperature numbers: [url<]https://techreport.com/review/30473/amd-radeon-rx-470-graphics-card-reviewed/8[/url<]

      • raddude9
      • 3 years ago

      Any idea what got the RX470 idle power numbers down so much? They are now the best of all the cards tested and a full 9W less than the RX 480. Is this down to the non-reference card, or a driver change or something else?

    • DPete27
    • 3 years ago

    Jeff. Word on the street was that RX480s went out to reviewers with an included BIOS to run the card at 4GB VRAM. Is that not the case?

    • Rza79
    • 3 years ago

    Me again with a Computerbase.de link. Sorry!

    Asus, PowerColor and Sapphire cards tested.

    [url<]https://www.computerbase.de/2016-08/radeon-rx-470-test/3/#abschnitt_benchmarks_in_1920__1080[/url<]

    • wiak
    • 3 years ago

    Great review as always, worth waiting for. Just wish there were more DX12/Vulkan benches.

    • crabjokeman
    • 3 years ago

    Nice touch on the glowing red power connector. What a wonderful, subtle way for an engineer to say, “No, stupid, that really doesn’t go there…”

    • dikowexeyu
    • 3 years ago

    Best value ever, but too slow to matter.

      • Krogoth
      • 3 years ago

      Not for the targeted demographic.

      It blows away that old-fangled 7850/7870 rebrand and the 660 while embarrassing the 960.

        • Pancake
        • 3 years ago

        But the competition will move. As surely as there exists the 1060 there will be a die harvested 1050 with a few less functional units, slower clocks etc.

        Then, POW! RIGHT IN THE KISSER!

          • tipoo
          • 3 years ago

          You can say that about every card out. Of course things will always move.

      • raddude9
      • 3 years ago

      Most gamers have 1080p screens and buy sub-$200 GPUs. This is not only the best value card, it’s the only new card (for now, at least) in the most popular price bracket.

      • flip-mode
      • 3 years ago

      It is pretty damn far from slow. It is stepping on the heels of a GTX 970.

      • reever
      • 3 years ago

      Someone sure hasn’t been gaming for two decades. Back in my day you had to buy the tippy-top of the line to play literally anything at high settings, high resolution, and with AA/AF on. Now you can buy a midrange card for <$200 and it’ll play everything out of the box, AND you can jack up nearly every setting.

        • Pancake
        • 3 years ago

        High resolution is relative. I game at 2560×1600. 1920×1080 is chunky like a monkey. And now we have the tight sweetness of 4K. You need tippy top for that.

          • Krogoth
          • 3 years ago

          Except there’s currently no 4K monitor on the market with a high framerate. They’re all 60Hz units, but this is mostly due to the bandwidth limitations of DisplayPort 1.2 and HDMI 2.0 being the current baseline for such monitors.

            • Voldenuit
            • 3 years ago

            ASUS showed off a 144 Hz 4K monitor at Computex this year, it’s not out yet, and it probably won’t be cheap, and you’re going to need one (or two) helluva GPU(s) to run it that fast.

          • Ifalna
          • 3 years ago

          Is 4K even possible in games like Witcher III with all details up?
          I thought you need even more than a 1080 for that.

            • Pancake
            • 3 years ago

            Well, that’s kinda my point. There will always be an appetite for moar powah.

    • Chrispy_
    • 3 years ago

    The THG review is in with the fancy power consumption measurements and it’s really not very good news 🙁

    141-144W when gaming, not 120W.

    Also, the boost of 1206MHz is way off – it almost never hit that speed – their frequency graph shows it running somewhere between 1050 and 1150 MHz most of the time.

      • AnotherReader
      • 3 years ago

      [url=http://www.computerbase.de/2016-08/radeon-rx-470-test/3/#abschnitt_so_takten_die_asus_powercolor_und_sapphire<]Other designs hit much higher clock speeds[/url<]; this seems to be on Asus, not the 470.

        • Jeff Kampman
        • 3 years ago

        FWIW, I’m investigating boost clock behavior with our card and it likes to hang out at about 1200 MHz, well short of the 1256 advertised boost clock.

          • Chrispy_
          • 3 years ago

          With no reference design I guess it’s down to how much power the vendors feed it and what your luck is like with the silicon lottery.

    • Goty
    • 3 years ago

    Looks like I was pretty close to the mark when I postulated about the shader and ROP count of the 470 as it relates to its performance vs the 480. I don’t know that I’d buy these cards over a reference 480 for the same price, but for about $20 less I think it would be a slam dunk.

    • maroon1
    • 3 years ago

    You should have used stock RX470, not an overclocked one

      • chuckula
      • 3 years ago

      In TR’s defense, nobody seems to have received a stock RX 470 for review.

      Having said that, it should also be noted that if no stock RX 470s are really out and about for review, then the prices of the RX 470 should be (and are, in this review) revised upward to match real-world product availability, instead of playing the low-official-price-but-inflated-real-price game we’ve seen recently.

      • Goty
      • 3 years ago

      There probably aren’t any reference RX470s (much like what NVIDIA was doing with cards like the 960 before the “Founders Edition” IIRC), so this pretty much [b<]is[/b<] a stock card.

      • RAGEPRO
      • 3 years ago

      TR has very little control over what it receives for review. 🙂

      • Jeff Kampman
      • 3 years ago

      As Zak notes above, this XFX card is what AMD sent me ahead of the official launch, so it’s what we reviewed.

      • derFunkenstein
      • 3 years ago

      Living up to your name.

    • codedivine
    • 3 years ago

    Any word on the fp64 capabilities of the card?

      • tipoo
      • 3 years ago

      1/16th of the SP rate. Same ratio as the 480, so expect similar performance, scaled to what you saw for graphics in this review.

      [url<]http://www.anandtech.com/show/10530/amd-announces-radeon-rx-470-rx-460-specifications-shipping-in-early-august[/url<]
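      For reference, the peak numbers are easy to work out from the specs table at the top of the article; a quick sketch, assuming the usual 2 ops/clock for FMA and the 1/16 DP:SP ratio per the AnandTech link (the function name is just illustrative):

```python
# Peak shader throughput from published specs: SPs x 2 ops/clock x clock.
def peak_tflops(stream_processors, clock_mhz, ops_per_clock=2):
    return stream_processors * ops_per_clock * clock_mhz * 1e6 / 1e12

sp_tflops = peak_tflops(2048, 1206)   # RX 470 at boost: ~4.94 TFLOPS FP32
fp64_tflops = sp_tflops / 16          # 1/16 DP:SP ratio: ~0.31 TFLOPS FP64
print(f"FP32: {sp_tflops:.2f} TFLOPS, FP64: {fp64_tflops:.2f} TFLOPS")
```

      That lines up with the ~4.9 SP TFLOPS figure in the specs table above.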

    • ronch
    • 3 years ago

    Waitaminit.. I thought the RX 480 pretty much matched the 970 across the board, but here, except for Hitman it’s trailing it. What happened?

      • Jeff Kampman
      • 3 years ago

      Different resolutions? This is the first time we’ve tested a lot of these cards at 1920×1080.

        • ronch
        • 3 years ago

        So it’s starting to feel weak in the knees?

      • synthtel2
      • 3 years ago

      (I got curious and did a whole bunch of analysis. Sorry for the wall of text.) (Jeff: no worries about the lack of settings info in some places, since it sounds like this was a rush review. I would generally prefer the usual where you take a bit more time and do it right, though. 😉 )

      In addition to the resolution differences, the 970 used this time is very slightly faster (nominal boost clock of 1253 instead of 1216). This testing also uses a 6700K instead of a 5960X, which shouldn’t be a big deal, but seems worth a note. There are also a few more per-game differences.

      GTA V (was [b<]64-66 / 19.0-18.1[/b<], now [b<]60-72 / 24.2-20.6[/b<]): MSAA is at 2x instead of off, and the description is lacking – a few things might be at slightly different settings (I don’t have the game to see what’s what). This says "We cranked almost every setting we could to ‘Very High,’" while the 480 review has shadows / AO at high and grass / post at ultra. Green team absolutely does like low res + MSAA better while red team would rather just have the higher res, but it looks like both cards are having a tougher time here (check the 970 99% times). SSAO shouldn’t introduce much inconsistency like that. By any chance were shadows above high in the 470 review? That would explain a lot (red team is weaker at shadows too).

      Crysis 3 (was [b<]61-63 / 19.2-19.7[/b<], now [b<]55-63 / 23.1-20.5[/b<]): Resolution was reduced from 1440p to 1080p and the rest of the settings (aside from AA) were cranked from high to whatever max is called. I’d fully expect max settings on Crysis 3 to hurt AMD a lot more than Nvidia. I don’t know the exact list behind "system spec", but knowing a bit about how the game is built, I suspect it does a lot with shadows in particular.

      RotR (was [b<]49-48 / 28.2-27.0[/b<], now [b<]66-76 / 22.2-16.4[/b<]): This was and still is 1080p. This time it was with the high preset, while in the 480 review it was mostly very high. In the 480 review’s shots of the settings page, I see a good mix of settings that each team is liable to like. If anything, I’d expect the 470 review’s settings to be a bit more AMD-friendly. Unless SSR ended up off or MSAA ended up on this time, I’m not sure what happened.

      FO4 was absent this time. In the 480 review, it had framerates between the 970 and 980 and frametimes matching the 980.

      Witcher 3 (was [b<]57-54 / 26.0-23.0[/b<], now [b<]53-58 / 21.2-18.9[/b<]): This should have been the same settings each time (1080p, no hairworks, everything else maxed). It looks like the paths followed within the game world are different, though, and the frametime graphs bear that out. It also looks like the 480 had some periodic hitches the first time, which may have been fixed since at some penalty to average framerate.

      Hitman (was [b<]70-58 / 22.6-25.9[/b<], now [b<]81-76 / 16.0-25.1[/b<]): Settings aren’t noted, but there might not be much to be learned from that anyway, because the 970 is obviously hitting some kind of bad codepath here.

      [i<](edited for typo correction)[/i<]
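      (If anyone wants to eyeball those was/now swings as percentages, a trivial helper; the 49 → 66 pair is the first RotR average quoted above:)

```python
# Percent change between the "was" and "now" figures quoted above.
def pct_change(was, now):
    return (now - was) / was * 100

# e.g. the first RotR average, 49 FPS then vs. 66 FPS now
print(f"{pct_change(49, 66):+.1f}%")  # → +34.7%
```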

        • Ninjitsu
        • 3 years ago

        This is why I keep asking for keeping the same resolutions and settings across a review (and maybe across reviews), instead of finding “playable settings” at a given resolution, per game…

          • Jeff Kampman
          • 3 years ago

          Not every game has the same knobs and dials to tweak, so this idea doesn’t really work.

            • synthtel2
            • 3 years ago

            Keeping the same settings for a given game (at least when the purpose of the testing doesn’t involve variance in them) would be nice though.

            I understand a lot of the resolution changes, but for everything else it’s pretty practical to keep it all the same.

            • Ninjitsu
            • 3 years ago

            That’s not what I meant, just similar settings across the different tiers of stuff. More along the line of what synthtel2 said above.

            For example, stick to 1440p, 4xAA, “high” settings wherever you can. Then for more mid range cards, if you’re using 1080p, use the same settings for the same games.

            Thing is, while the performance hit due to resolution is fairly easy to calculate, the same isn’t true for the other settings like AA.

            EDIT: Check out the charts here, for an example: [url<]http://www.anandtech.com/show/10540/the-geforce-gtx-1060-founders-edition-asus-strix-review[/url<]

          • Voldenuit
          • 3 years ago

          Well, it’s not like having a single, fixed setting will solve the problem either.

          If a certain setting favors GPU A over GPU B, enabling it may paint a picture that is not representative of real-world usage. Then again, how many people tweak every individual game to min/max their particular card? Finding the information on this can be sketchy, and trial-and-error approaches are time consuming and fraught with user error and run variance. nvidia tried to come up with “optimal” presets in GFE, but if I may be frank I find their recommended settings to be garbage and always end up rolling my own anyway.

          Add in driver updates, game patches, and it’s not only a fuzzy target, it’s now a moving fuzzy target.

            • synthtel2
            • 3 years ago

            I agree that more work would be good, but in the absence of that work being done, it’s better to keep as many variables as possible fixed, so at least the data we do get is good. Jeff is a busy guy, as are most people in the position to do this kind of thing. I’m a hyper-optimizer who ponders this stuff regularly, but I don’t have the resources to be properly scientific about it. I’d love to be able to do this kind of analysis more professionally, but that’s easier said than done. 😉

        • synthtel2
        • 3 years ago

        Update for settings screenshots….

        GTA V: Shadows went high -> very high, post went ultra -> very high, and tessellation went very high -> high. The tessellation change works in AMD’s favor, but the shadow and post changes definitely don’t.

        Crysis 3: The advanced settings have a good mix of things that will work for each team, but knowing a bit about the game’s graphics tech, the ones that AMD has more trouble with are the heavier ones.

        RoTR: Lots of stuff changed here, but most critically, PureHair is enabled in the 470 review (the one with otherwise lighter settings). PureHair looks to work via "what if we tried more geometry?" (as in HairWorks). AMD hardware really, [i<]really[/i<] doesn't like that kind of thing.

        Witcher 3: Everything is as expected here.

        Hitman: Settings changes were shadow resolution high -> medium and AA off -> SMAA. Maybe the codepath the 970 was having trouble with was somewhere in SMAA? Anyway, I don't think this game's shadows are quite that heavy. Maybe it's just a matter of stuff having been optimized since the last review.

          • Jeff Kampman
          • 3 years ago

          Hitman actually suffered on the Nvidia cards because of a weird interaction between Fraps and the game’s “fullscreen” mode that I discovered last night. I retested in “exclusive fullscreen” and the issue seemed to go away. You might have to clear your browser cache to see the new graphs.

            • synthtel2
            • 3 years ago

            That’s interesting, and I now seem to recall that Hitman has a soft cap on framerate which is probably screwing with everything. Thanks for the updates!

      • wiak
      • 3 years ago

      If the games are benched with modern APIs (Vulkan, DX12), then yes.
      AMD heavily favors DX12 & Vulkan these days; Nvidia favors DX11.

      In Doom with Vulkan, the Fury X got a 40% boost, for crying out loud:
      [url<]http://www.eurogamer.net/articles/digitalfoundry-2016-doom-vulkan-patch-shows-game-changing-performance-gains[/url<]

      It will be fun to read reviews when Vulkan/DX12 games come out and watch the GCN volcano erupt. Heck, Rise of the Tomb Raider (DX12?) is faster by 10% after AMD optimized their drivers in 16.7.3. My ancient HD 7970 3GB got a nice boost with Doom Vulkan, and that runs on 1st-gen GCN.

        • stefem
        • 3 years ago

        Partially thanks to crappy OGL drivers. Also, Doom does not support asynchronous compute on your card.

      • Jeff Kampman
      • 3 years ago

      Another head-slapping difference I failed to account for: we’re testing with Nvidia’s latest drivers in this review, which came out after the RX 480. It’s possible Nvidia included some optimizations and such under the hood that could have had a meaningful effect on performance. Hitman in particular seems to be running much more smoothly on Nvidia cards now.

        • chuckula
        • 3 years ago

        Wait, are you trying to tell us that in a review involving an old Nvidia card that a new driver failed to gimp the card in a new game?

        DoomGuy64 will have words with Nvidia for this malfeasance!
        Words I tell you!

          • travbrad
          • 3 years ago

          It’s all just part of Nvidia’s evil plan. They knew TR would be testing the cards and in the next driver they will cripple the older cards again.

    • MrDweezil
    • 3 years ago

    So what happened here? The 470 is basically just as fast as a 480, but cheaper (in theory). There’s no way they meant to put out two cards at the same price/performance level, so did this card turn out to be faster than they intended, or is the 480 slower than they intended?

      • AnotherReader
      • 3 years ago

      The reference 480 throttles in some games; this is a custom 470 vs a reference 480. If a custom 480 was included, the performance delta would be greater.

        • PixelArmy
        • 3 years ago

        [url<]https://www.techpowerup.com/reviews/ASUS/RX_470_STRIX_OC/24.html[/url<]

        Not sure of the accuracy of their "reference" RX 470, which they emulated using "a 4 GB RX 480 that was flashed with the AMD RX 470 reference BIOS".

        You gotta go with the RX 480 over the RX 470 unless you're really pinching pennies.

    • ImSpartacus
    • 3 years ago

    Good to see a quick turnaround.

    Also noticed that there’s an actual mention that almost all benchmarked cards are overclocked out of the factory. I think that’s new, which is commendable.

    I hope that we can eventually get overclock percentages associated with the various parameters. It’s fantastic that there’s communication that an overclock exists, but the magnitude of the overclock is important as well. There might’ve been a time when all factory overclocks were an inconsequential 2-5%, but today we’ve got Pascal that can crank up to an amazing 20+% on the core/boost clock. Since there’s a large range of possible overclock magnitudes, I hope to see the exact magnitude of overclock communicated in the future. But honestly, I get that there are other priorities and it’s good to at least see a little progress on that front.
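    For what it’s worth, the magnitude is simple to derive whenever both clocks are published; a sketch using this card’s numbers as quoted in the review (1256 MHz factory boost vs. the 1206 MHz reference boost):

```python
# Factory overclock expressed as a percentage over the reference boost clock.
def oc_percent(factory_mhz, reference_mhz):
    return (factory_mhz - reference_mhz) / reference_mhz * 100

# This XFX RX 470: 1256 MHz factory boost vs. 1206 MHz reference boost
print(f"{oc_percent(1256, 1206):.1f}%")  # → 4.1%
```

    So this particular card carries roughly a 4% factory overclock, nowhere near the 20+% some Pascal cards ship with.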

    • Anovoca
    • 3 years ago

    50 dB is a killer. If you’re buying a 470, it isn’t because you want a performer; it’s because you want something understated that gets the job done. Sounds like their design team completely missed the mark.

      • Jeff Kampman
      • 3 years ago

      This may be a worst-case rather than a typical situation. Rise of the Tomb Raider can really get this card’s fans going, but in Crysis 3 my test rig records 44 dBA.

        • tipoo
        • 3 years ago

        Is that the DX12 patch? Would that imply DX12/Vulkan games may get fans higher, which would make sense if a CPU limit is removed?

          • Jeff Kampman
          • 3 years ago

          Nope, just plain old DX11. Not sure what the deal is.

      • synthtel2
      • 3 years ago

      I’d blame XFX for that one, not AMD. I’m sure MSI / Gigabyte / Asus / Sapphire will have better stuff. (I’ve had lots of bad experiences with XFX, FWIW.)

    • AnotherReader
    • 3 years ago

    Great work reviewing it in only one day! I took a look at some other reviews and it seems that Radeons have [url=http://www.computerbase.de/2016-08/radeon-rx-470-test/5/#diagramm-leistungsaufnahme-des-gesamtsystems-windows-desktop<]lower idle power for high refresh rate monitors[/url<]. Geforces have lower power consumption for multi-monitors, though it becomes closer at three monitors.

    [Edit]: Overclocks range from [url=http://www.computerbase.de/2016-08/radeon-rx-470-test/6/<]1345 to 1395 MHz[/url<]. Memory overclock helps when overclocking, as the 20 MHz slower Sapphire consistently beats the higher-clocked PowerColor.

    • Krogoth
    • 3 years ago

    The 470: a.k.a. what the 480 was intended to be before AMD was forced to bump clocks at the last minute to make it match the 970’s performance and beat the 1060 to the mid-range punch.

    • rudimentary_lathe
    • 3 years ago

    I guess close-to-RX-480 performance for $150 was too much to ask for, but even at $179 MSRP this is shaping up to be a nice card.

      • Hattig
      • 3 years ago

      Yeah, the Powercolor card seems to be closer to this price than this card or the Asus.

      It’s a good 1080p card with good future proofing for DX12 / Vulkan.

      Disappointed that TR didn’t do DX12/Vulkan tests, I feel they are representative of future capability. Doom in Vulkan seems to be a good one (yes, it massively outperforms Nvidia, but that’s kind of the point – Nvidia’s Vulkan implementation is currently not outperforming their DX11 and that needs pointing out).

      Obviously given the time available for this review it’s more than understandable right now.

    • I.S.T.
    • 3 years ago

    The Witcher 3 doesn’t have its resolution listed. I assume it’s 1920×1080.

    • Tristan
    • 3 years ago

    lol, AMD ordered yet another review? All tests in FHD, to show the smallest possible differences between the 480 and 470. They must have bad yields at GF, because there are serious shortages of the ‘full’ 480, and they must somehow compensate by selling the slightly worse 470 at higher prices: $180 instead of the $150 they promised.

      • AnotherReader
      • 3 years ago

      Citation needed for $150 promise.

      • deruberhanyok
      • 3 years ago

      $150 would be nice, but I don’t think they ever said that – it was just a bunch of rumour sites making guesses.

        • nanoflower
        • 3 years ago

        Yes, it was the reviewers speculating on prices and trying to place where the cards should be in the price spectrum. As the review brings up, this product would make much more sense at a lower price.
        As for the test, it makes sense to test the card at settings that would be used by people buying it. The 470 at 4GB just isn’t a great card for 1440p or 4K, so what’s gained by testing at those settings if the card isn’t intended to run with them?

          • djayjp
          • 3 years ago

          It absolutely can do 1440p/30Hz+ (though on medium-high settings). So testing that rez is highly relevant, plus it’s also useful to see the effect memory bandwidth and ROP count have.

      • synthtel2
      • 3 years ago

      Citation needed in general, but especially….

      [quote<]All tests in FHD, to show smallest possible differences between 480 and 470.[/quote<]

      I don't think graphics cards work how you think they work. I even did the [url=https://techreport.com/forums/viewtopic.php?f=3&t=117882<]testing[/url<] to prove it. Even if there were such a thing, the shader and BW deficits of this 470 versus this 480 are about the same. Rasterization is another angle, but AMD really doesn't want to make the tests raster-heavy, because then Nvidia will eat their lunch (and breakfast, and dinner, and all the snacks).

      Even if all of the above somehow worked with your theory, if AMD were ordering reviews they would be seriously pissed about this one, because low resolutions and MSAA (GTA V in this review) are all Nvidia's thing.

        • RAGEPRO
        • 3 years ago

        Also the fact that these tests paint the RX 480 in a considerably less rosy light versus the GTX 970 than the original tests. Heh.

    • dragosmp
    • 3 years ago

    Thanks for the nice review, Jeff – much appreciated. This is actually the card I was waiting for, just not for $220. Hopefully as supply gets better the prices will fall back to MSRP.

    • JosiahBradley
    • 3 years ago

    The PowerColor Devil 470 is only $179 and almost beats a stock 480. I think that’s a good value proposition.

    • deruberhanyok
    • 3 years ago

    Thanks for getting this up so quickly, Jeff!

    I’m with you on puzzling over the price – seems like it would make more sense to just have the 4GB 470 at $180 and the 8GB 480 at $240 (suggested). Make the RAM amount and clock speeds combined the thing that separates the two.

    As it is, the 4GB RX 480 sitting right between them in SRP is awkward at best.

    I’m also curious to see power and temperature numbers when you guys have them. I have a feeling these will fare much better than the 480 due to the better coolers in use.

    (also, with the custom board design RX 480 cards sitting around $280 on Newegg right now, $180 for an RX 470 looks like a really good deal!)

    • xeridea
    • 3 years ago

    I would like to see DX12 included. Other sites seem fine collecting data in DX12 with PresentMon; frame times can be captured. DX12 is still somewhat in its early stages, but there are many different games that use it, and since RoTR has been updated, they are all fairly stable. It would be nice to see which cards perform better or worse in DX12 compared to DX11. If the games you play perform better in DX12, you aren’t going to care about DX11 performance. DX12 is clearly the future, so I don’t understand the amount of resistance to showing it.

      • MDBT
      • 3 years ago

      Because it’s a waste of time for the tiny number of people who are even aware of whether a given game will run in DX12, actually want to play those specific games today, are looking to buy a new card right now, only have <$250 to spend, and will base that purchase on a single-digit percentage difference in performance in the new API. How many of those people exist? Enough to drive a financially meaningful number of page views to a site?

      I’d bet there’s a larger percentage of us who don’t own cards that are still on the shelf and would like to know how our older hardware compares but instead we have to play with the transitive property of inconsistent review practices, constantly changing game titles and poorly documented game test settings to try and guess within maybe a +/-20% margin how much better this new card is than the 2 gen old one in our machine now.

        • xeridea
        • 3 years ago

        Obviously it is financially viable to review DX12 performance, because every other review site does it. Having DX12 in reviews would let people know what games support DX12. Impossible to do on this site, though every other review site shows it. Are you suggesting we just completely ignore the future and forever scrutinize DX11 (released in 2009) performance? ~30% of popular games (not counting low-requirement ones) support DX12, so I wouldn’t say it is tiny.

        Are you suggesting the reviews perpetually re-evaluate four-year-old cards to see how they do in today’s games? You’ve been using yours for years; you know how it performs. Reviews are for people looking to buy cards, to get an idea of performance before they buy.

          • MDBT
          • 3 years ago

          The larger and more established the site the more resources they have and the more likely they are able to spend more of those resources to chase down increasingly smaller parts of the demographic. If I have a review site all by myself I’m going to prioritize the work that gets me the most page views and add more to the review if time allows. This also logically explains why the teardown of a partner cooler/board on a cut down version of another card that already got that treatment was not included right now.

          TR didn’t say it was impossible, Jeff said [quote<] DX12 also poses challenges for data collection that we're still working on. [/quote<] That sounds like they don't have a format/method that they like worked out yet. Perhaps this is more about quality than just having numbers to publish.

          As for your response about the old cards, yes, I'm saying that I want a way to compare the new card to an older card, instead of having to estimate it by guessing how comparable my card was to a 970 on one game from two years ago and then using the 970's performance on a completely different game to try and extrapolate how my card compares to the new card. As you yourself stated in another comment on this review, most of us don't upgrade every two years. The more steps in this process, the harder it is for a consumer to understand what they are getting in comparison to what they have now. More people will benefit from adding numbers from one or two older cards, say an HD 7000 or 200-series AMD card or a 600/700-series Nvidia, in these kinds of reviews than from the niche obsession with the agonizingly slow adoption of a new API.

            • xeridea
            • 3 years ago

            By impossible I was referring to it being impossible to find out what games support DX12 based on reviews from this site, because they don't post any. It is clearly possible to get the frame-time data that they need, so it wouldn't be any harder to make the charts. Their reasoning has been more about them not thinking DX12 matters enough yet; the technical difficulty of gathering data is just an excuse. Some games even have a frame-time view built into them.

            They post comparisons of reasonably recent cards. If your card is four years old, just assume anything you buy will be a crazy awesome upgrade, or use other sites that have performance comparisons between any of hundreds of cards. You could even compare a 4850 to a Pascal Titan if you wanted to.

            • Mat3
            • 3 years ago

            “…chase down increasingly smaller parts of the demographic…”

            Newsflash: the number of DX12/Vulkan games is growing and it's becoming [b]more[/b] relevant. Two of them are in this very review. Even Pascal cards are now getting a boost from moving to it, so any review nowadays that completely ignores it is not a full review.

            • Klimax
            • 3 years ago

            Don't worry, that relevancy is purely temporary…

            • Mat3
            • 3 years ago

            Not sure if serious…

          • smilingcrow
          • 3 years ago

          There are reviews with 100% DX11 games and some with 40% DX11 games and most are probably in-between.
          It’s a nice mix as people can see different perspectives.
          Not every site has to match your preference and they aren’t wrong for doing so.
          If you want a real overview of a card you should be looking at multiple sites anyway as they offer different things.

            • xeridea
            • 3 years ago

            I like the TR reviews, but I usually end up looking at other sites anyway, because they have real sound-test chambers, proper power-consumption testing, and often other non-cookie-cutter issues explored. The value-proposition charts are nice, though.

            • wiak
            • 3 years ago

            same here, i read techreport due to their nice frametimes and graphing 🙂

        • Bensam123
        • 3 years ago

        Weird… Why are we, as a site, only forward-looking when looking for a card with 8GB of memory, but not when it comes to performance on next-level gaming APIs?

        Seems silly. I guess we could look at this a different way: 8GB 480s aren't worth the extra money compared to a 1060 (it HAS to have 8GB of memory to be comparable to a 1060), and Nvidia doesn't perform up to par most of the time on next-gen APIs… but I'm not going to start finger-pointing.

          • Voldenuit
          • 3 years ago

          How representative of future performance are the current DX12 games on the market? From all accounts, Hitman is poorly coded in both DX11 and DX12. Ashes of the Singularity does extreme things with batches that 90% of other games would never do (and no FPS would ever have that many units in flight). ROTTR doesn't look any different and is slower in DX12 in both camps. DOOM Vulkan does show clear gains for AMD over nvidia, but how much of that is due to poor OpenGL drivers in the first place? And is it representative of nvidia's full Vulkan potential, or is there hope for future driver gains?

          It almost seems unbelievable to me that the most balanced, least buggy prognosticator of DX12 is Futuremark Time Spy, given that I have always had a deep mistrust of canned benchmarks as indicators of real-world gaming performance.

          I don't know if these are the reasons TR is holding off on DX12/Vulkan benchmarks; I personally would like to see them in addition to the DX11 benches. You can ask readers to take benchmarks you don't think are mature with a grain of salt, but you can't make them swallow it if they don't want to. And people are going to get numbers from other sites anyway and form their own conclusions, be they well- or ill-informed.

            • rechicero
            • 3 years ago

            Well, we all know that AMD cards improve more with time in DX than NVIDIA ones, because the drivers start out less optimized. If we are going to discard Vulkan because with that API it's the other way around, that seems a little biased. Another good point for reviewing Doom is that it will force NVIDIA to properly support Vulkan.

            • Bensam123
            • 3 years ago

            Double standards… yup.

            • Bensam123
            • 3 years ago

            So instead of talking about what we know, you’re throwing me the ‘what if’…

            What if Nvidia produces a super amazeballs awesome driver with .0001% overhead?!?!? Then we'll cross that bridge when it happens, but right now that's not remotely the case, and based on what we know, AMD is definitely showing clear signs of performing extremely well in current forward-looking APIs.

            I suppose we aren't going back and looking at last-gen Nvidia hardware that isn't even compatible with current APIs either, even though they said it would be.

            • Voldenuit
            • 3 years ago

            [quote]So instead of talking about what we know, you're throwing me the 'what if'...[/quote]

            Quite the opposite. We have a very small number of DX12-enabled games and benchmarks right now, and not all of them favor AMD, even if more do than favor nvidia. However, even AMD's gains in DX12 vary from title to title and card to card, with older-generation cards like the 390 sometimes showing much more improvement than the 480. A lot of people seem to be speculating very wildly based on this small sample; I'm just saying, "hey, don't extrapolate too much from a little data". For all I know, the predictions of the AMD camp may be correct, but I'd rather get there through more data than guesswork.

            EDIT: It's worth noting that the software side is evolving rapidly, too. As recently as several months ago, Rise of the Tomb Raider's DX12 path was slower than its DX11 path, but as of the June update, we're now seeing improvements on both nvidia and AMD hardware in DX12. This is something sites like wccftech, extremetech and techpowerup investigate, but TR doesn't seem to. I think it would be worthwhile to follow; otherwise you're doing the GPU makers a disservice.

            • Voldenuit
            • 3 years ago

            [quote]What if Nvidia produces a super amazeballs awesome driver with .0001% overhead?!?!?[/quote]

            PS: Doom's Vulkan support currently doesn't include async on nvidia hardware, [url=https://community.bethesda.net/thread/54585]per bethsoft's FAQ (dated July 23)[/url]:

            [quote][b]Does DOOM support asynchronous compute when running on the Vulkan API?[/b]
            Asynchronous compute is a feature that provides additional performance gains on top of the baseline id Tech 6 Vulkan feature set. Currently asynchronous compute is only supported on AMD GPUs and requires DOOM Vulkan supported drivers to run. We are working with NVIDIA to enable asynchronous compute in Vulkan on NVIDIA GPUs. We hope to have an update soon.[/quote]

            I'm not claiming that nvidia will see the same gains that AMD saw with Vulkan+async on DOOM; it's worth considering that Time Spy saw only a 6% gain for the 1070 with async (vs. 11% for the 480). But the current DOOM Vulkan numbers are not representative of the future performance of Pascal cards (Maxwell users are, unfortunately, out of luck as far as async gains go).

        • wiak
        • 3 years ago

        Well, if you buy a $250 card and want to play Doom, what do you think people will use? OpenGL or Vulkan?

        I guess Vulkan, after they see that huge performance uplift.

        Review sites can always include older APIs too; a lot of the other sites do that.

          • Pancake
          • 3 years ago

          That’s a perfectly legitimate viewpoint. Also, if you’re a big fan of Ashes of the Singularity or Hitman you should probably also go with AMD.

          However, these are all horribly unpopular B-grade titles and probably not suited for a general review.

      • wiak
      • 3 years ago

      This is the *open API standard* version of the Project Cars situation, which was so biased toward nvidia that it skewed the results; this time around, PhysX can't mess with the real hardware Vulkan advantage in GCN.

      So simply use the API that gives the best results. Why use OpenGL when the Vulkan API gives you 40% more perf, like in the Fury X result?

      • Jeff Kampman
      • 3 years ago

      Keep an eye on our next couple reviews.

      • Jeff Kampman
      • 3 years ago

      In fact, re-read this one now!

    • willmore
    • 3 years ago

    I love it that you got a card in time to have a launch day review. Thanks card provider! Thanks hard working TR staff!

    • Chrispy_
    • 3 years ago

    These will be excellent cards once they’re available:

    Low power use
    Great 1080p performance
    Low pricetag.

    The 1060 is a great card too, but it struggles at 1440p, so buyers will most likely be 1080p gamers. When you can get great 1080p gaming for $180, why would you want to spend $300?

      • pranav0091
      • 3 years ago

      >> When you can get great 1080p gaming for $180, why would you want to spend $300?

      By that logic, only the cheapest version of any product would get sold. Obviously, that is far from true. Also, it's not very fair to pick the highest price for a 1060 when you can find one for $249. If I were to cherry-pick, I could compare things from the opposite ends.

      A market exists at almost any price point, though the sizes may not be the same.

      <I work at Nvidia, but my opinions are only personal>

        • sweatshopking
        • 3 years ago

        I can't find a 1060 for less than $400 CAD. They should be $330.

        • rechicero
        • 3 years ago

        The logic is sound… but people usually don't use logic, so you're right too…

          • pranav0091
          • 3 years ago

          Agreed, people are not mathematically logical: that's why the whole luxury market exists.

          /rant begin
          If everyone was perfectly logical about measurable quantities, who'd buy an S-Class? Or a Vertu? Or a Titan? Or a gold sandwich? The list of things is just too long.

          The problem isn't necessarily with logic. Human logic is almost always constrained by what it thinks is "fair", and fairness almost always is "what is good for me and what I can justify without giving that impression".

          The problem is that we think our logic is logical, without admitting the inherent selfish bias. That's why the products that we can't afford are always called illogical. And the ones that can afford them, well, to them it's only logical to buy them.

          /end rant

            • AnotherReader
            • 3 years ago

            While I agree with you for the most part, I have to add one dissenting note. I may be part of an increasingly rare breed, but just because I can afford something doesn’t mean I’ll buy it. If my needs aren’t being met by the current product, then I’ll upgrade. However, I prefer to save my money for a rainy day rather than spending it willy nilly on the fad of the month.

            • pranav0091
            • 3 years ago

            I consider myself just like you, for the most part. 🙂

            • rechicero
            • 3 years ago

            GPUs are not a luxury market. The luxury market sells more than just function; you buy "social positioning".

            But GPUs are tools. If you buy a card for gaming and can max out the games on your monitor for $200, yet you buy a Titan, there is no logic at all in the decision. You can rationalize the hell out of it, but you'll be lying to yourself. I would probably be harder on the decision, but let's stop at illogical.

            You're just throwing your money away, because there are absolutely zero advantages. The only one would be telling other people you have a Titan for your 1080p monitor… and the thing is, "telling" is cheaper than actually buying.

            Luxury is not just about price. You don't buy a Scania R730 for personal transport, and you don't buy a Titan for 1080p gaming.

        • Chrispy_
        • 3 years ago

        I picked the launch price for both, seemed fair.

        AMD has announced that the RX470 will have $150 models, so if you want to cherry pick the cheapest 1060 option it would only be fair to compare that against the $150 price for an RX470.

          • chuckula
          • 3 years ago

          Further down the thread, there are people attacking somebody who complained that the cards aren't $150, saying that AMD never claimed the RX 470 would ever be $150.

            • Chrispy_
            • 3 years ago

            Downvote all you want, I’m just the messenger.

            I can’t speak for dollar prices but Hexus.net reported AMD confirmation of the price drop yesterday:
            [url=http://hexus.net/tech/reviews/graphics/95275-asus-radeon-rx-470-strix-gaming-oc-4gb/?page=16<]http://hexus.net/tech/reviews/graphics/95275-asus-radeon-rx-470-strix-gaming-oc-4gb/?page=16[/url<]

            • djayjp
            • 3 years ago

            Bro, that's 165 pounds (£). You absolutely cannot assume that will be $150 USD. The pound has also devalued significantly since Brexit, so there has actually been price parity (even if there shouldn't be). $165 USD for a reference RX 470 would still be pretty darn cool, though!

            • Chrispy_
            • 3 years ago

            Yes, I know; I'm feeling the Brexit-vote hurt. UK prices in pounds WITH VAT are actually higher than US prices in dollars without tax at the moment. That $239 RX 480 costs us £249 now.

            Perhaps that £249 will put the Hexus link into perspective: £165 compared to £249 gives those in the US an idea of just how cheap the 470 is with AMD's price cut. All we have to do now is wait for availability of the base-model 4GB cards to improve.

            • djayjp
            • 3 years ago

            Hmm interesting. But what was the original GBP MSRP?

            • Chrispy_
            • 3 years ago

            That Strix in the article I linked was launched at £190, so a £25 discount to £165.

            Makes sense, because I’ve seen listings for the almost-nonexistent RX 480 4GB at £183 on preorder. Charging £190 for an inferior card would have been just plain dumb.

          • derFunkenstein
          • 3 years ago

          By Christmas, maybe, but that $150 model doesn’t exist today. The $180 model doesn’t exist today. So you have to compare the current available prices. Even I know that, and I don’t know very much. :p

            • Chrispy_
            • 3 years ago

            It's all academic then, because the $249 GTX 1060 doesn't exist today either, and neither does the $199 4GB RX480. What is the point of discussing pricing if you're not going to use the numbers that vendors are providing?

            • derFunkenstein
            • 3 years ago

            Now is not a great time to buy a GPU. I think that's pretty obvious.

            • PixelArmy
            • 3 years ago

            Assuming Newegg's out-of-stock, auto-notify, and back-order listings mean some of these products exist(ed) at their respective prices:
            – The $180 RX470 simply does not exist; $190 is the lowest. Strike 1
            – A $250 GTX 1060 does exist. Strike 2
            – Of the five GTX 1060s sold directly by Newegg (not the marketplace), only one breaches the $300 mark that you say is the launch price. Strike 3
            – The reviewed RX470 is $220. Strike 4
            – By your own admission, the Hexus link doesn't deal in dollars, yet you still try to back the $150 assertion with it. Strike 5

            (If we don't make the out-of-stock assumption, the RX470 also doesn't exist today.)

            • Voldenuit
            • 3 years ago

            [quote]A $250 GTX 1060 does exist. Strike 2[/quote]

            [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814125879[/url]
            [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814126115[/url]
            [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814500402[/url]

            Sadly, all are out of stock, but when I checked two days ago, there was an EVGA 1060 for $249 that was in stock. The PNY is also $249 when it's in stock. The cards exist; people are just buying them as quickly as they show up.

            • PixelArmy
            • 3 years ago

            Right, my point is there is more evidence of $250 GTX 1060s or $200 4GB RX 480s than of $180 (or $150, lol) RX 470s.

            The OP seems to have just been posting hyperbole that he tries to justify after being called out.

            • Chrispy_
            • 3 years ago

            Hyperbole? I think you need to calm down and enjoy a beer in the sun or something.

            I'm posting from a different country than you, where availability is very different. I posted a link from a UK review site that had confirmation direct from AMD that the price cuts are real, and given that the 470 launched [i]yesterday[/i], it's already [url=https://www.overclockers.co.uk/sapphire-radeon-rx-470-4096mb-gddr5-pci-express-graphics-card-with-backplate-gx-37g-sp.html]in stock[/url] at the suggested, newly-reduced MSRP.

            Clearly you're having supply issues in the US; I get it. You have 4x the population of the UK and 40x more land area to spread the stock around, so it's hardly surprising that your stock issues are 10x worse than in high-density areas like Europe. Just be patient, young grasshopper, and wait for the dust to settle rather than foaming at the mouth 😉

            • PixelArmy
            • 3 years ago

            Lol, when all else fails, play the “I’m in a different market” card…

            If, in every post, you use USD, what market do you think everyone is going to assume you're talking about? The earliest allusion to the UK market is a link [i]four[/i] responses deep, and even then you're still trying to justify your USD musings…

            • Chrispy_
            • 3 years ago

            I just use USD and imperial units when posting here, since it's a US site, but there are lots of non-US readers.

            I can't remember the last time the readers were surveyed, but I have a funny feeling there were more non-US readers than US readers. I think it was this site, anyway.

            It would make sense, since TR ranks high in English-language Google searches and 70% of the world's native English speakers are outside the US, and that's not even counting all the other countries that have English as a common second language.

            • PixelArmy
            • 3 years ago

            It's not that you use USD that is the problem; it's that you use assumptions based on the UK market to poorly justify what you've written in USD.

            The GTX 1060 launch price was $250, and unlike the 1070/1080, the 1060 actually had units at that MSRP. I don't know what UK-market perspective you can use to justify saying $300 was "pick[ing] the launch price."
            A £165 RX470 doesn't mean it'll be $150 in the US (even if it might end up there). It looks like you went with percentages, but the price drop in the UK might simply mean AMD thought the card was mispriced in the UK market.
            As for non-existent cards being "all academic anyway": surely there's a difference between cards that are out of stock and cards that have never been announced at a given price ($150 USD is only found in pre-launch forum comments/predictions).

            It’s not that hard:
            a) pick a market
            b) use values from said market
            c) don’t infer from another market

            • Chrispy_
            • 3 years ago

            Were $250 1060 cards available on launch day? I thought not, so the $299.99 FE was your only option. If I'm wrong, I'm wrong, but you're comparing against my retail link listing an AMD card [b]within 24 hours of launch[/b]. As far as I am aware, the 1060s were FE-only at first, and the vendor cards came out a week or so later.

            You're getting hung up on trivial points that aren't even relevant to the discussion in the first place. Stock has been poor for both the 480 and the 1060, regardless of MSRPs, regions, and variants. Like I said earlier, calm down and enjoy life instead of foaming at the mouth over pedantic, tangential issues.

            • chuckula
            • 3 years ago

            [quote]It's all academic then, because the $249 GTX 1060 doesn't exist today either[/quote]

            Funny, I've been running one for over a week.

      • smilingcrow
      • 3 years ago

      One reason is minimum frame rates; another is that people are aspirational.
      So if a 1060 OC is 30 to 40% faster than a 470 OC, that can make a big difference in demanding games, and for $70 more it's not bad value.
      Then there are those wanting to run high frame rates with G-Sync.
      You have oversimplified a complex scenario.
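      To put the value argument above in concrete terms, here's a minimal sketch. The prices and the assumed 35% performance delta are illustrative assumptions, not figures from this review:

      ```python
      # Hypothetical perf-per-dollar comparison between two cards.
      # All numbers are illustrative assumptions, not benchmark results.

      def perf_per_dollar(relative_perf: float, price: float) -> float:
          """Return relative performance per dollar spent."""
          return relative_perf / price

      rx470 = perf_per_dollar(100.0, 180.0)    # baseline card: perf index 100 at $180
      gtx1060 = perf_per_dollar(135.0, 250.0)  # assumed ~35% faster at $70 more

      print(f"RX 470:   {rx470:.3f} perf/$")   # 0.556
      print(f"GTX 1060: {gtx1060:.3f} perf/$") # 0.540
      ```

      The faster card can come out slightly worse in perf-per-dollar and still be the better buy if the cheaper card misses your frame-rate target, which is exactly the minimum-frame-rate point made above.
      
      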

      • travbrad
      • 3 years ago

      [quote]When you can get great 1080p gaming for $180, why would you want to spend $300?[/quote]

      Depends on your definition of "great" and whether you have a high-refresh-rate monitor. Minimum FPS in the games TR tested isn't even 60, let alone 120-144. While there are diminishing returns at higher and higher refresh rates, there's a very noticeable difference between 60fps and 80-100fps, for example.

        • Chrispy_
        • 3 years ago

        Indeed, though the number of people with high-refresh displays is shockingly small. We’d all *like* one but they are still vanishingly rare, especially at the entry-level.

        Given that 60Hz is what almost everyone is running, the definition of “great” should probably be read as “about 60FPS” in this context as the people who can benefit from more than that are negligible.

        Another definition of "great" is relative: the GTX 960 was a "good" 1080p card, and the RX 470 is 50% faster than that for the same money. If that's not great, then you are a hard man to please 😉

          • f0d
          • 3 years ago

          Agreed that not many have a high-refresh display, but I'm willing to bet there are more people with displays above 60Hz than there are 4K users. Yet many reviews have 4K tests instead of adjusting their settings and reviewing at high refresh rates (for example, lowering settings to keep a solid 100fps 99% of the time).

          For once, I'd like to see a review that would be relevant to me and my display choice.

            • Ifalna
            • 3 years ago

            Don't know about you, but I'd rather play at max detail and 30-40 FPS instead of having a game look like Quake in order to get some ridiculously high FPS.

            I do realize that people who are into competitive shooters do this, and I understand that, but when I game, I want stuff to look as nice as possible while remaining fluid.

            • Chrispy_
            • 3 years ago

            It's interesting how there are different feelings about framerates. I prefer fluidity over detail every time, not because of how the game looks, but because of how it makes the game feel: input lag, responsiveness, etc.

            But yeah, I know what you mean about "as nice as possible while being fluid". I think any enthusiast knows that to get this you need to set everything to ultra and then dial back just a couple of settings, like shadows from ultra to high, or AA from 16xCSAA to 4xMSAA. It varies by game, but that's normally enough to give you a 50% boost in framerates with almost no discernible difference.

    • smilingcrow
    • 3 years ago

    Releasing three cards in close proximity with such a narrow spread in performance and pricing is a bit odd, especially as these are all that have been released so far.

      • Dposcorp
      • 3 years ago

      This can probably be chalked up to simple GPU binning.
      Once they test and verify them, they set them up with the correct specs for the correct card, and off they go.

      Not much different from what CPU makers like AMD/Intel do.
      This way you won't get as much waste.
      [url]https://en.wikipedia.org/wiki/Product_binning[/url]

      Great article, Jeff, and I appreciate it more than usual, as I am running a GTX 960 and am trying to decide whether I want to upgrade my CPU or GPU. Since I still game (when time permits) at 1080p, my video card can last a while longer… off to get a new CPU.

        • smilingcrow
        • 3 years ago

        But Intel doesn't release a new CPU lineup with only three CPUs that have a performance spread of around 20%.
        Nvidia put out three 10x0 cards with a spread nearer 40%, plus the Titan on top of that for another 25% or so.
        That's why it seems odd for them to be so bunched together.

          • derFunkenstein
          • 3 years ago

          That’s exactly what Intel did when it released the initial wave of Skylake CPUs. There was the i7-6700K and the i5-6600K and then that was it for months.

            • smilingcrow
            • 3 years ago

            I did wonder about that after I'd posted, but there were two chips released first, and then almost the whole family was released within less than a month, which is a shorter time frame than for the RX 400 series.
            Also, the performance delta even between those first two chips was noticeably greater than between the three RX 400 cards released so far. Plus, the price delta was much larger between just those first two chips, so it's not comparable at all.

      • EndlessWaves
      • 3 years ago

      What’s the third one?

      The 460 is a different chip that’s less than half the size. Expect it to be an awful lot slower.

        • smilingcrow
        • 3 years ago

        There are two versions of the 480 at different price points. I said cards not chips.

    • derFunkenstein
    • 3 years ago

    Great card that almost matches the 480, but the pricing is definitely weird. I'd just recommend the 480 too.

    • djayjp
    • 3 years ago

    Strange that the author still says the RX 480 is the better value despite the fact that both of his own value graphs say the RX 470 is the better proposition (even at $40 over MSRP).

      • xeridea
      • 3 years ago

      Yeah, I noticed that too. Also, the 470 gets you a much better cooler and slightly better power efficiency, likely due to having half the RAM, which apparently isn't that much of a factor (though the higher clocks on the 8GB card's RAM matter somewhat). The more stable boost clocks from the better cooler seem better than the extra, faster RAM. There is a $209 card on Newegg, and others will likely sell for $200 or less after the initial craze.

        • djayjp
        • 3 years ago

        My only (slight) concern is that the GTX 970/R9 390, due to VR, may become a kind of standard, used as something to aim for even in non-VR games. It's the first standard the PC space has really had (aside from matching the consoles, I suppose). Games are ultimately built with consoles in mind, so I suppose what's most important is at least that quality level, but at 1080p/60+. It's tricky to optimize, though, as some games on console use a combination of high and low settings.

        *EDIT: NVM, apparently when you OC the RX 470, it often outperforms the RX 480 8GB!:

        [url]http://www.eurogamer.net/articles/digitalfoundry-2016-amd-radeon-rx-470-review[/url]

    • NTMBK
    • 3 years ago

    [quote]As I write these words, it's just about dawn outside. Yesterday morning, a Radeon RX 470 showed up on my doorstep, and after endless technical difficulties, I finally got around to collecting data on the thing and four competing graphics cards at about 9 PM last night. Since that time, I've forgone sleep and consumed more caffeine than is probably healthy for the average horse, never mind the average sedentary hardware reviewer.[/quote]

    And this sort of dedication is why I keep coming back to TR. Great work getting this review out in one night!

    • DreadCthulhu
    • 3 years ago

    I managed to get in an order for Powercolor’s Red Devil model earlier this morning, for $180. Looks like it will be a great deal at that price. I agree that the higher price models don’t make much sense if you can find a RX 480 or GTX 1060 in stock at a decent price.

      • stefem
      • 3 years ago

      From Hexus.net
      “Update 15:10: AMD has now reduced the pricing of the entry-level reference-designed RX 470 to £165. Given that move, Asus may need to rethink the Strix’s value proposition.”

      £165 is about $215 USD. Pricing may be different in the UK, but that is a very large difference. Do you pay very low taxes? Where do you live?

        • rechicero
        • 3 years ago

        VAT. That's 20% or so (not positive about the UK, but I think it's 20%).

          • Firestarter
          • 3 years ago

          The UK is 20%; most EU states have around 20% VAT.
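          A minimal sketch of the arithmetic behind this sub-thread. UK sticker prices include VAT, while US prices exclude sales tax, so a straight £-to-$ comparison overstates UK pricing. The exchange rate below is an illustrative assumption, not a quote (post-Brexit rates moved quickly):

          ```python
          # Convert a VAT-inclusive UK retail price to an ex-VAT USD figure.
          # GBP_TO_USD is an assumed rate for illustration only.

          UK_VAT_RATE = 0.20   # 20% VAT, included in UK sticker prices
          GBP_TO_USD = 1.30    # assumed exchange rate

          def uk_price_to_usd_ex_vat(gbp_inc_vat: float) -> float:
              """Strip VAT from a UK price, then convert to USD."""
              ex_vat = gbp_inc_vat / (1 + UK_VAT_RATE)
              return ex_vat * GBP_TO_USD

          # AMD's reduced RX 470 price of £165 (inc. VAT):
          print(f"£165 inc. VAT ≈ ${uk_price_to_usd_ex_vat(165.0):.2f} ex-VAT")
          # £165 inc. VAT ≈ $178.75 ex-VAT
          ```

          At the assumed rate, £165 works out to roughly $179 before tax, which is why the £165 figure and the $215 naive conversion above tell different stories.
          
          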

    • MileageMayVary
    • 3 years ago

    I'd be curious how it stacks up against a 290/390 and against its bigger brother, the 480, when the resolution is increased (yes, I know it's a 1080p card).

    But for now, get some sleep!

      • HERETIC
      • 3 years ago

      No 290, but you'll get a good idea here:
      [url]https://www.techpowerup.com/reviews/ASUS/RX_470_STRIX_OC/24.html[/url]

    • chuckula
    • 3 years ago

    FYI, Tom’s did a power analysis of another Rx 470 that is moderately OC’d out of the box.

    It's technically out of spec on the 75-watt 6-pin connector, but it remains in spec on the motherboard-slot power draw, so it looks like these vendors balanced the power draw somewhat better than the RX 480 did at launch.

    [url<]http://www.tomshardware.com/reviews/amd-radeon-rx-470,4703-6.html[/url<]

      • xeridea
      • 3 years ago

      I read that review also. I like how they actually measure the power draw of the card, and where it comes from, rather than just showing total system power usage at the wall and drawing conclusions from there. I suspect they just take advantage of the 480 driver fix, which has already been done.

      • Hattig
      • 3 years ago

      The card isn’t out of spec.

      As THG themselves write, but then seemed to ignore in trying to create an issue out of nothing:

      “The PCI-SIG’s specifications only apply to current, meaning power consumption results don’t tell the whole story. The current graphs below show that the motherboard slot’s connector comes in well below 5A. Given that the PCI-SIG’s specifications call for a maximum of 5.5A, this is certainly on the safe side.”

      Current is all that matters, because the PCI-SIG hires engineers who know what they are doing when creating a specification (unlike THG, who have paid a lot for sophisticated power-measuring tools but don't know the basic physics behind it all). 6-pin connectors are good for way, way over 75W as long as the current is in spec, and many cards from both camps do this.

        • pranav0091
        • 3 years ago

        What about voltage? Is the 6-pin a variable-voltage connector? If not, how can you not exceed the spec when you are at a locked voltage (12V on the 6-pin) but supplying more current than the spec recommends?
        What am I missing?

        <I work at Nvidia, but my opinions are only personal>

          • Ninjitsu
          • 3 years ago

          >What am I missing?

          CONSPIRACY + NGREEDIA = GTS 450

            • Concupiscence
            • 3 years ago

            I don’t think I’ll ever stop finding this funny.

          • Bensam123
          • 3 years ago

          If you're talking specifically about power provided over the 6-pin connector: 8-pin connectors have two extra grounds that are wired directly to one of the three other grounds. There is no difference between a 6-pin connector and an 8-pin in terms of power delivery, besides differentiating the two and possibly weeding out really ancient PSUs that don't have 8-pin connectors, where the user ends up on an adapter anyway.

          • Freon
          • 3 years ago

          The spec having a current limit makes more sense than a power limit because voltage can droop, and when you give a regulated power supply (i.e. the VRMs on a GPU) less input voltage, it typically draws more current to make up for it. So the whole power phase on a GPU should take whatever design steps are needed to keep current draw under spec even if the voltage isn’t a dead-even 12.0V.

          Current is going to blow traces or junctions on a board, not voltage. I’d guess the physical copper traces and slot could handle 20V or more without any issue, even on the cheapest boards. It’d possibly blow components on both sides, but that’s not the point.

          This is an interface, and both sides of it should adhere to spec. As much as it’s the board’s responsibility to supply a steady 12.0V, it’s the add-in card’s job not to draw more than 5.5A from the board. Both sides have responsibilities. Giving the add-in card a current limit is a safer spec than a power limit.
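          The droop argument in this comment can be sketched numerically. All figures below are illustrative assumptions (hypothetical wattage and VRM efficiency), not measurements of any real card:

```python
# Sketch of the droop argument: a regulated VRM delivering a constant
# output power draws MORE input current as the 12V rail voltage droops.
# All numbers are illustrative assumptions, not measurements of any card.

SLOT_CURRENT_LIMIT_A = 5.5  # slot 12V current limit cited in this thread

def input_current(output_power_w, rail_voltage_v, vrm_efficiency=0.9):
    """Current drawn from the rail for a given regulated output power."""
    return output_power_w / (rail_voltage_v * vrm_efficiency)

for rail_v in (12.0, 11.6, 11.2):
    amps = input_current(57.0, rail_v)  # hypothetical 57W drawn via the slot
    status = "OK" if amps <= SLOT_CURRENT_LIMIT_A else "over spec"
    print(f"{rail_v:.1f} V rail -> {amps:.2f} A ({status})")
```

          The same hypothetical 57W draw stays under the current limit at a clean 12.0V but exceeds it once the rail droops to 11.2V, which is why a well-designed power phase has to leave margin.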

        • chuckula
        • 3 years ago

        I never said it exceeded the power draw specification through the motherboard.

        I said it technically exceeded the power draw limits for PCIe overall, and it does exceed the technical limit through the 6-pin connector. However, those connectors that lead right back to the PSU are much less sensitive than the power delivery through the motherboard so it’s not really a big deal.

        • Ninjitsu
        • 3 years ago

        power = current x voltage
        more power means more heat
        more heat means changes in ideal transistor operating characteristics and efficiency, and possible physical damage

        • synthtel2
        • 3 years ago

        [quote<]6-pin connectors are good for way way way over 75W as long as the current is in spec[/quote<]

        75W / 12V = 6.25A. You can't push 80 or 90 watts over 12 volts and stay below [s<]5.5A[/s<] 6.25A, unless your superpower is to blatantly break physics.

        [s<](In this case, it looks like it's actually 12V * 5.5A plus 3.3V * 3A for a total of 75.9W, or very very slightly more if the PSU is actually outputting 12.1 or 12.2 volts as many do. The same stuff applies, though: if you're pulling more than 75W through the PCI-E connector, you're doing it wrong, full stop.)[/s<]

        Edit: The 5.5A reference got me off-topic; the PCI-E slot wasn't what was in question (I think; Hattig, I'm still not entirely sure I'm grokking the intended meaning from your comment). The 6-pin connector is rated for 6.25A / 75W.

        Edit for further explanatory value: The power that's drawn through the mobo is drawn through just a few tiny pins (the 12V rail gets just 5 of those tiny things), and there aren't any guarantees that it's routed well on the board itself. If you draw too much there, you're asking for trouble. The external connectors are beefy and routed straight from the PSU. Drawing too much power there is almost certainly fine, within reason.
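        The arithmetic in this comment can be sanity-checked in a few lines. The spec figures are the ones cited in this thread, not quoted from the PCI-SIG documents directly:

```python
# Sanity check of the figures quoted in the thread (spec numbers as cited
# by commenters, not copied from the PCI-SIG documents directly).

SLOT_12V_LIMIT_A = 5.5   # slot 12V current limit cited above
SLOT_3V3_LIMIT_A = 3.0   # slot 3.3V current limit cited above
SIX_PIN_LIMIT_W = 75.0   # 6-pin connector power rating

# 75W over a 12V rail implies 6.25A:
six_pin_amps = SIX_PIN_LIMIT_W / 12.0

# Combined slot budget across the 12V and 3.3V rails: 75.9W
slot_watts = 12.0 * SLOT_12V_LIMIT_A + 3.3 * SLOT_3V3_LIMIT_A

print(f"6-pin at 75W / 12V: {six_pin_amps:.2f} A")
print(f"Slot budget: {slot_watts:.1f} W")
```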

          • Meadows
          • 3 years ago

          For what it’s worth, there’s little apparent difference between 6-pin and 8-pin PCIe connectors, and yet the latter is rated for twice the wattage. I doubt a Radeon will burn down the connector or power cables outright, but it might still cause issues in the longer term.

            • Bensam123
            • 3 years ago

            There is no difference from an electrical standpoint. Not sure why people keep pointing out ‘spec’ but don’t actually take the time to look at what they’re talking about.

            There should be no difference between a 6-pin and an 8-pin long term. The only possibility is that 8-pins were designed to weed out old PSUs that weren’t capable of delivering enough amps over the 6-pin.

            • Krogoth
            • 3 years ago

            The 8-pin PCIe connector has extra wiring for a 12V rail and ground, which means it can handle a higher power draw than a 6-pin PCIe power connector.

            It only matters for high-end GPUs and SLI/CF sandwich cards, though.

            • Shobai
            • 3 years ago

            This is incorrect, see Bensam’s comment below (both additional pins are GND).

          • Bensam123
          • 3 years ago

          Not sure what you’re talking about… an 8-pin connector is rated for 150W and has three ‘hots’ on it. A 6-pin is otherwise identical; the 8-pin just has two more grounds that are wired directly to one of the other three grounds. Go look at your PSU if you don’t believe me.

            • synthtel2
            • 3 years ago

            I was thinking of power drawn from the mobo, not direct from the PSU. On re-reading, that (probably?) wasn’t what was being discussed here. I’ll edit appropriately.

            • Bensam123
            • 3 years ago

            Yeah, I noticed that. There’s a lot of cross-talk between the power being drawn from the board slot vs. the PCIe connector, and it’s hard to differentiate between the two conversations.

            • stefem
            • 3 years ago

            Does the 8-pin spec require wire with a larger cross-section than the 6-pin one?

            • Bensam123
            • 3 years ago

            No, they’re the same gauge. You can look on Wikipedia; that’s part of why all of this is so blown out of proportion. Most people didn’t even take the time to look at the spec they’re quoting as ‘out of spec’.

            • travbrad
            • 3 years ago

            Yeah, I wouldn’t be worried about the power connector pins/wires going a bit above spec. The real problem was with the PCIe slot power draw, which has been resolved.

        • Meadows
        • 3 years ago

        You can’t *not* be out of spec that way. If the card draws upwards of 85 W from the 6-pin connector (which it does), but does not exceed 5 amps, then you’re looking at 17.6 bloody volts instead of 12.

        A 5% swing would be in spec, but that’s a goddamn 40 percent. How is that “not out of spec”?

        On the other hand, if voltage is within spec, then you’re looking at a minimum of 6.7 amps instead of 5, potentially creating a heat hazard and ruining the PSU’s efficiency further in the process.

      • DrDominodog51
      • 3 years ago

      You can’t really hold that against [i<] all[/i<] RX 470s, but it is nice to know that the Strix version does overdraw from the 6 pin when I recommend GPUs.

    • maxxcool
    • 3 years ago

    At one point I was excited to see the AMD 480/470 in action, and I was very seriously going to pick one up. Reviews were good, and it has future tech in async compute… but then I saw the overclocking reviews.

    4-5% before hard locks, a blown-monitor report, the whole power-phase debacle… I’m going to have to change my mind.

    4-5% headroom really says to me there’s a process issue or a silicon issue, and the devices are right at the extreme of the GPU’s ability. When I look at a 1060/1070/1080 running a hundred or more MHz faster with 10%+ headroom, I just can’t justify buying AMD at the moment, given that NV’s silicon and/or process is clearly better.

    disappointed…

    edit :: not using breast implants for GPU’s

      • Concupiscence
      • 3 years ago

      I can understand that disappointment, but I’m just not an overclocker. At stock values these cards look pretty darn good.

        • maxxcool
        • 3 years ago

        I can agree.. but the red flag is up in my head. Only 4-5% headroom makes me wonder if these are already overclocked from the factory.

        It just makes me too nervous to trust them with such minimal tolerances.

          • kalelovil
          • 3 years ago

          The RX 470 offering ~90% of the RX 480’s performance for ~80% of its power consumption is further evidence that the RX 480 is operating beyond the ideal point of the Polaris 10 power curve.
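          A back-of-the-envelope version of that claim. The 90%/80% ratios are the commenter's estimates, not measured data:

```python
# Rough perf-per-watt implied by the comment's estimates (~90% of the
# performance at ~80% of the power). These ratios are the commenter's
# ballpark figures, not measurements.

relative_perf = 0.90    # RX 470 performance relative to RX 480
relative_power = 0.80   # RX 470 power draw relative to RX 480

efficiency_ratio = relative_perf / relative_power
print(f"RX 470 perf/watt vs RX 480: {efficiency_ratio:.3f}x")
```

          That works out to roughly 12-13% better performance per watt, consistent with the idea that the 480 is pushed past its efficiency sweet spot.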

            • xeridea
            • 3 years ago

            Having half the RAM is also a factor. The lower clocks are definitely a factor in efficiency (as with any GPU/CPU). Limited OC is a telling point though. Likely they are clocked to compete with 1060.

            • Chrispy_
            • 3 years ago

            Indeed; AMD seem to keep repeating their previous mistakes. It’s hard to support the underdog when they keep doing the same ill-advised thing over and over again despite so much criticism and obvious evidence supporting a necessary change.

            Given that we’ve been stuck at 1080p for the mainstream for so long now, power use and efficiency are scrutinized by reviewers and owners more than ever before – yet AMD keep ruining their power and efficiency results for almost no real gains!

            • ImSpartacus
            • 3 years ago

            One would hope. Part of me is rooting for amd, lol.

            • maxxcool
            • 3 years ago

            I have -0- proof.. but this is what I think as well. It seems ‘hot-clocked’ out of the gate, and that would line up with a number of the power/heat issues..

          • Eversor
          • 3 years ago

          There are various factors at play here. Probably the most fundamental to overclocking is the fact that (for a couple of AMD generations now) they don’t really design the power circuitry to handle the GPU in intensive scenarios. This seems to be why the 470 is so close to the 480: at high usage the 480’s boost is probably clocked down to meet TDP targets, while the 470’s may not be, due to having fewer shader blocks active.

          Then there’s the manufacturing quality of the silicon itself, which has gone down the drain since AMD spun off GlobalFoundries. It has been a problem with CPUs and is now one with GPUs as well. It’s not just a hypothetical flaw in the chip’s architecture that keeps clocks down; one can see in reviews that while AMD is on 14nm vs. 16nm for Nvidia, they have to run higher voltages everywhere.
          I imagine they will improve the situation over time, but given the kind of voltages we’ve been seeing on every line of AMD CPUs, I can’t really see it right now.

          • Lans
          • 3 years ago

          I down-voted you because you seem to be confusing “end-user OCing” and “factory OCing”. The first involves the end user taking a risk; it may or may not achieve certain targets and may disastrously shorten the card’s life. The latter is guaranteed to work and should be qualified with accelerated aging to make sure cards have a certain expected life span.

          If all things were equal (they are not), I might consider (end-user) OC headroom, but I generally don’t care since I use stock or maybe even underclock/undervolt… of course, only a small number of people do this.

            • maxxcool
            • 3 years ago

            Not confused at all. Thanks…

          • MathMan
          • 3 years ago

          Maybe the first step is to propose a definition of overclocking?

          The way I look at it, AMD can set any clock they please, and anything higher than that is an overclock…

      • djayjp
      • 3 years ago

      Process is one thing, architecture (and price) is another. You can’t justify getting the higher price/performance card that is the RX 480/470? Well gotta have priorities I guess….

      • xeridea
      • 3 years ago

      If you look at the power efficiency improvements from Pascal vs. Polaris, AMD gained more in efficiency by a large margin (~90% vs. ~55% for the 480 and 1080 vs. their predecessors). It seems the process tech is fine, but the cards are clocked somewhat higher to start, to be more competitive vs. the 970 and 1060.

        • Voldenuit
        • 3 years ago

        Tom’s had their review 470 (an Asus) draw 144W vs. a 1070 FE, which drew 148W. That’s not a very promising comparison, as the 1070 is in a different league performance-wise.

        Somewhat worryingly, Tom’s measured 88W on the 6-pin connector (the PCIe spec is 75W). Any half-decent PSU should have no problem with it, but people with el-cheapo budget PSUs and corner-cutting OEM systems (which would probably be a large market demographic for the 470) might have issues.

        Granted, this is a custom board (I don’t think AMD is bothering with reference boards for the 470), so it’s too early to make broad generalizations about the power draw and consumption of the 470. But it still looks as though Nvidia has the better power efficiency per fps this generation.

          • xeridea
          • 3 years ago

          The Pascal architecture is clearly more efficient (for gaming anyway, in which it is highly tuned). I was purely referring to efficiency gains from the process shrinks, in which the 14nm Polaris sees a lot bigger gain than the 16nm Pascal vs 28nm predecessors.

            • djayjp
            • 3 years ago

            I’m not certain Pascal is more efficient, actually. If one looks at Vulkan or DX12 (which are the APIs that ultimately matter), then I’m not certain it is a foregone conclusion.

            • xeridea
            • 3 years ago

            It is more efficient power wise. It clearly lags in DX12/Vulkan though, where it takes a performance hit. DX12/Vulkan certainly bring the AMD cards a lot closer, where 480 is pretty even with, or crushes the 1060 (Nvidia is particularly bad at Vulkan).

            • smilingcrow
            • 3 years ago

            The APIs that matter most on today’s cards are the ones that are used most on today’s cards.
            According to the most recent Steam survey less than 45% of systems were running Windows 10 so over 55% of systems have zero access to DX12.
            Vulkan is a rounding error at this point in time.
            In 2 years time DX12 will at best account for 20 to 25% of gaming hours put in per day.
            Then it will start to look more significant but by then won’t the current generation of cards have been replaced anyway?
            AMD look good on DX12 but then again they always seem to be promising the future and not delivering today.

            Edit: My inner Grammar Nazi.

            • djayjp
            • 3 years ago

            Regardless of API, all of these cards perform great in today’s games. What really matters is how they’ll perform in tomorrow’s games, which will use either DX12 or Vulkan, since they’ll already be programmed that way for the Xbox One (DX12) or PS4 (Vulkan).

            • smilingcrow
            • 3 years ago

            I’m aware of the future, but I’m also aware that it tends to turn up late. 🙂
            I imagine that people buying cheaper cards are less likely to be the ones buying loads of expensive AAA games, which are more likely to support the new APIs.
            People on a budget card may well be shopping for budget games a lot, so lots of DX11 games.
            Just because most of the big releases due in the next year will be DX12 doesn’t mean that everyone will be discarding their old DX11 games.

            Don’t confuse this week’s sales chart with what the average person is playing.
            There are more than 3x as many people using older versions of Windows, which lack DX12 support, than Windows 10, and even on Steam it’s at less than 45%.
            This is after Win10 has been a free upgrade for a year, although I imagine a bump in August’s stats.
            DX11 will dominate the gaming hours played per day until after Windows 7 is out of support, so 2020 at the earliest.
            I think you guys are confusing hard-core gamers who read IT sites like this with the real world.
            It’s early days for DX12, and Vulkan on Windows is still in swaddling clothes.

            • djayjp
            • 3 years ago

            I think you’re misrepresenting this card, though; the 970 was considered high-end just a couple of months ago. Also, how many of those users on < Windows 10 have such performant cards? Besides, all it could take is one “must have” title that exclusively runs on one of those newer APIs to significantly shift the user base. This is also beside the point, however, since developers have always targeted the niche high-end gamer in terms of features.

            • K-L-Waster
            • 3 years ago

            TBH, I think TR is better off testing games that people are playing now rather than a stand in for what they might be playing 6 months from now.

            • selfnoise
            • 3 years ago

            I think testing for “tomorrows” games is pretty drastically oversimplifying the situation anyway. We don’t know what the take up rate on DX12 will be, we REALLY don’t know what the take up on Vulkan will be, and also there are so few games that use these APIs that it’s hard to really form a clear picture of how the cards perform across titles. What is a performance trend and what is the unique quirks of how each game is programmed? Also, do we know how mature Nvidia’s support for Vulkan/DX12 is?

            Too many questions. Time will tell.

            • djayjp
            • 3 years ago

            People are, today, playing the most performant version/API of their games (which often is the newer APIs). I’d like to see a value plot just using them.

            • K-L-Waster
            • 3 years ago

            [quote<]People are, today, playing the most performant version/API of their games (which often is the newer APIs). [/quote<]

            Viable DX12 versions are only available in a handful of games -- the rest are either demos or too issue-prone to want to play. The vast, vast majority of gamers today are still playing DX11 games or older (heck, look at the number of people playing MOBAs, or who are still playing Skyrim...)

            I'm not saying DX12 doesn't matter -- I'm saying it's an emergent technology that hasn't stabilized enough yet to be used as a significant factor in buying decisions. Let the playing field settle down first.

            • travbrad
            • 3 years ago

            [quote<]People are, today, playing the most performant version/API of their games (which often is the newer APIs). I'd like to see a value plot just using them.[/quote<]

            Nope, that is usually DX11 (or even earlier). Source: [url<]http://steamcharts.com/top[/url<]

            Not that all of those games need fast graphics cards, but some do benefit from them.

            • djayjp
            • 3 years ago

            Of course I meant the current games that are popular for benchmarking purposes.

            • Voldenuit
            • 3 years ago

            [quote<] Of course I meant the current games that are popular for benchmarking purposes.[/quote<]

            It's interesting that the games that are popular for benchmarking may not be the games that people actually play. Looking at the steamcharts link that travbrad provided, I was surprised to see that the only DX12 game on it was Total War: Warhammer, which I didn't even realize was that popular of a game.

            If a game is popular and it's a good technical showcase, it's a no-brainer to include in a hardware review.

            If a game is popular and it's a poor technical showcase, it's questionable to include it in a review. I'm not just talking about poorly coded or poorly optimized games; something like CS:GO, for instance, would be a terrible benchmark in a modern GPU review, because all you'd be showing is the CPU bottleneck at 160+ fps.

            If a game is unpopular *and* it's a good technical showcase, that's when it can get hard to decide. In this case, a game may be used as a proxy to extrapolate current or future performance on other titles, with the caveat that something you see in game engine A may absolutely not translate to game engine B (or even another game running on game engine A).

            Sometimes games are unpopular *and* poor technical showcases, but are still popular review choices for historical and community reasons. I lean towards not including such games, but review sites may have their own reasons for doing so.

            Lastly, just because a game doesn't show up on the most-played list doesn't mean the game was unpopular, since the list only ranks titles by number of players, not enjoyment derived. A short single-player game may leave a lasting impression, but you might not go back to it as often as a multiplayer game or a multi-hour RPG or TBS/RTS. And since travbrad's list doesn't show any non-Steam games, it should not be taken to represent the entire PC gaming landscape.

            • djayjp
            • 3 years ago

            Excellent points and well thought out. For his benefit, I was just stating that people (who play such games) will do so using the best-performing API version (assuming they’re aware of it, anyway). Even then, popularity isn’t the real issue; when using a game as a real-world benchmark in a GPU review, you should of course use the best-performing API. That’s a no-brainer, and only that should be used as any basis of performance comparison in a GPU review. Again, I’d really like to see a value plot only using the latest APIs.

            • Voldenuit
            • 3 years ago

            Anandtech reports Hitman benchmarks using the best API for a given card, meaning DX11 for nvidia and DX12 for AMD (in that title). I don’t think that’s perfect, either.

            In this transitional period, I’d prefer that sites report both DX11 and DX12 results for games where there is mixed performance. Also, not everyone is on Windows 10 (not even everyone using an AMD card), so it might be better to just report both DX11 and DX12 numbers and let users come to their own conclusions.

            • djayjp
            • 3 years ago

            Yeah you’re right. That approach would make the most sense then.

            • Klimax
            • 3 years ago

            Sorry, but most future games will still be on DX11. There aren’t enough gains for the massive load of work DX12/Vulkan requires. The costs are simply too high, and it requires developers to target each variant.

            And let’s not forget: DX12 is parallel to DX11; it is NOT a successor to it, nor its replacement. (It’s in Microsoft’s documentation.)

            • xeridea
            • 3 years ago

            Several games currently support DX12 or Vulkan, with many more coming, including Star Citizen, which will get a significant boost due to the size of its game world. If your game runs better on DX12, there is no reason to use DX11, and many games already run better in DX12 on AMD cards. Of the most popular games (that would benefit; LoL and other undemanding games not counting), about 30% currently support DX12.

            In 2 years there will be more cards, but most people don’t get a brand new card every 2 years, so future performance matters. DX12 is already relevant for many, and it is quickly gaining traction, so I wouldn’t rely solely on the performance of current/older games on a 7-year-old API for buying decisions that will affect you for years to come.

            • smilingcrow
            • 3 years ago

            Tell that to the more than 75% of Windows users that have zero access to DX12.

            [url<]https://www.netmarketshare.com/operating-system-market-share.aspx?qprid=10&qpcustomd=0[/url<]

            • xeridea
            • 3 years ago

            If you can’t run it obviously it isn’t a factor for you. I am saying I wouldn’t ignore it just because it isn’t totally dominant yet. It is clearly a factor for a lot of people, every review site besides this one includes it in their reviews. Also, reviews will have ~5-15 games tested, even though most people don’t play most of them, they are all still important because there is a large number that play each one.

            • smilingcrow
            • 3 years ago

            I’m not suggesting to ignore it but to see it in context which is that it is unlikely to overtake DX11 until the 2020s except for new games.

            • xeridea
            • 3 years ago

            So TR should ignore it until 2020? Sounds like a recipe for fading into obscurity for GPU reviews. If they don’t want to do DX12 for another year or so that’s fine, they will just lose millions of views from those that are interested in DX12.

            DX10 was reviewed even though the Vista launch was a disaster, and people would order computers with the “downgrade” to XP. DX10 wasn’t mainstream for a while, but it was still reviewed for comparison, to see if it was worth seeking out. It is a year after the W10 release, and its market share is higher than Vista’s at its peak. Several games support DX12; I don’t think there is any reason to ignore it anymore.

            • smilingcrow
            • 3 years ago

            Chill with the “ignore it” rhetoric; as I said in my last post, I’m not saying to ignore it, which would obviously be silly.
            You can’t seem to see what I am saying in context, a bit like not seeing DX12 in context.
            People on tech sites like to know about new tech and shiny-shiny, so of course new games will be reviewed using DX12.
            TR have taken what is likely a short-term position on DX12, which is not as radical as one site that now reviews cards with only 40% DX11 games.
            Now that seems way off-kilter.

            • xeridea
            • 3 years ago

            I am saying ignore it, because that is what the site continues to do. It’s high time it was looked at, and the only thing they have done is show how bad the initial RoTR DX12 engine was, as if to justify not testing DX12 yet. Problem is, the other DX12 games have stable DX12 implementations, with consistent results, which are pretty much in line with what would be expected.

            I am not expecting it to be a main focal point, just that they would at least include a couple of DX12 titles for comparison.

            • smilingcrow
            • 3 years ago

            Fine but don’t take it out on me as I’m not the one saying to ignore DX12.
            Get a grip, it’s just one Tech Site not a conspiracy across the whole web funded by Nvidia ninjas.

            • xeridea
            • 3 years ago

            Not taking it out on you, I was just explaining my reasoning. I don’t think it is some conspiracy; there are plenty of tech sites that interested people can view to get whatever information is important to them. I am just saying it would be nice if DX12 was at least glanced at here, and I don’t understand their apprehension. I know things were a bit shaky early on, and there were issues with earlier Nvidia drivers, but that is long in the past.

            For many it isn’t necessarily what card does better in DX12, but what API is better for their card, so having a comparison would be nice.

            • sweatshopking
            • 3 years ago

            Your position is perfectly reasonable.

            • Pancake
            • 3 years ago

            Sounds like you just want to be selective about your games tested. Rise of the Tomb Raider is easily the most significant DX12 game released to date. That’s a rolled-gold flagship mega-title. I’m not interested in results for Ashes of the Singularity or other rinky-dink B-grade titles.

            • xeridea
            • 3 years ago

            Not being selective. I have mentioned in other posts that the DX12 performance in RoTR is fairly stable after their patch, which was shortly after the TR article on it. The issue with them picking it as the sole game to show was that there were other big games with a lot better DX12 engines at the time. With RoTR DX12 now being much improved, it being a big name is all the more reason to look into DX12 performance.

            • Klimax
            • 3 years ago

            First: Many? Evidence? Gaining traction? Not really. Too costly, too future-unproof. No real gains…
            Second: In the future, your carefully written DX12/Vulkan path won’t be of much use anyway, because it won’t be able to make good use of future GPUs.

            • xeridea
            • 3 years ago

            I guarantee a carefully written DX12/Vulkan path will be of more use 5 years from now than DX11. DX11 can’t even properly utilize current GPUs.

            • smilingcrow
            • 3 years ago

            Looking at the low take-up of Windows 10 among Steam members, even when it was given away as a free upgrade, I’d say that Vulkan seems a far safer bet.
            Are the new DX12 titles also going to offer a DX11 fallback mode? Worrying for developers if not.

            With so many people boycotting Windows 10 it’s going to be much harder for review sites to convey the lay of the land as so many won’t have DX12 access at all.
            I guess the consoles might get a win out of this fragmenting of the market!

            • djayjp
            • 3 years ago

            There was also this thing called DX10 (and later 11) that was exclusive to Windows Vista and on… It was the minority, cutting-edge future then, and 5 or so years later became the norm (when was the last major DX9 game released?).

            • Voldenuit
            • 3 years ago

            [quote<]Vulkan is a rounding error at this point in time.[/quote<]

            I keep hearing quotes online, ostensibly about how game developers are much happier with Vulkan than DX12, but I don't know how representative that is of the gaming industry, or if it's just wishful thinking on the commenter's part.

            Also, it's not like we have a lot of data points to prognosticate Vulkan performance in future games, should it ever take off. There's probably a lot of wishful thinking there as well.

            EDIT: It's worth noting that I personally want Vulkan to succeed, because it's more platform-agnostic, and isn't tied to a single OS. Specifically, I like that it works on Windows 7.

            • smilingcrow
            • 3 years ago

            I hope Vulkan takes off big time, because being tied to the MS of the last few years in particular isn’t something I’d wish on anyone.
            With it being a replacement for OpenGL and also being cross-platform, it should have a good future.
            Very early days on the Windows platform, though.

            • JustAnEngineer
            • 3 years ago

            I’m going to have to call you out for repeating this fallacy.

            The steam surveys are a backward look at where the gaming hardware and software market have been. They do not provide data on where the markets are going in the future. The steam hardware survey shows the entire spectrum of hardware that has been purchased in the past ten-plus years and is still being used. It includes notebooks and under-powered SFF brick PCs, not just desktop gaming PCs. Likewise, Valve’s software survey includes casual indie titles and all of the older steam games that are being still played that were written five, ten, or even thirty years ago.

            I know for a fact that the most recent steam hardware survey included my ultrabook with its anemic GeForce 620/710 GPU and it did not include my DirectX 12 capable desktop gaming PC. It would be [b<]insane[/b<] to assume that you should not buy anything better than a GeForce 710 for your new desktop gaming PC just because the steam survey shows that someone still has the steam client running on that hardware.

            If you were purchasing a graphics card today to play ONLY the games that are available today and NONE of the new games that will come out in the future, then your position of largely ignoring newer APIs would make sense. However, if you intend that your GPU might still be used for gaming six months from now or even two years from now, then it would be much more rational to consider DirectX 12 and Vulkan performance rather than living entirely in the past.

            • Voldenuit
            • 3 years ago

            Well, it’s a complicated picture, and there’s a lot of conjecture going on about how much performance DX12 will bring to any given GPU.

            For instance, when [url=http://www.extremetech.com/gaming/231134-amd-radeon-rx-480-review-the-best-200-gpu-you-can-buy-today<]et tested the RX 480[/url<], they found that async compute only gave it a 3% boost in AOTS, but it gave the older R9 390 a 12% boost, catapulting it ahead of the 480. Is it a question of drivers? Execution units? Scheduler? Should you be advocating people get a 390 instead of a 480 based on this (of course not).

            Similarly, the R9 Fury seems to gain a lot from ROTR's async patch, but I would really like to see the 480 tested here before making an assumption that it would see similar gains.

            Personally, I'd like to see more sites investigate this in a controlled and technical manner, similar to how Kanter investigated the tiling renderer on Maxwell and Pascal.

            EDIT: mixed up Shrout and Kanter, fixed.

            • Ninjitsu
            • 3 years ago

            [quote]which are the APIs that ultimately matter[/quote] In terms of market share and the games people are actually playing, I'd argue that they matter the least at the moment.

            • djayjp
            • 3 years ago

            The word “ultimately” actually means “finally”.

            • Ninjitsu
            • 3 years ago

            Well, if we’re going down that route, then you really never specified [i]when[/i] - finally at the end of today, the hour, next year, next decade? Will there be no more APIs after this? If there will be, then will they or won't they matter more than DX12/Vulkan? Etc. So the only way to interpret what you're saying is not in terms of time, but in terms of "on the list of things that matter to this review/GPU in the present context, these APIs are the most important". /shrug

          • EndlessWaves
          • 3 years ago

          Techpowerup also had the 144W Asus card but they tested it alongside a pseudo-reference card (an RX 480 running the reference RX 470 BIOS) and found the latter drew 121W. It’s likely this is a(nother) badly behaved Strix rather than a 470 issue.

          The 470/480 appears to be very well tuned to its reference speeds.

      • Rza79
      • 3 years ago

      Computerbase took the PowerColor RX 470 Red Devil to 1395 MHz. That’s a 15% overclock.
      They also took the Sapphire RX 480 Nitro+ OC to 1415 MHz. Also a 15% overclock.
      Nothing to scoff at.
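As a quick sanity check on those percentages (a sketch only: the stock baselines here are the reference boost clocks from the spec table above, 1206 MHz for the RX 470 and 1266 MHz for the RX 480; Computerbase may have measured against the cards' factory clocks instead, so the exact figures are approximate):

```python
def overclock_pct(stock_mhz: float, oc_mhz: float) -> float:
    """Percentage gain of an overclocked frequency over a stock baseline."""
    return (oc_mhz / stock_mhz - 1) * 100

# Assumed baselines: reference boost clocks from the review's spec table.
print(f"RX 470 @ 1395 MHz: +{overclock_pct(1206, 1395):.1f}%")  # prints +15.7%
print(f"RX 480 @ 1415 MHz: +{overclock_pct(1266, 1415):.1f}%")  # prints +11.8%
```

The RX 470 figure lands right around the quoted 15%; whether the RX 480 number does depends on which baseline clock Computerbase measured against.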

        • maxxcool
        • 3 years ago

        Given the reports from other reviews, I have doubts that those are normal results.

          • Rza79
          • 3 years ago

          What do you mean? Are they super lucky, or do you think that they are lying?

            • maxxcool
            • 3 years ago

            Doesn’t matter to me.

          • RAGEPRO
          • 3 years ago

          Ahh, I dunno. I have a buddy who I personally saw take his RX 480 to a little over 1400 (1415 or so, I don’t recall exactly). I don’t think it’s all that uncommon for these cards to be able to do that.

            • maxxcool
            • 3 years ago

            Legit response. And I expect that there are a number of good examples that will float to the top. But overall the review samples were less than stellar. It just makes me hedge my bets despite wanting ASYNC…

      • Bensam123
      • 3 years ago

      Yeah, I bet you were going to buy one. Then you were disappointed the 480s weren’t frying slots so it lost a lot of its luster. Guess we’ll have to make do with this ‘almost’ dedication to AMD and wanting to buy their products, but coming away thoroughly disappointed…

      Guess you’ll have to go buy an Nvidia product which you were never going to do… ever…

        • maxxcool
        • 3 years ago

        … o.0 … wut?

          • raddude9
          • 3 years ago

          Translation: He is accusing you of being an Nvidia fanboi who had no interest in buying an AMD card in the first place. Because, despite the fact that this card is clearly the “fps per $” winner, you’re bringing up nebulous “overclocking headroom” arguments to justify not buying the best value GPU on the market.

            • maxxcool
            • 3 years ago

            You’d think he might have said that instead of typing in circles.. 😛

            • Bensam123
            • 3 years ago

            Sometimes you need to type circles for other people’s circles.

            • maxxcool
            • 3 years ago

            In the sand ?

            • maxxcool
            • 3 years ago

            lol someone doesn’t like Belinda Carlisle

    • chuckula
    • 3 years ago

    Good review as always, guys. As an aside, I’ve seen several other reviews out and about and they all have one thing in common: not one of them is of a “reference” RX 470. Is there even going to be a reference version of the RX 470 that’s sold on the market?

      • Concupiscence
      • 3 years ago

      After the blowback from the RX 480 reference cards (and customer mental association with power issues out of the gate), I’ll be a little surprised if they do.

      • nanoflower
      • 3 years ago

      I thought AMD said there would be no reference design for the 470 and 460.

        • EndlessWaves
        • 3 years ago

        No reference design is the usual situation outside the high-end market; I suspect the RX 480 only got one because it was the first of a new generation.

    • sweatshopking
    • 3 years ago

    I’d love a 480. They’re just unicorns in Canada right now.

      • tay
      • 3 years ago

      They’re unicorns everywhere. GloFo production problems are my guess.

        • sweatshopking
        • 3 years ago

        1060 is no better though.

          • JustAnEngineer
          • 3 years ago

          SSK reveals: All current generation GPUs are made from unicorns!

            • tipoo
            • 3 years ago

            *That’s* how they kept getting gains out of silicon!

            • wiak
            • 3 years ago

            unicorn blood 😀
