AMD’s Radeon RX 580 and Radeon RX 570 graphics cards reviewed

A little under a year since graphics processors moved to next-generation fabrication processes, the market has settled into a comfortable inertia at the important entry-level and mid-range price points. AMD’s Polaris-powered Radeon RX 480 has delivered impressive performance at friendly prices for some time, and the Radeon RX 470 still handily outperforms the GeForce GTX 1050 Ti for a few more bucks—and a lot more power from the wall—than the green team’s priciest budget card.

If there’s one thing that system integrators, retailers, and PR departments hate, though, it’s inertia. Today, AMD is shaking things up a bit differently than the green team has been doing of late. Instead of pairing higher-speed memory with existing GPUs and calling it good, as Nvidia has done with some of its Pascal cards in the wake of the GTX 1080 Ti launch, AMD has been working with GlobalFoundries to improve the 14-nm FinFET process that underpins most of its chips at the moment. AMD calls the result a “third-generation” 14-nm FinFET process, and it’s fabricating two respun Polaris chips on this improved 14-nm node along with a new chip for entry-level discrete graphics cards.

Polaris 20 is the “big” Polaris of this generation, and it’ll power the Radeon RX 580 and RX 570. The smaller Polaris 21 will soldier on in the Radeon RX 560. Finally, a new, even smaller, and as-yet-unnamed Polaris chip will end up in notebooks and the eminently entry-level Radeon RX 550. To distinguish these massaged Polaris chips from their predecessors, AMD is calling the lineup of graphics cards that bears them the Radeon RX 500 series.

            ROP pixels/  Texels filtered/  Shader      Memory interface  Estimated transistor  Die size  Fab
            clock        clock (int/fp16)  processors  width (bits)      count (millions)      (mm²)     process
Polaris 21  16           64/32             1024        128               ???                   ???       14 nm
Tonga       32           128/64            2048        256               5000                  355       28 nm
Polaris 20  32           144/72            2304        256               5600                  232       14 nm
Hawaii      64           176/88            2816        512               6200                  438       28 nm
GM206       32           64/64             1024        128               2940                  227       28 nm
GM204       64           128/128           2048        256               5200                  398       28 nm
GP104       64           160/160           2560        256               7200                  314       16 nm

Despite the new code names, Polaris 20 is basically the same chip as Polaris 10 from a microarchitecture point of view. From ROP count to die size, its resource complement is practically identical to its predecessor’s. We don’t know how Polaris 21 will look quite yet, but if it follows the template of Polaris 11, the chip will have 16 GCN compute units and 1,024 shader processors under its hood. Whether AMD will choose to sell a fully-enabled Polaris 21 part on a Radeon add-in board for gamers remains to be seen.

The as-yet-unnamed chip in the Radeon RX 550 is an interesting new addition to the Radeon lineup at the low end. AMD says it wants to get its 14-nm GPUs into more systems, and the RX 550 will offer e-sports and casual gamers who would typically rely on integrated graphics a cheap path to discrete-card bliss. The GPU in the RX 550 has eight GCN CUs enabled out of an unknown total, so it boasts 512 shader processors hooked up to 16 ROPs and a 128-bit memory bus. AMD board partners will have the freedom to pair 2GB or 4GB of memory with this chip (and most others in the RX 500 series). Most importantly, the RX 550 shouldn’t require outboard power from the budget systems it’s likely to find a home in, and it’ll carry a lightweight expected price of just $80 or so.

In this incredibly competitive segment of the graphics market, it’s perhaps not all that surprising that AMD chose to tap GloFo’s process improvements by increasing boost clock speeds—and board power—in order to give its higher-end RX 500 cards a leg up against Nvidia’s comparable GeForces. Both the RX 580 and RX 570 get solid clock speed bumps compared to their predecessors, but board power is also up 30W on each card. It seems that AMD didn’t mind adding a few more watts to these cards’ already-unflattering power consumption figures to outgun the GTX 1060 and friends. Run-of-the-mill desktop builders are unlikely to mind the added watts (and heat) too much, but the change won’t make Polaris 20 parts any friendlier to power bills, small-form-factor PCs, or folks whose climes aren’t amenable to much waste heat.

Despite the improved performance on paper, AMD won’t be increasing the suggested prices for Radeon RX 500-series cards. 8GB RX 580s will start at $229, while 4GB versions of that card will start at $199. The Radeon RX 570 4GB will maintain the RX 470’s $169 suggested price, while 2GB RX 560s will start at $99. AMD says its board partners will be able to tweak memory configurations on all RX 500-series cards, so expect to see 2GB and 4GB RX 550s and RX 560s alongside 4GB and 8GB RX 570s and RX 580s.

 

The cards at hand

To show what its fresh Polaris GPUs can do, AMD sent over a trio of Radeon RX 500-series cards for our perusal.

MSI’s RX 570 Gaming X 4G reps the slightly cut-down Polaris 20 GPU in tandem with 4GB of 7 GT/s GDDR5 memory. To keep the GPU cool, MSI taps one of its excellent Twin Frozr VI coolers. This twin-fan design has earned a TR Editor’s Choice award in one of its other iterations. Here, MSI threads twin heat pipes through a slim fin stack that slips nicely into the confines of a dual-slot design. Despite its relatively modest dimensions, we observed rock-solid 1281 MHz boost clocks from this card in operation.

We wish MSI had reinforced this card with a backplate for the looks, but you can’t have it all, we guess. The RX 570 Gaming X 4G will go for $189.99 at e-tail.

In fully-enabled Polaris 20 territory, Sapphire’s Nitro+ Radeon RX 580 carries over the company’s classy design language from its RX 480 cards. The Nitro+’s metal backplate features a geometric design that’s eye-catching without being garish, and the Sapphire logo on the side of the card lights up in a pleasing shade of light blue.

Sapphire carries heat away from the Polaris 20 chip using four beefy heatpipes and a tall, dense fin stack. We don’t have official base and boost clocks for the Nitro+ card just yet, but we observed a solid 1411 MHz boost speed from our sample in our tests. Sapphire pairs this card with 8 GT/s GDDR5, and builders will pay $249.99 for the package.

The PowerColor RX 580 Red Devil Golden Sample will likely get the adrenaline flowing for the pubescent, or at least the pubescent at heart. If you can get past the Hot Topic-approved iconography scattered over this thing, we observed 1430 MHz boost clocks out of the box—an impressive figure for a Polaris GPU. The Red Devil sports 8 GT/s GDDR5, as well.

The monster two-and-a-half-slot cooler on board the Red Devil could let overclockers extract the most from the Polaris 20 GPU, and PowerColor reinforces the card’s enormous fin stack with a suitably devilish backplate. All that metal doesn’t come cheap, though. The RX 580 Red Devil Golden Sample will go for $270 online.

Because of its sane dimensions and reasonable $250 price tag, we elected to test the RX 580 using the Sapphire Nitro+ card. The Red Devil Golden Sample’s slightly higher stock clocks are tempered by its $270 price tag and outsize cooler. We figure if you can stomach paying $40 over the suggested price for one of these cards, you should probably start looking at GeForce GTX 1070s.

Our testing methods

Since the Radeon RX 500 cards are derived from existing hardware that we’re quite familiar with by now, we didn’t feel it was necessary to revisit every game in our test suite for this article. Instead, we ran some quick tests using three games: GTA V, Crysis 3, and Hitman (using that game’s DX12 renderer) to get a sense of how the new Radeons improved over past products.

As always, we did our best to deliver clean benchmarking runs. We ran each of our test cycles three times on each graphics card tested, and our final numbers incorporate the median of those results. Aside from each vendor’s graphics drivers, our test system remained in the same configuration throughout the entire test.

Processor          AMD Ryzen 7 1800X
Motherboard        Gigabyte Aorus AX370-Gaming 5
Chipset            AMD X370
Memory size        16GB (2 DIMMs)
Memory type        G.Skill DDR4-3866 (2x8GB, rated)
Memory frequency   DDR4-3200
Memory timings     15-15-15-35
Storage            Intel 750 Series 400GB (system drive)
                   2x Corsair Neutron XT 480GB SSDs
                   1x Kingston HyperX 480GB SSD
Power supply       Corsair RM850x
OS                 Windows 10 Pro with Creators Update

Our thanks to Gigabyte, G.Skill, Kingston, Corsair, and Intel for their contributions to our test system, and to EVGA, MSI, AMD, and XFX for contributing the graphics cards we’re reviewing today.

                                     Driver revision                            Base clock  Boost clock  Memory clock  Memory size
                                                                                (MHz)       (MHz)        (MHz)         (MB)
XFX Radeon RX 470 RS 4GB             Radeon Software RX 500 series press beta   --          1256         1750          4096
XFX Radeon RX 480 RS 8GB             Radeon Software RX 500 series press beta   --          1288         2000          8192
MSI Radeon RX 570 Gaming X 4G        Radeon Software RX 500 series press beta   --          1281*        1750          4096
Sapphire Radeon RX 580 Nitro+        Radeon Software RX 500 series press beta   --          1411*        2000          8192
EVGA GeForce GTX 1050 Ti SC Gaming   GeForce 381.65                             1632        1835         2027          4096
EVGA GeForce GTX 1060 3GB SC Gaming  GeForce 381.65                             1607        1835         2000          3072
EVGA GeForce GTX 1060 6GB SC Gaming  GeForce 381.65                             1607        1835         2000          6144

* Boost clock observed in our testing.

For our “Inside the Second” benchmarking techniques, we now use a software utility called PresentMon to collect frame-time data from DirectX 11, DirectX 12, OpenGL, and Vulkan games alike. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups.
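As a rough sketch of how we boil those captures down, the snippet below reads PresentMon’s CSV output (MsBetweenPresents is PresentMon’s per-frame time in milliseconds) and produces average-FPS and 99th-percentile figures; the file names and the exact structure are illustrative rather than our actual tooling.

```python
import csv
import statistics

def frame_times_ms(csv_path):
    """Pull per-frame render times (ms) out of a PresentMon capture."""
    with open(csv_path, newline="") as f:
        return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

def summarize(times_ms):
    """Reduce one run to an average-FPS figure and a 99th-percentile frame time."""
    ordered = sorted(times_ms)
    avg_fps = 1000.0 / statistics.mean(times_ms)
    p99_ms = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
    return avg_fps, p99_ms

# Three runs per card, reporting the median of each metric, as described above.
runs = [summarize(frame_times_ms(name)) for name in ("run1.csv", "run2.csv", "run3.csv")]
print(f"average FPS: {statistics.median(r[0] for r in runs):.1f}")
print(f"99th-percentile frame time: {statistics.median(r[1] for r in runs):.1f} ms")
```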

You’ll note that our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past for some reason, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, and we think they provide the most relatable performance numbers for our reader base. When we mention a “GTX 1060” or “Radeon RX 470” in our review, for example, just be sure to remember that we’re referring to the custom cards in the table above.

With that exposition out of the way, let’s talk results.

 

Grand Theft Auto V
Grand Theft Auto V can still put the hurt on today’s graphics cards, so we made our usual test run with most of the game’s settings turned all the way up at 2560×1440.


In GTA V, minor clock-speed bumps lead to minor improvements in performance. Each RX 500-series card is a bit faster and smoother than its RX 400 predecessor, but not by much, and not by enough to catch the GeForce GTX 1060 duo. The RX 580 gives it a good try, though.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our cards spend beyond certain frame-time thresholds, each with an important implication for gaming smoothness. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame. And 8.3 ms corresponds to 120 FPS, an even more demanding standard.
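Here’s a minimal sketch of how such a tally can be computed from raw frame times, assuming the tally counts only the portion of each frame time past the cutoff (so a 20-ms frame adds 3.3 ms to the 16.7-ms bucket); the frame times below are made up for illustration:

```python
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Add up the time spent past the threshold across a whole run.

    Only the excess counts: a 20-ms frame contributes 3.3 ms to the
    16.7-ms bucket, not the full 20 ms.
    """
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical one-minute run: mostly 14-ms frames with a few hitches mixed in.
run = [14.0] * 4200 + [25.0] * 40 + [55.0] * 3
for threshold_ms in (50.0, 33.3, 16.7, 8.3):
    print(f"time spent beyond {threshold_ms} ms: {time_spent_beyond(run, threshold_ms):.0f} ms")
```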

Our time-spent-beyond-X graphs continue the theme of minor performance improvements. The RX 570 lops nearly a second off its predecessor’s time spent chewing on frames that drop the instantaneous frame rate below 60 FPS, while the RX 580 halves its forebear’s already-impressive results here. Moving on.

 

Crysis 3

Like GTA V, Crysis 3 can still challenge today’s graphics cards at its more demanding settings. We ran through our usual test area at 2560×1440 on the game’s High preset with the “SMAA Low 1x” anti-aliasing setting.


Crysis 3 proves a bit more favorable for these new Radeons compared to GTA V. The RX 580 pulls ahead of the GTX 1060 pair in both our average-FPS metric of performance potential and in the measure of delivered smoothness captured by the 99th-percentile frame time. The RX 570 performs a tad better than the RX 470, but the clock-speed bump over the Polaris 10 card doesn’t vault the 570 to new heights of performance for its weight class.


In our time-spent-beyond-X measures of “badness,” the RX 500-series duo offers tangible improvements in our one-minute run of Crysis 3. The RX 570 nearly halves the amount of time the RX 470 spends on tough frames that would drop the instantaneous frame rate below 60 FPS, while the RX 580 spends nearly one-seventh of the time the RX 480 does on similarly-challenging frames. In both cases, the RX 500-series cards deliver smoother gameplay than the RX 400-series cards do.

 

Hitman (DirectX 12)
Hitman was one of the first heralds of the DirectX 12 API, and we’ve selected it as the representative for next-generation graphics performance in this piece. For the record, Hitman will lock out certain graphics settings if a card doesn’t have enough memory. We threw caution to the wind and disabled Hitman‘s safeguards, since they prevented us from testing the game with our chosen settings on the GTX 1060 3GB. If that card has weird performance issues with these settings, well, that’s why.


Hitman is a marquee title for AMD graphics cards’ performance under DX12, so it’s no shock that the RX 580 and friends match or beat the GeForce competition by small to decent margins. Even so, the GTX 1060 3GB suffers especially hard from our decision to override Hitman‘s memory safeguards. The RX 570 and its 4GB of RAM aren’t bothered in the least by our test settings. Meanwhile, the RX 580 cruises to an overall victory in both our average-FPS and 99th-percentile frame-time metrics.


Our time-spent-beyond-X graphs put a nice exclamation point on the performance of both the RX 570 and the RX 580. The RX 570 spends nearly three-and-a-half fewer seconds of our one-minute test run on tough frames that take more than 16.7 ms to render compared to the RX 470. The RX 580 shaves almost two seconds off the already-impressive performance of the GTX 1060 6GB and the RX 480 at this threshold, as well.

 

Power consumption

To get a sense of how much each of the cards we tested contributed to system power draw, we stood still in a visually-complex area of Hitman‘s Paris mission and checked readings on our trusty Watts Up power meter. Our monitor and any other test hardware were connected to the wall using a separate power strip.

Well, that’s something. Nvidia’s Pascal cards continue to be the efficiency champs, and it appears the extra board power AMD budgeted for the RX 570 and RX 580 goes toward reaching those higher clocks. The RX 570 doesn’t need much more power than the RX 470 to hit its boosted clock speeds, but Sapphire’s RX 580 needs even more than its official 30W board-power bump to hit its impressive boost clocks.

Noise levels

To get a sense of how noisy each graphics card on our bench today gets under load, we used the iPhone app SoundMeter by Faber Acoustical. Each measurement was taken 18″ from the fans of the graphics card while it was running our Hitman load.

Our XFX Radeon RX 470 and RX 480 aside, most of the cards here are quite tolerable given our test environment’s 29.8 dBA noise floor. Despite its hefty power draw, Sapphire’s Radeon RX 580 remains quite pleasant and neutral-sounding under load, save a tiny bit of coil whine. Even if your power bill is suffering with this card in your system, your ears can rest easy.

MSI’s RX 570 is also quite nice-sounding despite its relatively high reading on our iPhone dB meter. The company’s Twin Frozr VI cooler might be a bit too effective in some circumstances, though. I noticed the fans on this card would sometimes stop and restart during our tests, as if the cutoff point for the card’s semi-passive mode was set too high. This on-off-on cycle could prove more annoying to some builders than a constant gentle noise, and I hope MSI can fix it in a future driver or firmware update.

 

Conclusions

Before we ruminate on the data we’ve collected, let’s summarize that information using our famous value scatter plots. To make our higher-is-better graphs work, we’ve converted the geometric mean of the 99th-percentile frame times for our tests into FPS. On both the 99th-percentile-FPS-per-dollar chart and the average-FPS-per-dollar chart, the best values will congregate toward the upper left of the chart, where the price is lowest and performance is highest.
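For the curious, here’s a minimal sketch of that conversion; the per-game 99th-percentile frame times below are made-up placeholders, not our results:

```python
from math import prod

def geometric_mean(values):
    return prod(values) ** (1.0 / len(values))

# Hypothetical 99th-percentile frame times (ms) from the three test games.
p99_frame_times_ms = {"GTA V": 18.2, "Crysis 3": 21.5, "Hitman": 16.9}

gm_ms = geometric_mean(list(p99_frame_times_ms.values()))
print(f"99th-percentile FPS: {1000.0 / gm_ms:.1f}")  # convert frame time back to a rate
```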


Going by our all-important 99th-percentile-FPS-per-dollar metric, both the RX 570 and RX 580 offer nice improvements in delivered smoothness over their first-generation Polaris brethren. The RX 570 pulls even with the GTX 1060 3GB in our 99th-percentile-FPS-per-dollar measure, while the RX 580 actually surpasses the GTX 1060 6GB for smooth gaming despite that card’s higher performance potential (as measured by our average-FPS-per-dollar chart).

We’d have expected nothing less, too, given the fact that the Sapphire RX 580 8GB card we used in our tests made our test system draw 93W more power under load on the way to achieving victory over the EVGA GeForce GTX 1060 6GB. No, that figure is not a typo. For its highest-end RX 500-series card, AMD seems to have abandoned any pretense of competing with Pascal on a performance-per-watt basis. Getting the last few untapped percentage points of performance out of a GPU can require major increases in voltage, and AMD seems to be OK with that extra power draw in exchange for the smoothness crown at the $250 price point.
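To put some rough numbers on why that last bit of clock speed is so costly, dynamic power scales approximately with clock speed times the square of voltage. The sketch below uses purely illustrative figures, not measurements from our cards:

```python
def scaled_power(base_w, base_mhz, base_v, new_mhz, new_v):
    """Scale a baseline board power by the rule-of-thumb P ~ f * V^2."""
    return base_w * (new_mhz * new_v**2) / (base_mhz * base_v**2)

# Hypothetical GPU: a roughly 6% clock bump that needs 8% more voltage to stay stable.
print(f"{scaled_power(150.0, 1266.0, 1.065, 1340.0, 1.15):.0f} W")  # ~185 W, a 23% jump
```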

In turn, builders will need to spec cases with more cooling power and higher-end power supplies to take advantage of the RX 580’s high-quality frame delivery. If waste heat and power consumption are prime concerns, the GTX 1060 6GB is unquestionably the better pick.

If the RX 580 demonstrates what happens past the hairy shoulder of the voltage-and-frequency-scaling curve, the RX 570 seems better able to enjoy the fruits of GlobalFoundries’ improved 14-nm process tech. Despite drawing just a few more watts than our XFX RX 470 card under load, the MSI RX 570 Gaming X 4G we tested achieves much higher boost clocks than its first-generation Polaris forebear, and it has no trouble maintaining those speeds under load, either. Not a bad deal at all for the more wallet-friendly Polaris 20 card. Given the difficulties the GTX 1060 3GB can cause gamers thanks to its meager complement of memory, we’d pick the RX 570 (or any RX 580 4GB card) over the Nvidia competition here any day.

A fuller set of tests might swing the pendulum more decisively in favor of AMD or Nvidia at this price point, but we think we’ve gathered enough data to say that the RX 500 series successfully closes the small gaps that existed between RX 400-series cards and the Nvidia competition. Depending on how prices work out at e-tail for this highly competitive segment, we think the conclusion is simple: buy an RX 570 or RX 580 if you were planning to buy an RX 470 or RX 480. If heat or power consumption are a concern for you, get the GTX 1060 6GB instead. You really can’t go wrong either way.

Comments closed
    • muyuubyou
    • 3 years ago

    Maybe as an OpenCL number cruncher it makes sense?

    • jokinin
    • 3 years ago

    If you happen to live in Spain, with one of the most expensive electricity prices in Europe, the choice is clear : the GTX1060, because it has almost the same performance and its power consumption is much lower.

    • Meadows
    • 3 years ago

    These cards make absolutely no sense. They’re not even 10% faster and they consume 20% more power.

    It’s like the FX 9370 all over again, except for graphics.

      • Krogoth
      • 3 years ago

      They make sense for OEMs though. Nvidia does the same thing too back with their second batch of Kepler chips 660Ti/670 => 760 and 680 => 770.

    • USAFTW
    • 3 years ago

    Looks like they’re pushing the silicon to the limit or too far, depending on what you’re used to. Anand found that the new cards run a full 100 mV higher than last year’s RX 480 and RX 470, and they’re not quite beating the GTX 1060. As far as I’ve been able to gather, all Nvidia Pascal GPUs max out (at stock) at 1.062 volts.
    I really want AMD to be able to compete with Nvidia but realistically, they’re getting desperate. The utter silence emanating from AMD about Vega is also quite unsettling. If Vega is going to compete with GTX 1080, they’re too late already.

    • iQuasarLV
    • 3 years ago

    Conceding that Hitman is probably the most recent graphically intense engine in this review, why are 7-8 year old engines still considered the pinnacle of graphical fidelity? Surely something more modern has come along since then?

    The programming prowess and APIs have drastically improved since then. 2009 was a long time ago.

      • I.S.T.
      • 3 years ago

      Well, Cryengine is still being used… Just not often. Crysis 3 is one of the more stressful Cryengine games, so…

      • Jeff Kampman
      • 3 years ago

      Grand Theft Auto V is probably the single most popular AAA game on Steam, and it can still look amazing with the settings turned up, so…

        • iQuasarLV
        • 3 years ago

        Defaulting to the popularity reasoning does not really address the heart of the question raised. I could think of quite a few other games that could make the list based purely on popularity, and have just as aged programming.

        Even in the TR Ryzen article, GTA V was mentioned to prefer more aged programming methods (e.g., two threads at high speed). I just do not see where GTA V, or any game built on an initial DX11-era engine, can capitalize on software/hardware advancements out today. Perhaps it’s time to retire some games, re-evaluate where the market is, where it’s heading, and what is currently capitalizing on it, and then rebuild the go-to suite.

        Can’t run on ancient tech forever. I thought it was ridiculous when most sites insistently fell back to Crysis 1, seven years after release, to still review GPU tech.

      • Meadows
      • 3 years ago

      Thing is, Crysis 3 is far from being 7 years old, and regardless, it still looks better than almost any other FPS today. This fact is both amazing and awful.

        • synthtel2
        • 3 years ago

        I think the biggest issue there is that it uses a light pre-pass pipeline, as opposed to fully deferred like most today. (This has also been known as deferred lighting as opposed to deferred shading.) Light pre-pass is great if you’re optimizing for the XB360 (a full Gbuf doesn’t usually fit in eDRAM), but doesn’t really have any advantages on modern platforms. It means you have to do the bulk of your draw calls and rasterization twice, and it changes BW use patterns a fair bit. Ideal-case BW use is a bit higher, but it may be a bit more tolerant of overdraw.

        There are all kinds of things about it that can change NV/AMD balance in ways that aren’t usually relevant in the modern era.

          • Meadows
          • 3 years ago

          If that’s the case, then it’s probably not very indicative of real-world performance (unless you play a select few Asian MMOs), but I think it’s still a good tool for comparing GPUs against one another.

            • synthtel2
            • 3 years ago

            All else equal, I’d still much rather compare using a pipeline representative of post-2013 gaming. Games that use light pre-pass and can still pose a real challenge for typical modern cards are increasingly rare, and while Crysis 3 may still be a fine benchmark for many purposes, it is starting to feel a bit synthetic.

    • Bensam123
    • 3 years ago

    So we’ve gone from ‘power consumption not really mattering because when you put everything into context you’re actually spending $.30 more per year’ to ‘oh, 30W extra is going to make me sweat more and make the AC run moar!’

    If 30W is the difference between the Sahara and the Arctic to you, something is wrong. Even 100W is going to be hard to notice.

    Hundreds of watts is the threshold you’re looking at here. When you look at your electric heater, notice it says ‘750W’ on it or ‘1500W’. Not ‘get hot and steamy for 30W’.

    Also, not everyone games in absolute silence. Most people have headphones on. You also won’t hear the fans.

    Now back to your regularly scheduled price/performance metrics.

      • Bumper
      • 3 years ago

      For me, it’s not the watts alone that matter. The whole product factors into how I think when I make a purchasing decision. Does it consume more watts? Is it faster? Is it cheaper? What features are differentiating?
      The RX580 uses more watts, is slower, costs the same, and has 2GB more RAM than the GTX 1060. I personally think the custom RX580 would need to be around $30 cheaper than the 1060 to be interesting. That makes up for the extra watts and slower speed.

      You are right about the a/c thing though. If you can cope with 100 watts being pushed out at you, then 100 more won’t make a difference. It’s like adding an extra 60-watt light bulb. If that were making such a big difference in how cool my room feels, I’d just leave my computer off no matter what graphics card it has and think about moving or getting a better a/c or a big fan. I live in a hot climate, and when it’s that hot outside it’s best just to embrace it and go swimming or at least get a little sweat going. Sitting inside is the worst because I would run my a/c 24/7 to keep it below 78.

      • K-L-Waster
      • 3 years ago

      The thing I find most interesting about the response to this review is that the people who are most critical of the RX580 are people who already own RX480s. Usually it would be the Team Green posters, but not this time. This time it seems to be AMD users, for the most part, asking “WTF AMD?!”

    • deruberhanyok
    • 3 years ago

    Hmm. I don’t see how “improved 14nm” translates to that kind of power usage and clock speeds. Weren’t there overclocked 480s that ran at those clock speeds? So this new spin is… what’s actually different?

    I was expecting, if nothing else, the same performance but decreased power use (as Chrispy_ mentioned in the comments, some RX480s are running at lower voltage, which translates to ~120W or even lower) across the board. At least it would have been a noticeable change, and an obvious sign of an improved process showing us that Polaris is a little more capable / efficient than we thought.

    Instead we have slightly higher performance for disproportionately higher power use. AMD can call it a process improvement if they want but the rest of us call it an overclock.

    I mean, if they want to rebadge, they can rebadge, but don’t try to pitch it as an improvement.

    • BillyBuerger
    • 3 years ago

    [quote<] the Sapphire RX 580 8GB card we used in our tests made our test system draw 103W more power under load on the way to achieving victory over the EVGA GeForce GTX 1060 6GB. No, that figure is not a typo.[/quote<]
    Umm, actually that is a typo. The 580 draws 103W more than the 1060 3GB and 93W more than the 1060 6GB. Still crazy power. But I thought it was funny that there’s a typo in your “this is not a typo”. 🙂

    • ronch
    • 3 years ago

    I thought AMD was going to release Vega-based GPUs next? I was a bit surprised they’re going for a rebrand. Is Vega on track this year?

      • TwoEars
      • 3 years ago

      People are betting on a reveal at, or close to, E3. But will cards actually be available then? Who knows.

        • Krogoth
        • 3 years ago

        Probably sometime in June 2017 to coincide with summer sales.

      • MOSFET
      • 3 years ago

      Generally best to keep consumers mystified.

        • willmore
        • 3 years ago

        You keep me uncertain as a customer and I will make you uncertain if I will be your customer.

    • ozzuneoj
    • 3 years ago

    This review makes me wish Nvidia would bump the 1050 and 1050 Ti down a notch and release something between the Ti and the 1060 3GB. The performance gap is huge and it makes buying a budget/mid-range card a bit frustrating.

    I have a hard time seeing a 1050 Ti as worth $140+ when the 1060 and RX 470+ are so much faster and can be had for just a bit more. Price/performance seems to be all over the place right now. You can buy a GT 740 (GTX 650), GTX 750 Ti, GTX 950, RX 460 or 1050 for the SAME PRICE at many retailers, and yet if you want something with a little more VRAM, you’re getting a (barely faster) 1050 Ti for only $30-$50 less than a 470, 480 or 1060 3GB (on sale).

    A 4GB 1050 for $99-$109 MSRP would cannibalize too many sales of the Ti, so I see why they don’t do it, I just wish there was something more appealing from nvidia in the $100-$160 range. Bring on the 1050 Ti Boost!

    • EzioAs
    • 3 years ago

    There are some nice GTX 1060s (6GB) at Newegg below $250 [url<]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=gtx+1060&ignorear=0&N=100007709%20600358543&isNodeId=1&LeftPriceRange=0%20250[/url<] Most are clocked slightly lower than the one used in the review (but some still higher than reference) with the exception of the EVGA SSC card. They also come with an offer of either Ghost Recon: Wildlands or For Honor if you're interested in any of those two. Below $250 I don't think they look that bad compared to the RX 580. Yes, even at $250, the RX 580 leads in price/performance (for the most part) but the GTX 1060s still hold surprisingly well.

    • Dysthymia
    • 3 years ago

    Is it naive to think that the 14-nm process improvements shown here would not be set back by any differences in Vega’s architecture? Could it be further improved by then? Or will Vega’s clock speed be about as disappointing as Polaris was initially?

    • JJAP
    • 3 years ago

    I’m confused about the FPS per dollar chart. Why is the 570/470 at $190/$200 (!?) (where the 580 4G would/should be) and not “maintain the RX 470’s $169 suggested price”. And where can I find a 1060 3G at $180? Has the MSRP dropped from $200?

      • EzioAs
      • 3 years ago

      [quote<]And where can I find a 1060 3G at $180?[/quote<] [url<]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=gtx+1060&ignorear=0&N=100007709%20600286741&isNodeId=1[/url<]

      • DancinJack
      • 3 years ago

      Did you even read the review?

      The card they tested was $189.99.

        • DPete27
        • 3 years ago

        RX470s have been on sale for $130 after MIR pretty regularly this year. I’d have a hard time paying $190 for a 4GB RX570 when I can buy an 8GB RX480 for the same price.

    • davidbowser
    • 3 years ago

    From a cooling and noise front, this just drives home what I had been investigating: AMD is not even going to try to compete on mid-range power efficiency. MOAR WATTS!!!

    The RX460 seems to be the best passively cooled bang for buck. XFX has one that Amazon and [url=https://www.newegg.com/Product/Product.aspx?Item=N82E16814150786<]the egg[/url<] sell for US$140 (-$20 MIR if you like to gamble).

    • ptsant
    • 3 years ago

    In a very weird sense, I have a more positive view of the RX480 than of the 580. How is that for bad marketing?

    I have an RX480 and it’s a great card, at least for the price. I undervolted it quite a lot and am very happy. I really don’t get why they had to launch a “new card”. They could just have continued selling the 480.

    • JosiahBradley
    • 3 years ago

    I’m beyond Krogoth’d. Where is Vega!?

    • Kelbor
    • 3 years ago

    I would just like to point out that this is just Sapphire’s version of the RX580. It will be interesting to see how other card manufacturers spin the RX500 series, and if the same obscene power usage is observed. I’m just saying maybe power usage is blown out of proportion by third-party manufacturers in order to create the fastest RX580. Time will tell.

      • Demetri
      • 3 years ago

      At stock clocks, it looks like the 580 averages around 25 watts more than the 480 did. A heavily OC’d version of the 580, like the PowerColor GS, is another 25 watts on top of that.

      [url<]https://www.computerbase.de/2017-04/radeon-rx-580-570-test/4/#diagramm-leistungsaufnahme-der-grafikkarte-spiele-durchschnitt[/url<]

    • the
    • 3 years ago

    Typo in the Crysis 3 benchmarks. When selecting time spent beyond 33.3 ms, the RX 570 has five results.

    • MrDweezil
    • 3 years ago

    AMD cooked up a 40 CU configuration for Microsoft’s Scorpio, and a 44 CU config for the scorpio devkit. Why couldn’t they have used that here?

      • Voldenuit
      • 3 years ago

      Scorpio also has a 384-bit memory bus feeding both CPU and GPU, and slower clocks than the OEM 580s (1172 vs the 1400+ we’re seeing on aftermarket cards).

      A 384-bit memory bus would have made the PCB and memory architecture more expensive on the 580s.

        • DoomGuy64
        • 3 years ago

        Still worth it, and power use wouldn’t be so nuts. There would even be headroom for clocks higher than the xbox. Just price it between the 1070 and 1060.

          • Voldenuit
          • 3 years ago

          384 bits would mean 12 GB of VRAM, and I don’t think they would have the wherewithal to go there with a midrange part. And if they tried a 6/2 split like Nvidia did with the 970, well, people are still getting checks from [i<]that[/i<] class action suit. Not to mention the 580 is available today, while Scorpio is 6-9 months out.

            • DoomGuy64
            • 3 years ago

            Eh. I thought AMD already sold a 384 bit chip neutered to 256. There’s nothing stopping them from doing it again. Also, since it’s a midrange part, they could simply copy Nvidia and go 6/3 vram. 6gb is all you need for midrange. 3gb is better ignored though, so 256/4gb would be preferred.

            People who think 384 bit shouldn’t be used, don’t know AMD. The original GCN was 384 bit, and Hawaii was 512 bit.

            The reality is, AMD’s BEST bet is to release that 384-bit Scorpio GPU to the public, and it would trash the 1060. The extra CUs mean it could also be clocked lower to save power.

            Either way, it’s incredibly stupid to have a chip that could make a difference and just sit on it. It’s self sabotage.

            • synthtel2
            • 3 years ago

            They can’t just sell Scorpio on a stick and make money. They can’t do much better than NV at perf/W with that, but it’ll cost a whole lot more for them to build than a 1060 (maybe more like GP104).

            • DoomGuy64
            • 3 years ago

            Which is why a card faster than a 1060 will cost more than a 1060. DUH.

            You seem to be forgetting that AMD ever sold the 7970 which was 384 bit, and the 290 which was 512 bit. I don’t want to see a single follow up saying a damn thing about bus width. AMD’s done it for years, and cards like the 7950 were not that expensive.

            If you really wanna go full retard on the bus width, which anyone who continues to dwell on it is, I can still negate that argument via TONGA. AMD could just limit the bus width to 256. There is no argument here, no matter what approach you take.

            A 256 bit Scorpio on a stick would cost the same as the 580 to build, and it would very much destroy the 1060 in perf/W because the extra resources would inevitably do just that. AMD’s chips are not so much power hungry, as massively overclocked. Scorpio wouldn’t need to be overclocked to beat the 1060, and thus perf/W would be good. You could clock it like a reference 480, and use the improved process to lower power requirements.

            • synthtel2
            • 3 years ago

            Sounds like someone started the day on the wrong foot. Being rude about it doesn’t do you any favors, bro. Also, I didn’t even mention bus width, but now I kind of want to get extra rambly about it just to annoy you. Good trolling diversion, I guess? (I successfully resisted rambling about it any more than was inherent to the topic.)

            Scorpio is purported to be a 360mm[super<]2[/super<] die, and that sounds about right considering 44 CUs plus all the non-GPU stuff. Ignoring bandwidth constraints for the moment (which do matter), that's 55% (or more, because yields also matter) more die fabrication cost for something less than 22% more performance at iso-clocks (less than because there's no boost to rasterization abilities).

            If it's underclocked to target the same performance as a 580 with better perf/W (something more than 1100 MHz, b/c rasterization hasn't been boosted), it'll be able to get by just fine with 256-bit VRAM, and the perf/W will be competitive but the perf/$ won't at all. 360mm[super<]2[/super<] for $220 on 14nm would be a bad day for AMD, and they still wouldn't be able to charge much more than they are now.

            If we're still screwing perf/W and factory OCing to the moon, 384-bit is going to be required, which means that we can say more definitively that manufacturing costs are up by 50% (since die and VRAM costs now both rise by that amount). That means AMD would want the full variant of that to be selling for something significantly over $300, meaning it has to compete with a 1070 head-on. Adding +50% BW, +22% ALU, and +0% rasterization power to a 580 isn't going to be enough to keep up with a 1070, and it'll be burning a lot more power than a 1070 at the same time.

            • DoomGuy64
            • 3 years ago

            [quote<]Sounds like someone started the day on the wrong foot.[/quote<]
            Because you didn't read/ignored my post and repeated an argument I already addressed. I think that would rub just about anyone the wrong way, don't you?

            [quote<]plus all the non-GPU stuff.[/quote<]
            So you're just going to ignore that removing the non-GPU stuff would make it smaller? You do see how dumb that argument is, right? The whole rest of your paragraph is invalid, being based on nonsense.

            [quote<] it'll be burning a lot more power[/quote<]
            Which is why I said keep it at stock 480 clocks, and 256 bit. That's enough to beat the 1060 and not intrude in 1070 price territory.

            Hell, even if AMD went the full power hog route, it would still sell because they aren't offering anything better, and AMD has a monopoly on FreeSync. There are people who would buy it. Probably the same people who AMD is catering the 580 for. Either way, you can't sell a product you aren't making, so AMD is only hurting themselves by sitting on tech while the 580 is being the worst rebrand ever. Vega needs to come out sooner, and the 580 is best ignored because it's a power-sucking 480. AMD should have released the Scorpio GPU as the 580 instead. It couldn't have been worse.

            That said, putting Scorpio on the desktop probably won't ever happen because of licensing, integration, and being a different process. Not without major changes. I'm just speaking hypothetically because the 580 was such a worthless rebrand, and should have never happened. Anything else would have been better. Shrinking Hawaii, for instance.

            • synthtel2
            • 3 years ago

            I didn’t say what you think I said (on multiple counts), this’ll go better if you don’t put words in my mouth, and this’ll go better if you don’t try to explain everything I say by thinking I’m an idiot. (For that matter, you sure seem happy to assume unbounded cluelessness from AMD too.) I have carefully considered a great number of scenarios here, I’m just only writing out a subset of them because I considered many of them obvious and I’m not going to waste time writing a wall of text if you’re going to be like this.

            Getting a new die designed and in production is very expensive, even if you already know what you want to put on it. Saying “let’s build a 44 CU 32 ROP 4 geom engine [either bus width] dGPU” isn’t made much cheaper by Scorpio’s existence. I was assuming we were talking about literal Scorpio because the alternative is even more cost ineffective. Now reread my last post with that in mind.

            If they were actually looking to spend the money to get another big Polaris dGPU in production, I really don’t think it would be one with those specs, but that’s irrelevant, because anything released at this point has to either (a) be cheap to start production on, (b) get massive sales in the next few months, or (c) still be a good chip to have around once Vega has hit the shelves. AMD chose (a). They might have the tech for (b), but it’s called Vega. Anything much bigger than P10/20 seems like a bad bet for (c).

            Now, what would be a more interesting question is why they didn’t release a P12 with either 50 or 52 CUs / 384-bit GDDR5 or ~56 CUs / 384-bit GDDR5X in time for the holidays last year. I’m not trying to troll (yet, at least), so I’ll just leave you to chew on that one.

            • DoomGuy64
            • 3 years ago

            [quote<]Now, what would be a more interesting question is why they didn't release a P12 with either 50 or 52 CUs / 384-bit GDDR5 or ~56 CUs / 384-bit GDDR5X in time for the holidays last year. [/quote<]
            Good question. AMD skipped the entire high end for Vega. Maybe it made sense for them, but I don't get it. They used to compete in the upper mid-range with Hawaii, but it seems that class of card has been completely dropped from the lineup.

            AMD did have a short monopoly on the mid-range, but once Nvidia released the 1060 they should have dropped the P12 if they had it. This just makes me think something is wrong, and AMD doesn't have the resources to compete, even when it potentially has the tech. They're playing it extra safe because they can't afford to make something that doesn't hit expectations.

            edit: But if that was the case, why bother with the 580? I doubt it will sell better than the 480, and now they'll have this power-sucking product line cluttering the market. It only makes sense as a limited stop-gap measure because they couldn't afford to bring out a P12. Gives me bad vibes. Vega better deliver.

            • K-L-Waster
            • 3 years ago

            It feels a bit like AMD has limited resources so they are picking their battles — used to be they tried to be everything to everybody, but recently they seem to be concentrating more on the segments that they can make money in.

            The thing about the high end cards is they make great halo products, but they’re expensive to develop and the volumes aren’t huge. If AMD found they were most successful selling mid-range cards, it may have made the most business sense to only release cards for that range.

            That explains the RX400 series at least. The RX500 series is a little tougher if Vega is close. (If…)

          • dodozoid
          • 3 years ago

          Only, Scorpio is not a GPU; it is an SoC.
          It probably has no PCIe interface to connect to the host system, as it IS the host system.
          Plus, the deal between MS and AMD probably wouldn’t really allow that even if it were technically possible and financially meaningful…

      • Demetri
      • 3 years ago

      I don’t think it would be cost effective. Scorpio is an integrated SOC built on a different process (TSMC 16). They’d have to create a new die for a discrete part that may end up getting price squeezed by a cut down Vega chip.

    • TwoEars
    • 3 years ago

    Ouch.

    • blastdoor
    • 3 years ago

    Maybe now is the time to revive the “Video Toaster” brand, but with actual toast!

    • Chrispy_
    • 3 years ago

    So when I have my launch-day 480X running at 1200MHz/0.925V for an estimated sub-100W, AMD’s “improvements over time” look pretty hopeless.

    Admittedly that 480X is the best of several I’ve had access to – so consider that a cherry pick. The other ones were all capable of 1275MHz at 1.0V which is still only 120W.

      • xeridea
      • 3 years ago

      GPU-Z power? That is GPU only, need to consider GDDR5, VRMs, etc. It is amazing how much power you can save undervolting though.

        • Chrispy_
        • 3 years ago

        No, whole board calculated using the basic theory that power consumption is proportional to the square of the voltage multiplied by the clockspeed.

        I assume that a 1266MHz RX480 at the default 1,160 mV uses 160W.

        So, 160*(1200 * 925^2)/(1266 * 1160^2) = 96W, but that’s not quite accurate since the memory didn’t use 1160mV originally – it was 1050mV I think.

        My ballpark calculation is ~100W peak draw.

          • xeridea
          • 3 years ago

          Yes, the theory would put it like that, though the stock 480 throttles a lot, so there are other factors at work obviously. It says right there in GPU-Z, clearly marked, “GPU only power draw”.
          VRMs aren’t 100% efficient, GDDR5 doesn’t run on superconductors, and fans are not perpetual motion machines. The theory is for the GPU only, you can’t ignore other things.

          I am a miner, with 6 470s running 1110 clock, 937mv, about 1000 watts is consumed at the wall. Platinum PSU, so say 920 for computer, Celeron at idle, mobo and SSD maybe 50w. Going by GPU-Z only, they would take 600W, but clearly 850-900 is drawn.

          I can lessen the load to 60W per card in GPU-Z, but consume ~720W at the wall. So 720 * .92 – 50 = 612, but GPU-Z only shows 360.

      • Chrispy_
      • 3 years ago

      I fail at replying to the right comment :\

      • morphine
      • 3 years ago

      This begs the question, why not just use the 1060?

        • Chrispy_
        • 3 years ago

        Because it wouldn’t Freesync or Vulkan properly, and every update the scaling mode would reset.

        Also (not that I use it with an HDTV over HDMI anymore) it would also inconveniently revert to limited-range 16-235 colour every time I changed resolution resulting in the wrong gamma value, poor white luminance, high black luminance and approximately a 70% loss of contrast ratio.

        Thanks, but no. I’ve regretted holding onto my GTX970 because Nvidia’s drivers have also been a recurring catastrophe through most of 2016.

          • Klimax
          • 3 years ago

          Ehm, what do you mean by “not Vulkan properly”? (Not that I care about Vulkan nor DX 12.) Also, which scaling mode gets reset?

          I guess whose drivers are a catastrophe depends on what one uses. (I got hit with a similar series of issues with Intel’s Wi-Fi drivers.)

            • Chrispy_
            • 3 years ago

            Almost all GCN cards see a 30% improvement in Doom (2016) running the Vulkan API, for example.

            Nvidia cards get nowhere near as much benefit (sometimes absolutely none at all) from Vulkan.
            Nvidia 10-series cards see around 10% gain (they have some kind of async compute emulation/acceleration)
            All other Nvidia cards see performance [i<]drops[/i<] or no change.

            The scaling mode that gets reset frequently with Nvidia drivers is the display resolution scaling. You can choose to have your display scale up the given resolution to full screen, or you can have the GPU always output at the display's native resolution and perform the scaling for different resolutions. Usually the GPU does a better job than the display at smoothing the pixel mismatch and aliasing artifacts, but more importantly for an HDTV, they often apply different picture settings or postprocessing depending on the resolution and refresh rate, so you want to keep your HDTV at 1:1 pixel-mapped native resolution at all times and perform scaling on the GPU.

            • Klimax
            • 3 years ago

            Re no significant improvement in Vulkan: Well duh, Nvidia spent an eternity on optimizing drivers. (They started back around the TNT!) That’s why 99% of developers can’t really beat the DX 11/OpenGL path. That’s one of the reasons why Vulkan/DX 12 are a nonsensical thing on PCs.

            Re scaling: OK, no wonder I didn’t know what was getting reset, as I don’t use/change settings there. Though it is odd why part of the settings would get dropped on update and not elsewhere.

            • Chrispy_
            • 3 years ago

            It’s not quite that simple though. AMD/Nvidia are at price parity for DX11/OpenGL performance, near enough.

            So, when your cheap sub-$200 R9 380 is matching a similar-age $350 GTX 970 in Vulkan titles, you have to take notice. It’s not about driver optimisation anymore, it’s about AMD having a hardware advantage that Nvidia doesn’t.

            I really liked my GTX970 despite the 3.5GB memory issue (which is why I still have one) but Nvidia cards really suck at Vulkan and DX12. Anyone who hangs onto their hardware for more than a year or two is going to regret the green team. I’m just lucky I have access to a wide range of cards from both teams.

        • DPete27
        • 3 years ago

        I have an ethical(?) opposition to Nvidia right now because of GSync. The RX480 is/was a better perf/$ value than the GTX1060 with the obvious downside of higher power draw (some of which can be gained back by undervolting at the same clocks).

      • Dysthymia
      • 3 years ago

      I overpaid for my reference 480 but it is factory OC’d to 1328 MHz at 1.15V. The reference cooler really sucks though.

    • Unknown-Error
    • 3 years ago

    Those power numbers? Was that supposed to be a joke?

    Going through the benchmarks I was like “Not bad, AMD,” but the moment the “Power consumption, Noise levels” page loaded I was like HOLY S4!T (and not in a positive way).

      • kn00tcn
      • 3 years ago

      I didn’t know AMD made these non-reference overclocked cards.

        • Kougar
        • 3 years ago

        AMD didn’t make reference cards for the RX570 & RX580 either. 😉

        The power consumption seems to vary considerably between brands and cards, with differences of up to 50W between 580 models in load power consumption.

    • OneShotOneKill
    • 3 years ago

    It smells stale! Here honey try this card for me and tell me if it is still good.

      • OneShotOneKill
      • 3 years ago

      Honey: It smells burnt, I think it is still cooking inside.

    • cybot_x1024
    • 3 years ago

    Those power consumption numbers are very fermi-liar

      • jihadjoe
      • 3 years ago

      LOL and it’s even funnier coz the model numbers match right up. 580 and 570 =)

        • Ninjitsu
        • 3 years ago

        Oh man, AMD’s model numbers are really triggering me at the moment, wish they’d have gone with another series :/

      • Unknown-Error
      • 3 years ago

      That has to be the “Comment of the Month”!

      • Krogoth
      • 3 years ago

      Nah, large Fermi chips were quite a bit worse in their debut, especially the GF100 stuff. It is much closer to the Hawaii stuff.

        • Klimax
        • 3 years ago

        Well, the review is about Polaris, not Vega, thus the comparison is not that far off…

          • UnknownZA
          • 3 years ago

          Also, Fermi was produced at 40 nm, which is power-inefficient compared to the RX580, which uses the 14-nm process that is supposed to be more power-efficient. So these power consumption figures aren’t good at all, in my opinion.

    • ET3D
    • 3 years ago

    I’m waiting for the RX 550 just to feel bad that I bought a low profile 460 and underclocked it. 🙂

      • ImSpartacus
      • 3 years ago

      Don’t feel bad, you’ll get better thermals & power consumption when underclocking a larger chip as compared to a smaller chip at higher clocks such that performances are equal.

    • DPete27
    • 3 years ago

    Can we have a follow up article to clock the RX580 back to the 1266MHz of the RX480 so the process efficiency can be observed? (or vice-versa at 1340MHz since there are factory OC’d RX480s that have those clocks)

    Also, FYI, my RX480 factory OC is 1305MHz and requires exponentially more voltage than the reference 1266MHz (+20-25W for only a 3% clockspeed bump). I’d be curious to see an RX580 running at the reference 1340MHz and what the power draw is there. How much power draw are the board partners adding for a measly 5% performance increase over reference?

      • DPete27
      • 3 years ago

      NVM. Anandtech rose out of the ashes and [url=http://www.anandtech.com/show/11278/amd-radeon-rx-580-rx-570-review/16<]already did the important testing.[/url<] There you can see that a 5% clock bump costs 35W. More importantly, we see that the reference RX580 bumps clocks by 5% from the RX480 for a cost of 20-25W. Given my RX480's 20-25W increase for a 3% clock bump, it looks like a [i<]massive[/i<] 2.5% improvement in process tech. Not earth-shattering, but I'd take it [i<]IF[/i<] RX480/RX580 prices were the same.

      Take the same assumptions with manual undervolting as the RX480 and you could be looking at a stock-clocked RX580 being within 40W or less of a GTX1060 for roughly equivalent performance. When you put it that way, it sounds a heck of a lot more appetizing, doesn't it?

        • ptsant
        • 3 years ago

        A potential 2.5% improvement in process tech may well fall within the range of chip variation or the observation that the RX480 was probably too aggressively volted in most cases. I suspect that even with a 1000mV voltage, 90% of 480s would probably pass QC.

        Anyway, it’s worth undervolting the 480 a lot and I suppose this probably holds for the 580.

          • DPete27
          • 3 years ago

          [deleted] I cant form a sentence.

      • Rza79
      • 3 years ago

      Computerbase did an undervolting experiment with the RX480 here:
      [url<]https://www.computerbase.de/2016-06/radeon-rx-480-test/12/#abschnitt_viel_potenzial_fuer_undervoltage_bei_der_rx_480[/url<]
      Undervolting the RX480 increased performance by 5% thanks to the added headroom while decreasing power usage by 33W. In typical AMD fashion, their CPUs and GPUs are nowhere near their minimum voltages.

    • Voldenuit
    • 3 years ago

    I hope the PCIE slot power draw spec is being observed this time.

    And while I agree that the power consumption figures are disappointing (100W more than a 1060? [i<]Jeebus[/i<]), to say the least, keep in mind that these are aftermarket cards with factory overclocks. Still, AMD's performance-per-watt metrics are not looking good so far.

    Anecdotally, Sapphire has been a bit more conservative than MSI and PowerColor on voltage and power draw; I'd be curious to see if the PowerColor 580 has similar, better or worse power draw under the same conditions.

      • ImSpartacus
      • 3 years ago

      The 580 has an 8-pin by default, so it’s good up to 225W.

      That’s plenty for a “stock” 580.

      Though the further-overclocked 580s have seen some frightening power needs, so we’ll see 8+6 (or 8+8, God help us) more often than not.

      • DancinJack
      • 3 years ago

      It is. PCPer did their usual power consumption testing. The 8-pin is pushing over 150W though, but most power supplies will be fine with that.

    • Voldenuit
    • 3 years ago

    Why is there no entry for Polaris 10 in the comparison table? Even if the hardware specs are identical, official core and memory clocks would give a quick heads-up to what’s new.

    • EzioAs
    • 3 years ago

    I’m surprised by the power consumption given the relatively modest clock bump. Most people buying this probably won’t care but it sure doesn’t look good compared to Nvidia’s offering or previous RX 400s. Then again, most people probably wouldn’t care. I know I don’t if the price/performance ratio is better than the competition.

    Side note, it seems like Nvidia’s caught up pretty well on Hitman. That game used to be the poster child for AMD GPUs.

      • southrncomfortjm
      • 3 years ago

      This is totally anecdotal, but I’m betting most enthusiasts already upgraded in the last year to an RX 4XX or GTX 10XX, leaving a much smaller market for RX 5XX cards. So, while I am disappointed, I can understand AMD not swinging for the fences on this. Hopefully Vega is amazing.

    • cygnus1
    • 3 years ago

    [quote<] the Sapphire RX 580 8GB card we used in our tests made our test system draw 103W more power under load on the way to achieving victory over the EVGA GeForce GTX 1060 6GB [/quote<] I suppose you could underclock the 580 to try and get that extra power/heat under control, but honestly, the 480 looks better to me from a power/heat/performance balance perspective. Or I could buy a smaller PSU and get a 1060....

      • Den2
      • 3 years ago

      Or buy a 1060 and OC it.

        • cygnus1
        • 3 years ago

        Exactly… trade off to get Freesync over G-Sync I guess…

    • AnotherReader
    • 3 years ago

    Thanks for the quick review! IMHO, the [url=http://www.anandtech.com/show/11278/amd-radeon-rx-580-rx-570-review/16<]voltages have been increased[/url<] far beyond Polaris' sweet spot: 1.2V for the PowerColor card and 1.16V for the reference 580. They would have been better off backing off a bit on the voltages and increasing the memory transfer rate to 9 Gbps. However, a new memory state promises power consumption improvements for video playback and multi-monitor users. The multi-monitor improvements aren't working, though.

    • odizzido
    • 3 years ago

    meh. As exciting as a rebrand….

    • mcarson09
    • 3 years ago

    Only three games in the review? This 5XX series feels like a rebrand, but STILL…

      • Rza79
      • 3 years ago

      Not only three games but also three old games.
      I’m looking at the TPU review and the RX580 is winning by a big margin in RE7, Battlefield 1, COD, Deus Ex, Doom, F1 2016, Tomb Raider DX12 & Sniper Elite 4. All games from the past 6 months (more or less, except TR).
      I know power consumption is high, but in these games it's beating the GTX1060 by 20-30%. Nothing to scoff at.

        • EzioAs
        • 3 years ago

        1. Not to disregard the work done at TPU but frame times are more valuable.
        2. The RX 580 review currently available at TPU was done on a highly-clocked version that carries a significant price difference.
        3. Even based on the TPU review, the RX 580 is only “faster” than the GTX 1060 6GB by a minuscule percentage: 3.1% at 1080p, 5.2% at 1440p and 6.4% at 2160p in TPU's overall game bench.
        4. You're certainly cherry-picking your recent game samples, because the GTX 1060 6GB does have higher framerates in Civilization VI, Dishonored 2, Ghost Recon: Wildlands, Styx: Shards of Darkness and Watch Dogs 2, all of which are within 6 months of that review. Not to mention F1 2016 is already about 8 months old, so by your standard it should be taken out.

          • Rza79
          • 3 years ago

          I wasn't specifically making a case for the RX580 being more powerful; my point was that three games is way too narrow a selection to judge a GPU by, especially since none of them is new.
          Though I would like to point out that I didn't cherry-pick the games you mention; I left them out because performance between the GTX1060 and RX580 was within 10%, which I thought was too close to be worth mentioning.

            • EzioAs
            • 3 years ago

            Okay, but it's still not 20-30% in all of those games.

            RX 580 is faster than the GTX 1060 (1080p) by:

            – 15.8% in Battlefield 1
            – 9.6% in Deus Ex: Mankind Divided
            – 16.2% in Doom
            – 8.7% in Sniper Elite 4

            In CoD: IW, Rise of the Tomb Raider (DX12) and RE7, the RX 580 does seem to perform a bit over 20% better at 1080p, though.
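
            (For anyone checking these numbers, the arithmetic is just the fps ratio minus one; a minimal Python sketch, where the fps values are made-up placeholders rather than TPU's actual results:)

            # How an "X% faster" figure is derived from two fps results.
            # The fps values below are hypothetical placeholders, not TPU's numbers.
            def percent_faster(fps_a: float, fps_b: float) -> float:
                """How much faster card A is than card B, in percent."""
                return (fps_a / fps_b - 1.0) * 100.0

            rx580_fps, gtx1060_fps = 60.0, 50.0  # assumed example values
            print(f"RX 580 is {percent_faster(rx580_fps, gtx1060_fps):.1f}% faster")  # -> 20.0% faster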

            • Rza79
            • 3 years ago

            I was looking at 1440p fps. Didn’t even check out 1080p.
            I hadn't actually calculated the exact percentages before. Now I have:

            – 20% in Battlefield 1
            – 16% in Deus Ex: Mankind Divided
            – 17% in Doom
            – 12% in Sniper Elite 4

            SE4 is not as close to 20% as I thought, but I would say 16-17% is close to 20%.
            I don't want to go into a detailed percentage discussion of these games; my primary point is still the issue of only three old games being tested.

            • EzioAs
            • 3 years ago

            Yeah, the difference in these games is quite staggering. One of the surprising ones is RE7; I never thought AMD GPUs had quite such a lead over their counterparts in that game. I imagine we'd be adding Hitman to these discussions too if Nvidia hadn't improved their driver for that game recently.

            On another note, a quick glance at Newegg shows that there are at least (AFAIK) two cards factory-overclocked higher than the Sapphire Nitro+ LE: the XFX GTR-S BE and the Asus ROG Strix. Price-wise, the XFX retails for the same as the Sapphire card while the Asus demands $20 more, which means all three cards carry a $50-70 price premium over MSRP, which in my opinion is too much to be worthwhile for buyers at this price range. Power consumption could be significantly higher in these cards if they bump up the voltage even more (not that it's going to be an issue in any PC with a decent 400W+ PSU).

            Newegg link: [url<]https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=rx+580&N=100007709%20600494828&isNodeId=1[/url<] If I were really looking to buy the RX 580 though, I'd probably find one at AMD's SRP or something close to it.

            • travbrad
            • 3 years ago

            Some of the gap is just from the clocks on the 580 they used. It has 5-8% higher clocks than a stock 580 and costs $290 on Newegg (getting pretty close to 1070 prices). Granted, you can probably overclock the regular 580s to the same clocks, but 1060s are generally even better at overclocking. There's really very little to choose between them performance-wise.

            I wouldn’t find either of them acceptable for 1440p either but I know some people are willing to put up with lower framerates.

    • strangerguy
    • 3 years ago

    Saw the review up, immediately jumped to the power consumption chart and it made me LOL.

    • Anonymous Coward
    • 3 years ago

    Meh, that's a pretty small bump to get promoted by 100 model-number points, and without even a new flagship card to carry the banner.

      • blahsaysblah
      • 3 years ago

      I own a GTX 1060, but I still see this in a positive light. They basically gave you a certified OC, giving you the maximum potential of the hardware, and under warranty to boot. I bet this is what the RX480 was supposed to be, but they were afraid of comparisons to the more efficient Nvidia solution. RX480s seem to be selling well, so they doubled down. Seems like a good deal for consumers who upgrade on a faster cycle.

      • Demetri
      • 3 years ago

      It should have been named RX 485, but marketing.

        • ImSpartacus
        • 3 years ago

        Just be happy the 480 didn’t get a cold rename to 580.

      • DragonDaddyBear
      • 3 years ago

      Agreed. Usually this kind of stuff results in an increase of 90.

    • blahsaysblah
    • 3 years ago

    Mystery solved: now we know what all that extra headroom in the power supply for the MS Scorpio is for…

      • ImSpartacus
      • 3 years ago

      savage af

    • DancinJack
    • 3 years ago

    The power consumption on that 580 is just stupid. No way no how could I recommend that card.

      • derFunkenstein
      • 3 years ago

      Yeah they’ve really gone to plaid with RX 580 power consumption.

        • DancinJack
        • 3 years ago

        Yeah, and unfortunately, it doesn't really look like GloFo has improved their 14nm process much, at least in terms of voltage/clock. AMD's parts, both RX/Vega and Ryzen, could really stand to use less juice, and that would make them waaaay more attractive, IMO.

          • derFunkenstein
          • 3 years ago

          It’s just that AMD is happy to clock them past the “sweet spot” of the voltage/performance curve in order to get more total performance. But then you just hit this wall where it takes a ton more power to get anything more. That’s basically where they are with Polaris and Ryzen both right now.

      • DPete27
      • 3 years ago

      I'm not disagreeing with you that the RX580 power consumption is insane. But I knocked >30W off my RX480 @1305MHz by undervolting. That process is very easy to do in AMD WattMan, which comes with the AMD Crimson driver software.

      AMD really should be looking at their power delivery algorithms more closely. For how much they touted their AVFS when Polaris launched, they’re really leaving a lot on the table and it makes their cards look bad.

        • DancinJack
        • 3 years ago

        Yeah, but honestly, how many people are going to undervolt? The point is that you shouldn't have to, really. Not that I'm saying it's not a good idea.

        Agreed, they need to spend some more R&D on the process/power stuff next time 'round.

          • ImSpartacus
          • 3 years ago

          The problem is that they simply aren’t getting the necessary performance.

          AMD is constantly having to push its products much harder than Nvidia (or even Intel on the CPU side).

          Remember AMD didn’t exactly want the 480 to be exceeding 150W, but they had to basically overclock it to get even close to a competitive performance level.

          If you underclock AMD’s products, they suddenly become quite efficient. It’s just they don’t get the necessary performance/transistor to be competitive.

            • DancinJack
            • 3 years ago

            For sure. You know what's really amazing? The RX 580 and GTX 1080 have almost exactly the same TDP. That's got to be embarrassing for AMD.

            • DPete27
            • 3 years ago

            It will be interesting to see how Vega fares with tile-based rasterization, which is what Maxwell and Pascal use. Apparently that's good for a significant bump in power efficiency.

            • ImSpartacus
            • 3 years ago

            Oh yeah, it’s humiliating.

            GP104 is taking no prisoners.

            Meanwhile, I have the sneaking suspicion that the best case for AMD is a 225W Vega 10 Pro that competes with the 1080. That's the best case.

            Wccftech just dropped a report that the top-tier Vega 10 XT will have a CLC, so AMD will probably use that to eke out a bit more efficiency, just like they did on the Fury X.

          • DPete27
          • 3 years ago

          That's exactly my point. AMD's “great” AVFS is still about 30W too conservative on average at a given frequency. If you're gloating about how smart your power delivery is, some Joe Schmoe off the street shouldn't be able to manually squeeze your cards down that much so easily.

          And to your first comment… I couldn't tell you. Certainly here at TR, it's well known that Hawaii and Polaris can be undervolted with significant power-saving benefits. WattMan being automatically included in the driver software does lend a higher chance of people using it than Nvidia's equivalent, which requires downloading a third-party app separately.

            • DoomGuy64
            • 3 years ago

            AMD would have been better off shrinking Hawaii. Ridiculous power inefficiency for a minor clock boost.

            Considering how well these cards undervolt, I don’t know if I would even blame the hardware over management. Somebody up the chain of command is responsible for this fiasco, and it keeps happening. They need to be fired, because the hardware doesn’t need to be sucking this much juice.

            If management wants to sell high-power chips, then design a proper high-power chip. Don't overclock a low-power chip and market it as high performance. That doesn't work. Overclocking is a bonus for enthusiasts, not an out-of-the-box standard.

            The only silver lining is that these cards should be more efficient than the 480s when undervolted/underclocked. But who's going to bother? Not me. The 580 is ultimately going to sell solely to gamers who don't care about power or who like to undervolt. Everyone else will avoid it.

            • DPete27
            • 3 years ago

            I said it elsewhere in this comments section, but the factory-OC'd cards tested in this article sacrifice 35W for a 5% clockspeed bump over the reference RX580. Stack that on top of a presumably 25-30W-too-conservative AVFS and suddenly the RX580 is within 40W of the GTX1060 (rough math sketched below). Not bad, but like you said, someone or multiple someones in the chain decided they needed to push these things to the ragged edge for those last 3fps.

            FYI, the RX480's reference 1266MHz clock was actually pretty well placed, right at the break point of exponential power usage in my undervolting experiments.
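
            (Spelling out that estimate: a minimal sketch in Python, where the 103W figure is the system-level gap from this review and the 35W and 25-30W figures are the rough estimates above, not measurements:)

            # Back-of-the-envelope version of the "within 40W" estimate above.
            # 103W is the whole-system load-power gap TR measured with the factory-OC'd Sapphire card;
            # the other figures are the rough estimates quoted in this comment chain.
            measured_gap_w = 103.0            # Sapphire RX 580 vs GTX 1060, system power under load
            factory_oc_penalty_w = 35.0       # cost of the ~5% factory clock bump over a reference RX 580
            undervolt_headroom_w = (25.0 + 30.0) / 2  # assumed over-conservative AVFS margin

            estimated_gap_w = measured_gap_w - factory_oc_penalty_w - undervolt_headroom_w
            print(f"Stock-clocked, undervolted RX 580 vs GTX 1060: ~{estimated_gap_w:.0f} W gap")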

            • Voldenuit
            • 3 years ago

            [quote<]Considering how well these cards undervolt, I don't know if I would even blame the hardware over management. Somebody up the chain of command is responsible for this fiasco, and it keeps happening. They need to be fired, because the hardware doesn't need to be sucking this much juice.[/quote<] Individual cards may undervolt well, but on a population level, it very well may be that AMD needs to apply liberally higher voltages to keep their viable yields up. It's hard to say when all we have are anecdotal reports. We might be looking at 90%, 60%, or 30% of cards that undervolt well, but without a sample size in the thousands and good controls on the population being studied, we just don't know.

      • blahsaysblah
      • 3 years ago

      I don't think the intended audience will care, though. Just more value for the $$$. No different than increasing the voltage on an i5-K, except that the power usage increase there is hidden from that same user.

        • EndlessWaves
        • 3 years ago

        Yeah, a desktop is the wrong choice to begin with if you want to reduce your power consumption while gaming.

        With the average gaming phone/tablet using about 5W, comparing a GTX 1060 and RX 580 on power consumption is like comparing a Lamborghini and Veyron on fuel consumption. Yes, one is almost twice as good as the other, but neither should even be considered if it’s a high priority for you.

          • Voldenuit
          • 3 years ago

          I care about power consumption because I care about noise.

          My GTX 1070 draws so little power under load that my PSU fan stays at 0 rpm during gaming.

            • MOSFET
            • 3 years ago

            If you can hear your PSU fan, your games are definitely not loud enough!

          • terranup16
          • 3 years ago

          Power matters more to me because of the heat it entails (and in turn how hard I need to run my AC to keep from sweating when playing games for a while; that cost is going to eclipse the power cost of any card I'm comparing here). If AMD's RX580 isn't going to compete with Nvidia's GTX 1070 and 1080 on performance but is going to suck down power closer to theirs than to a GTX 1060's, then I'm going to spend more on AC or sweat more either way, so I may as well take the better performance, because it's probably straight-up worth it at that point.

          To further clarify my situation, though: I have two gaming desktops in a 12'x14' room, and at present I'm working from home on my gaming desktop most of the day, so the idle and full-bore exhaust coming from the computers has a big impact on the room even with an 18" stand fan in it.

      • Mikael33
      • 3 years ago

      Potential marketing slogan: “AMD: Overclocking your graphics card so you don't have to!”

      • Krogoth
      • 3 years ago

      That's always been the price of trying to bump up clockspeeds. Nvidia GPUs are just about as power-hungry when you attempt to overclock the crap out of them. The factory-overclocked 6GiB 1060s aren't exactly easy on power either.

      Performance GPUs and power efficiency rarely go together. You usually have to pick one or the other, unless you are willing to underclock the chip to a more reasonable point on its power curve (the “silent/quiet” PC crowd does this all the time).

        • Mikael33
        • 3 years ago

        Duh, AMD for the past few generations has had to push its cards beyond the voltage/clock sweet spot to get performance comparable to Nvidia's; Nvidia hasn't had to push their cards nearly as hard to get good performance. The 1080 Ti is worlds more efficient than anything AMD has, and it's obviously the fastest card you can buy, btw 🙂

      • Concupiscence
      • 3 years ago

      You'd honestly be about as well off with a Radeon R9 390 at this point.

    • drfish
    • 3 years ago

    Thanks for tossing the 1050 Ti into the results!

      • Anonymous Coward
      • 3 years ago

      I was happy to hear the RX550 news, I’m under the impression that the low end has not been seeing the same rate of performance scaling as the high end.

        • blahsaysblah
        • 3 years ago

        Yeah, it used to be that to get multi-monitor support, you spent about the same and got next to nothing in return. Hopefully there is no RX540, and all the office workstations sporting Ryzen 7 CPUs get the RX550.

          • EndlessWaves
          • 3 years ago

          Hopefully the RX 550 will ship with some modern video outputs instead of assorted sockets from the clearance section. Three DisplayPort 1.4s and an HDMI 2.0 output would be nice.

            • derFunkenstein
            • 3 years ago

            You don’t want 2xDVI, 1xVGA, and mini HDMI?

            • MOSFET
            • 3 years ago

            I always want two DVI, but hey personal preferences…

            • derFunkenstein
            • 3 years ago

            I like seeing VGA ports on notebooks because a couple times a year I know I’m going to need to connect to a projector that has only that connector.

            • Anonymous Coward
            • 3 years ago

            I have the idea that buyers of lower-end cards are more likely to appreciate some legacy ports, because it's likely not only the graphics card where they are keeping costs low.

            Go RX550!

      • stefem
      • 3 years ago

      Yeah, but a GTX 1070 would have been more interesting, even just as a reference point to put the power consumption measurements in perspective.

    • DPete27
    • 3 years ago

    Jeff,
    What voltages did the AMD 5xx-series cards need to hit their respective clocks?
    Did you try manual undervolting at stock clocks? If so, how low could you go, and how did that affect power consumption?

    • DragonDaddyBear
    • 3 years ago

    It wasn’t reviewed but I think the most exciting card might be the 550. It may turn out to be a good HTPC card on the cheap.

    • 1sh
    • 3 years ago

    Craptastic.
    An overclocked RX480 that consumes a LOT more power than the competition in exchange for a few more frames.

    • christos_thski
    • 3 years ago

    So it seems like Fury/Fury-X remains the top shelf Radeon gpu until Vega.

      • derFunkenstein
      • 3 years ago

      Yeah, these cards were never in any danger of taking over the high end, anyway.

      • Kretschmer
      • 3 years ago

      AMD competes on power consumption, and they gave it their college try!

    • southrncomfortjm
    • 3 years ago

    Now is when I get worried that AMD will fall back into its old habit of rebranding very similar cards year after year. Seems to me that Nvidia should have an easy time blowing these cards out of the water with their 11XX cards. Here's hoping for continued strong competition from AMD against everyone else.

      • flip-mode
      • 3 years ago

      Fall back? I don’t really think either AMD or Nvidia ever abandoned that approach. They both use it whenever they feel the need to. It’s just how it is.

        • southrncomfortjm
        • 3 years ago

        Nvidia seems to be a bit better at not just rebranding with higher clocks. They normally introduce at least a couple new GPUs each year.

          • Den2
          • 3 years ago

          Normally AMD releases at least a new flagship too, though. It's just that they seem to like making everything but the flagship a rebrand unless there's a node change. With fewer and fewer node changes, that strategy means releasing what was a high-end card two generations ago as a lower-end model now…

            • ImSpartacus
            • 3 years ago

            AMD only has enough dosh to put out one new chip in most years (or two small-ish chips in a die shrink year like 2016).

            Vega 10 is all you’ll get and you’ll like it! Now finish your dinner and go brush your teeth for bed.

      • I.S.T.
      • 3 years ago

      To be fair, AMD’s never gotten as bad as the four or so, possibly more, rebrands of the full G92… Like, I had one of those. It was a damn fine card. Doesn’t mean what Nvidia pulled was okay. It was bull****.

      Honestly, the only reasons I stick with Nvidia at this point are that I have some PhysX titles I'd like to try PhysX on and that, at least right now, they generally offer a better performance-per-watt (and heat) ratio. I live in Texas. Heat is a super-high concern for me. The RX 580 has decent performance, but the extra 30 watts is like O_O to me. Nvidia also generally has better OpenGL support, and that is often used in emulation of 3D consoles.

        • f0d
        • 3 years ago

        AMD had 3 rebrands with the 7790 (R7 260, R7 360) and the 7870 (R9 270, R9 370).
        G92, as far as I know, was 3 also: 8800 GTS (9800 GTX, GTS 250).

          • EzioAs
          • 3 years ago

          Just for the sake of accuracy, the HD 7790 was rebranded as the R7 260X.

          The R7 260 was introduced a bit later with fewer cores & TMUs and the R7 360 is its rebrand with almost the exact same spec except for the 50 MHz core clock bump.

          The HD 7870, on the other hand, was rebranded as the HD 7870 GHz Edition, then the R9 270/270X, and the R9 370X.

          The HD 7850 was rebranded as the R9 270 1024SP and later as the R7 370.

          Source: [url<]https://www.techpowerup.com/gpudb/?mfgr%5B%5D=amd&mobile=0&released%5B%5D=y14_c&released%5B%5D=y11_14&generation=&chipname=&interface=&ushaders=&tmus=&rops=&memsize=&memtype=&buswidth=&slots=&powerplugs=&sort=released&q=[/url<] Side note: I've only been a PC enthusiast since 2010, but GCN had the most confusing lineup after AMD forwent the Radeon HD branding.

            • f0d
            • 3 years ago

            I didn't really care about cut-down cards when listing them, and I guess I should have put an X on the end:

            “AMD had 3 rebrands with the 7790 (R7 260X, R7 360X) and the 7870 (R9 270X, R9 370X).
            G92, as far as I know, was 3 also: 8800 GTS (9800 GTX, GTS 250)”

            Either way, the same chips were in the non-X versions anyway…

            But yeah, their lineup throughout the R 2XX and 3XX series was just a massive clusterflux.

          • Demetri
          • 3 years ago

          If you count OEM-only cards, it’s worse. Cape Verde (7750/70), Bonaire (7790), and Cedar (5450) all lasted for 5 respins.

    • Kretschmer
    • 3 years ago

    Why spend R&D when you can pump the voltage to “stupid?”

    • chuckula
    • 3 years ago

    Nice try, AMD.
    I’m still waiting for RyZen.

    • derFunkenstein
    • 3 years ago

    [quote<] Hot Topic-approved iconography[/quote<] This perfectly encapsulates what I couldn't quite put into words when the PowerColor card was pictured on TR last week. Great job.
