AMD’s Radeon RX 480 graphics card reviewed

We’ve been hearing about AMD’s next-generation Polaris GPUs for a little over six months now. We knew those chips would have enhanced display output capabilities, and we also knew they would be produced using 14-nm FinFET process technology. Now, the first product to use this new architecture is here. The Polaris 10 GPU on AMD’s Radeon RX 480 graphics card isn’t a high-end monster like the GP104 chip powering Nvidia’s first 16-nm FinFET cards, though. Instead, the RX 480 puts a $200-and-up price tag on VR-ready performance.

AMD sees a great deal of opportunity to regain market share in the $100-$300 graphics card price range, and the RX 480 is the company’s first shot at the middle of the enthusiast bell curve. Let’s see how it intends to join this battle.

Baby, you’re a star

Polaris is actually a pair of chips—Polaris 10 and Polaris 11—that are two takes on the same underlying architecture. Both Polaris 10 and Polaris 11 will be making their way into desktops, but Polaris 11 is a smaller chip that will probably find a home in many more gaming laptops than it does desktop PCs.

|  | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fab process |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Polaris 11 | 16 | 64/32 | 1024 | 2 | 128 | ??? | ??? | 14 nm |
| Tonga | 32 | 128/64 | 2048 | 4 | 256 | 5000 | 355 | 28 nm |
| Polaris 10 | 32 | 144/72 | 2304 | 4 | 256 | 5600 | 232 | 14 nm |
| Hawaii | 64 | 176/88 | 2816 | 4 | 512 | 6200 | 438 | 28 nm |
| GM206 | 32 | 64/64 | 1024 | 2 | 128 | 2940 | 227 | 28 nm |
| GM204 | 64 | 128/128 | 2048 | 4 | 256 | 5200 | 398 | 28 nm |
| GP104 | 64 | 160/160 | 2560 | 4 | 256 | 7200 | 314 | 16 nm |

As you can see from the table above, Polaris 10 occupies a much smaller footprint than Tonga before it. Despite its smaller area, Polaris 10 has a bit more of almost everything on board: more stream processors, more texturing units, twice the L2 cache of older chips at 2MB, and a variety of neat tricks that we’ll discuss momentarily.

Source: AMD

The block diagram above should be largely familiar to anybody who’s laid eyes on AMD’s past GCN chips. Polaris 10 has 36 GCN compute units for a total of 2304 stream processors. It has 144 texturing units, four geometry engines, 32 ROPs, and a 256-bit path to GDDR5 memory. From a pure resource standpoint, this part falls somewhere between Tonga and Hawaii in the AMD pantheon.

Source: AMD

Polaris 11 is, at a glance, a little less than half of Polaris 10. It has half the ROPs, less than half of the shaders, half the triangles per clock, and a memory bus that’s half as wide. It also has 1MB of L2 cache. It’ll almost certainly be smaller than Polaris 10’s 232-mm² die, and AMD further notes that this chip is its thinnest GPU ever. That may sound like a weird claim to make, but it’s important for achieving the kind of console-class gaming experiences in thin-and-light notebooks that AMD wants to deliver with this architecture. Aside from one desktop card, we don’t really know where Polaris 11 will land or what names it’ll carry yet. The spotlight is on the Radeon RX 480 for today.

Now that we’ve seen the two basic Polaris parts, let’s examine the common improvements that AMD baked into the underlying architecture.

 

Radeon, refined

Under the hood, Polaris is a more efficient version of the GCN architecture we’ve known since its inception on the Radeon HD 7970. This fourth generation of GCN incorporates a number of small improvements.

Source: AMD

The geometry engines in Polaris now feature a stage called the Primitive Discard Accelerator that can remove zero-area (or “degenerate”) triangles, as well as polys that aren’t being sampled, early in the graphics pipeline. AMD says this feature improves performance in workloads that combine lots of triangles (like highly tessellated scenes) and multi-sampled anti-aliasing. Heavy tessellation has traditionally been a weakness for GCN cards, so it’ll be interesting to see what effect this feature has on Polaris’ performance. Each Polaris geometry engine also gets an index cache for storing what AMD describes as “small instanced geometry.” This cache reduces the need to move data around the chip, reducing internal bandwidth requirements and improving throughput.
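
To make the idea concrete, here’s a toy version of the zero-area test such a discard stage performs, written in Python. This is an illustration of the concept only, not AMD’s hardware logic:

```python
def is_degenerate(v0, v1, v2, eps=1e-9):
    """Return True for a screen-space triangle with (near-)zero area.

    Each vertex is an (x, y) tuple. The 2D cross product of two edges
    gives twice the signed area; if it's ~0, the triangle can never
    cover a pixel sample and can be thrown away before rasterization.
    """
    area2 = (v1[0] - v0[0]) * (v2[1] - v0[1]) \
          - (v2[0] - v0[0]) * (v1[1] - v0[1])
    return abs(area2) < eps

# Collinear vertices make a zero-area, discardable triangle:
print(is_degenerate((0.0, 0.0), (1.0, 1.0), (2.0, 2.0)))  # True
```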

Polaris 10’s front end gets a new pair of programmable units that AMD calls “Hardware Schedulers,” or HWSes, alongside its four asynchronous compute engines. These blocks perform a variety of scheduling tasks for asynchronous compute workloads. They can set up real-time and prioritized task queues for audio and VR processing, manage concurrent tasks and process scheduling, and perform load-balancing between compute and graphics workloads. Since the HWSes can perform this work on the chip, they reduce CPU driver overhead. Because they’re programmable, AMD says it can update the capabilities of each HWS with new microcode, too.

Compute Unit Reservation in action. Source: AMD

One example use of the HWS duo involves audio processing for VR. In order to be sure that a given audio task will complete within a certain time frame, a developer can use a new feature called Compute Unit Reservation to request a specific number of on-chip resources that will be dedicated to a specific task queue. The HWSes ensure that the proper resources are allocated for the job—”spatial management,” in AMD parlance. These blocks can also perform what AMD calls “temporal management.” An example of such a task is managing the Quick Response Queue that the company specifically made for handling VR-related workloads like asynchronous time warp for the Oculus Rift.

Source: AMD

The stream processors in each GCN compute unit are getting some new tricks in Polaris, too. If many wavefronts (AMD’s name for groups of threads) of the same workload are set to be processed, a new feature called instruction prefetch lets executing wavefronts fetch instructions for subsequent ones. The company says this approach makes its instruction caching more efficient. Polaris CUs also get a larger per-wave instruction buffer, a feature that’s claimed to increase single-threaded performance within the CU. Polaris can group client L2 cache requests, too, so it can fetch data from that cache more efficiently. 

In addition to a larger L2 cache that allows more data to remain on the chip, Polaris has improved delta color compression (or DCC) capabilities that allow it to compress color data at 2:1, 4:1, or 8:1 ratios. These compression methods should allow the chip to enjoy greater effective memory bandwidth and higher efficiency. Polaris’ memory controller supports 8 GT/s GDDR5 DRAMs for up to 256 GB/s of memory bandwidth. Instead of moving to a new memory technology, AMD says it’s getting more life out of GDDR5 mostly thanks to Polaris’ improved DCC capabilities. Between DCC, the expanded L2 cache, and improved cache access methods, AMD claims it can reduce the power required by Polaris’ memory interface by up to 58%, too.
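
The raw-bandwidth figure is easy to verify with a little arithmetic, and a toy model shows why compression effectively stretches it. The 50/50 traffic mix below is our own illustrative assumption, not an AMD number:

```python
# Peak bandwidth: per-pin data rate (GT/s) x bus width (bits) / 8 bits per byte.
data_rate_gts = 8          # 8 GT/s GDDR5, the speed on our review card
bus_width_bits = 256       # Polaris 10's memory interface
peak_gb_s = data_rate_gts * bus_width_bits / 8
print(peak_gb_s)           # 256.0 GB/s, matching the quoted figure

# Toy model of effective bandwidth: if half of all traffic were color data
# compressing at 2:1, the same bus would move more useful bytes per second.
compressible, ratio = 0.5, 2.0
effective = peak_gb_s / ((1 - compressible) + compressible / ratio)
print(round(effective, 1))  # 341.3 GB/s effective under that mix
```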

Getting smart about power usage

Along with the move to the 14-nm FinFET process itself, AMD is deploying several new monitoring technologies on Polaris that are meant to help each chip perform at its best. The company is making this move in response to a basic problem of power delivery to the chip itself: variations in input voltage as large as 10% to 15% require an increase in the average voltage sent to the chip to compensate. AMD says this safety margin wastes a lot of power, so it’s responding with a new technology called adaptive voltage and frequency scaling, or AVFS.

Source: AMD

When the company designed Polaris, it borrowed a few pages from its CPU design team. Each Polaris GPU now has embedded frequency sensors on its die that work in concert with its temperature and power sensors. If a chip can run at lower voltages to achieve a given frequency on the DVFS curve, for example, this tech will let it do so and allow it to save power at the same time. The chip can also quickly adjust its frequency in response to voltage droops instead of running within a safety margin at all times, extracting 5%-10% more performance on average.

Source: AMD

That on-chip monitoring technology also allows the GPU to analyze the input voltage it’s receiving from its host system at boot time and compensate for any differences between that input and the power characteristics of the test equipment on which the chip was initially binned. The chip can then use this information to adjust its voltage regulators to deliver the same operating environment it saw on the test bench, improving efficiency.

Finally, AVFS lets Polaris chips compensate for the effects of transistor aging and the aging of other components in the system. Polaris’ AVFS modules have aging-sensitive circuitry inside that let the chip compensate for any degradation in performance as it gets older. That same boot-time monitoring technology can determine when other components in a system (presumably, the power supply) are no longer as young and spry as they once were, too. By self-calibrating and adapting to this aging, AMD says the chip will offer “more robust operation” over time while delivering better performance out of the box.

Source: AMD

Getting down to the real nitty-gritty, AMD is improving the design of the multi-bit flip-flop circuits it uses in Polaris. The company says there are about 21 million of these circuits on Polaris 10, and they account for 15% of the chip’s TDP. By moving to a quad multi-bit flip-flop, AMD says it reduced Polaris’ TDP by 4%-5%.
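
For a sense of scale, here’s the implied arithmetic. Treating the claimed TDP percentages as fractions of the RX 480’s 150W board power is our simplification:

```python
board_power_w = 150.0    # the RX 480's rated board power, used as a TDP proxy
flipflop_share = 0.15    # AMD: flip-flops account for ~15% of TDP
tdp_saving = 0.045       # midpoint of the claimed 4-5% TDP reduction

flipflop_w = board_power_w * flipflop_share   # ~22.5 W spent in flip-flops
saved_w = board_power_w * tdp_saving          # ~6.8 W saved by the quad MBFF
print(round(saved_w / flipflop_w, 2))         # 0.3: roughly a 30% cut in flip-flop power
```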

All that’s well and good, but we know you really want to see the RX 480 card itself. We won’t make you wait any longer.

 

The Radeon RX 480 and friends

If you’ve watched TR over the past few days, you’ve already seen the AMD reference design for the RX 480. Today, however, we can spill all the beans about clock speeds and field-strip the card down to its PCB. Let’s get to it.

|  | GPU base clock | GPU boost clock | Shader processors | Memory config | PCIe aux power | Peak power draw | Suggested price |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Radeon RX 480 | 1120 MHz | 1266 MHz | 2304 | 4GB or 8GB GDDR5 | 1x 6-pin | 150W | $199.99 (4GB), $239.99 (8GB) |

The Polaris 10 GPU on the RX 480 will run at 1120MHz base and 1266MHz boost speeds. We expect AMD’s board partners will push those reference numbers up a bit as part of their usual process of tweaking and tuning. At those stock clocks, this card has a rated board power of 150W.

The RX 480 will ship in two versions: one with 4GB of GDDR5 RAM, the other with 8GB. The 4GB card will have a $200 suggested price, while doubling the RAM will set buyers back an extra $40. AMD is giving its board partners freedom to adjust GDDR5 speeds, but it’s set a 7 GT/s floor on the speeds those vendors can use. The reference card we’re testing is an 8GB version, and it has 8 GT/s GDDR5 on board.

Around back, the RX 480 has three DisplayPort 1.3 ports and an HDMI 2.0b port with HDCP 2.2 support. Those DisplayPorts are DP 1.4-HDR-ready, too. Folks who want to plug a DVI monitor into their RX 480 without an adapter are out of luck.

Flipping the card over reveals a stubby PCB with an extra vent above the blower fan. This vent hole is a nice touch for folks planning to install multiple RX 480s in their systems in CrossFire.

Stripping the cooling shroud from the card reveals a small aluminum heatsink for the GPU itself, plus another heatsink for the card’s power-delivery circuitry. You can also see the single six-pin PCI Express power connector the RX 480 needs to do its thing.

Unscrewing the card’s entirely standard fastener complement lets us strip the RX 480 right down to its bones. As you can see from this view, the aluminum heatsink uses a copper disc in its base, presumably to improve heat transfer between the GPU and the rest of the heatsink.

Moving closer to the PCB gives us a better look at the Polaris 10 chip itself, as well as its eight GDDR5 memory chips. With that, you’ve basically seen all there is to see of the RX 480.

Source: AMD

While we’re only reviewing the RX 480 today, AMD should be releasing two other Polaris cards in the near future. The RX 470 will use the same Polaris 10 GPU as the RX 480, but it’ll lose four compute units, bringing the shader count down to 2048. It’ll also be available with 4GB of GDDR5 RAM only. AMD isn’t releasing full details of this card today. From the outside, this card looks exactly the same as the RX 480.

Source: AMD

The RX 460 is a tiny card built around the Polaris 11 GPU. It has 14 of that chip’s 16 GCN compute units enabled, and it won’t need an external power connector. We also don’t have full details of that card yet. It is a fair bet that both of these cards should slot in under the $199 price point established by the 4GB RX 480, though. We expect to learn more about these products over the course of the summer.

Now that we’ve fixed a course on Polaris, let’s see how the RX 480 performs.

 

Our testing methods

As always, we did our best to deliver clean benchmarking results. Our test system was configured as follows:

| Component | Details |
| --- | --- |
| Processor | Core i7-5960X |
| Motherboard | Asus X99 Deluxe |
| Chipset | Intel X99 |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance LPX DDR4 SDRAM at 3200 MT/s |
| Memory timings | 16-18-18-36 |
| Chipset drivers | Intel Management Engine 11.0.0.1155, Intel Rapid Storage Technology V 14.5.0.1081 |
| Audio | Integrated X99/Realtek ALC1150 with Realtek 6.0.1.7525 drivers |
| Hard drive | Kingston HyperX 480GB SATA 6Gbps |
| Power supply | Fractal Design Integra 750W |
| OS | Windows 10 Pro |

 

|  | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| --- | --- | --- | --- | --- | --- |
| AMD Radeon RX 480 | Radeon Software 16.6.2 beta | 1120 | 1266 | 2000 | 8192 |
| Sapphire Radeon R9 380X | Radeon Software 16.6.2 beta | — | 1050 | 1375 | 4096 |
| Gigabyte GeForce GTX 960 4GB | GeForce 368.39 | 1228 | 1329 | 1753 | 4096 |
| Gigabyte GeForce GTX 970 | GeForce 368.39 | 1076 | 1216 | 1750 | 4096 |
| Gigabyte Windforce GTX 980 | GeForce 368.39 | 1228 | 1329 | 1753 | 4096 |

Our thanks to Intel, Corsair, Asus, Kingston, and Fractal Design for helping us to outfit our test rigs, and also to AMD and the Nvidia board partners who sent us graphics cards for testing.

For our “Inside the Second” benchmarking techniques, we use the Fraps software utility to collect frame-time information for each frame rendered during our benchmark runs. We sometimes use a more advanced tool called FCAT to capture exactly when frames arrive at the display, but our testing has shown that it’s not usually necessary to use this tool in order to generate good results for single-GPU setups. We filter our Fraps data using a three-frame moving average to account for the three-frame submission queue in Direct3D. If you see a frame-time spike in our results, it’s likely a delay that would affect when a frame reaches the display.
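
Here’s a minimal sketch of that three-frame filter, assuming a simple trailing window. This is our illustration of the approach described above, not TR’s actual tooling:

```python
def smooth_frame_times(frame_times_ms, window=3):
    """Filter raw Fraps frame times with a trailing moving average.

    Direct3D lets the CPU queue up to three frames ahead of the GPU, so
    raw per-frame timestamps can jitter even when delivery to the display
    is smooth; averaging over the queue depth suppresses that noise while
    leaving genuine multi-frame spikes visible.
    """
    smoothed = []
    for i in range(len(frame_times_ms)):
        start = max(0, i - window + 1)       # shorter window for the first frames
        chunk = frame_times_ms[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```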

Aside from the Radeon RX 480, our test card stable is made up of non-reference designs with boosted clock speeds and beefy coolers. Many readers have called us out on this practice in the past, so we want to be upfront about it here. We bench non-reference cards because we feel they provide the best real-world representation of performance for the graphics card in question. They’re the type of cards we recommend in our System Guides, so we think they provide the most relatable performance numbers for our reader base. When you see “GTX 960” or “GTX 980” in our results, for example, be sure to remember that we’re talking about custom cards, not reference designs.

Each title we benched was run using DirectX 11. We understand that DirectX 12 performance is a major point of interest for many gamers right now, but the number of titles out there with stable DirectX 12 implementations is quite small. We’ve had trouble getting Rise of the Tomb Raider to even launch in its DX12 mode, and other titles like Gears of War: Ultimate Edition still seem to suffer from audio and engine timing issues on the PC. DX12 also poses challenges for data collection that we’re still working on. For a good gaming experience today, our money is still on DX11. That said, we probably need to revisit DX12 performance in the near future and see how the tables turn.

 

Sizing ’em up

Do a bit of quick math, and you end up with the theoretical peak performance numbers for the following graphics cards:

|  | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak rasterization rate (Gtris/s) | Peak shader arithmetic rate (tflops) | Memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- |
| Radeon RX 480 | 41 | 182/91 | 5.1 | 5.8 | 256 |
| Sapphire Radeon R9 380X | 33 | 133/67 | 4.2 | 4.3 | 176 |
| Radeon R9 290 | 61 | 160/80 | 3.8 | 4.8 | 320 |
| Radeon R9 Fury X | 67 | 269/134 | 4.2 | 8.6 | 512 |
| Gigabyte GeForce GTX 960 | 32 | 82/82 | 2.6 | 2.6 | 112 |
| Gigabyte GeForce GTX 970 | 63 | 126/126 | 4.9 | 4.0 | 224 |
| Gigabyte GTX 980 | 85 | 170/170 | 5.3 | 5.4 | 224 |
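
The “quick math” behind the table is just unit counts multiplied by clock speed. Here’s the RX 480’s row worked out in Python, using its 1266MHz boost clock:

```python
boost_ghz = 1.266         # RX 480 boost clock
rops = 32                 # pixels per clock
texels_per_clock = 144    # int8 texels filtered per clock
tris_per_clock = 4        # rasterized triangles per clock
shaders = 2304            # stream processors, each doing one FMA (2 flops) per clock

print(rops * boost_ghz)               # ~40.5 Gpixels/s, the table's "41"
print(texels_per_clock * boost_ghz)   # ~182.3 Gtexels/s int8 (fp16 runs at half rate)
print(tris_per_clock * boost_ghz)     # ~5.1 Gtris/s
print(shaders * 2 * boost_ghz / 1e3)  # ~5.8 tflops
```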

We won’t be testing every card in the table above, but our theoretical numbers offer some interesting insight about how the RX 480 stacks up against its AMD stablemates and the Nvidia competition. Thanks in part to its ROP count, the RX 480 handily outpaces the Tonga-powered R9 380X in raw fill rate, but it falls a bit short of the enormous shader arrays on the R9 290 and the Fury X. The RX 480 also achieves higher theoretical int8 texturing rates than everything in the table save for the R9 Fury X, even if its performance on more complex textures is still limited by GCN’s half-rate throughput with fp16 data types.

Now that we’ve taken stock of the RX 480’s theoretical performance, let’s take a look at some actual numbers generated by the Beyond3D test suite to see how these cards behave in practice.

The RX 480 is clocked higher than the Tonga-powered R9 380X, and its larger shader array gives it slightly more pixel-pushing power than that card. Nvidia’s cards maintain their long-running advantage here.

Hand the RX 480 an incompressible texture, and its memory bandwidth numbers unsurprisingly outpace everything else on the board. When the GeForces can employ their delta color compression, however, the competition heats up. Still, whatever new DCC mojo AMD has added appears to be one of several factors contributing to the RX 480’s very solid performance increase over the R9 380X.

Hm. Despite having more texturing units onboard than Tonga does, the RX 480 seems to run into a wall at about the same peak rate as its predecessor. Perhaps there’s another bottleneck at work somewhere for this test.

Now here’s something interesting. In our past graphics card reviews, AMD’s cards have fallen behind in this polygon throughput test when we’ve presented them with work in a strip format. Here, the RX 480 sets itself apart by delivering similar rates for both formats, and at faster rates than even the R9 Fury X before it. Perhaps we’re seeing that fancy new Primitive Discard Accelerator at work.

Polaris 10’s theoretical peak shader performance is pretty potent, and our ALU tests confirm it. The RX 480 edges past even the GTX 980 in these tests.

Now that we’ve examined these cards’ theoretical performance, let’s put them to the test in some real-world gaming scenarios.

 

Grand Theft Auto V

As we learned in our recent review of Nvidia’s GeForce GTX 1080, Grand Theft Auto V runs pretty well on a wide range of hardware. This time around, we dialed back the resolution to 2560×1440 and left all of our other graphics settings the same. Pardon our wall of screenshots:


Click the buttons above to cycle through our frame time plots. As you can see, all of the cards we tested run GTA V quite smoothly at these settings—there are no large spikes at regular intervals that would suggest hitching, or wide swings up or down in the overall trend of the data. These graphs are the only place you want to see something flatlining, and all of the cards we tested turn in nearly-ideal results.

Going by the measure of potential performance that average FPS provides, most of our cards start off with a strong showing in GTA V. The Radeon RX 480 and GeForce GTX 970 are within a hair’s breadth of one another, while the R9 380X is further back and the GTX 980 pulls far ahead. Average frame rates don’t tell the whole story, though, so let’s have a look at the 99th-percentile frame time each card delivered during our tests. This measure shows how much time the card took to produce each frame for 99% of our benching run.

Not a bad result here, either. The GTX 970 delivers an ever-so-slightly better 99th-percentile frame time than the RX 480, while the R9 380X and GTX 960 bring up the rear. The GTX 980 is clearly in a class of its own with GTA V at this resolution, maintaining well over 60 FPS for most of our test.


These “time spent beyond X” graphs are measures of “badness”—the amount of time that a card was displaying animation that may have been less than fluid, or at least less than perfect, during our test.

If a frame takes longer than 50 ms to produce, for example, the time the card has to spend churning away on that difficult scene represents a drop below 20 FPS—a slowdown that most users will definitely notice. If a card consistently produces frames without taking more than 33 ms on each one, we know it’s running at 30 FPS or better. If frames take longer than that to get through the graphics pipeline, that slowdown can produce judder and other ugliness with vsync enabled on a 60-Hz monitor. Finally, if a card is completing frames in 16.7 ms or better, the corresponding animation on screen is moving along at 60 FPS or more. 60 FPS is the golden mark we’d like to see a card achieve (or surpass) for each and every frame.
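
In code form, the two advanced metrics look roughly like this. This is our own sketch of the definitions above: a nearest-rank percentile, and a “time beyond threshold” sum that counts only the excess over the cutoff (one reasonable accounting; TR’s exact tooling may differ):

```python
import math

def percentile_99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finished this fast or faster.

    Nearest-rank method; fancier interpolation changes little in practice.
    """
    ordered = sorted(frame_times_ms)
    rank = max(1, math.ceil(0.99 * len(ordered)))
    return ordered[rank - 1]

def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total "badness": time spent past a threshold such as 16.7 ms (60 FPS).

    Only the excess over the threshold counts here, so a single 20-ms frame
    contributes 3.3 ms against the 16.7-ms mark, not the full 20 ms.
    """
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)
```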

Happily, none of our cards spend any time past the 50-ms or 33-ms thresholds. The GTX 980 also gets a gold star here for not spending any time past the 16.7-ms mark. The GeForce GTX 970 spends just a fraction of a second under 60 FPS, while the RX 480 spends, well, a larger fraction of a second there. Both the Radeon R9 380X and the GeForce GTX 960 have a far harder time of it—they spend considerable amounts of time working on frames that caused the fluidity of animation to drop.

 

Crysis 3

Despite its 2013 release date, Crysis 3 is still plenty capable of putting the hurt on modern graphics cards. In fact, our GTX 960 was so overwhelmed by the game that it only sustained an average of 22 FPS at our chosen 2560×1440 resolution. We’re omitting it from these numbers as an outlier.


Once again, the RX 480 is neck-and-neck with the GTX 970, both in average frame rates and in its 99th-percentile frame time. Moving on.


In our measures of “badness,” the R9 380X spends a fair amount of time on frames beyond the 50-ms and 33-ms marks, and it’s the only card that has any kind of problems with keeping frame times below 16.7 ms for an extended period. Our data reveals that the GTX 970 and RX 480 both had some troublesome frames to churn through, but nothing to the degree of the R9 380X. The GTX 980 turns in a great performance.

 

Rise of the Tomb Raider

Lara Croft’s latest quest is one of the more demanding games we’ve come across recently. It posed a considerable challenge for even the GeForce GTX 1080 in our recent review, so we’re dialing it back to 1920×1080 for this batch of cards.


Sensing a trend yet? The GTX 970 and the RX 480 deliver practically identical performance in Rise of the Tomb Raider. The GTX 960’s frame time plot gets a little furry, but the R9 380X’s is well and truly all over the place. This is one situation where the value of frame-time benchmarking really shows itself: despite having identical average frame rates, the R9 380X and the GTX 960 have vastly different 99th-percentile frame times. As you might expect, playing RoTR on the R9 380X isn’t nearly as smooth an experience as it is on even the GTX 960.


The R9 380X spends a noticeable amount of time past the 33.3-ms mark. Moving to the 16.7-ms threshold, the GTX 980 is the champ, while the RX 480 and GTX 970 spend similar-but-significant amounts of time working on difficult frames that would result in a drop below 60 FPS. The R9 380X and GTX 960 bring up the rear.

 

Fallout 4

Like GTA V, Fallout 4 is another game that’s not terribly hard for most graphics cards to run well. We tested this game at 2560×1440 while leaving all of the visual quality settings the same as in our GTX 1080 review.


None of the cards in our tests have any trouble running Fallout 4 smoothly. The RX 480 overtakes the GTX 970 in our average frame rate measure, and it also matches the beefier GTX 980 in our 99th-percentile frame time metric. The GTX 970 takes ever-so-slightly longer to deliver most of its frames, while the GTX 960 and R9 380X bring up the rear.


In our “badness” measures, the action really starts to happen at the 16.7-ms threshold. The RX 480 and GTX 980 are neck-and-neck, while the GTX 970 is just behind. To be clear, none of these three cards are having issues that would severely impact animation smoothness in-game. The R9 380X and GTX 960 continue to skew our graphs, though.

 

The Witcher 3

Aside from running this title at 1920×1080, we left our graphics settings the same as they were in our GeForce GTX 1080 review. Ignore the resolution in our first settings screenshot.



GeForce GTX 960 aside, none of the cards we tested The Witcher 3 on have any problems running Geralt of Rivia’s adventures smoothly. The RX 480 turns in a higher average frame rate than the GeForce GTX 970, but its 99th-percentile frame time is a tad higher. Still, the GTX 970, RX 480, and R9 380X are all pretty closely matched here. The GTX 960 is off in a corner somewhere, while the GTX 980 delivers somewhat better average frame rates and a slightly lower 99th-percentile frame time than the rest of the peloton. No surprises there.


The RX 480 and the GTX 970 both spend about the same amount of time past 16.7 ms working on challenging frames in this title. Once again, we can say they’re pretty closely matched. On to the next.

 

Hitman

If our GTX 1080 review is anything to go by, Agent 47’s most recent missions are quite taxing for most graphics cards to render smoothly. We tested this demanding title at 1920×1080.


Here, the RX 480 really stretches its legs, even in DX11 mode. Its average FPS figures are just slightly behind the GeForce GTX 980, and its 99th-percentile frame time is even a smidge better than that card’s. The GTX 970 delivers a solid 99th-percentile result, but its average FPS numbers suggest that its performance potential is considerably lower than the RX 480’s in this title. Meanwhile, the Radeon R9 380X nearly matches the GTX 970, while the GTX 960 falls behind.


At the critical 16.7-ms threshold, the GTX 980 leads the pack, but the RX 480 doesn’t spend much more time working on frames that take longer than that to render. The GTX 970 and the R9 380X have a somewhat harder time of it, while the GTX 960 just isn’t up to the job. We’re not sure what’s up with our particular sample of GTX 960, but it doesn’t seem all there for some reason. Might have to take it out behind the shed after this review is over.

 

Power consumption

The power consumption of Polaris chips is another major point of interest for us. We’ve seen AMD demo systems running Star Wars Battlefront before with tantalizing numbers on the power meters beside them, so we were excited to replicate that experience in our own testing environment. To test power draw, we fire up a real game, Crysis 3, to show how much juice each graphics card needs under a typical workload.

Well, that wasn’t quite what we expected. The RX 480 doesn’t draw much more power at idle than the competition, but it’s worth noting that Polaris aside, all of the cards we’re testing are using GPUs fabricated on a 28-nm process. Polaris 10 doesn’t seem to benefit much from the move to 14-nm fabrication as far as idle power draw is concerned.

Fire up Crysis 3, and the RX 480 draws as much power to do its thing as the GTX 970. If we consider the Radeon R9 290 (or the R9 390) the GTX 970’s most natural competitor on the 28-nm process node, AMD has shaved anywhere from 100W to 140W off that card’s power draw while delivering the same performance, if our past reviews are any indication.

An improvement that large is impressive until one considers that the GTX 1080s in the TR labs need 265W-300W of total system power to do their thing. To be fair, our power numbers are one measurement taken under one particular load and in one particular testing environment, and modern power management is a complicated thing with many input variables. All that said, our gut impression upon seeing these numbers is that Pascal is frighteningly efficient, more so than the GTX 1080 taken in isolation might have suggested. Our completely wild hunch is that Nvidia has tons of headroom to play with in designing a Pascal GPU to target this price class, if it wants to.

Noise levels and GPU temperatures

Before we take a look at the RX 480’s cooling performance, we should make a quick note about our test environment. The closed-loop liquid cooler on our test system’s CPU produces about 40 dBA on its own, so our noise floor is artificially high. Noise numbers near or below 41 dBA indicate that a given graphics card isn’t exceeding that rather high value to begin with, at least. The ambient temperature in our testing environment was about 73° F during our tests.

We’re not fans of blower-style coolers for the most part, and the RX 480’s isn’t doing anything to change our minds. While the card runs at or below our 40-dBA-ish noise floor at idle, its load noise level climbs to 51 dBA—just short of the triple-fan cooler on our Gigabyte Windforce GTX 980. Despite all the sound and fury, the GPU still reached 83° C under load, too.

One of the characteristics we’ve come to associate with efficient graphics cards is polite manners in the noise department, but the RX 480’s reference design doesn’t deliver. The blower fan isn’t pleasant-sounding, either—it’s grindy and obtrusive. We hope AMD’s board partners have custom coolers in the works that deliver a better experience in the noise, vibration, and harshness department.

 

Conclusions

Before we get too deep into our thoughts and feelings about Polaris and the Radeon RX 480, let’s break out a couple of our famous value scatter plots. We’ll first look at the performance-per-dollar these cards offer going by the potential measure of average FPS. To accurately reflect the changes in price many of the graphics cards in our tests have experienced since their release, we’ve averaged the prices of those cards on Newegg right now. We used the RX 480 8GB card’s $240 suggested price for these results.

The RX 480 8GB card we tested delivers a hair more performance potential than the GTX 970, but at a significantly lower price than the average going rate for that card on Newegg right now. We’d have to test the 4GB RX 480 to be truly sure of its value proposition, but just imagine a similar dot at $200, and AMD might have quite the hit on its hands.

Going by the measure of performance potential that average FPS provides, the RX 480 is sometimes slightly faster than the GTX 970, and it’s sometimes a little slower. Those familiar with the long-running battle between the GeForce GTX 970 and the Radeon R9 290 should be getting a sense of deja vu right now. What’s nice about the RX 480 is that AMD is extracting that kind of performance from a die that’s roughly half as large as Hawaii with what is, in many respects, a smaller engine inside. 

Next, let’s take a look at our 99th-percentile-FPS-per-dollar graph. We take the geometric mean of the 99th-percentile frame time each card delivers across our test suite and convert it into FPS to make our higher-is-better logic work.
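
Here’s a sketch of that conversion, assuming one 99th-percentile frame time per game, in milliseconds:

```python
import math

def overall_99th_percentile_fps(per_game_frame_times_ms):
    """Fold per-game 99th-percentile frame times into one FPS score.

    The geometric mean keeps a single outlier title from dominating the
    average, and 1000/x converts milliseconds into higher-is-better FPS.
    """
    logs = [math.log(t) for t in per_game_frame_times_ms]
    geomean_ms = math.exp(sum(logs) / len(logs))
    return 1000.0 / geomean_ms

# Example: frame times of 16, 18, and 20 ms work out to ~55.8 FPS overall.
print(round(overall_99th_percentile_fps([16, 18, 20]), 1))
```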

The RX 480 just barely squeezes past the GTX 970 in this measure, but its appealing price tag helps it plant a flag that’s the highest and leftmost on our 99th-percentile FPS chart. Once again, imagine a little dot in a similar place on the $200 line. That’s pretty incredible performance in our advanced metrics for a card at this price point.

Indeed, what’s most notable about the RX 480 compared to past Radeons of any price is its consistently smooth frame delivery. Where AMD’s older cards have trailed the GeForce competition in delivering smooth gameplay—often by wide margins—the RX 480 chalks up a huge improvement in both our advanced 99th-percentile frame time and “badness” measures compared to the Radeon R9 380X. We’re completely comfortable calling the RX 480 the equal of Nvidia’s GeForce GTX 970 in those regards. That’s excellent progress from the red team, and we hope that whatever mojo is responsible for this turn-around works its way into every future AMD graphics card.

On the other hand, the RX 480’s power consumption and noise figures aren’t as rosy as we might have expected them to be. Our power consumption tests today aren’t perfectly comparable to those in our older reviews, but one Radeon R9 290-powered system with a similar CPU and motherboard drew about 400W under load when we reviewed that graphics card as part of a larger test a while back. The Radeon RX 480 delivers similar performance to that card while shaving about 140W off the total power draw of our system. TR readers helpfully point out that using board power as a rough guide, the RX 480 is about 90% more efficient than the R9 290 before it, considering the performance we measured. Either way, that figure seems to fall short of the 2.8X performance-per-watt increase that AMD often touted with Polaris.
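
For reference, the back-of-the-envelope version of that reader math, using rated board powers, looks like this. The 275W figure for the R9 290 is the commonly cited spec, an assumption on our part:

```python
r9_290_board_w = 275.0      # R9 290 rated board power (commonly cited spec)
rx_480_board_w = 150.0      # RX 480 rated board power
relative_performance = 1.0  # our tests put the two cards in roughly the same place

perf_per_watt_gain = (relative_performance / rx_480_board_w) / (1.0 / r9_290_board_w)
print(round(perf_per_watt_gain, 2))  # ~1.83: in the ballpark of "about 90% more efficient"
```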

If you care about noise and heat, the performance of the reference cooler on the RX 480 will probably leave you wanting, too. It’s quite loud at full tilt, and it lets the GPU underneath get quite hot under load. Perhaps AMD needs to set the designer of the Wraith cooler loose on its graphics cards, as well. Like the Founders Edition GeForce GTX 1080 we just reviewed, we think most buyers will probably be best off waiting to see what sort of custom cooling AMD’s board partners have in store before dropping two Benjamins or more on a reference card.

As we’ve noted, the 4GB version of the RX 480 delivers efficiency and performance figures that are both pretty similar to a GTX 970, and it pulls off this feat for a $200 suggested price. We think that’s a nice place to be. A performance jump like this hasn’t happened around this price point for a long, long time, and it’s quite welcome. The RX 480 8GB offers a bit of “future-proofing” in memory-hungry games like Rise of the Tomb Raider for $40 more. Either card should meet Oculus’ and HTC’s recommended specs for a Rift or Vive, so aspiring VR junkies can put the $65 to $150 extra that a GTX 970 would command right now toward a VR headset. Regular gamers can just enjoy fast, smooth gameplay in traditional titles and pocket the cash.

Right this second, the RX 480 sets a new bar for performance and smoothness at its price point, and it’s undoubtedly the midrange card we’d recommend to most—at least, once AMD’s board partners get their hands on it. It’ll be interesting to see what Nvidia’s answer to the RX 480 will be, but for now, we’re excited to see where AMD will go now that it has its eyes on the stars.

Comments closed
    • Lore
    • 3 years ago

    Biggest surprise for me in this review is how badly the gtx960 is beaten by the R9 380x. Will probably be buying the Sapphire Nitro rx480; the stock cooler performance and power draw from the socket are unacceptable. My 7970 will have to soldier on for a couple more weeks…

    • Mr Bill
    • 3 years ago

    Ahem, “Star of wonder, star of night” is the Bethlehem “star”. Polaris is named either for the pole star (http://earthsky.org/brightest-stars/polaris-the-present-day-north-star) in the tail of Ursa Minor (the little bear, the little dipper, or even more anciently the tail of the little dog; see http://en.es-static.us/upl/2010/07/polaris_big_dipper_little_dipper.jpg), or for the Polaris missile.

    • tipoo
    • 3 years ago

    Oh, Wasson was tweeting about the 480, I missed it at the time! And about what they probably hired him for: Frame time consistency.

    https://twitter.com/scottwasson/status/748162402265403392

    He’s also looking masterful in that beard.

    • torquer
    • 3 years ago

    I am so sick of TR’s bias on video card reviews. It has been literally YEARS since they even reviewed a Matrox card!

      • tipoo
      • 3 years ago

      And a decade since an S3 graphics review!

      https://techreport.com/review/9898/s3-multichrome-dual-gpu-technology

        • torquer
        • 3 years ago

        Just part of the long, long slide toward irrelevancy… *sigh*

          • tipoo
          • 3 years ago

          It’s sad seeing those 10 year old comments now hoping this was its slow ramp up to being a viable Nvidia/ATI competitor. Instead it was a ramp straight downwards.

            • torquer
            • 3 years ago

            Agreed. They do have some great multi monitor tech at least. They were pioneers

      • the
      • 3 years ago

      Sadly Matrox isn’t even developing their own GPUs anymore, but they’re still around. They’re in the business of releasing cards for niche markets using AMD GPUs (https://techreport.com/news/29679/matrox-c900-graphics-card-drives-nine-displays-from-a-single-slot). Board designs are their own, and I’d fathom GPU/memory clocks are a bit different from the desktop versions to reduce power consumption (passively cooled designs are a big deal in the niche Matrox still serves).

      As far as companies I’d like to see make a comeback, PowerVR could sneak into the desktop once again. They certainly have a power-efficient and modern design, but no one seems interested in building a desktop-level chip using it. I do have a feeling that any desktop hardware would be waiting for the PowerVR Wizard architecture to mature to provide hardware raytracing acceleration.

      • spiritwalker2222
      • 3 years ago

      Heck, when was the last time the reviewed an ATI card?

    • Billstevens
    • 3 years ago

    On the bright side, it looks like AMD has improved all aspects in their mid tier: better noise (even with the blower), better power consumption, and competent performance, with the launch-day MSRP being honored, at least in the US. Oh, and there is evidence their drivers are correcting frame timing issues even for old cards. The 1080 review showed much better AMD frame timings in games like The Witcher 3.

    Possible downside when the 1060 gets launched: Pascal appears to be more power efficient than Polaris, even at 16nm versus 14nm.

    Possible upside versus the 1060 is that the 480 may scale better with a minor OC, giving it a chance at better performance.

    • Sam125
    • 3 years ago

      It seems to me that the 4GB RX480 is going to be a smash success at its price point, provided that the AIB vendors don’t increase prices for their “improvements” too much.

      • NeelyCam
      • 3 years ago

      Depends on 1060

        • Chrispy_
        • 3 years ago

        The 1060 is looking likely to outperform the 480 by a comfortable margin and at a perf/Watt advantage.

        Expect the greedy Jen-Hsun Huang to charge as much as he thinks the market will bear. If it’s 15% faster, it’ll cost 15% more. If it’s more efficient it’ll be quieter and cheaper to run too, so he’ll monetise that on top of the 15%.

        Maybe the 1060 will be $275, ’rounded up’ to $279. I can hope it’s $249 but the more I think about it, the more unlikely it is. Nvidia is going to sell every 1060, 1070 and 1080 they can make for the next three months, so the plan clearly has to be to rake in whatever people will pay for now and worry about being competitive once the initial demand has died down, in the fall perhaps.

          • sweatshopking
          • 3 years ago

          Greedy? Wtf. I thought his job was to make NVidia money?

      • tanker27
      • 3 years ago

      For the love of… please get your terminology right: ALL video cards are AIB. There is Reference and Non-Reference, or even Custom. AIB literally just means “it’s a graphics card” >.<

    • AnotherReader
    • 3 years ago

    Kyle at HardOCP reports that AIBs are seeing better overclocking with their custom cards (http://hardforum.com/threads/amd-radeon-rx-480-video-card-review-h.1903637/#post-1042385832). Edit: Linked to the post.

      • torquer
      • 3 years ago

      No no, see they should give the finger to their stockholders and start giving cards away.

      The market sets the price. If people weren’t willing to pay what Nvidia charges, they’d drop prices. Happens all the time.

    • ronch
    • 3 years ago

    Ironically, the reference card tells board partners how NOT to build an RX 480 card.

      • Anonymous Coward
      • 3 years ago

      Can be a useful data point either way.

    • DPete27
    • 3 years ago

    So since the RX480 draws [more-or-less] exactly as much system power as the GTX1080 and we have a common baseline of the GTX980 in both reviews, we can conclude that the GTX 1080 offers 75% more performance per watt compared to the RX480…wow!

      • PixelArmy
      • 3 years ago

      https://www.techpowerup.com/reviews/AMD/RX_480/25.html

    • rinshun
    • 3 years ago

    I have to ask: in Tomb Raider and The Witcher 3 benchmarks the RX 480 got some frametime spikes. Can you “feel” them while you are playing?

    The competition seems to have a smoother frametime curve even though the 480’s averages are better.

    I’m asking that because I live in Brazil and the prices here for both of them are roughly equal

    • joselillo_25
    • 3 years ago

    Two days reading posts and still do not know what card to buy 🙂

    Is first time in my life I am tempted to buy a console to end all of this bullshit 🙂

      • Ninjitsu
      • 3 years ago

      Or just buy a 1070 for the same price*

      *depending on your location and stuff.

        • Mr Bill
        • 3 years ago

        *Falling off a truck*, for example?

      • rechicero
      • 3 years ago

      If you need to buy right now: it depends on your budget and what you want. But I’d say it’s pretty clear.

      Best sub-250: 480 (trading blows with the 970, but with more memory, better room for perf improvement via drivers, and probably better for DX12 in the future). And FreeSync is $75-100 cheaper than the G-Sync road with similarly priced cards; remember that if it’s important for you.

      Best, period: 1080 and 1070.

      If you don’t need to buy right now and want to go for the sub-250, wait 2-3 weeks, check the 1060, and if you decide to go for AMD (an important choice if you want FreeSync or G-Sync down the road), go for a good non-reference card.

      I think everybody would say the same (even chuckula); I don’t think there is actually much debate. The 480 could be better, a lot of ppl expected better power efficiency, but perf per dollar is the best, and power efficiency is not worse than the competition right now.

        • credible
        • 3 years ago

        Considering the vast chasms in benchmarks with the 480, at least the 8gb version, I believe this card has a lot more to give, first with Asus and the like then driver optimization.

        I have a 970 atm and I really think I will wait a few weeks, till this supposed price war starts and will grab a 480.

        Gonna put my money where my mouth is because having a competitive AMD is very important for the graphics card industry and especially, us the consumer.

        It seems more than reasonable for me to assume that the next cards coming from AMD are really gonna show what this new team has going for it.

          • credible
          • 3 years ago

          Plus my sons have a 660ti and a hd 7850, so for that reason alone, meaning price, I will be going with this 480 for both of them…though certainly not the reference one lol.

            • Concupiscence
            • 3 years ago

            Call me crazy, but I’m pretty sure you could save some green and give one of your boys the 970. They wouldn’t wail and gnash their teeth to receive such a thing, and you’d save a nice little pile of money in the process.

          • flip-mode
          • 3 years ago

          Going from a GTX 970 to an RX 480? Man, that does not make any sense.

            • Concupiscence
            • 3 years ago

            Discounting the regularly invoked scenario of compute shaders during gameplay, they’re really close to each other in everything that isn’t memory constrained. The 480’s a super upgrade for anyone running Pitcairn or all but the beefiest Kepler parts (or Fermi...), but replacing perfectly acceptable hardware with a new design that’s got obvious teething issues is... a little zealous?

            • credible
            • 3 years ago

            Yes a little over zealous and got a little mixed up as well.

            For me I was meaning I am going to wait till vega comes out as this is supposedly the lower end of their new cards.

            But getting a 480 for my oldest, who has the 7850 is a no brainer and the other boy can still wait a bit and take my 970 while I hopefully can justify getting a vega card.

          • rechicero
          • 3 years ago

          I think if you need to choose between buying a 970 and a 480, it’s a no brainer: the 480 is better. But if you already have a 970, I’d say… wait til next gen.

      • NeelyCam
      • 3 years ago

      Consoles ftw!

      Or, wait until 1060 reviews

        • travbrad
        • 3 years ago

        Yep consoles make the choice so much simpler! You get to choose between a rubbish GPU (Xbone) and a slightly less rubbish GPU (PS4), paired with a mediocre CPU. Things are so much easier with less choices.

      • Beelzebubba9
      • 3 years ago

      I just bought a GTX 1070 and decided to be done with it.

      It’s clear that the RX 480 won’t upset nVidia’s performance hierarchy and neither will the 1060, so the 1070 is a pretty well known value at this point.

    • Tech_Geek
    • 3 years ago

    Excuse me if I missed that part (I checked again), BUT where are the DX12 tests? Sorry, but this conclusion is incomplete and unacceptable due to the lack of DX12 benches; you can’t just ignore these and draw a conclusion based solely on DX11 performance. This is what I think, down-thumb me however you want

      • Ifalna
      • 3 years ago

      Currently DX 12 SUCKS. Leave it off until game devs get used to it:
      https://techreport.com/news/30335/here-an-early-look-at-dx12-inside-the-second-benchmark-data

      I can understand why TR omitted these results from the actual benchmarks: due to a lack of proper optimization, they are, simply put, unusable.

      • Jeff Kampman
      • 3 years ago

      https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support

      Yeah, based on that tiny handful of games (both released and planned), I’m perfectly OK with the conclusion we drew. DX11 performance is still much more important.

        • VincentHanna
        • 3 years ago

        Speaking as someone who still runs GTX 580s and is thinking of upgrading, I am more interested in what I can expect running DX12 games, since that will be one of the major features that actually DRIVES my desire to update. Some of the people who use your site don’t upgrade their GPU every 6 months. Backwards-looking benches do everyone a disservice, but especially those who want a card that will stick around and earn its keep a little more.

      • NarwhaleAu
      • 3 years ago

      Agreed – there is a missing piece that you can fill in from other reviews though. The good news is the 480 seems to outperform the 980 on some DirectX 12 titles. Regardless, it looks like a relatively better performer when looking at DirectX 12 compared to DirectX 11. That puts it ahead of the 970 for me, even if they were the same price.

      • NeelyCam
      • 3 years ago

      Here’s some: https://techreport.com/news/30335/here-an-early-look-at-dx12-inside-the-second-benchmark-data

      • Ashbringer
      • 3 years ago

      I was also wondering why there’s no Ashes of the Singularity or Hitman in DX12 mode. Why wouldn’t they benchmark in those modes? Especially since DX12 is pretty much in every major upcoming AAA game.

    • NeelyCam
    • 3 years ago

    “We accepted the unacceptable.”

    • tsk
    • 3 years ago

    How long does it usually take for AIBs to bring their cards to market?
    I see Asus, MSI and Sapphire have all teased their cards, but I really need the RX 480 within two weeks.

    • Chrispy_
    • 3 years ago

    So I’ve been reading all the other reviews around the web and have come to one conclusion:

    AMD have screwed the pooch with the reference blower, yet again.

    The GPU is fine; it performs exactly as expected (similar to an R9 390X), which means it performs close to a factory-overclocked 970 for the most part. Even at the inflated $239, that’s a frickin’ bargain right now.

    The cries of disappointment are really just that the chip uses more power than hoped. Clearly some of the blame for that is GloFo’s 14nm process, which just doesn’t seem to run at the clocks and voltages TSMC’s 16nm is running at. That’s unfortunate, but nothing can be done about it.

    The other half of the blame is on AMD’s rubbish cooler that fails the chip. The fan control has a temperature target of 80C, but most of the reviews have it at 83-85C, all whilst the fan generates far more noise than a card of this level ought to. The competition runs 5-15C cooler, and we know that the cooler the chip, the more efficient it is. With the Fury X, AMD’s watercooling to 55C or so allowed them to make a huge performance/Watt jump on 28nm; here, we’re seeing the opposite, where high temperatures are invoking worse performance/Watt than expected.

    Once the third-party coolers appear and are able to keep the GPU at lower temperatures, I think we’ll see cards drawing less power and/or staying at max boost clock for more of their load duration.

      • Kaleid
      • 3 years ago

      Here’s a test with another, better cooler, and unsurprisingly it overclocks better whilst at the same time running cooler.

      Personally I’m waiting for 3rd party solutions to hopefully skip the PCI-E problems completely.

        • Chrispy_
        • 3 years ago

        Missing a link?

      • Mr Bill
      • 3 years ago

      Tom’s Hardware measured 42dBA at load, no louder than the NVIDIA 1070 at the same load.
      http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-10.html

      But I think it would take a vapor chamber heat spreader to get the temperature lower.

    • christos_thski
    • 3 years ago

    So why didn’t they simply use an 8-pin connector instead of screwing up the power delivery? Are there so many PSUs with 6-pin PCIexpress power delivery only? I’m loving the value for money in this card, but I’m waiting for this to be resolved before I buy one (and right now, nvidia does not have anything comparable).

      • tipoo
      • 3 years ago

      Some third party board makers are already swapping the power circuitry and putting in an 8 pin. Zero reason to go with the stock blower at this point.

      As for why AMD did it, I dunno, man.

        • Mr Bill
        • 3 years ago

        The two-sided inlet is a good idea. There is nothing wrong with not blowing dissipated heat all over the inside of the PC case.

        (1) Perhaps a redesign of the blower blade profiles would make less noise at the same speeds.
        (2) Possibly a different material for the fan blade would reduce resonant noise.
        (3) A vapor plate would move more heat from the chip and dissipate it over a larger area.
        (4) A vapor plate would give better thermal transfer at lower fan speeds.
        (5) 8-pin power circuitry would give better overclocking headroom if the cooling were also better.

        A vapor plate will add ~$30 to the cost.
        1U Dynatron R15 Vapor Chamber: https://www.amazon.com/Dynatron-R31-Chamber-Passive-heatsink/dp/B00Q40L27U

      • Bensam123
      • 3 years ago

      6/8 pin has nothing to do with draw. They could have 2x8pin connectors and draw 90% of the power through the PCIE slot. From what I read on Anandtech, they’re already working on this and there might be a driver update next week.

    • anotherengineer
    • 3 years ago

    I remember at one time GPUZ had an ASIC rating, do they still have that and does it work for these new cards and Nvidia’s?

    • kuttan
    • 3 years ago

    Toms Hardware reviewer Chris Angelini’s Nvidia bias is once again proved.

    The very same PCI-E power draw concern Mr. Chris Angelini raised with the RX 480 equally applies to the GTX 960 (it’s worse). But for the GTX 960 it’s not a big deal for him, while for the RX 480 it’s dangerous, your motherboard will get fried, etc.

    His comparison between GTX 960 power draw vs the RX 480: http://imgur.com/mmfBUAw

    Consumers get fooled by these people with double standards. Really a shame.

      • chuckula
      • 3 years ago

      I think you might have gotten fooled by.. uh.. yourself.
      Can you point out what you are trying to show, because all I see is one graph showing the GTX-960 staying within the limits of PCIe power delivery and another graph for the Rx 480 showing otherwise.

        Especially since that’s just the 12V rail on the PCIe slot being measured, and NOT the full power delivery including the 3.3V rail. The limit for the 12V rail by itself is 66W (12V * 5.5A), while the remaining 10 watts go to the 3.3V rail (3.3V * 3A). You might note that those numbers technically add up to 76W, not 75, but there are some single-digit percentage tolerances built into the standard (https://en.wikipedia.org/wiki/PCI_Express#Power).

        So an average draw of just shy of 60W on the 12V rail of a PCIe slot meets the spec. An almost 80W draw on the same rail does not meet the spec.

        • DancinJack
        • 3 years ago

        Just another confused AMD fanboi (not that there aren’t Nvidia ones too). Hopefully he/she understands the graphs now that you’ve explained them. We’ll see.

        • kuttan
        • 3 years ago

        Look closer to see which card is worse there. GTX 960 spikes power above 150W a lot more than RX480 does. GTX 960 also had much higher power fluctuation in comparison to RX480.

          • chuckula
          • 3 years ago

          Yeah, fanboy confirmed. I tried to be nice, but then you kept pushing it.

          The graph you posted was not for a stock GTX 960, it was for a special overclocked Asus strix version that showed some transient spikes while still keeping an average power draw level that meets the spec.

          Now, those transient spikes are bad, nobody is denying that.

          In fact, the author of the THG guide article didn’t deny it at all:
          “We’ve got to go back to the foundations article mentioned above to put the measurements at separate rails into context. This is because the otherwise very good Asus GTX 960 Strix leaves the motherboard connector to deal with unprecedented unfiltered power spikes all on its own:”

          Oh, but, once again that was a non-stock specialized OEM card, not the default card with the default BIOS and default power delivery mechanism. So Asus screwed up a bit. However, if the GTX-960 was really truly all that bad, maybe the OTHER GTX-960 IN THE SAME REVIEW would have shown problems. But it didn’t.

          “For comparison, here’s a look at the Gainward GTX 960 Phantom OC, which presents a much more calm picture, while being almost as fast as the Asus GTX 960 Strix.”

          Read the whole article here: http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

            • kuttan
            • 3 years ago

            The graph I posted is a GTX 960 with a single 6-pin PCI-E power connector, just like the RX 480. The point here is that the ASUS GTX 960 Strix’s PCI-E power draw is worse than the RX 480’s (the GTX 960 there spikes above 150W a lot more than the RX 480 does), but Chris Angelini is not so vocal about Nvidia, is what I said.

            • DancinJack
            • 3 years ago

            lol k

            • pranav0091
            • 3 years ago

            He is certainly vocal enough – about the ASUS card. There is no need to be vocal about Nvidia; it’s not a reference card that showed that behaviour. Even then, its average power draw is safely inside the PCIE specs, unlike the RX 480’s.

            You can’t blame the car manufacturer because a drunk drove over the speed limit.
            How about you actually read that full page for once, you know?

            <I work at Nvidia, but my opinions are personal>

            • rechicero
            • 3 years ago

            That was actually a good answer, with a link :-). And it tells me this may not be a problem with the GPU but with a badly designed card (seeing how things can change with exactly the same GPU). Thanks, I learned a lot from this kuttan vs. chuckula argument!

            Anyway, kuttan’s first statement could actually be true (I don’t have time to check right now). Both the Strix 960 and the 480 grab more power than they should from the mobo; if the writer made a big thing of it with the 480 and only a passing comment about the 960, that would be bias. I say “if” because I need to go and can’t read both articles to check, sorry.

          • pranav0091
          • 3 years ago

          How about you read the whole review, sir?
          That was one isolated AIB card showing that behaviour, but still averaging under the prescribed limits.

          How about you read ALL of this page before jumping to a conclusion:
          http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

          Also, read what chuckula wrote up above.

          <I work at Nvidia, but my opinions are purely personal.>

        • rechicero
        • 3 years ago

        Are we looking at different graphs? The one I see has the 960 consuming more than 225W (please note that the scale is different: for the 480 it’s 0-150 W, while for the 960 they needed to go all the way to 300 W; with the same scale, the 480 would “seem” less watt-hungry). Yes, we are looking at different graphs, as you seem not to have noticed that there are two graphs for each card: one for the 12 V rail and one for the whole PCIe slot.

        PS: In the 12V graphs I see pretty similar things, with the 960 spiking a little higher and more often, and the 480 being more consistent in its power draw.

        EDIT: It looks like it is cherry-picked. I really wish you were a little more neutral, because in your next post you really nailed it with arguments (kuttan’s example was cherry-picked).

      • Theolendras
      • 3 years ago

      These kinds of intellectual shortcuts and deformed facts are spread daily by Donald Trump, yet he is popular… Ignorance by itself is not that bad; spreading false conclusions based on wildly misinterpreted data is much worse. Congrats, today you’re in the second category.

      • rechicero
      • 3 years ago

      I was going to downvote you (I don’t like the name-calling), but I checked the link and… wow. A Strix 960 grabbing more than 225 W from the PCIe slot… and probably in a less stressful scenario.

      Edit: It looks like you cherry-picked a 960 for this. The sad thing is, if you want to defend AMD no matter what, the argument would be: “Hey, look how things can change with different implementations of the same GPU. The 480 seems rushed in that department (crappy cooler), so it could be just a crappy PCB, and it will be fixed in non-standard models.” And, at the same time, your point about the writer… I guess I’ll need to read the whole article.

    • chuckula
    • 3 years ago

    As somebody old enough to remember when THG was run by Dr. Pabst and was on the same level as Anandtech (when Anand was still too young to buy beer legally!), and then to have witnessed its downfall, I have to give their power measurement regime quite a bit of credit for professionalism.

    They are using a special PCIe riser that measures power directly at the PCIe socket, plus a separate meter that measures power through just the PCIe power cables (a 6-pin cable in this instance). That’s the right way to get a precise and accurate measurement of just what the card is consuming [and WHERE it is getting that power]. It avoids issues like the way every other component in a PC starts to pull more power under load just as the GPU does, and it sidesteps the vagaries of PSU efficiency at different load levels.

    Having said all that, I’m not saying that THG’s results should be 100% conclusive on the subject, but it should be more than enough to get other people looking at the issue using proper power testing equipment.
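    As a rough sketch of the accounting that setup enables (the function name and example readings are illustrative, not THG’s actual tooling):

[code]
def card_power_w(slot_12v_w, slot_3v3_w, aux_cables_w):
    """Total board power = both slot rails measured at the riser
    plus whatever flows through the external PCIe power cable(s).
    PSU efficiency and the rest of the system never enter into it."""
    return slot_12v_w + slot_3v3_w + aux_cables_w

# e.g. hypothetical RX 480-like readings under load:
print(card_power_w(80.0, 3.0, 82.0))  # -> 165.0 W total board power
[/code]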

      • AnotherReader
      • 3 years ago

      +1 for that and the trip down memory lane. THG was the first review site that I stumbled upon; I think it was around the time of the first Voodoo. PC Per and THG are measuring power in the right way; I hope others take a leaf from their book.

        • chuckula
        • 3 years ago

        I checked out PCPer too. They didn’t have quite the level of detail as to the precise power source, but their graph that isolates power delivery to the GPU from the rest of the system definitely spent a large amount of time in the 160 watt range.

      • chuckula
      • 3 years ago

      OK: PC Perspective did its own analysis and came to results that are similar to THG’s: http://www.pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-Radeon-RX-480

      At this stage I think there is something real here, and it needs to be addressed. I’ll make a forum topic about it later.

      • Theolendras
      • 3 years ago

      I agree this is the right way, although measuring whole-system power consumption shows other interesting trends… like inefficiencies in drivers leading to higher global consumption, not just locally on the adapter.

      • HERETIC
      • 3 years ago

      Yeah, they certainly hit rock bottom, but they seem to be on the way back up now. Since Chris has been doing their SSD reviews they’re good, and their PSU reviews are up there with the best…

      • Airmantharp
      • 3 years ago

      Got to wonder about the next step: if it were possible to limit power delivery to the card to the PCIe spec for the slot and the 6-pin, how would that affect the card?

      This is of course separate from the question of the card attempting to pull too much juice from barely-in-spec components that may not be able to properly limit power draw and avoid damage!

      • Chrispy_
      • 3 years ago

      There are some excellent articles and reviews on THG; you can no longer be mocked for citing a THG article, since it may well be top-notch and ground-breaking.

      THG is the Amazon Marketplace of reviews, though. The quality varies massively, and the site owner (Purch) is not impartial; some (very few) of the articles are basically advertorials disguised as reviews. Likewise, they have a large pool of reviewers, and even when a review is impartial it can still be a low-quality, unprofessional one.

      As long as you’re smart enough to sort the wheat from the chaff, it’s not so bad.

    • sparkman
    • 3 years ago

    “That’s excellent progress from the red team, and we hope that [s]whatever mojo[/s] Scott Wasson, who is responsible for this turn-around, works [s]its[/s] his way into every future AMD graphics card.”

    There, I fixed that for you.

    • AnotherReader
    • 3 years ago

    HardOCP’s RX 480 bucks the trend by drawing far less power than a GTX 970 (http://www.hardocp.com/article/2016/06/29/amd_radeon_rx_480_video_card_review/12). That is a promising sign.

    Edit: I noted that they received the review sample from AMD, so it seems the hatchet is buried for now. Kudos to AMD for not repeating their behaviour at the time of the Nano’s release.

      • stefem
      • 3 years ago

      Power usage changes with the application and situation; for example, NVIDIA consumes a lot less in Hitman than in Metro LL.

        • AnotherReader
        • 3 years ago

        That is correct, but a 50 W variance out of 150 W is out of the norm.

          • stefem
          • 3 years ago

            They are measuring the power consumption of the entire system, so it’s more like 50W out of 300W.

            • AnotherReader
            • 3 years ago

            Umm. The GPU performance is similar so the rest of the system can be disregarded.

            • stefem
            • 3 years ago

            You are making a lot of approximations; there are lots of variables to consider. It may be a particularly hungry GTX 970, more load on the CPU in a GPU-limited scenario, a less hungry RX 480, or even (and more likely) a combination of the above.
            That’s why some sites measure card power alongside/instead of system power and did not get the same results; benchmarking is a really serious business if you want to do it properly.

    • ronch
    • 3 years ago

    I’m willing to bet a GTX 1060 that delivers the same performance as the 480 will kill it in terms of efficiency.

    http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,5.html

      • Theolendras
      • 3 years ago

      I bet it will deliver better performance under DX11 too, but at a higher price. Not that it can’t be justified, but any way I slice it, whichever product you choose going forward is a good leap in value.

    • Anonymous Coward
    • 3 years ago

    I wonder, speaking of efficiency, would a shiny new GPU design be more efficient at simplistic pixel-pushing than a giant version of a years-old design? Are they gaining efficiency on simple triangles, or just gaining efficiency considering all the other insane complicated things they can do?

    Texture compression is a good trick, ignoring degenerate triangles sounds good, rendering a scene from multiple views sounds clever, and antialiasing optimizations are good.

    But is all that computational potential coming at a noticeable cost in basic operations?

      • WaltC
      • 3 years ago

      There is nothing “simplistic” about “pixel pushing”…;)

      Texture compression et al. is not a “trick”; it’s a technology that adds to the baseline performance (if it didn’t, they wouldn’t use it). “Giant versions of years-old designs” are extremely limited by both the design and manufacturing capabilities of their day. Pretty much every brand-new architecture shoots for efficiency gains everywhere.

        • Anonymous Coward
        • 3 years ago

        They doubtless shoot for optimizations everywhere, but complications and flexibility should cost [i]something[/i], and GPUs can scale sideways to absurd levels. Is the flexibility of new designs enabling efficiency optimizations that would not be available to something from the DX9 days?

    • sweatshopking
    • 3 years ago

    This is basically just a die shrink of last gen’s arch. It should be treated as such.

      • chuckula
      • 3 years ago

      Is the shrunken die the reason for the shrunken text in your post?

      • tipoo
      • 3 years ago

      It’s as big a jump as any of the GCN iterations, plus a fab shrink (even if GloFo isn’t so hot… or rather, is). Would you call Volcanic Islands, GCN 1.1, 1.2, 1.3, and all that, generations?

        • sweatshopking
        • 3 years ago

        GCN IS OLD AND TIRED. TIME FOR GRAPHICS CORE NEXT NEXT

          • derFunkenstein
          • 3 years ago

          I hear that

          • Stochastic
          • 3 years ago

          Any chance this will happen anytime soon? With AMD manufacturing the current and upcoming console components, it’s probably in their best interest to not let the GCN architecture deviate too much for a while.

            • tipoo
            • 3 years ago

            Not likely; their roadmap has been very public, and Vega is also GCN. But Vega has being on TSMC going for it; that’s when we’ll really learn what the efficiency equation is between Polaris and Pascal, with the playing field even.

            http://cdn.arstechnica.net/wp-content/uploads/2016/03/Roadmap-640x360.jpg

            • stefem
            • 3 years ago

            It was part of their strategy; they even said so publicly, in a tweet by Joe Macri if I remember correctly (can anyone confirm?). They tried to play a gambit against NVIDIA with consoles and Mantle.
            Since most games are ports from consoles, that sounds like a smart move. Well, maybe not so much from the perspective of the PC gamer.

            • derFunkenstein
            • 3 years ago

            You know that’s right

          • tipoo
          • 3 years ago

          That’s exactly why that name was as good an idea as Nintendo’s fondness for “New”, lol. Why doesn’t this game work in my new 3DS? Oh, you don’t have a New 3DS, just a new 3DS. Wut.

            • derFunkenstein
            • 3 years ago

            We were contemplating getting my daughter a 3DS for her birthday in about a month but the confusion between new 3DS and New 3DS, and the fact that a New 3DS doesn’t come with a freakin’ charge cable (with Nintendo’s stupid proprietary plug, no less), caused us to abandon that idea.

      • ronch
      • 3 years ago

      I’m actually thinking about that too. Seems to me AMD needed a die shrink back in the Maxwell days to hit NV’s efficiency numbers, but then 20nm was canceled.

      It could also explain why AMD can price it this low: lower R&D costs.

      • PrincipalSkinner
      • 3 years ago

      It’s what Raja sort of confirmed in one of the Polaris articles at AT.
      But you get downvoted by AMD fanboys for shoving the truth in their red faces.

      • travbrad
      • 3 years ago

      So the same as half of Intel’s CPUs then?

        • sweatshopking
        • 3 years ago

        Sure.

        • derFunkenstein
        • 3 years ago

        Intel has done die shrinks more frequently, at least, and has two brand new core archs since 2012 – Haswell and Skylake.

          • EndlessWaves
          • 3 years ago

          Intel are really doing badly then if AMD can get a 70% performance boost from just a die shrink.

          [/sarcasm]

    • PrincipalSkinner
    • 3 years ago

    Ah. Good old Serbian prices.
    http://itsvet.com/graficka_karta_asus_radeon_rx480_rx480_8g-pdaa1-25.i?id=4330

    • NTMBK
    • 3 years ago

    Has anybody seen any underclocking/undervolting in reviews? I wonder how the perf/W would look with a mild drop in performance.

    • yogibbear
    • 3 years ago

    All hail the glorious toasters that will be the next console generation. Thanks AMD.

      • Krogoth
      • 3 years ago

      GP104 isn’t exactly that cool either at load. It just has the performance to back it up.

      • NTMBK
      • 3 years ago

      PS4K is supposedly clocked at ~900MHz, so it should be a lot more efficient than the 480.

        • Laykun
        • 3 years ago

        I don’t think efficient is the word you’re looking for. Even at lower clocks it’s likely to be inefficient, as its performance per watt is still not great; it’s just less toasty.

          • NTMBK
          • 3 years ago

          No, I definitely mean efficient 🙂 Perf/W drops as clocks ramp up; we all know the story of Prescott. There’s a reason why laptop GPUs tend to be wider, lower-clocked chips, as opposed to small, highly clocked ones.
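          A toy model of that trade-off: dynamic power scales roughly with frequency times voltage squared, and voltage has to rise with clocks. The numbers below are invented for illustration, not measured GPU data:

[code]
def dynamic_power(freq_ghz, volts, c=100.0):
    """Simplified CMOS dynamic power: P ~ C * f * V^2 (C is arbitrary)."""
    return c * freq_ghz * volts ** 2

# One narrow chip clocked high vs. two of the same units clocked low:
narrow_fast = dynamic_power(1.27, 1.15)      # high clock needs high voltage
wide_slow   = 2 * dynamic_power(0.90, 0.95)  # double width, relaxed voltage

perf_fast = 1.27        # perf ~ units * clock (1 unit)
perf_wide = 2 * 0.90    # 2 units at 0.9 GHz: more performance...
print(perf_fast / narrow_fast)  # ~0.0076 perf per watt
print(perf_wide / wide_slow)    # ~0.0111 perf per watt: wide-and-slow wins
[/code]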

    • Klimax
    • 3 years ago

    Interesting. PR made it appear to be at 980 level; we got 970 level. At least the performance consistency is there.

    The primary problem for AMD is that they have left Nvidia far too much space. Nvidia has room for all three primary aspects of a card: efficient, powerful, and fairly cheap. The gulf between the 1070 and the 970/480 is too big (two chips could fit in nicely).

    The biggest question is when the 1060 drops. (My not-so-serious suggestion was a week before Polaris, but that didn’t happen…)

    • Laykun
    • 3 years ago

    If the power efficiency in this GPU is any indication of what Polaris 11 will have then AMD don’t really stand a chance in the mobile GPU sector. They want Polaris 11 in notebooks but it can’t match Pascal power efficiency by a long shot. At the very least Polaris 10 will not see popular usage in notebooks, since 28nm Maxwell already provides similar power efficiency and performance. I just hope that Polaris 11 strips a lot of unnecessary cruft from the design, then they might have a chance.

      • Theolendras
      • 3 years ago

      A new process, the addition of AVFS, and no gain at idle: there is clearly an issue. The GTX 1080 going up in frequency quite a bit makes me think there are process issues right now.

    • rUmX
    • 3 years ago

    Thank you, Scott, for helping to improve AMD’s frame times and frame-rate consistency!

      • Meadows
      • 3 years ago

      Funny thing is, we’re going to praise Scott even if it turns out this was mainly Raja Koduri’s doing.

        • kvndoom
        • 3 years ago

        ALL PRAISE AND GLORY TO THE WASSONTOAD.

      • tipoo
      • 3 years ago

      This may be half in jest, but IIRC the kind of frame-time testing that TR pioneered was exactly a stated reason for why AMD hired him. I forget where that nugget got in my head from; maybe the AMD AMA on Reddit.

    • chuckula
    • 3 years ago

    The boost clock of 1,266 MHz producing a theoretical 5.8 TFLOPS (very close to the R9 390X) reminds me of the slide in this article that Raja allegedly presented at the Macau event at the end of May: http://videocardz.com/60752/amd-radeon-rx-480-specifications-leaked

    Interesting how the RX 480 went from 5.5 TFLOPS to 5.8 TFLOPS in less than a month. Interesting how the power consumption and overclocking headroom numbers were definitely not as nice as what people had been hoping for.

    Something tells me we have a repeat of the R9 290X launch: there was some last-minute in-house overclocking to get that last bit of performance, but sacrifices were made to do it.

      • mad_one
      • 3 years ago

      Power is partially a design choice. Turning down the TDP and testing the performance loss would give some insight on this.

      I don’t have a bunch of cards lying around to test this, but for my MSI GTX970 Gaming 4G, which has a fairly high TDP compared to the reference 970, reducing TDP by 20% results in a performance loss of less than 5%, while reducing noise dramatically.

      AMD has often pushed their chips to the limit, with the 390 series being a rather extreme example.

      c’t magazine tested the RX480 and found the card to be drawing more than 75W from the mainboard, which is also a hint that the card was planned with lower clock speeds and power draw.

      • Theolendras
      • 3 years ago

      Yep that’s what I would also conclude. Did everything they could to get past the 970 I guess…

      • Rza79
      • 3 years ago

      It seems the card was designed for something like 1200MHz (or less), not 1266MHz.
      Why does AMD keep shooting itself in the foot?

        • tipoo
        • 3 years ago

        Yeah, I’d be interested in seeing if a much higher efficiency point is hit by sacrificing a few dozen MHz.

      • ronch
      • 3 years ago

      One word: marketing.

      • tipoo
      • 3 years ago

      Aren’t some board partners getting to 1500MHz out the door though? The chip itself seems to have some room, but the default board and cooler are crummy with a bizarre power delivery system.

        • chuckula
        • 3 years ago

        [quote<]Aren't some board partners getting to 1500MHz out the door though? [/quote<] Maybe, but are they doing it with the same power delivery system of the reference board?

          • tipoo
          • 3 years ago

            Nope, that’s what I was getting at: the default board is a bizarre mess, but the silicon itself seems to have some potential. AMD went 80% of the way, then said screw it, leave the rest for the others to get right.

    • Kougar
    • 3 years ago

    I think those scatter plots would look very different with a 1070 in them. Having the 980 in there without the cheaper 1070, which delivers 50% better performance than said 980, is a little misleading.

    The real choice is whether to buy a $240 480 8GB or spring for a $380 1070. A 58% price bump for an average ~80% boost to performance is a better proposition for higher-resolution gamers or anyone planning to use the GPU for the long haul.
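    A quick sketch of the arithmetic behind that trade-off (the prices and the ~80% figure are the estimates above, not TR’s measured data):

[code]
rx480_price, gtx1070_price = 240.0, 380.0
price_bump = gtx1070_price / rx480_price - 1   # ~0.58 -> a 58% price bump
perf_gain  = 0.80                              # assumed ~80% more performance

print(f"{price_bump:.0%} more money for {perf_gain:.0%} more performance")
# The 1070 wins on perf per dollar iff its perf ratio exceeds its price ratio:
print((1 + perf_gain) / (1 + price_bump))      # ~1.14 -> ~14% better perf/$
[/code]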

      • mikehodges2
      • 3 years ago

      I was wondering that. Why weren’t the 1070/1080 included on the scatter graphs?

        • Kougar
        • 3 years ago

        To be fair to Tech Report, they either don’t have a 1070 or haven’t been able to publish benchmarks on it yet. That said they did review the 1080 and that was also left out of the scatter plots.

      • slowriot
      • 3 years ago

      First, where in the world did you find a GTX 1070 available for $380? I’d love a link.

      Second… I think you’re greatly overestimating the number of people who can make a $140 price jump (which isn’t what’s actually available right now; more like $200) from a $240 RX 480 8GB to a 1070. Far more people will be considering the jump to a GTX 1060 when that shows up.

        • Kougar
        • 3 years ago

        Availability and price gouging are frivolous, temporary arguments; this review will still be here a month from now, even a year from now. (Not that the 8GB 480 is presently in stock anywhere near its MSRP either.)

        Yes, the 1060 is the true competitor, but it isn’t out yet. NVIDIA probably won’t launch it until 1070 sales settle down. Anyone considering forking out $240 or more should have an accurate idea of what the “next best” option is, and currently that remains the non-FE 1070, certainly not the 970 or 980.

    • derFunkenstein
    • 3 years ago

    This Rx is just what the doctor ordered!

    I’ll have a good day. /shows self out

      • chuckula
      • 3 years ago

      GUESS WHAT.
      I GOT A FEVER.
      AND THE ONLY RX IS MORE [s]COWBELL[/s] POLARIS!

        • derFunkenstein
        • 3 years ago

        If you’ve got a fever, you should probably take a chill pill, bro

        • morphine
        • 3 years ago

        You two…

        OUT! NOW!

        >.<

          • derFunkenstein
          • 3 years ago

          I used to like you.

            • morphine
            • 3 years ago

            I make bad jokes. You two are something else. 😛

            • derFunkenstein
            • 3 years ago

            OK I still like you

            • sweatshopking
            • 3 years ago

            THREE WAY FRENCH KISS

            • anotherengineer
            • 3 years ago

            So a ménage à trois?

      • CScottG
      • 3 years ago

      ..junky.

      • Mr Bill
      • 3 years ago

      “I’ve got a rocket, you’re going on it, you’re never coming back”: https://www.youtube.com/watch?v=rJppnG1tflU

    • Delta9
    • 3 years ago

    FYI, I was in the Microcenter location in NJ tonight at 8:00 and they had the RX 480 in stock, front and center. They were going for $249.99 for the 8GB model, so the card is out there with 8GB for only a $10 premium over the $240 MSRP on launch day. It will be interesting to see how much of a drop in GPU temperature is achieved by custom air cooling when those cards are released; the entire cooling setup on the reference design looks weak, and its performance seems to reflect that. It may change some of the perception of this GPU if those coolers can take 15+ degrees off the load temps. The temps were in the 80s, so there is quite a bit of room for improvement.

    • End User
    • 3 years ago

    Hey, at least it’s cheap.

    • flip-mode
    • 3 years ago

    What is so bad about this GPU? The one single concern is power consumption, which is not even all that bad. The GPU is a significant improvement over the previous product – R9 380X – at this price point, and it beats previous gen cards – GTX 970 – that were a good step up from its price point. The GTX 970 launched at $350. The RX 480 launches at $200 and beats it. And, impressively and unexpectedly, AMD managed to fix the inconsistent frame time delivery – which is perhaps the greatest achievement here.

    The thing is, you can rationalize a perspective from any direction. If you want to call the RX 480 a disappointment JUST because of the power consumption issue, well, that is your opinion then. Objectively speaking, the RX 480 improves performance substantially over the R9 380X and consumes less power at the same time. Objectively speaking it is clearly a decent improvement. However, AMD has had power consumption troubles for a looong time now. It is getting better, but people seem to be expecting AMD to pull a rabbit out of a hat and not just get better, but get miraculously better. Maybe there is as much an issue with expectations as there is with the actual product.

    Edit: Oh, yeah – drivers. AMD’s pattern of bringing consistent and appreciable performance improvements with driver updates is as well established as AMD’s pattern with higher power consumption. Unless I’m mistaken, take just about any Radeon from the last several years and it looks like two different products when you look at launch performance versus performance 1-2 years later.

      • Krogoth
      • 3 years ago

      It is just Nvidia fans trying to throw a negative spin on the issues, and AMD’s marketing hyping the 480 as a “980 killer” when they should have stuck with it being a “960 killer”.

      • Convert
      • 3 years ago

      I’m not too worried about power consumption myself, but definitely noise is a concern.

      Most of the games in the lineup are pretty much a wash between the 970 and the 480.

      It is definitely an improvement over the prior generation.

      Why I consider it a disappointment:

      Even if a higher end card is released from AMD, it will only be marginally better. Which isn’t a good thing long term if AMD keeps this up. They are taking the same approach as their CPU business and that is to bow out of the high end market because they simply can’t produce competitive products.

      It’s good for consumers for the short term and bad news for AMD long term.

      You are right that I expected AMD to pull a rabbit out of their hat. That’s part of why it’s disappointing for sure. I don’t know, I’m just finding it really hard to give them any credit here. When the dust settles it’s going to be the same as the last generation where Nvidia has similarly priced cards and I have to ask, why bother?

      • blahsaysblah
      • 3 years ago

      High power usage is OK; the fact that a normal user can pop in the card and be pulling more watts than allowed is really bad.

      Go through guru3d, hardocp, pcper, techreport, tomshardware, pick a site: they all show power usage around 165W. That’s before they use the brand-new WattMan OC tool from AMD to push it even harder. With the power numbers provided by Tom’s Hardware and their advanced setup, the card is evenly pulling power from the motherboard PCI-E slot and the 6-pin. Both of those are rated for 75W (where the PCI-E slot is actually rated for 66W at 12V, with ~10W from 3.3V).

      Right now, normally playing games like Witcher 3 or Metro: Last Light, you can be pulling more power from both sources than allowed.

      AIB partners will just use an 8-pin and give it proper cooling to reduce power usage, and that’s fine. But this is a black eye for the normal kid who saved money over the summer, bought the card, and put it into an old computer, then played games hard during summer break and possibly fried the motherboard.

      Raja said on PCPer why this happened. They are rushing to make a ton of cards, so all the launch 4GB cards are actually 8GB cards with 4GB disabled; it’s faster/cheaper to certify only one card. This 50/50 power split between the PCI-E slot and the 6-pin is fine if the board is used for a 470 or a lower-clocked 4GB card, not for this last-minute, maxed-out 480.

        • Theolendras
        • 3 years ago

        I agree this is concerning; I wouldn’t buy this expecting to overclock it… There is no room left in the power delivery; it looks like the reference board is kind of an overclocked model already… I wonder how much of a difference more efficient custom designs (cooling-wise) will make for turbo boost and power consumption as a whole once they are available.

      • Theolendras
      • 3 years ago

      That summarizes it quite well.

      Polaris is satisfying for 2 reasons to me :

      – Value
      – Frame consistency issues seems way better now

      Remaining issues

      – Power consumption
      – Poor performance of their DX11 driver stack

      I don’t expect the DX11 driver will improve much, but the good news is that it’s way better under DX12, just like preceding GCN products. Hardware Canucks actually did some benchmarks on this. It’s not like there are a lot of DX12 titles yet, but there are enough of them on the market to see a clear trend. Since a lot of the bleeding-edge titles releasing in the next two years will probably have a DX12 path, the punch it has is quite impressive; it seems to be comparable to the 980 in those scenarios.

      http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/72889-radeon-rx480-8gb-performance-review-21.html

      Although I would expect most titles to remain DX11 in the mid-term, most of them probably won’t require bleeding-edge hardware either…

    • Shobai
    • 3 years ago

    Thanks for the review, Jeff and Robert, that was a good read.

    I mentioned on your GTX 1080 review that I find the Frame Number plots a bit weird – would you mind explaining how you determine the x axis range for them?

    The plots are similar in this review to the previous: the GTA V plots have a common x axis range of ~6500 between the tabs; along with the GTX 970 plot on both tabs, this lets me easily compare and contrast the performance of each card. Having said that, the x axis range could have stopped at 5500, giving less wasted space on the plot [from compressing the data into the left hand side].

    On the next page, with Crysis 3, while the GTX 970 is again on both tabs [very helpful, as mentioned above] the x-axis range is different between the tabs [making the compare and contrast harder at a glance] and I can’t tell whether the GTX 980 plot is truncated or not. Assuming that it isn’t, setting the x axis range to 5000 for both tabs would appear to be appropriate.

    Again, I want to thank you for the review. I hope that this suggestion can improve how your hard work comes across to readers.

      • morphine
      • 3 years ago

      If I understood correctly, then what you’re seeing is a feature, not a bug. Different cards generate a different number of frames during a given timespan, hence the data on the X axis having different lengths.

        • Shobai
        • 3 years ago

        Almost, but not quite: I understand that the faster cards produce more frames, which makes for a longer plot. My gripe is that the range for the x axis could be chosen in such a way that we can see all of the data while making the best use of the available plot area.

        In the GTX 1080 article, at least one of the Frame Number plots utilised less than 50% of the space dedicated to it. Of the two examples from this review that I mentioned previously, the first appears to waste ~15% of the space dedicated, and I can’t be sure that the second is displaying all of the data gathered [specifically, for the GTX 980 plot].

        My suggestion would be that the range for the x axis in all tabs for a given frame number plot would be determined by the device returning the highest count. If the scale is in 500 frame increments, then the range can be set to the next highest increment [eg, if the GTX 980 returns the most frames at 3800 in a given game, set the range at 4000 for all tabs]. If the frame count is within 10%, say, of that higher increment, then perhaps for legibility select the range to be one increment higher again [eg, if the GTX 980 returns the most frames at 3960 in a given game, set the range to 4500 for all tabs].
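        One reading of that rule as a sketch, interpreting “within 10%” as 10% of the 500-frame increment, which matches both examples:

[code]
import math

def axis_range(max_frames, increment=500, margin=0.10):
    """Round the highest frame count up to the next increment; if that
    leaves less than `margin` of an increment as headroom, step up once
    more for legibility."""
    upper = math.ceil(max_frames / increment) * increment
    if upper - max_frames < margin * increment:
        upper += increment
    return upper

print(axis_range(3800))  # 4000: comfortable headroom
print(axis_range(3960))  # 4500: only 40 frames shy of 4000, so step up
[/code]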

        [edit]

        In fact, go back and have a look at the Fallout 4 page of the GTX 1080 review (https://techreport.com/review/30281/nvidia-geforce-gtx-1080-graphics-card-reviewed/10). In the Frame Number plot it is clear that the GTX 1080 data is being truncated: the GTX 1080 plot is almost 1000 frames longer on the Radeon R9 tab than on the GeForce GTX tab. This suggests that the GTX is significantly better than the R9 Fury but no better than the 980 Ti. Was this the intent of the reviewer? Does this line up with the rest of the data? Did any reader notice this? How does this affect purchasing decisions? Etc, etc.

        [edit2: whoops, that should have been “almost 1500 frames longer”]

    • egon
    • 3 years ago

    Bit disappointed by power consumption at idle, but also under multi-monitor use and Blu-ray playback, which TechPowerUp measures.

    Multi-monitor idle:
    RX 480 – 40W
    R9 380X – 39W
    GTX 970 – 14W
    GTX 1070 – 7W

    Blu-ray playback:
    RX 480 – 39W
    R9 380X – 34W
    GTX 970 – 15W
    GTX 1070 – 7W

    It’s nothing new – the efficiency gap between AMD and NVidia cards in these areas has existed for years, and is something I figured AMD would’ve got around to addressing in their hardware/drivers, but there’s also a lack of incentive to do so when most major sites don’t test power use for multi-monitor idle and video playback.

    (edited to add R9 380X & GTX 1070 figures)

      • HisDivineOrder
      • 3 years ago

      Definitely keeps me away from AMD products.

      • chuckula
      • 3 years ago

      Is this a bug similar to that weird bug for multi-display high refresh rate setups that Nvidia sometimes encounters?

      • Demetri
      • 3 years ago

      Yep, it gets beat by the 380X in single monitor idle as well. I just don’t understand it. How do you take a step backward in idle power consumption and video playback vs a previous gen chip on an inferior process?

        • Voldenuit
        • 3 years ago

        “How do you take a step backward in idle power consumption and video playback vs a previous gen chip on an inferior process?”

        Might be that they can’t power down the RAM; remember it has to feed 8 GB of VRAM. Or maybe it’s a combination of design decisions, architecture, process maturity, and drivers. Probably all of the above.

    • Mikael33
    • 3 years ago

    Looks like it is a great bang-for-buck card, as I expect the rest of the range will be, but from a power-efficiency POV they’re still way behind Nvidia. I didn’t expect them to beat Pascal in efficiency, but I didn’t expect them to be this far behind either. Not really surprising given the difference in R&D budgets, though; I don’t think AMD can afford to make GPUs that compete with Nvidia in both perf per watt and perf per dollar. I have no idea how much they cost AMD to make, but I hope, for AMD’s sake, that Nvidia can’t obliterate them in a pricing war.

    • tipoo
    • 3 years ago

    Does that power-quality-sensing circuitry have anything to do with the scary thing Tom’s Hardware found? The card is drawing 20% more power than the PCI-E slot’s design capacity:

    http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html

      • tipoo
      • 3 years ago

      Shameless bump, but curious. Is it because they know X amount of power is “safe” from PCI-E with the quality sensing circuitry?

        • Anonymous Coward
        • 3 years ago

        I’m not sure how the power delivery will look when they are pushing too hard, but perhaps the voltage starts dropping and they can see that they’ve reached the limit.

          • blahsaysblah
          • 3 years ago

          Until the PCI-E pins degrade and cause voltage issues??

          Power Consumption Concerns on the Radeon RX 480: http://www.pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-Radeon-RX-480/Overclocking-Current-Testing

          “I asked around our friends in the motherboard business for some feedback on this issue - is it something that users should be concerned about or are modern day motherboards built to handle this type of variance? One vendor told me directly that while spikes as high as 95 watts of power draw through the PCIE connection are tolerated without issue, sustained power draw at that kind of level would likely cause damage. The pins and connectors are the most likely failure points - he didn’t seem concerned about the traces on the board as they had enough copper in the power plane to withstand the current.”

      • Klimax
      • 3 years ago

      Far more fun:
      https://www.reddit.com/r/Amd/comments/4qfwd4/rx480_fails_pcie_specification/?

        • chuckula
        • 3 years ago

        I’m sure there’s an incredibly thoughtful and calm deliberation going on over there!

        • tipoo
        • 3 years ago

        That update list was a highly entertaining bathroom read.

        • f0d
        • 3 years ago

        Wow. If true, I would be a little scared overclocking the 480, and I’m usually fearless when it comes to overclocking (1.5V 3930K / 1.4V R9 290).

        • Ninjitsu
        • 3 years ago

        *grabs popcorn*

      • f0d
      • 3 years ago

      “First-time poster here. I ran into a problem after upgrading my rig with an RX 480 today. Everything was working great but then after a 7-hour straight gaming session with Witcher 3 Blood and Wine (which by the way is AMAZING) I got artifacting and then everything went black and the sound cut out. I reboot my PC several times, but nothing would come up. After looking up the error code on the motherboard, I found that it was ‘No VGA present’ so at first I thought the card was dead and I put back in my 750 ti, but it too would not work with the same error code. So I put the RX 480 in the second PCI-E slot and now everything is working just fine. After everything was A-OK I tried slot 1 again and it failed again, so now I’m in slot 2.”

      https://community.amd.com/thread/202410

      Ouch, looks like this issue might be real.

        • Waco
        • 3 years ago

        The trolls in that thread gave me thought cancer. There’s nothing worse than marginally informed people (who think they know all) giving advice to someone seeking help.

    • DragonDaddyBear
    • 3 years ago

    I remember a TR member did some analysis on the 290(?) regarding undervolting/underclocking. It turned out to be fairly efficient after the changes. I wonder how this card would do.

    • tipoo
    • 3 years ago

    Unfortunately for AMD, they’re on a completely different fab this time than Nvidia: GloFo vs. TSMC. I’ve wondered if that’s part of their efficiency disadvantage; we’ve seen this with the iPhone 6S load testing. That’s the thing now: with different fabs the playing field is not even, and not only does the architecture matter, but the fab process does too when comparing them to Nvidia. Which kind of sucks for AMD.

    With the iPhone it mattered less because it’s mostly idle, even with the screen on, but for a high performance GPU it’s the full throttle aspect that matters.

    Interesting though that TSMC will still make their high end parts (I don’t know if that means just Vega, or the 300 dollar Polaris too), so maybe it’s not all lost on the efficiency side if the fabs are to blame.

    I think this is to fulfill the WSA, which makes sense: the higher-end part gets the higher-end fab. The 200-dollar part isn’t particularly efficient, but they hit this performance and price.

    They even switched Zen to TSMC after Glofo efficiency concerns.

    So I do have hope that the more expensive TSMC parts will provide them much needed efficiency to go up against Nvidias higher end, and hopefully it doesn’t mean Polaris as a whole is just inefficient.

      • ronch
      • 3 years ago

      I’m theorizing that AMD knows about GF’s efficiency disadvantage (if that’s really the reason Polaris isn’t as efficient as we expected), but perhaps GF quoted them a price they couldn’t resist? AMD could’ve taken a calculated risk with P10, gambling that hitting a lower price would offset worries about efficiency.

      With Vega, they absolutely need every bit of efficiency they can get given the sheer number of transistors in there, and saving a few coins just isn’t worth it in a high end product.

      And of course it’s not just price. Perhaps TSMC is simply so fully booked that they can’t meet the volume AMD needs?

        • tipoo
        • 3 years ago

        In addition to any pricing deals, there’s the WSA: they’re legally obligated to buy a certain quantity from GloFo. So they filled it with their mid-range, 200-dollar part, which, to be fair, makes sense to do.

          • ronch
          • 3 years ago

          Yeah, didn’t bother mentioning that. That’s totally killing AMD.

          Thanks a lot, Hector!

            • tipoo
            • 3 years ago

            I went looking to see if there was at least good news about its expiry…They’re bound until May 2024. Yikes.

            http://www.wikinvest.com/stock/Advanced_Micro_Devices_(AMD)/Wafer_Supply_Agreement

        • muxr
        • 3 years ago

        > I’m theorizing AMD knows about GF’s efficiency disadvantage

        Which efficiency disadvantage? It’s almost 2x the efficiency of Tonga. If you look at R9 380 -> RX 480 efficiency gain, it’s far bigger than 980ti -> 1080.

          • tipoo
          • 3 years ago

          2X with both the architecture and the node. Look at the iPhone 6S TSMC vs GloFo articles to see that Glofo is less efficient at load.

          http://www.extremetech.com/extreme/215912-new-reports-claim-samsung-powered-version-of-apple-a9-is-too-hot-to-handle

          (Samsung fab == GloFo fab: GloFo licensed Samsung’s 14nm process.)

      • WaltC
      • 3 years ago

      AMD is by far GloFo’s largest customer… and actually, it’s GloFo that is obligated to supply a certain percentage of AMD’s foundry needs; that was part of the deal originally, so that GloFo couldn’t put AMD off if it got a better contract offer, etc… ;) I also note that the RX 480 is done on 14nm FinFET, better than nVidia’s 16nm FinFET, I should think.

      But anyway, it never hurts to have as many options as you can for fabbing, and some fabs are better than others at certain things, too. For instance, imagine Intel trying to fab the new AMD and nVidia GPUs; I don’t think they are up to it (if they were, they’d surely be doing it; their GPUs are still far behind the curve). As well, GloFo is still ramping up capacity, whereas TSMC is actively looking for customers to fill its current capacity.

      Thinking that one company uses a certain fab because it always makes better chips than another is just silly, imo… ;) A company like AMD, just like nVidia, prefers having an array of options instead of just one. Even Intel (for certain chips it still sells) is not averse to using fabs outside its own every now and then.

        • tipoo
        • 3 years ago

        The agreement cuts both ways. Glofo has to supply a certain amount to AMD, yes, but that also means AMD has to order a certain amount.

        I hope they can both agree to ditch it well before 2024, when it expires.

      • Theolendras
      • 3 years ago

      I guess Vega will mostly answer the question. But then, there is good news in some way: if GloFo indeed has a process that does not fully deliver in its early stages, it means there is a lot of untapped potential up for grabs.

        • shank15217
        • 3 years ago

        This is what people don’t get: processes improve, and GloFo will also make Zen CPUs that have Polaris GPUs, so it does make sense to use GloFo for both.

        • tipoo
        • 3 years ago

        Potential for improvement is great, but surely being stuck with either company for the next 7 years restricts them. Glofo on top? Great. TSMC on top? AMD has to suffer through it because of the WSA.

    • crystall
    • 3 years ago

    I wonder how the power numbers will look on custom cards, since the blower-style cooler is possibly the worst choice they could have made power-wise. Leakage goes up quadratically or cubically (depending on the type) with temperature, so having it run at 83C under load isn’t doing it any favors. The high-speed blower fan is also possibly adding a few extra watts to the overall board consumption. The only reason I can think of why they might have chosen this design is price: it does look like a really cheap heatsink/fan combination.
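    As a toy illustration of that temperature/leakage feedback (the coefficients are invented for the example, not measured Polaris figures):

[code]
def leakage_w(temp_c, base_w=20.0, ref_c=65.0, k=0.001):
    """Leakage grows roughly quadratically above a reference temperature."""
    return base_w * (1 + k * (temp_c - ref_c) ** 2)

print(leakage_w(65.0))  # 20.0 W with a cooler that holds 65 C
print(leakage_w(83.0))  # ~26.5 W at the reference blower's 83 C
[/code]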

    That being said, it’s a [b]great[/b] card at this price point. I’ve got an HD 6870 and I was about to upgrade it last year, but I wasn’t really in the market for $300+ cards. At $200 the 4GB is a no-brainer for me, so I’ll just be waiting for custom designs to crop up.

    • Flying Fox
    • 3 years ago

    Of all the flak that TR is getting about Nvidia bias, see how fast they come up with the AMD review, right on the day when the NDA is lifted. And look at how long it takes for them to cook up that other article!!111!!eleventy!11!1!

    • Chrispy_
    • 3 years ago

    I guess TSMC beats GloFo by a country mile: this matches a 970’s power consumption and performance, whilst the 1080 matches the 970’s power consumption while providing nearly double the performance.

    DOUBLE. Just stop for a second and take that in.

    I’m really disappointed that GloFo has let AMD down. Again. But it’s not all just GloFo’s 14nm that disappoints; it’s the COMPLETE LACK of any architectural advances.

    Each generation, Nvidia takes a bounding leap forward in architectural performance and efficiency. That “double” I was talking about is part TSMC’s 16nm process, and part Nvidia making significant improvements to the IPC of Pascal over Maxwell, just like Maxwell was a significant jump up from Kepler, etc. AMD have seemingly staggered backwards or remained stationary in terms of IPC and architectural improvement:

    R9 380X = 32 CU @ 1050MHz
    RX 480 = 36 CU @ 1266MHz

    In terms of shaders * clock speed, Polaris 10 has a 36% advantage, ignoring architecture. From the scatter plots, the 480 performs 34% better (average FPS) or 31% better (99th-percentile FPS) than a 380X. That means basically no [i]visible[/i] improvement in GCN efficiency. No IPC gains. This might as well have been a die shrink for all anyone else cares about.

    The GTX 1060 is coming out in a couple of weeks. Purportedly (http://wccftech.com/nvidia-geforce-gtx-1060-pascal-gp106-leak/) it’s half a GTX 1080, which will probably drop in at around the same performance as the 480, but using 100W, not the 150W that Polaris seems to need (perhaps even more than that, because it can’t maintain boost clocks at that limit), and with a much higher-quality cooling solution by the looks of it.
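    A quick check of that arithmetic (64 shaders per CU, clocks in MHz; the 34%/31% figures are the scatter-plot readings quoted above):

[code]
r9_380x = 32 * 64 * 1050   # shader count * clock product
rx_480  = 36 * 64 * 1266

theoretical = rx_480 / r9_380x - 1
print(f"theoretical uplift: {theoretical:.0%}")   # ~36%

measured_avg, measured_99th = 0.34, 0.31          # from the scatter plots
print(theoretical - measured_avg)   # ~0.02: little sign of per-shader gains
[/code]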

    • EndlessWaves
    • 3 years ago

    The TR review hasn’t answered the most important question.

    Is it pronounced Arr Ecks or Arr Ten?

      • Jeff Kampman
      • 3 years ago

      radeOS

    • chuckula
    • 3 years ago

    Confirmed: the RX 480 has WON, based on chuckula’s trademarked CommentsPerHourBench (2016).

    GTX 1080: a paltry 228 comments over 5 days.
    RX 480: OVER [s]NINE THOUSAND[/s] three hundred comments on the first day.

      • Froz
      • 3 years ago

      You’d need to count all the comments about the 1080 on all the other articles, asking when the review would be posted :p.

    • Beelzebubba9
    • 3 years ago

    Anyone getting the notion that Polaris might just be more evidence AMD doesn’t have the money to genuinely compete with nVidia or Intel anymore?

      • AnotherReader
      • 3 years ago

      Let’s wait for an AMD GPU on TSMC before we write them off

        • Beelzebubba9
        • 3 years ago

        Honestly why?

        On GloFo’s 14nm process they were barely able to match the 28nm GTX 970 for power efficiency; what makes anyone think moving to TSMC will somehow allow AMD to make the same leap in energy efficiency that took nVidia two node shrinks and a new arch to hit?

          • travbrad
          • 3 years ago

          I doubt they will match the energy efficiency of Pascal even with TSMC but it might get them close enough that it doesn’t matter to a lot of people.

            • AnotherReader
            • 3 years ago

            What travbrad said. Besides, like Fiji, Polaris 10 is an unbalanced design: it is lighter on ROPs than it should be. To see what extra ROPs can do, compare the 5830 to the 6870 (https://www.techpowerup.com/reviews/HIS/Radeon_HD_6870/29.html) or the R7 260X to the R7 265 (https://techreport.com/review/26050/nvidia-geforce-gtx-750-ti-maxwell-graphics-processor/8).

            • stefem
            • 3 years ago

            But then the question is: why do they keep pursuing this (unbalanced) design approach if it doesn’t pay?

            • AnotherReader
            • 3 years ago

            AMD has been betting on shader performance for some time now. You see this if you compare the resource balance of Nvidia and AMD’s high-end GPUs since the HD 2900 XT.

            [code]
            Node (nm)   NV ROPs   AMD ROPs   NV tessellators   AMD tessellators
               80          24        16            0                  1*
               55          32        16            0                  1*
               40          48        32            4                  2
               28          96        64            6                  4
            [/code]

            Note: I am counting the unused tessellator of the HD 2900 through HD 4890 as 1.

            Edit: I wish we could use HTML tables. Getting this to display correctly took way too much time.

            • anotherengineer
            • 3 years ago

            I don’t know, I think there is something going on here that’s more than meets the eye.

            Here are two cards, both TSMC 28nm, one from AMD and one from Nvidia; check out the voltage differences vs. clock speeds!!!
            https://www.techpowerup.com/reviews/Sapphire/R9_390_Nitro/27.html
            https://www.techpowerup.com/reviews/Colorful/iGame_GTX_980_Ti/34.html

            Better than GloFo for sure, but one has to wonder: is AMD using the same silicon recipe as Nvidia (doping etc.)? Are they adding extra voltage to be on the safe side? Or is it due to architecture? All of the above, none, or something else?

            • tipoo
            • 3 years ago

            The article mentions the +15% safety net is gone now because of the new voltage sensing circuitry, but unfortunately the gain from that was rendered moot by Glofo. I wonder if Nvidia had already removed that safety net and had similar power feelers.

      • Eversor
      • 3 years ago

      I’m taking this as confirmation that outsourcing chip manufacturing was the worst decision ever. The worst effects I’ve seen personally were with Brazos, on TSMC, which was 40nm at ~1.4V @ 1700MHz and nowhere near the 18W TDP the chips were supposed to have. That is much more voltage, at much lower clocks, than AMD’s 90nm designs before the GloFo spinoff. It seems that years later the trend continues, and there is little hope for Zen.

    • codedivine
    • 3 years ago

    Thanks for the great review folks. Any word on the fp64 capability of the card?

    • Anovoca
    • 3 years ago

    Just out of curiosity, Jeff: when you take the cover off to photograph, do you do this before or after running the benchmarks and recording the thermals? I’m just curious because manufacturers are notorious for using cheaper thermal paste than what an enthusiast would carry in his/her toolbox, and I would think that if you were to reseat the heatsink with a more liberal glob of quality paste, your tests would end up with a slight but still notable improvement.

      • Jeff Kampman
      • 3 years ago

      After, always.

    • USAFTW
    • 3 years ago

    Very neat review.
    Here’s my take:
    1. Power consumption is a bit disappointing, but not at all a big deal for me.
    2. The RX 480 was compared with versions of the GTX 970 and 980 that have core and memory speeds out the wazoo (I’m not sure if I spelled that right). In recent games it puts up a decent fight.
    3. The reference cooler, with its stupidly cheap and small aluminium heatsink, can sod off.
    4. The additional smoothness is a very nice development. Traditionally, of the AMD and Nvidia cards I’ve tried over the years, the Nvidia card was usually smoother, but that’s out the door now.
    5. So a 28 nm salvaged GPU from Nvidia (970) is as efficient as the RX 480. WTF??? Either this or the 1060 will be my next card.
    6. The card costs the same as the GTX 960 2GB did at launch. I think the perf/$ is pretty incredible, and sadly the leaks and the hype may have primed readers with some unrealistic expectations.

    • ronch
    • 3 years ago

    Is it just me or are all the review and retail units of this card I’m seeing equipped with reference coolers? IIRC, historically products sold on Day 1 all pretty much had custom coolers already. You’d be lucky to find a retail unit sporting the reference cooler from AMD.

    Is this right? Or are AMD’s board partners getting a mega discount from a single cooler supplier?

    • Prestige Worldwide
    • 3 years ago

    Looks like the GTX 1060 will be releasing July 7th to bring competition to this price point.

    http://videocardz.com/61583/nvidia-geforce-gtx-1060-to-be-released-on-july-7th

    Edit: Whoops, ronch beat me to it by 12 minutes.

      • raddude9
      • 3 years ago

      And then the non-reference factory-overclocked 480’s will come out… and so the dance continues

    • ronch
    • 3 years ago

    There are rumors that the GTX 1060 will come out on July 7, sporting ~1260 shaders on a ~200mm² die. And given the clocks the big Pascals run at, I’m willing to bet the 1060 will clock higher than the 480 too.

    If all this is true, it just may put a spanner in the RX 480 works.

      • slowriot
      • 3 years ago

      What rumors are there suggesting the GTX 1060 will be price competitive with the RX 480? GTX 1070s are $450+ cards. That alone suggests the GTX 1060 is going to be around $300 and maybe a bit higher for 8GB/factory overclocked options.

      EDIT: I guess I should say 6GB options, because it appears the GTX 1060 is going to ship with 3GB and a 192-bit memory bus.

        • chuckula
        • 3 years ago

        There’s no reason a GTX-1060 at $250 couldn’t compete with the Rx 480. [Incidentally, $250 would be a markup over the launch price of the GTX-960, so I’m not even claiming Nvidia is trying to cut anyone a deal.]

        As for a card in the $300 or so range that’s above the 1060 but below the 1070, that’s what the “Ti” suffix is for.

          • slowriot
          • 3 years ago

          Hmm. That would be interesting, but wouldn’t a $250 price be for a 3GB version? Which would mean buyers are comparing the $199 RX 480 versus the $250 GTX 1060; ~$50 is a lot to people in this range.

          I also wonder if that $250 is akin to the $379 number Nvidia threw out with the GTX 1070 but which is nowhere to be found in retail.

        • stefem
        • 3 years ago

        The MSRP of the GTX 1070 is $379, and the lowest street price I’ve seen is $399 at newegg.com, though I haven’t searched hard. Both AMD and NVIDIA have problems feeding demand; for example, here in Italy the RX 480 8GB is at 299€, which is somewhat more than $300 actually.

      • raddude9
      • 3 years ago

      By the time the 1060 comes out there will be factory-overclocked non-reference 480’s to compete with like this one:
      http://rog.asus.com/23792016/gaming-graphics-cards-2/asus-republic-of-gamers-introduces-strix-rx-480-graphics-card/

        • ronch
        • 3 years ago

        Except the RX 480 reference card blunder will tarnish the image of the RX 480. Non-enthusiasts will probably just avoid it. They don’t care about those 6-pin or 8-pin whatnots.

          • raddude9
          • 3 years ago

          Non-enthusiasts tend to pay more attention to price than enthusiasts and the 480 is likely to be cheaper than the 1060.

          • Froz
          • 3 years ago

          You are overestimating the reach of such information within non-enthusiast circles.

          My experience so far is:

          1) non-enthusiast don’t even know about this card yet
          2) if they know, it’s very unlikely they heard about the power issue
          3) I’ve checked several sites from my country, which are mostly targeted at enthusiasts, and there was no mention of this issue at all (and they did pick up the 970 memory thing quite quickly). It is also worth mentioning that we are mostly a market for mid-to-low-end cards, so that is what the huge majority of people will buy here, definitely not anything even approaching the 1070’s price.

          So, to sum up, I think this issue is mostly known by people who are not that likely to buy mid-low range card anyway. If GTX 1060 is going to cost more, I think RX 480 will sell a lot.

    • madseven7
    • 3 years ago

    Can we have a poll on who is disappointed with the 480 and who is happy with its performance?

      • Beelzebubba9
      • 3 years ago

      My pick would be ‘price/performance is good, energy efficiency is disappointing’.

    • maxxcool
    • 3 years ago

    Hmmm, now we need a CrossFire test from TR… let’s put these fanboi shenanigans to rest once and for all.

    • odizzido
    • 3 years ago

    Nice review for those of us still using Windows 7. As a follow-up article, you should do a DX12 comparison of the 970 and 480 showing the performance difference between W7 and W10.

    • yogibbear
    • 3 years ago

    Thanks for thumbing me down back on Raja’s announcement news post, when I said his benchmark was going to turn out to be BS marketing overstatement that would rightly ruin any good points the card has because they overhyped it. :/ Feels good to be vindicated.

    • tipoo
    • 3 years ago

    Interesting, though, that TSMC will still make their high-end parts (I don’t know if that means just Vega, or the $300 Polaris too), so maybe it’s not all lost on the efficiency side if the fabs are to blame. Using GloFo for this chip seems like a way to keep up with the WSA.

    • AnotherReader
    • 3 years ago

    [Edit: The most important pro after FPS/$ is its consistent frame times]. The performance is good for the price, but Pascal is vastly superior on a performance-per-watt basis. For mid-range cards like the GTX 1060 and the RX 480, that doesn’t matter, but if it holds true for Vega vs. Pascal, then AMD is in big trouble. I suspect that this is due to GlobalFoundries screwing up again. Compare the [url=https://www.techpowerup.com/reviews/AMD/RX_480/28.html<]voltages at idle[/url<] to those for the [url=http://www.anandtech.com/show/9146/the-samsung-galaxy-s6-and-s6-edge-review/2<]first non-Intel FinFET device: the Exynos 7420[/url<] in Samsung’s S6. I hope that Vega is fabbed by TSMC.

    Edit: Chuckula was right about it being equivalent to the 390. AMD really should have given it 64 ROPs.

      • chuckula
      • 3 years ago

      Ruh Roh, you said Chuckula was right.

      Sorry about that avalanche of downthumbs.

        • AnotherReader
        • 3 years ago

        One shouldn’t let the possibility of disapproval colour one’s opinion.

      • Zentennen
      • 3 years ago

      I find that a lot of the time, ROPs × clock speed helps a lot to stabilize frame times. A few examples:

      Nvidia has recently focused on more ROPs (64 for all mid-range GPUs) and generally has more stable framerates than AMD.

      The GTX 980 Ti has a significantly more stable framerate than the Fury X, especially when both are overclocked.

      The times when AMD is competitive at the same price point (the R9 390, 380, and 380X) are when its ROP × clock speed is similar to Nvidia’s.

      ROP × clock speed = 99th-percentile frame time has never been a direct correlation, but some sort of link is there, at least from my analysis. The RX 480, however, blows that out of the water. It’s very stable despite only having 32 ROPs. I remember they mentioned that GCN 4.0 has a “geometry processor” in the architecture; I wonder how much that helped, because either my theory is completely wrong or that is some serious special sauce when it comes to framerate smoothness.

      I’m no GPU expert by any means; I’ve just gathered data from the most recent releases and tried to find what seems to be the most common cause of instability.
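
      For anyone who wants to poke at the idea, here’s the kind of back-of-the-envelope math I mean (a rough sketch; the ROP counts and boost clocks below are approximate figures I’ve collected myself, so treat them as assumptions):

      ```python
      # Rough ROP-throughput (peak pixel fill rate) comparison.
      # ROP counts and boost clocks are approximate, not official specs.
      cards = {
          "GTX 970": (56, 1178),   # 56 usable ROPs per Nvidia's corrected spec
          "GTX 980": (64, 1216),
          "R9 390":  (64, 1000),
          "RX 480":  (32, 1266),   # the outlier: stable frame times despite this
      }
      for name, (rops, clock_mhz) in cards.items():
          gpix_per_s = rops * clock_mhz / 1000
          print(f"{name}: {gpix_per_s:.1f} Gpixels/s")
      ```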

    • Ninjitsu
    • 3 years ago

    Good review*! And as expected. A bit surprised the 390 and 390X weren’t in there, as according to everybody** that’s where performance was to lie according to…[i<]Ashes of the Singularity[/i<] ¬_¬

    *I do have some complaints about the methodology (no, no accusations of bias, read on without fear of that). It’s about the resolution. This is very obviously a 1080p card. Why use 1440p at all? You’ve used 1440p, then lowered the detail settings (especially anti-aliasing)…it’s puzzling. It’s a ~$220 card (between both versions), aimed squarely at the mass market, which is 1080p@60Hz. It’s easier to extrapolate to the next resolution looking at maxed-out graphics settings (and 4xAA) than to guess performance for a lower resolution with higher quality settings.

    Then, you’ve also used 1080p in between. It’s kinda inconsistent, I find. If you [i<]absolutely must[/i<] test 1440p, stick to it, or keep to 1080p. At least the reader then knows that all the graphs are talking about the same thing.

    Finally, I don’t know if you’re mixing the resolutions in the perf/$ charts, but it dawned on me that it’s again inconsistent. It represents value at a mix of resolutions, but is that really useful? Maybe it’ll amount to the same thing in the end, I don’t know. Doesn’t feel right, but you folks are the ones with the data, and are in a better position to judge.

    I hope this doesn’t come across in a bad way. I just think it would be more useful and clearer to stick to a single resolution (or two resolutions for everything). Currently it’s a bit harder to work out overall performance at any one resolution.

    **not literally everybody.

      • DoomGuy64
      • 3 years ago

      It is a 1080p card, but what we learn from the 1440p tests is that this card keeps up with the 970 which is not a 32 ROP card. That means AMD has significantly improved the efficiency of GCN, and also that Nvidia will have a hard time beating the 480 with a lesser 32 ROP card like the 1060.

      That said, 1080p is where this card shines. TR’s Hitman benchmark is probably the best example of what this card is capable of at 1080p. Almost 980-level performance, making this the perfect card for anyone with a 1080p monitor.

    • chuckula
    • 3 years ago

    Update on Rx 480 Newegg availability: Out of stock on all models as I post this:
    [url<]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709%20601203818&Tpk=Rx%20480&ignorear=1[/url<]

    They will likely cycle cards through just like they have done with Nvidia products, so keep checking if you are interested.

    • odizzido
    • 3 years ago

    Pretty decent card for the price. The power draw makes me think that this gen for AMD isn’t going to have much luck scaling up to 1070/1080 levels though.

    • hkuspc40
    • 3 years ago

    I never understood what all the hype was about. It’s unfortunate that AMD can’t really compete with nVidia. It really is just as bad for nVidia as it is for AMD. So this card forces nVidia to lower the price of a 970… Big whoop. They are already onto the next generation of performance.

      • Northtag
      • 3 years ago

      The GM204 is 1.7x bigger than the Ellesmere XT, so nVidia have nothing right now that can give them a win at this price point. I’m sure the 1060 when it arrives will have a lot to say, but the 970 ain’t it.

        • blahsaysblah
        • 3 years ago

        They already recouped all the R&D costs; by now it’s pure profit minus minor expenses.

        • pranav0091
        • 3 years ago

        That’s not how it works; if it were, the foundries would go out of business and would actively hate smaller nodes.

        With increasing density comes increasing cost per transistor, and the interplay of cost/mm², cost/transistor, and performance/mm² (i.e., frequency, power, and density) determines what a chip costs. It’s VERY wrong to say that a chip that’s k times larger, but on a larger node, is more or less expensive unless you know all of the ratios I mentioned above.

        Tbh, the performance, though I mentioned it above, doesn’t matter to the cost price of the card; it only matters to the sale price.

        <I work at Nvidia, but my opinions are purely personal>

    • ronch
    • 3 years ago

    Those who were really expecting to get GTX 980-class performance must be sobbing now.

      • Beelzebubba9
      • 3 years ago

      Naw, just slightly disappointed that it’s not good enough to justify an impulse buy. 🙂

    • anotherengineer
    • 3 years ago

    Well, I always liked TPU’s performance/$ graphs:
    [url<]http://www.techpowerup.com/reviews/AMD/RX_480/26.html[/url<]

    But I find this interesting: AMD’s voltages are higher on 14nm than Nvidia’s are on 16nm, and historically AMD has always run higher voltages, even on the same process. I have no idea why.
    [url<]http://www.techpowerup.com/reviews/AMD/RX_480/28.html[/url<]
    [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/28.html[/url<]

      • DPete27
      • 3 years ago

      Nvidia uses lower voltages for higher clockspeeds. That’s the way it’s meant to be tuned…right?

        • pranav0091
        • 3 years ago

        Speaking from what I recollect from my college days:

        No, that’s not how it’s supposed to be. Lower voltages mean you will NOT hit higher clocks on mass-market cards (not the same as not having overclocking headroom; in fact, quite the opposite).

        To achieve the least possible power consumption, you want to use the least voltage that still biases a transistor in the right fashion. But silicon is a lottery: not all chips (even on the same physical wafer) are born equal, and that’s just how it is, due to engineering limitations.

        So why doesn’t one use the lowest possible voltage on a given chip? Binning and stability. Since not all chips are born equal, you find the minimum voltage that successfully biases the transistors in the right fashion for a significant chunk of the chips and then use it on them. This means that certain chips have more headroom than others, by definition.

        So one reason to have a high voltage could be that the chip-to-chip variance was found to be too high. Another could be that you wanted a huge stockpile of chips, so you relaxed the constraints on binning and were therefore forced to set a higher minimum voltage.

        There are, possibly, more reasons, but I can’t recall them right now.
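
        A toy illustration of the trade-off (my own simplification with made-up numbers, not anything from Nvidia): dynamic switching power scales roughly with C·V²·f, so even a modest voltage pad added for binning margin costs real power.

        ```python
        # Toy model: dynamic power ~ C_eff * V^2 * f (all numbers hypothetical).
        def dynamic_power(c_eff, volts, freq_hz):
            """Approximate dynamic switching power in watts."""
            return c_eff * volts ** 2 * freq_hz

        lean   = dynamic_power(c_eff=1.0e-7, volts=1.00, freq_hz=1.266e9)  # minimal stable voltage
        padded = dynamic_power(c_eff=1.0e-7, volts=1.15, freq_hz=1.266e9)  # +150 mV binning margin
        print(f"power cost of the 150 mV pad: +{padded / lean - 1:.0%}")   # ~+32%
        ```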

        <I work at Nvidia, but my opinions are purely personal>

          • DancinJack
          • 3 years ago

          He was joking.

            • pranav0091
            • 3 years ago

            For a moment I thought so, but the sarcasm was so advanced that I wasn’t really sure 🙂

      • tipoo
      • 3 years ago

      They’re on completely different fabs this time, GloFo vs. TSMC. I’ve wondered if that’s part of their efficiency disadvantage. That’s the thing now: with different fabs, the playing field is not even, and not only does the architecture matter, but the fab process does too when comparing them to Nvidia.

      And as for the 14nm and 16nm, don’t compare their sizes by what the fabs advertise; they’re within the same generation, each fab just counts sizes differently (and bulls***s differently).

        • anotherengineer
        • 3 years ago

        I’m well aware of that, hence why I mentioned 14nm and 16nm. But yeah, who knows. It’d be nice if some review would actually undervolt these Radeons and see how they handle it.

    • TwoEars
    • 3 years ago

    Interesting price/perf card but not a knockout. What worries me is the Nvidia 1060 with g-sync and what I believe will be better performance/watt. I believe it’s supposed to launch next Thursday. Pricing will be key here, we’ll know the whole story soon enough.

    • anotherengineer
    • 3 years ago

    All in all, not too bad really. A GTX 970 has mature drivers and 4GB of RAM, and in Canada it costs almost double the price of the RX 480, which has 8GB of RAM, updated HDMI and DP, and FreeSync support.

    The 4GB model is actually $200 US as well, and not marked up.
    [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814202222[/url<]

    As for heat and clocks, I wonder if it’s due to the cooler, or if TSMC’s silicon and process are better than GF’s, or both. I would imagine the performance will increase over time like it did with Hawaii, too.

      • looncraz
      • 3 years ago

      Cooling 150W+ in a 232mm² area is a fairly tall order for a blower of this type.

      The other thing going against it is that the heatsink is just too small. If it had been 1/2″ longer (along the plane of the PCB, protruding more towards the VRM), it would be able to dissipate 20W more at the same temperatures, or run at ~10% lower fan speed, which would have been closer to 48 dB or so.

      The CNC machining profile would only require one extra pass to notch room for the circuitry closer to the VRM, and the extra aluminum would cost nearly nothing. Maybe $0.15 total difference… and the cooler temps could result in lower power draw, providing positive feedback.

      • cpucrust
      • 3 years ago

      I’m seeing $339 CAD pre-order prices for RX 480 AIB cards from MSI, XFX, ASUS, and HIS at one of my favorite vendors. All four seem to have triple DP with one HDMI.

      Pictures of the AIB provider offerings show them to be identical to the AMD reference model, except the AIB provider’s circular sticker covers the top of the fan.

      Only the box art appears to be different 😉

        • slowriot
        • 3 years ago

        Custom cooling solutions, etc. will start showing up mid July.

        Now is not the time to buy an RX 480. Between the GTX 1060 launch being pushed up and the arrival of custom RX 480s it is definitely worth waiting another two weeks.

    • Convert
    • 3 years ago

    Great and timely review! Sadly for AMD this card is a massive disappointment.

    • Kretschmer
    • 3 years ago

    The fact that we’re all jumping right into results means that this was an excellent review indistinguishable from the expected level of TechReport quality.

    Kudos!

    • chuckula
    • 3 years ago

    Incidentally, Newegg is down to one model left in stock as I post this: [url<]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709%20601203818&Tpk=Rx%20480&ignorear=1[/url<]

    • ronch
    • 3 years ago

    For us guys who can’t afford a GTX 970, here’s [u<][b<]OUR[/b<][/u<] GTX 970 from the brand that sells cheaper alternatives. /thumb bait

    • chuckula
    • 3 years ago

    An interesting point to consider when it comes to comparisons at the silicon level between TSMC for Pascal and GloFo for Polaris.

    As is well known, GloFo’s 14nm process is basically a licensed version of Samsung’s 14nm process used for smartphones and other high-end SoCs. Interestingly enough, we’ve already seen a direct comparison between these two fab processes being used to make the same design: the Apple A9 chips.

    Here’s an interesting article that TR wrote about the issue:
    [url<]https://techreport.com/news/29215/report-iphone-6s-battery-life-isnt-significantly-affected-by-soc-source[/url<]

    Now, the article seeks to minimize the differences between the two SoCs on battery life, since the SoCs pull pretty much the same amount of power at idle. However, look at the actual power-draw benchmarks that at least approximate the usage you would expect from a GPU when you are actually doing something with it. Interesting how TSMC’s process came out ahead under load even back then.

      • derFunkenstein
      • 3 years ago

      The results are not a surprise considering [url=http://www.samsung.com/semiconductor/about-us/news/13364<]Samsung and GloFo are sharing technology[/url<].

      BTW, Consumer Reports downplayed the power difference because even when an iPhone is in use and the screen is on, the SoC is mostly idle; under normal usage, the differences were minimal. In this case, under normal usage, the power-draw difference between the RX 480 and the GTX 1070 is also minimal, and that’s bad because of the huge performance difference between the two. So it’s different contexts.

    • derFunkenstein
    • 3 years ago

    This looked really great—GTX 970 performance at a substantial discount—until I got to the power consumption part of the review. Those power consumption numbers are just ridiculous. The RX 480 uses just as much power as a GTX 1080 and the performance isn’t close.

    Energy doesn’t cost me a whole lot. Between the kWh cost and the delivery cost, it’s under 11 cents per kWh. It’s not like the costs are going to break the bank, but why is Polaris so much less efficient, on the order of half the performance per watt? This is madness.
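
    For anyone curious, here’s the back-of-the-envelope I did on the cost (the extra wattage and gaming hours are my own hypothetical assumptions, not TR’s measurements):

    ```python
    # Rough annual electricity cost of the extra draw, at the rate quoted above.
    extra_watts = 75          # hypothetical extra load draw vs. a more efficient card
    hours_per_week = 20       # hypothetical gaming time
    rate_per_kwh = 0.11       # USD per kWh, as quoted above
    kwh_per_year = extra_watts / 1000 * hours_per_week * 52
    print(f"~${kwh_per_year * rate_per_kwh:.2f} per year")  # ~$8.58
    ```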

      • Krogoth
      • 3 years ago

      Aging architecture is the primary reason followed by the immature 14nm process.

        • HERETIC
        • 3 years ago

        Let’s not forget GF is known for leaking like a sieve.

        • flip-mode
        • 3 years ago

        Or it could be a hardware bug, or it could be a driver issue, or it could be a design flaw with this chip in particular or it could even be something with the other components on the circuit board.

      • anotherengineer
      • 3 years ago

      That’s almost like free electricity!!!

      I’m paying (after taxes) about 25 cents/kWh here in Ontario………

      The sad thing is that the provinces to the right and to the left of us are both under 10 cents/kWh.

      You should see the mass exodus of industry here…………

        • derFunkenstein
        • 3 years ago

        Isn’t 25c Canadian almost like 11c USD? :p

          • anotherengineer
          • 3 years ago

          But you’re not paying my electric bill though 😉

    • PixelArmy
    • 3 years ago

    So… GTX 1060 @ $250 and closer to the GTX 980 this was pegged to be?

    • madseven7
    • 3 years ago

    AMD should release this card at $239 Canadian, not $239 US. $239 US is $359-$369 Canadian. WTF?! Poor value! So pissed.

      • Freon
      • 3 years ago

      The weak Canadian dollar is not AMD’s fault.

        • sweatshopking
        • 3 years ago

        No, but the exchange rate puts it at $310, not $350.

      • tipoo
      • 3 years ago

      AMD should knock 30% off the card’s value because our dollar is crap? Their margins are probably a far shot from 30% already; they’d be paying us to take it. 30% is an Apple margin, 30% is an Intel margin. AMD hasn’t been able to play in that space for a long time, and few companies do.

      What is this world of yours where 970s are cheaper than it and exchange rates don’t matter?

      • Prestige Worldwide
      • 3 years ago

      More like it should be $258 for the 4GB model and $297 for the 8GB. This is just a $30 early-adopter tax by the e-tailers more than anything else, which is normal with every GPU launch.

      Inform thyself before rage posting.

      [url<]https://www.google.ca/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=199+usd+to+cad[/url<] [url<]https://www.google.ca/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=229+usd+to+cad[/url<]
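
      A quick sanity check on those numbers (assuming the roughly 1.30 USD-to-CAD rate at the time, which is my assumption):

      ```python
      # Convert the US MSRPs to CAD at an assumed ~1.30 exchange rate.
      rate = 1.30
      for usd in (199, 229):
          print(f"${usd} USD is about ${usd * rate:.0f} CAD")
      # $199 USD is about $259 CAD
      # $229 USD is about $298 CAD
      ```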

        • rxc6
        • 3 years ago

        Those US prices don’t include taxes either.

          • Prestige Worldwide
          • 3 years ago

          Neither do Canadian prices. GST / PST / HST depending on which province you live in is all calculated in addition to the posted price.

      • Froz
      • 3 years ago

      That’s not exactly how currency works.

      • Vaughn
      • 3 years ago

      And since when does AMD control the exchange rate for the Dollar?

      like seriously bro?

    • southrncomfortjm
    • 3 years ago

    Looks like AMD made some good performance gains here which is what they really needed. Now that I’m not tied to a G-Sync monitor, I may consider this card as an upgrade to my GTX 760 around the holidays.

    • puppetworx
    • 3 years ago

    Price/performance looks really good, though the cooling and power-efficiency facets look to have been sacrificed to that end. Price/performance is the most important metric as far as I’m concerned, but I would have liked to have seen a card with a bit more performance than this right now.

    Given that the GTX 1080 is around ~70% faster (and +200% the price, to be fair) I wonder just how many cards AMD is going to release above this.

    • madseven7
    • 3 years ago

    Raja really screwed this up. He released a slide showing 480 CF beating a 1080, which led to higher expectations of the 480’s performance (mine included): at least 980-level. Now, seeing the results, I’m really disappointed. Going from 28nm to 14nm, I thought I’d see better performance, higher clocks, and lower power compared to the competition. None of that came about. The 970 on 28nm is better, and that card came out almost two years ago.
    I am Canadian and love that Radeon was once a Canadian company. I love the underdog, but this underdog really disappointed me this time. I’m not sure this company can turn it around after seeing these results.
    Sad day for me.

      • raddude9
      • 3 years ago

      You’re disappointed?

      From the review: [quote<]"the RX 480 sets a new bar for performance and smoothness at its price point, and it's undoubtedly the midrange card we'd recommend to most"[/quote<]

      I don’t see anything wrong with this card, so perhaps the problem is your expectations… which is why you should always wait for the TR review and not put any stock in the bottom-feeding websites out there.

        • madseven7
        • 3 years ago

        I’m disappointed for many reasons. Going from 28nm to 14nm should provide better performance, better power efficiency, or both. Compared to the 970, where do you see that? Remember, that card was released two years ago.
        When you release a slide stating the 480 in CF beats a 1080, what do you expect people to assume about the 480’s performance? I’m not naive. I didn’t expect 1070 or 980 Ti performance, but I did expect it to at the very least beat the 970 soundly, approach 980 performance, and show a lot better efficiency than what the reviewed 480 delivers. Wouldn’t you?
        I’m sad because I want AMD to succeed. I hope they have something else up their sleeve. I hope they really have a DX12 advantage over the competition; otherwise their future does not look promising.

          • juampa_valve_rde
          • 3 years ago

          I feel close to what you say, but also, there is no lying in Raja’s statements: techpowerup put up a review of RX 480 CrossFire, and in selected games at relevant resolutions (2560 & 4K) a couple of 480s can be faster than a GTX 1070 and sometimes a GTX 1080, but only in best-case scenarios measuring FPS (forget about frame latency).

          In general, I’m more disappointed with the reference card than the chip itself, which I consider pretty good and future-proof (it may get better as the process matures). The card just earns a “meh”: the cooling sucks, it eats more power than it should, and it’s noisy. Luckily, most of those shortcomings can and will be fixed by partners. Personally, I would like a single fan like the R9 Nano’s on this card.

          • Longsdivision
          • 3 years ago

          [quote<]I hope they have something else up their sleeve[/quote<] You mean like the 480X, 490, 490X?

            • madseven7
            • 3 years ago

            Yes without using Vega. I want AMD to crush Nvidia.

            • Froz
            • 3 years ago

            Polaris is low-to-mid range; they said that quite a long time ago. For high-end, you’ll have to wait for Vega.

            • K-L-Waster
            • 3 years ago

            I’m by no means an AMD fan (my last 5 cards have been NVidia) but I think you’re being a little hard on them here. To continue the car analogies, they’ve basically delivered Camaro performance for the price of a Cruze, and you’re complaining that it isn’t beating a Corvette Z06.

            I think they have a pretty compelling mid-range card here. Just keep the expectations realistic.

            • tsk
            • 3 years ago

            There is no 480x, the 480 is the fully unlocked Polaris 10.
            490 is the only one confirmed by AMD.
            [url<]http://www.pcper.com/news/Graphics-Cards/AMD-Lists-Radeon-RX-490-Graphics-Card-New-APUs-Gaming-Promo[/url<]

          • raddude9
          • 3 years ago

          It has been commonly known for a while now that this was going to be a $200 card.

          [quote<]I'm not naive.[/quote<] If you thought AMD was going to give you a 980 Ti with 8GB of RAM for around $200, then perhaps you are a bit naive. They’re a company; it’s their job to make profits, and they want to leave themselves some headroom for an RX 490.

            • madseven7
            • 3 years ago

            I think you might be a bit naive. They are going from 28nm to 14nm: smaller chips, so they can make more of them for the same or lower price. Lots of room to make profits. If they want to make money and gain market share, get the clocks up, get performance up, and get efficiency up. Do not market 480 CF as better than a 1080 and get people’s hopes up just to let them down. SEE TR REVIEW.
            Like I said, I wasn’t expecting 980 Ti performance, but I was expecting it to be close to the 980 and soundly beat the 970 with better power efficiency.

            • raddude9
            • 3 years ago

            [quote<] Do not market 480cf better than 1080 [/quote<] They demoed one game where 2x RX480 was slightly better than a 1080. Assuming they would use a game that would scale well across 2 GPUs, that puts the 480 performance at slightly better than half of a 1080. Maybe you read more into that demo than you should have?

            • bwcbiz
            • 3 years ago

            There would be lots of room to make profits if they were already making money on the 28nm chips, but AMD has been hemorrhaging money for years on both GPUs and CPUs. So this price point is a plausible price/performance point that allows them to make money. Plus, the release of the GTX 1060 could force them to reduce these prices sooner than planned if there is a significant price/performance gap.

          • EndlessWaves
          • 3 years ago

          Why are you comparing it to the GTX 970 though? Either in die size or price it’s closer to the GTX 960.

          So you do get substantially more performance.

            • madseven7
            • 3 years ago

            Compared to a 970 that’s two years old, with an older arch, an older process, and a bigger die, you would think the newer, smaller one would also be faster and more power-efficient, but it’s not. And it’s not cheaper, to boot.

            • EndlessWaves
            • 3 years ago

            Blame your local retailers if it’s not cheaper. Here in the UK the 4GB 480 is £180 while the 4GB GTX 970 was £250. Sure, they’ve now discounted the 970s because of the new 480 launch but that’s hardly a fair basis for comparison. You can always pick up old cards for the same price as new ones of equal performance close to or after release.

            It’s also right where you’d expect performance-wise. It offers the same 70-75% improvement over its predecessor that the GTX 1080 did.

            AMD hasn’t been able to pull out double the improvement in power efficiency compared with nVidia, which would definitely have been nice, but as an expectation rather than a wish, that was completely unrealistic.

            • bwcbiz
            • 3 years ago

            Yeah, that would make sense if AMD were basing the design of the 480 on the design of the 970. But AMD’s architecture is GCN, which has always drawn more power than comparable nVidia designs. nVidia clearly has an advantage in power-efficiency that’s endemic to their designs and AMD isn’t able to replicate.

            Does anyone know if this is straight-up better engineering or if there’s some patent protection precluding AMD from using the best tech? Clearly if there’s a patent, nVidia isn’t licensing it to their biggest competitor.

          • odizzido
          • 3 years ago

          pcper tested DX12. In DX12 RotTR the 480 is 23-29% faster, and in DX12 Hitman it’s 35-41% faster.

          My question, though: is RotTR/Hitman just slower in DX12 for no visual gain, or is it the same visuals with better performance? I have no idea.

        • ronch
        • 3 years ago

        Sure, there’s nothing wrong with this card. But if you have an old 4-cylinder 2.0L engine, buy a new car with a 3.5L V6, and find out it doesn’t really deliver much more performance or refinement despite being a much newer model, then I would think there’s something wrong with the new model.

          • raddude9
          • 3 years ago

          That car analogy doesn’t work at all; when is a new 3.5L V6 considerably cheaper than an old 2.0L?

          Surely it should be the other way around:
          GTX 970 = old 3.5l v6
          RX480 = new 2.0l turbo

          Similar (ish) performance, but the 2.0l is cheaper

            • madseven7
            • 3 years ago

            “Similar (ish) performance, but the 2.0l is cheaper”

            But it’s not cheaper. There are 970s that are cheaper now than the 480.

            • Beelzebubba9
            • 3 years ago

            Retail 970s are below $200?

            • rechicero
            • 3 years ago

            Not with the same memory…

            • ronch
            • 3 years ago

            The 480’s main selling point is price. Yes, it’s very much a major consideration for most folks but my point is, take price out of the equation and the RX 480 isn’t all that impressive. At 14nm and all the power management tricks AMD put in and hype they put out, you’d expect it to have much better efficiency. But even TR admits efficiency isn’t where they expected it to be. I’d think Nvidia could easily put out a similarly-performing GPU with less power consumption and smaller die.

            Any questions?

            • raddude9
            • 3 years ago

            No questions. Taking the most important factor, price, out of the equation leaves me nothing to argue with.

          • tsk
          • 3 years ago

          Your car analogies are better suited to the comment section on The Verge.

      • ronch
      • 3 years ago

      So… nothing has changed.

      • Mat3
      • 3 years ago

      Go see the HardwareCanucks 480 review. It gives a much better picture overall. More games and resolutions tested, including a lot more DX12 tests where the 480 beats the 980 and gives the 970 an ass whooping.

        • EndlessWaves
        • 3 years ago

        That is interesting.

        If that turns out to be the long term performance of the 480 and not specific to those games then that’s a nice situation.

      • DoomGuy64
      • 3 years ago

      Lol, that was using crossfire in AOTS.

      • End User
      • 3 years ago

      [b<]ATI[/b<] was once a Canadian company!!!!! I know because I worked there. Markham is a god forsaken place.

    • barich
    • 3 years ago

    I’m probably the target market for this card (I won’t spend over $200 on a GPU as I’m not a heavy gamer and I currently have a Radeon 7770 that isn’t quite cutting it anymore) and I don’t get the complaints. It’s faster than a GTX 970 at a lower price point. They’ve got a winner until nVidia gets a cut down Pascal out to compete, which could take a while.

    Is it in line with what AMD promised? No, but should we really be surprised? Companies share benchmarks that show their products in the best light, film at 11.

      • slowriot
      • 3 years ago

      Hype is a double-edged sword. I think if AMD had been more restrained leading up to this, expectations wouldn’t have been so insane, and reactions would therefore have been generally more favorable.

      The RX 480 is a solid-to-good option, but people were expecting a price/performance revolution; it’s just an evolution with some trade-offs.

      • strangerguy
      • 3 years ago

        Frankly, I would wait for the 1060 next month. Even a cut-down Pascal with 1280 SPs @ ~2GHz should comfortably match an RX 480 with far better perf/W. I would gladly pay an extra $20 for that over the RX 480.

        • EndlessWaves
        • 3 years ago

        Except it’s not just $20, it’s $120 once you factor in the cost of the screen as well. The huge g-sync price premium means nVidia often aren’t a credible mid-range choice unless you’re gaming on a TV, projector or other fixed refresh rate display.

          • Prestige Worldwide
          • 3 years ago

          Sorry, but nobody is forcing anybody to buy a VRR monitor and that really doesn’t factor into the GPU price here at all, especially at this price point.

            • rechicero
            • 3 years ago

            I’d say a VRR monitor is useful especially at this price point. VRR shines where the cards struggle…

            • Prestige Worldwide
            • 3 years ago

            True, but if you already have a decent monitor, you would be better buying a more powerful GPU that wouldn’t need VRR to compensate.

            • travbrad
            • 3 years ago

            Yep or if you are buying a new monitor get one that is a low enough resolution that you don’t need VRR to try to make up for bad performance. In my experience a low framerate still looks like a low framerate even with VRR, albeit a bit better than without VRR.

            Pretty much all of these games that TR tested at 1440p I just thought “I’d rather play them at 1080p on that card”, and that’s with current games. Future games are likely to be even more demanding.

            • EndlessWaves
            • 3 years ago

            They’re the same price as any other monitor you might buy, so if you don’t have specialised requirements there’s no reason why you wouldn’t.

            Sure, there’ll be people who bought screens just before VRR came up and aren’t likely to replace them in the lifetime of the card, but there’s a huge chunk of the market that’ll either buy a new screen with a new card or replace an older screen within the lifetime of the card. A card like this might be kept for four years; how long does the average monitor last? Six years? Eight?

          • Kretschmer
          • 3 years ago

          Yeah, but a nice monitor will last you many years.

          If I could have done it again, I would have paid the slight premium to stay in Nvidia’s ecosystem. Everything just *works*.

        • Lans
        • 3 years ago

        I am not buying the reference RX 480 for the same reason I disliked the GTX 1080 (besides that thing’s price): heat and noise. I am waiting to see custom coolers, but I’m hoping the GTX 1060 shows up by then.

        An extra $20 for a GTX 1060? No thanks, unless it delivers at least $20 more worth of perf or perf/W.

      • Stochastic
      • 3 years ago

      Fair enough. Even disregarding all the hype, however, I can’t help but feel somewhat underwhelmed. I was hoping the move to FinFET plus an architecture refresh would really move things forward in a big way. Maybe with custom coolers and an 8-pin power connector we will see the true potential of this card unleashed, but the present results aren’t very inspiring. The power draw figures are especially disappointing. Now I’m wondering whether the PS4 Neo and the Xbox Scorpio will be all that much of a performance jump.

      The best thing about the 480 seems to be the greatly improved frame pacing relative to the 380. That’s real, tangible progress.

      If we can get a price war going between AMD and Nvidia, maybe in a few months we can have 8GB custom versions of the 480 with an 8-pin power connector at just north of $200. I would have a very hard time saying no to that.

      • chuckula
      • 3 years ago

      [quote<]They've got a winner until nVidia gets a cut down Pascal out to compete, which could take a while.[/quote<] Try July 7: [url<]http://videocardz.com/61583/nvidia-geforce-gtx-1060-to-be-released-on-july-7th[/url<]

        • barich
        • 3 years ago

        There’s no way that’s going to be $200 or less. And we’ve got nothing on what the GTX 1050, a more likely competitor, is going to be yet.

          • chuckula
          • 3 years ago

          Well, the Rx 480 might be advertised at $200, but nobody is buying the cards at that price.

            • barich
            • 3 years ago

            It also just came out today. Give it a minute.

            • Generic
            • 3 years ago

            He doesn’t have it in him.

            The roof of Chuckula’s mouth is purportedly permanently scarred from hot pizza. 😉

            • travbrad
            • 3 years ago

            My local Microcenter had some $200 4GB versions of the 480 in stock today, but they sold out fast. They sold out of the $240 8GB cards too, with just some $250 cards left now. That being said if I was interested in this card I’d wait for versions with better coolers and/or see what the 1060 does anyway.

            There will probably be some cheap 970s for sale too (sort of like the cheap leftover stock of 980 TIs), which seem to use the same amount of power somehow despite being 28nm.

            This whole generation of cards from both AMD and Nvidia seems to be all about waiting. We waited years for 14/16nm cards to come out and now we are waiting for coolers that don’t suck and non-FE cards to be in stock at good prices.

            • chuckula
            • 3 years ago

            Interesting.
            See if you could ask around there about the stock levels of the Rx 480 vs. the GTX-1080 and 1070 parts.

            I’m expecting the Rx 480 to be available in larger numbers simply because it is directed to a broader segment of the market, but some information about the supply levels could put some of the FUD about “paper launches” to rest.

          • BoilerGamer
          • 3 years ago

          Except the 1060 looks to be faster than the 480: 50% of the 1080’s cores could mean 60% of its performance (the 1070 has 75% of the cores and 80% of the performance), which would be 5-10% faster than the 480.
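
          A back-of-the-envelope check of that scaling logic (my own toy model, fitting a sublinear exponent to the 1070 data point quoted above):

          ```python
          import math

          # If perf ~ cores^k, fit k from the 1070 (75% of cores -> 80% of perf),
          # then apply it to a hypothetical 1060 with 50% of the 1080's cores.
          k = math.log(0.80) / math.log(0.75)   # ~0.78, i.e. sublinear scaling
          est = 0.50 ** k
          print(f"k = {k:.2f}, estimated 1060 perf: {est:.0%} of a GTX 1080")  # ~58%
          ```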

          • Klyith
          • 3 years ago

          It won’t be $200, but Nvidia could probably price it $50 higher and still outsell AMD. Frankly, Nvidia hasn’t even bothered to compete on price for the last two years; they have such an enormous brand advantage. Between AMD’s inability to make a competitive top-end GPU recently and the huge damage their CPUs have done to their name, they’ve got way less mindshare with PC gamers.

          Nvidia moving the 1060 release window up is a sign they’re at least a bit worried, though. If Polaris had stunk, they’d wait and sell more $$$$ cards to the impatient.

        • pranav0091
        • 3 years ago

        I don’t understand why you are being downvoted.

          • chuckula
          • 3 years ago

          I’m not being downvoted for posting something that isn’t accurate (in as much as a rumor article can be trusted).

          I’m being downvoted for posting something that some people don’t want to hear about.

            • barich
            • 3 years ago

            It is inaccurate in that the GTX 1060 is almost certainly not going to be in the same price range as the RX 480.

            • chuckula
            • 3 years ago

            Yeah, we’ll see about that.
            BTW, $250 most certainly counts as “in the same price range” given the likely performance of the GTX-1060 and what will be required with custom Rx 480 cards to match it.

            • barich
            • 3 years ago

            You could play the “it’s only $50 more, but it performs x % better” game over and over again until you end up buying a GTX 1080. $200 is my limit, period. More likely something in the $150 range.

            • JustAnEngineer
            • 3 years ago

            The first rule of downvotes is that you do not talk about downvotes.

        • Froz
        • 3 years ago

        That’s for the FE though, which is likely to carry a $50-or-more premium, so it won’t really be competitive with the RX 480.

        And actually, even that will be later than July 7: “Card will go on sale a week later on July 14th”. It’s just a marketing trick to try to persuade people to wait and not buy AMD now.

      • NarwhaleAu
      • 3 years ago

      Yeah, and let’s not forget that for the past year or two it has been almost impossible to find a 970 under $320; they often sold for $350. This is better performance at a much lower price point. That’s the reason I didn’t buy a 970 and instead went for a 960.

      I’m tempted by the 480 and it is definitely what I would be buying today if I didn’t already have the 960.

        • DPete27
        • 3 years ago

        [url=https://techreport.com/news/29255/early-deals-of-the-week-a-hopped-up-gtx-970-for-280-and-more<]Where do you live?[/url<] (Yup, that article was 8 months ago)

        • travbrad
        • 3 years ago

        [quote<]Yeah, and let's not forget that for the past year or two it has been almost impossible to find a 970 under $320; they often sold for $350.[/quote<]

        My ASUS Strix 970 was just under $290 (<$310 before MIR) in August, and there have been plenty on sale for cheaper than $320 or $350. Maybe not immediately after the card launched, but there were a lot of 970s for cheaper than $320/$350 in the last year.

        That’s assuming you are talking US prices. If you aren’t in the US, you pretty much have no hope of getting cheap graphics cards from anyone until our banks inevitably crash the economy again. 😉

        • OneShotOneKill
        • 3 years ago

        And that is exactly the problem. The majority of ppl have a card that they are not going to replace with a mediocre rx 480.

        If AMD released this for $279 and beat the 980 I would be happy to hand them my money.

          • barich
          • 3 years ago

          The number of people who have a 970 or higher is minuscule compared to the number with lower-end/older cards. Your perspective is a bit skewed. We don’t all upgrade every year.

            • OneShotOneKill
            • 3 years ago

            How about those with a 290X, GTX 960, 390, or 390X: would you recommend this card to them?

            Yes, the price is great, but the incentive to upgrade in terms of performance is not there.

            • travbrad
            • 3 years ago

            Clearly that’s not who this card is aimed at. It’s aimed at people with older/slower cards. Look towards the bottom of these charts to see the kind of cards people will be upgrading from:
            [url<]http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,18.html[/url<]

            • Ifalna
            • 3 years ago

            Hmm, I have a 7870 (gaming at 1080p or ~900p, depending on whether I play fullscreen or windowed) and find the 480X to be rather underwhelming compared to a 1070 (IF that 1070 ever becomes available at the 350€ price point that Nvidia teased us with).

            Especially the high temperatures of 83°C worry me. I need to see cards with proper coolers.

            • travbrad
            • 3 years ago

            I agree the 1070 is a more impressive card. Nearly double the performance with the same/lower power consumption. However it is also twice as expensive right now. If they ever get it down to the MSRP it will be better bang-for-buck than a 480 though.

            Some people have strict budgets though, and even at $380 they aren’t going to buy it.

            • Ifalna
            • 3 years ago

            Yeah, I’m one of these people.
            I am looking for a new card but currently the gap between 250€(480X) and 500€(1070) is annoying the crap out of me.
            I typically buy in the 300-350 region.

            To me, the 480X just seems too weak to be future proof for the next 3-4 years.

    • EzioAs
    • 3 years ago

    An interesting article [url<]http://www.legitreviews.com/amd-radeon-rx-480-4gb-video-cards-8gb-memory_183548[/url<]

    • Theolendras
    • 3 years ago

    Thanks for the nice article. I would think DX12 performance, compute performance, and VR would be very nice follow-up subjects.

    • DancinJack
    • 3 years ago

    Whoa. This turned out very poorly for AMD. From all the leaks it seemed it would be a lot closer to the 980. Shame.

      • willyolio
      • 3 years ago

      the real shame is that you believed in rumors that spawned from some random dude on a random forum.

        • madseven7
        • 3 years ago

        I guess leaked slides showing 480cf beating 1080 had nothing to do with it.

          • slowriot
          • 3 years ago

          [url<]http://www.techpowerup.com/reviews/AMD/RX_480_CrossFire/12.html[/url<]

          Depending on the game and resolution, it can match a GTX 1080, which isn’t surprising at all. The lesson is the same as it’s always been… one benchmark is not representative of overall performance.

    • rudimentary_lathe
    • 3 years ago

    Great to see improvements in the performance/power and performance/price ratios from the red team, but overall I consider this very disappointing.

    Based on its specs this card *should* be trading blows with the 390X and possibly even the 980. Why isn’t it? Either the drivers are behind or somehow the GCN 4 architecture – or the architectural decisions specific to this card – are a step back. Neither seems plausible to me.

    I’m glad I held on to my current card, as this doesn’t move the needle much at all. Hopefully performance improves significantly with newer drivers, and the AIB cards allow significant overclocks for not much more money.

    • xeridea
    • 3 years ago

    Some issues with the article.

    When comparing power to the 290, you say it is a 400W system vs. a 260W system, then state a 35% efficiency improvement. This doesn’t account for the CPU, RAM, storage, motherboard, etc. under load, so it is likely more like 160W vs. 300W at the GPU, which would be a 47% improvement by your scale. But the scale has a flaw: the card would use 35% (really 47%) less power, but translated to efficiency, it would be about 90% more efficient, not 35%.
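
    Working the numbers (the ~100W figure for the rest of the system under load is my estimate):

    ```python
    # Back out GPU-only power from the wall figures, then compute both metrics.
    system_290, system_480 = 400, 260       # wall power in watts, from the review
    rest_of_system = 100                    # estimated CPU/RAM/storage/board draw
    gpu_290 = system_290 - rest_of_system   # ~300 W
    gpu_480 = system_480 - rest_of_system   # ~160 W
    print(f"power saved:     {1 - gpu_480 / gpu_290:.0%}")   # ~47%
    print(f"efficiency gain: {gpu_290 / gpu_480 - 1:.0%}")   # ~88%, i.e. about 90%
    ```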

    On testing methods, the 480 is listed with an unknown boost clock and a 1000MHz base clock, though the previous page lists them as 1266MHz and 1120MHz.

      • Jeff Kampman
      • 3 years ago

      You’re right—I probably should not try to do math at the tail end of reviews like this one. I will try and correct it or remove that analysis if I can’t. Thanks for pointing out those other issues, as well.

        • xeridea
        • 3 years ago

        Thanks. I am just estimating system power draw under load. The Core i7-5960X from your review draws 115W while video encoding, which uses all cores. Gaming won’t necessarily use all cores 100%, but clocks and voltages will still be up, so I figured 100W was a good estimate. Checking other reviews that isolated GPU power draw, this seems about right. Also, the card has one 6-pin power connector, which should limit it to 150W (then some inefficiency in the power supply increases total draw at the wall).

        Otherwise, good job on the review, and good job getting it out fast. Hopefully non-reference cards will improve things in the noise department a lot, as reference coolers are generally pretty terrible. Temps also have somewhat of an effect on power draw, so that should limit thermal and power throttling.

        • xeridea
        • 3 years ago

        Hmmm, six hours later, these obvious issues have not been fixed. What happened to the attention to detail in these reviews? I fear for future quality, as these mistakes seem to be getting more common, and not fixing such a blunder is unacceptable.

          • Jeff Kampman
          • 3 years ago

          Sorry for the delay—I tried to incorporate your feedback into the article. Thanks again for pointing out our error.

    • ermo
    • 3 years ago

    I currently have two Tahiti/HD 7970 GHz ed. cards driven by a 3770K. From what I can tell, the Tahiti arch has aged very well as AMD has optimized its drivers for GCN.

    Would I like two 480X cards? Sure, but only once GloFo optimizes its 14nm process, a new and better stepping comes out, and the drivers have matured a bit more. Looking at how 28nm played out, I’m guessing we’ll see +10% performance from clock speed bumps and +10% from driver optimizations, so around +20% perf 6-9 months from now.

    And all that at a slightly lower price point. So, yeah, my Tahitis stay where they are. For now.

    EDIT: Totally forgot to applaud TR for the day one review and some very relevant comparisons. Good job gents!

      • TwistedKestrel
      • 3 years ago

      I guess the nice thing about all the Tahiti/Pitcairn rebrands is that they’re still bankrolling 7950/7970 driver updates for a good while. My single oc’ed 7950 is still holding up (though the fans are starting to buzz)

    • chµck
    • 3 years ago

    This is the card to get for most users who don’t go for the high end.
    But what from AMD will compete with the gtx1070/1080?

      • Pwnstar
      • 3 years ago

      Look at AMD’s roadmap. Vega is the next chip.

    • deruberhanyok
    • 3 years ago

    Jeff, Robert, thanks for getting this up right on the dot of the NDA lifting! Looks like you put a lot of effort into it. 🙂

    I like that the RX 480 is basically matching/exceeding the performance of a GTX 970 at the same price(s) as the GTX 960 2/4GB when they launched. That’s a pretty good performance gain for the same price vs. last year; it looks like anywhere from 50% to 80% or more?

    But I’m really surprised by the power use. I thought the 14nm shrink would provide a much bigger advantage over 28nm parts. I expect (as has already been mentioned in the comments) this just points to “early days” for the 14nm process, so we could possibly (hopefully) see improvements as the process matures.

    One thing I’m definitely disappointed to see is the stock cooler’s performance. I just don’t understand. If I were to buy one there’s no way I’d go for the reference design. Any idea if there’s any GPU throttling going on as a result of the temps?

    I think we’ll see higher performance from custom board designs (higher GPU clocks, for one, and significantly more effective/quieter cooling that might result in less GPU throttling, if that’s happening), but I also wonder if the custom designs will improve power usage, too. Isn’t there a thing with power use / leakage at higher temperatures?

      • deruberhanyok
      • 3 years ago

      Odd. I’m looking at Anandtech’s preview results, and their GPU-specific power measurement (FurMark) shows the RX 480 well ahead of, say, the GTX 960. That’s the sort of power/performance increase I would have expected. But as soon as they use a game to test power draw, the numbers climb way up, to match a 970/1070.

      They speculate it’s being offset by increased CPU power draw, possibly from some driver inefficiencies, but the drivers would have to be… really inefficient to load up the CPU that much? I remember seeing DX11 vs. DX12 numbers early on that showed AMD benefited greatly from DX12’s increased efficiency, but I can’t imagine it’s so much as to draw that much extra power.

        • deruberhanyok
        • 3 years ago

        Okay, now I’ve looked over the Tom’s review and their extremely detailed power measurements and I’m wondering about the design of these reference cards.

        The power draw from the PCIe slot that they’re showing is downright terrifying, and the card going past its rated TDP is also odd, although part of that may be due to heat (thanks for that, xeridea; I knew there was some relation, but my brain isn’t working too well right now).

        I’m wondering now if custom designed cards from the various manufacturers will have a significantly different power / heat profile. Anything with a 6+2 phase setup and an 8-pin PCIe power, along with a custom cooling solution that’s better than the old “block of aluminum” it is using now could really change the way the GPU behaves.

        • tsk
        • 3 years ago

        PCper measures the power draw directly from the card. They also do FCAT testing; it’s by far the best RX 480 review, IMO.

        [url<]https://www.pcper.com/reviews/Graphics-Cards/AMD-Radeon-RX-480-Review-Polaris-Promise/PC-Perspective-Advanced-Power-Testin[/url<]

      • xeridea
      • 3 years ago

      Transistors use more power at higher temperatures. I’m not sure of the math on it, but it is the reason the Fury X had a liquid cooler: it allowed better clocks, partly from lower temps, partly from better efficiency.
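
      The usual rule of thumb I’ve seen (a rough heuristic of mine, not exact physics) is that leakage power roughly doubles for every ~10°C rise in junction temperature, which is why a cooler chip can be meaningfully more efficient:

      ```python
      # Rule-of-thumb leakage scaling: leakage ~ 2 ** (delta_T / 10).
      def relative_leakage(temp_c, ref_temp_c=60.0, doubling_c=10.0):
          """Leakage power relative to a reference temperature (rough heuristic)."""
          return 2 ** ((temp_c - ref_temp_c) / doubling_c)

      print(f"83°C vs. 60°C: ~{relative_leakage(83):.1f}x the leakage")  # ~4.9x
      ```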

        • deruberhanyok
        • 3 years ago

        Aha, that’s what I was thinking. Thanks xeridea!

    • smilingcrow
    • 3 years ago

    I hope Zen isn’t on the same process as this, as GloFo seems lost.

      • tipoo
      • 3 years ago

      [url<]http://wccftech.com/amd-contracts-tsmc-produce-zen-16nm-woes-14nm-process-troubles-globalfoundries/[/url<]

      They should switch their GPUs back after that too… Or is it a capacity thing? Uh, hey Intel, I see you have some extra fab space there…

      • travbrad
      • 3 years ago

      Yep, it just further highlights how misleading those nm numbers can be, when the “16nm” stuff from TSMC appears to have much better efficiency than “14nm” GloFo (and Intel’s 14nm stuff is better than either of them, I’m sure).

    • Krogoth
    • 3 years ago

    This is what the 960 should have been at launch. The 480 is a 960 killer, and it brings a sorely needed boost to the $199-$249 demographic, which has had rather lackluster options for almost three years.

    It’s no wonder Nvidia is rushing to get a binned “1060” out the door ASAP and will probably drop prices on the 970 and 980.

      • PixelArmy
      • 3 years ago

      Don’t know why I’m replying to this… The 960 at launch should have been a 970?

        • Krogoth
        • 3 years ago

        It should have at least had near-970 performance, but instead what we got was a cooler 680-like chip for half the 680’s launch price tag. The 960 was marginally faster than the 760 it replaced at the same price points. It was cheaper for Nvidia to make, and they were running out of binned “GK104” chips to use in 760s.

          • EzioAs
          • 3 years ago

          Marginally was a best case. There are benchmarks showing the GTX 960 sometimes losing to the GTX 760.

            • southrncomfortjm
            • 3 years ago

            GTX 760 is in my machine and it is still a pretty decent card at 1080p despite its age. Also helps that I don’t play games like Witcher 3 yet.

            • EzioAs
            • 3 years ago

            And I’m still using a GTX 660. I need more choices in the sub $250 segment before I make an upgrade. Also, like you, I’m not playing relatively new AAA titles yet. Finishing my backlog of games ranging from 1996 – 2015 from PS1, PS2, PSP, GameCube, Wii and PC.

            • Ninjitsu
            • 3 years ago

            But wait I thought Nvidia sabotaged Kepler and stuff? \s

            • DoomGuy64
            • 3 years ago

            Depends on the game. Newer games are not optimized, while older games are. That’s why you have weird scenarios where the 960 beats the 780. Maxwell is pretty damn efficient out of the box too, so the only way the 960 loses is if the older card has proper driver optimizations.

            But yeah, the 480 is a 960 killer, and possibly a 1060 killer. Why? It has 32 ROPs and can keep up with the 970. For all intents and purposes a 32 ROP card is a 1080p card, but somehow the 480 can match the 970 in 1440p. I don’t think Nvidia can compete if the 1060 is also 32 ROPs. Maybe through brute force clockspeed.

        • chuckula
        • 3 years ago

        No no no, the GTX-960 at launch should have been the Rx 480!
        :-p

    • wierdo
    • 3 years ago

    Not bad for a $200-ish card, looks pretty good in the FPS per dollars chart at the end. I hope this pressures nVidia into price drops.

    Not crazy about the idle power draw but it’s not a deal breaker, a CFL lightbulb difference basically.

    Thanks for the review guys.

    • NTMBK
    • 3 years ago

    Price/perf is good, but that power consumption is a mess. I guess my HTPC will be going NVidia when I upgrade the graphics :\

    • OneShotOneKill
    • 3 years ago

    Another disappointing release.

    Let’s observe a minute of silence for the impending death of AMD.

      • Krogoth
      • 3 years ago

      Not really.

      It is aimed at the market that makes the bulk of the revenue in PC graphics. The high-end gaming market barely pays for itself in R&D costs.

        • OneShotOneKill
        • 3 years ago

        I vote with my wallet and my wallet at the moment is saying 1070 when supply is at MSRP.

          • Krogoth
          • 3 years ago

          1070 is aimed at $349-399 demographic assuming you can find a unit without resorting to ebay scalpers who are more than happy to add an extra $100 on top of that.

          GPU guys are businesses at heart. The high-end gaming market is nothing more than a halo market that is more about prestige and brand name than anything else. The last company that tried to be “high-end” only and never bothered with the mid and lower markets was [b<]3Dfx[/b<]. Nvidia understands this, which is why they are rushing to get a 1060 out ASAP and are doing price cuts on the 970 and 980.

          • raddude9
          • 3 years ago

          If you really voted with your wallet, you’d want the card with the best “FPS per $”.

            • OneShotOneKill
            • 3 years ago

            Picked up a used Fury for the price of an 8GB 480. Wallet happy as a clam.

            Sorry AMD, you need a more impressive release to get my money on a new card. Next potential upgrade: Navi.

    • ronch
    • 3 years ago

    This is a nice card but it doesn’t exactly fill me with confidence about Zen.

      • nanoflower
      • 3 years ago

      Depends on what you expect from Zen. If you are expecting it to be a market leader then you will be disappointed. While AMD has said good things about Zen nothing they’ve said leads me to believe it will be more than a decent desktop CPU. It’s not going to take the performance crown from Intel but it might take the price/performance crown from Intel. At least I hope it can. Then if they can take that tech over to their APUs they may have something that can start to make a dent in the notebook/tablet marketplace.

      • Anonymous Coward
      • 3 years ago

      You shouldn’t expect more than “a nice CPU” from Zen. That said, Zen will doubtless be an awesome bit of engineering… good enough for 2nd place on the 2-player desktop CPU market.

    • Tristan
    • 3 years ago

    Yet another disappointment from AMD.
    This review is heavily biased towards AMD, for obvious reasons. You picked only games where the 480 is faster than the 970, while on other sites it is significantly slower in many games. The 970 is also cheaper.

      • Jeff Kampman
      • 3 years ago

      ok

        • Jigar
        • 3 years ago

        Best response ever 😀

        • tipoo
        • 3 years ago

        Imagining that in this voice

        [url<]https://i.makeagif.com/media/11-19-2015/re81Us.gif[/url<]

        • Redocbew
        • 3 years ago

        So much awesome in only two letters. UNPOSSIBLE!

        • chuckula
        • 3 years ago

        Whoa… a -112 followed by a +112.

        The freakin’ universe is in balance man!

          • KeillRandor
          • 3 years ago

          Please forgive me – I accidentally upvoted Tristan, which automatically meant having to downvote Kampman to keep the balance…

          Sorry 🙁

        • tipoo
        • 3 years ago

        Two letters and has been ruling the TR top post spot for a while

          • NeelyCam
          • 3 years ago

          ‘k’ would probably have worked too

            • Ninjitsu
            • 3 years ago

            Would have been even funnier!

        • Toby
        • 3 years ago

        It’s troubling in a number of ways that this is the #1 comment on the site by far.

        • BurntMyBacon
        • 3 years ago

        I think this post needs to be stickied as the top post of all time. Never before have so many thumbs been brought to bear for so few letters. (142 and counting)

      • chuckula
      • 3 years ago

      Yeah, well that’s like your opinion man.

      More seriously: In absolute terms the Rx 480 is neither amazing nor disastrous. However, as somebody who is regularly accused of having it in for AMD, if some of the people spreading hype about these cards had actually listened to what AMD said [and what I reiterated] then there wouldn’t be much in the way of disappointment since expectations would have been realistic.

      AMD’s marketing wasn’t the worst, with one unfortunate exception: the Computex demo against the GTX-1080, which is not going to look good going forward. If they had instead shown off some actual VR applications with the Rx 480, it might have been better.

        • Tristan
        • 3 years ago

        The hype was in line with expectations. The 970 is two years old and still better in performance, power consumption, OC, temperature, and noise levels. Despite this, the 480 looks like an unpolished design, with low speed, high temperature, and small OC headroom. They do not have the cash for R&D, and that is the result.

          • NarwhaleAu
          • 3 years ago

          It’s faster than a 970 (a lot faster on some DirectX 12 titles)… for $100 less. It’s about half the speed of a 1080 for one third the price! It isn’t a flagship card – it’s a 960 competitor that is much, much faster.

            • chuckula
            • 3 years ago

            It’s a 1060 competitor!

            • NarwhaleAu
            • 3 years ago

            I suspect they will have similar performance and price points.

      • Forge
      • 3 years ago

      No, no, TR has a serious pro-Nvidia bias. At least that’s what I read in all the 1080 review comments.

      I seriously can’t keep it all straight anymore.

        • CScottG
        • 3 years ago

        No, the bias is toward the particular card they are reviewing at the time they are reviewing it. It’s so that they can keep getting timely review samples (..even if they sometimes have trouble getting the review published in a timely manner).

        Basically you get accuracy within a narrow “rose-tinted” constraint.

        Ex. Nvidia – value metric at the conclusion without current prices.

        Ex. AMD – value metric at the conclusion without other current-gen products, showing “average” prices of older products and constraining the graph to exclude current-gen.

        Yeah, it’s hinky – but it’s expected, and it’s also what a lot of other sites are doing. I’m not sure it’s what I would expect of TR though, at least not TR as it was about two years ago (..but my memory could be “rose-tinted” as well).

        ..sigh.

          • derFunkenstein
          • 3 years ago

          Everything is awesome!

          • derFunkenstein
          • 3 years ago

          You kind of have to go back to 2012 when the 28nm cards were brand new. In December 2011, the Radeon 7900 series was OMG AWESOME performance at its price. The Radeon 7800 series followed up and OMG AWESOME. Then Kepler came and it was more of the same. Even TR’s reviews reflected that, because it was the state of being at the time.

          And then for four years, we were deprived of process shrinks. It’s not a surprise that now, four years later, they’re able to say “OMG AWESOME” again, because Nvidia has redefined performance at the $400+ level and AMD has redefined performance at the $200-250 level.

            • nanoflower
            • 3 years ago

            Yep. Raja Koduri even admitted that was the case: AMD decided not to keep investing in their hardware and more or less coasted. His joining the company was a start (along with Keller) at turning the company around, but it’s going to take time to get things going at full steam. Still, this is a good start even if it doesn’t meet the dreams of the enthusiasts.

        • cphite
        • 3 years ago

        I’ve noticed that cards with better test results tend to receive higher praise… clearly this indicates a bias towards higher performance. My worry is that the people who manufacture the lower performance cards are going to feel bad about themselves.

        TR should strive to avoid picking winners and losers in these reviews. Give everyone the same score. What matters is that they’re trying!

      • Mat3
      • 3 years ago

      “This review is heavily biased towards AMD, for obvious reasons.”

      Utterly clueless you are. Why are Hitman and Tomb Raider running in DX11 mode? Where are Ashes of the Singularity and Total War: Warhammer?

      This review, if anything, is insidiously designed to show the 480 in the worst light.

      • steelcity_ballin
      • 3 years ago

      I feel like you’re so far up green’s butt that you’re incapable of seeing the irony of your so-called ‘biased’ claims. I mean, what you say here is not only factually false, but just bizarre to the point of me thinking you’re some sort of shill or astroturf account for Nvidia.

      The review clearly states that sometimes the card was faster than a 970, and sometimes it wasn’t. Then you talk about price, which is just dead wrong. Find me a single place where the 970 is cheaper than the RX 480. This is difficult to do because the RX 480 isn’t for sale yet. And on that note, the RX 480 has an MSRP of $200. The cheapest 970 I found at major e-tailers is around $250 give or take, and that’s without taxes included.

      I really don’t understand fanboys. There are only two choices in the GPU market anymore: AMD or Nvidia. You pick your price point and decide, for your usage, which card will perform better at that price point. Why would you do it any other way?

      • Theolendras
      • 3 years ago

      970 cheaper? I never saw a rebate that put it anywhere close to the RX 480’s $200-240 price.

      • southrncomfortjm
      • 3 years ago

      Yep, because it is utterly inconceivable that Tech Report just, you know, tested the card and reported the results… No way that’s what happened.

      • PrincipalSkinner
      • 3 years ago

      The nerd in me loves how the sum of Tristan’s downvotes and Jeff’s upvotes has equaled zero several times now.
      Edit:
      Don’t disrupt the balance of nature.

        • Krogoth
        • 3 years ago

        [Superintendent Chalmers]

        SKINNER!

        [/Superintendent Chalmers]

      • ultima_trev
      • 3 years ago

      While I would have much preferred to see RX 480 launched with the non-reference coolers / board designs, you have to realize that GTA5, Witcher 3 and Rise of the Tomb Raider are Gameworks titles that would typically favor nVidia. The fact that RX 480 matched GTX 970 in said titles is pretty miraculous.

        • sweatshopking
        • 3 years ago

        At 100000x the power consumption on 14nm vs 28nm? I have a 290, and I’m looking at getting another, but let’s not pretend this chip is miraculous.

      • Mr Bill
      • 3 years ago

      ‘bump’ “for great justice”

    • ronch
    • 3 years ago

    That RX 460 without the power connector really reminds me of one of my most cherished graphics cards ever: my HD 4670. No power connector, cheap, fully capable of playing all my games very well at that time. It had 320 stream processors at something like ~750MHz.

    That 46[s<]7[/s<]0 really smokes it.

      • USAFTW
      • 3 years ago

      That was actually my first card after the dreaded FX 5200 that could actually handle games. And my tolerance for sub-prime framerates was higher then so I ran everything I played at 1080p but no AA.

    • f0d
    • 3 years ago

    Better get in fast – 2 of the 4 models at Newegg are out of stock (as of writing this post, anyway):
    [url<]http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709%20601203818[/url<]

    Unless I’m doing Newegg wrong, or it’s not listing the rest of them for me from Australia.

      • derFunkenstein
      • 3 years ago

      Those all have this crappy blower cooler anyway, so being out of stock is doing everyone a favor.

        • Concupiscence
        • 3 years ago

        Asus has already announced a Strix variant; that’ll be the one to wait for.

        *edit: Well, for me, anyway.

          • derFunkenstein
          • 3 years ago

          It’s unfortunate that the extra $$ for the cooler will push this thing back up into GTX 970 territory. Same performance, same power usage, same price…not really a win for AMD there.

            • looncraz
            • 3 years ago

            “Same performance, same power usage, same price…”

            But better features, more RAM, higher DX12 performance, and more likely to improve as time flows.

            • derFunkenstein
            • 3 years ago

            There’s some speculation here. Ashes of the Singularity != all of DX12 performance always everywhere. The rest of it is…maybe. We don’t know.

            • Pettytheft
            • 3 years ago

            Look at Hitman and Total War: Warhammer benches as well. Tomb Raider decreases performance for both Nvidia and AMD.

            • Freon
            • 3 years ago

            If you look at Hitman in DX11 and DX12 it seems the story there is more of a “Hitman is a Gaming Evolved title” rather than DX12 being a magic bullet.

            • Pettytheft
            • 3 years ago

            [url<]http://www.guru3d.com/articles_pages/amd_radeon_r9_rx_480_8gb_review,11.html[/url<]

            Looks like it’s doing just fine.

            • chuckula
            • 3 years ago

            Even AMD’s cards show some rather suspicious behavior in those benchmarks.
            It’s a little concerning when the R9 390X is separated from the R9 Fury X by a grand total of 1 FPS at 2560×1440 resolutions.

    • Froz
    • 3 years ago

    Thanks for the quick review. It was a very nice read, as always.

    What exactly is wrong with the GTX 960? Especially in Crysis (page 7): most graphs don’t even include that card, but judging by the frame-time graph, it fares terribly. How many FPS did it get, 20? Is that just something wrong with your particular unit, or poor support from Nvidia for older low-to-mid-end cards? Did you also leave it out of the scatter plots for the same reason?

    • ronch
    • 3 years ago

    I wasn’t expecting the 480 to blow me away but I suppose it’s a good proposition given the price. 970-level performance for $230 should be OK. Not gonna call it a hit until I see what Nvidia plans to pit against it.

    • DPete27
    • 3 years ago

    Largely a letdown IMO. Typical AMD marketing: hype it up to be the greatest thing since sliced bread, and the actual product is just average at best. DANGIT, I really needed AMD to produce a winner so nGreedia would stop their GSync child-play.

    Performance & power consumption = GTX 970 (a 28nm part)
    Price = same as [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814487088&cm_re=GTX_970-_-14-487-088-_-Product<]GTX[/url<] [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814121899&cm_re=GTX_970-_-14-121-899-_-Product<]970[/url<]…meh
    Reference cooler = garbage (just like we all predicted)

    Oh, congrats AMD! You smoothed out your frame delivery!… (thanks Scott)

    No mention of any VR tech like nVidia’s SMP. I think this is a HUGE fail. Granted, devs need to actually implement SMP in their game code, but unless I understood it wrong, that’s a very promising/revolutionary tech. What I’m not sure about (I could go back and read it, but I’ll ask instead) is whether nVidia needed any specialized hardware in the GPU to support SMP?

    (edit: links)

      • steelcity_ballin
      • 3 years ago

      Where are you finding the GTX 970 for $230? The cheapest I see at major vendors is about $270 before taxes. Whether this actually hits shelves anytime soon is the real question for me. So many paper launches, or launches in such limited quantity that market demand jacks the price skyward.

      It looks like it’s a roughly equivalent performer, and granted it’s a new card on par with something that’s years old already, but the price I think is going to be a factor. If they can bundle it with a decent game or other value-add, the 970 won’t be a talking point as much, plus I suspect the 970 isn’t actively being produced anymore, so I’d imagine supply will dry up. Total speculation on my part.

        • slowriot
        • 3 years ago

        The issue is that all the RX 480s out there are ~$240, not $199, right now.

          • slowriot
          • 3 years ago

          Someone down voted me? If I’m wrong feel free, but I’d also appreciate a link to RX 480s for $199 too. They’re all $239 and up on Newegg right now.

          • ImSpartacus
          • 3 years ago

          I bet AMD is only shipping 8GB versions at launch.

          Looks like a similar strategy to Nvidia’s Founders Edition, but slightly more reasonable.

            • slowriot
            • 3 years ago

            A $199 4GB model just popped up on Newegg… out of stock. For AMD’s sake I think it’s crucial they get plenty of the $199 cards out there. People are comparing these right now to 4GB 970s, after all.

        • DPete27
        • 3 years ago

        The 8GB card TR tested is $240, as stated in the article. The second link I provided, to the Asus GTX 970, is on sale for $240 at Newegg, and that card’s clock speeds are on the high end.

      • TheBulletMagnet
      • 3 years ago

      Man, I’ve been reading the same argument from you guys for years: “I need AMD to do better so Nvidia, Intel, blah blah blah will lower their prices.” Why? You’re not going to buy AMD anyway. Pfftt.

        • slowriot
        • 3 years ago

        Huh? If AMD is competitive then we get a price war. It benefits you regardless of which card you buy. Healthy competition is a GOOD THING, especially for us buyers. When AMD falters it concerns me because a world with only one legitimate GPU vendor isn’t good. And I say that as someone whose last 2 cards and likely next card all have Nvidia GPUs.

          • TheBulletMagnet
          • 3 years ago

          AMD is not going to GET competitive if no one buys their products. And if you don’t want to buy AMD because it doesn’t match your performance/dollar you set for yourself that’s cool. But don’t piss and moan about the state of AMD when you never planned on supporting them anyways.

        • DPete27
        • 3 years ago

        The last two GPUs I’ve owned were a 6850 and a GTX660. I’d consider myself biased toward buying AMD this round because of my disdain for GSync proprietary BS.

        Don’t lecture me about being a fanboy. I buy from whichever company is offering the best product in my price range at time of purchase.

      • Krogoth
      • 3 years ago

      Protip: the 480 is meant to go after the 960, not the 970.

      The 970 goes for ~$299 at this time, but I suspect that Nvidia is going to cut 970 price to meet the 480’s price tag. Either way, it is a win-win for the $199-$249 demographic.

        • OneShotOneKill
        • 3 years ago

        When were the 960 and 970 released?

          • Krogoth
          • 3 years ago

          The 960 came out 18 months ago, and it was a massive disappointment for its market: 680/7970-like performance for half the price, three years later.

          The 970 came out almost two years ago, but it went for ~$349-399 at launch and stayed roughly at that price point until the beginning of this year, with the advent of the Pascal stuff.

        • DPete27
        • 3 years ago

        Check my links, that Asus GTX 970 is $240 after MIR.

          • RAGEPRO
          • 3 years ago

          Did you read Jeff’s GTX 1080 review? I don’t think I’d buy a Maxwell card at this stage. Certainly not for the same price as an RX 480. 🙂

            • DPete27
            • 3 years ago

            I think that was more geared toward nVidia cards, not a cross-platform statement. The GTX 1060 will likely perform between the 970 and 980 and while it may cost the same as those older cards, it will likely use less power and will have the aforementioned Pascal tech like SMP that the Maxwell cards don’t. That’s what Jeff was getting at.

            The GTX 970 and RX 480 are currently directly equal on price/power/performance/features. Yes, the RX 480 will only get better with driver updates, but nVidia will likely support the GTX 970 through the Pascal generation also. The saving grace of the RX 480 is its 8GB of VRAM compared to the GTX 970’s 3.5GB. From that standpoint, all else being equal, yeah, I’d probably buy the RX 480.

            I just think I’m a little irked because the 480 was hyped to disrupt the price/performance curve and it didn’t at all. Not surprising from an initial launch standpoint, heck, nVidia did the same thing with the 1070 and 1080, but still.

          • Krogoth
          • 3 years ago

          Which is a direct response to the 480’s launch, a.k.a. competition.

            • DPete27
            • 3 years ago

            True. The GTX 970 was hovering around $270 after MIR for quite a while, but has dropped in price recently in anticipation of the RX 480. But that’s a primo cooler on the GTX 970 vs. $240 for this reference blower PoS, so custom-cooled RX 480s will likely cost more… I’ll shut up.

    • Anovoca
    • 3 years ago

    With all of AMD’s crossfire performance claims, I am kind of curious to see what the power draw and noise level would be with two of these reference cards.

      • EzioAs
      • 3 years ago

      TechPowerUp has got that covered [url<]http://www.techpowerup.com/reviews/AMD/RX_480_CrossFire/[/url<]

        • Anovoca
        • 3 years ago

        Thanks, cool to see it performs as expected but it is a shame they didn’t do noise testing.

          • Airmantharp
          • 3 years ago

          Trades blows with 970 SLI (what I’m running)- that’s not a bad showing, really, though I’d much prefer a frame-time analysis over just raw FPS, *especially* for multi-GPU analysis.

      • blahsaysblah
      • 3 years ago

      Why?

      The RX 480 has a TDP of 150W; sites measure mid-160s watts in games.

      The GTX 1080 has a TDP of 180W; sites measure mid-180s watts in games.

      The GTX 1080 idles at 7W, the RX 480 at 16W.

        • blahsaysblah
        • 3 years ago

        [url=http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html<]Tom’s Hardware[/url<]:

        “We’re also left to wonder what we’d see from a CrossFire configuration. Two graphics cards would draw 160W via the motherboard’s 24-pin connector; that’s a tall order. Switching from the bars back to a more detailed curve makes this even more evident.”

        Read the previous two paragraphs. It’s much worse.

        Edit (I love me some downvotes): “We skipped long-term overclocking and overvolting tests, since the Radeon RX 480’s power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. We just didn’t want to do that to our test platform.”

    • PrincipalSkinner
    • 3 years ago

    After reading all the leaks and promises, this card is a big MEH. Not bad, but far from expectations. Claims of 2.5x performance per watt improvements are a complete load of parrot droppings. And they still haven’t fixed multi monitor power consumption.
    Certainly not worth upgrading from my 380X.

    • ptsant
    • 3 years ago

    Thanks for the very timely review. I was pleasantly surprised to find it available immediately after NDA lift.

    So, the short summary is that this is a midrange $200-$250 card that significantly improves the performance available in its price range. People who have a Tonga (285) or a 7870 or a 270 are going to be very happy.

    Personally, I’m sitting in the huge price gap between the RX 480 and the nVidia 1070. Ideally, I would have preferred the performance of a 980 and a better cooler at $300.

    I guess we are going to have to wait for the 1070 to drop in price or AMD to launch something slightly faster (or drop Fury prices–which is not going to happen).

      • Mr Bill
      • 3 years ago

      Yep, this is perfect as an upgrade for my XFX 7870 Black Edition, and at about the same price (~$240). I will wait for one of the vendor-improved cooling solutions.

    • Aranarth
    • 3 years ago

    This is about what I expected to see.

    A decent $200 and $250 card.
    Give it 3-6 months for the sales to start and the $250 model will drop to the $200 price point.

    In the meantime there will be tons of these cards sold. I expect a nice revenue uptick for AMD for their next shareholder meeting.

    It does need a better cooler but Gigabyte, Asus, XFX etc. all have that covered so no big deal.

    Now let’s see what AMD has in mind for their top-of-the-line model…

      • cpucrust
      • 3 years ago

      Good points.

      If I didn’t already have a R9 390, which I picked up last year, I’d get the RX 480 for my main system.

      My R9 390 is an ASUS STRIX and they did a great job of cooling and keeping the noise levels low at stock and mild overclocking settings.

      I may jump on getting the RX 480 for another build, but not until I see what the AIB vendors come up with.

      Maybe TR received a bad sample RX480 with a noisy fan?

      On another note, I wanted VRR, but did not like Nvidia’s proprietary VRR and the incremental card+monitor cost at the time. So I paired my R9 390 with an MG279Q IPS FreeSync monitor.

      Hopefully, FreeSync monitors will drop to near non-VRR monitor prices at some point this year.

      I also wonder when Intel will have chipsets in products that support FreeSync.

        • nanoflower
        • 3 years ago

        Yes, I would definitely be picking up a custom RX 480 if I didn’t pick up a R9 390 a few weeks ago at a good price. The RX 480 may not be all that everyone hoped it would be but it’s a good performer aimed at the mainstream with a decent price. All that’s needed now is for the AIB designed cards to be released to provide more choice and remove any issues over power as I know the Sapphire Nitro+ RX 480 will come with an 8 pin power connection (oh and DVI for those that want that in addition to HDMI and DP.)

    • flip-mode
    • 3 years ago

    It’s been a long time since the $200 price point has seen this kind of action. I’m very glad to see it.

    • Unknown-Error
    • 3 years ago

    First of all, thank you [b<]Jeff[/b<] and [b<]Robert[/b<] for the review. Good work as usual. Now for the rant!

    Kyle @ HardOCP vindicated. What BS is this, Polaris? Where are the so-called performance-per-watt gains? Where is the power efficiency? GPU temps are awful. What is most embarrassing in this review is that the competition is all 28nm! WTF AMD!

      • chuckula
      • 3 years ago

      [quote<]Where is the so called performance per watt gains? Where is the power efficiency?[/quote<]

      Ah, but you didn’t properly look at the entrails of the chicken to divine the true meaning of Raj’s words. He was comparing Polaris to some of the worst offenders in the prior GCN lineup, and he was expressly [i<]not[/i<] comparing its power efficiency to the somewhat better Fury series of parts.

        • blahsaysblah
        • 3 years ago

        Tom’s Hardware has a really good power measurement setup, and multiple sites agree with its numbers.

        The GTX 1080 is pretty spot-on at 186W vs. its 180W TDP.
        The RX 480 is 165W versus its 150W TDP.

        Also, it uses 16W at idle versus the 1080’s 7W.

          • chuckula
          • 3 years ago

          Some people claim those numbers aren’t possible because the PCIe slot + one 6-pin cable are nominally rated for 150 watts.

          However, those power ratings are minimums that must be met to be compliant with the PCIe standard. They aren’t absolute maximum values and a halfway decent PSU can exceed them, especially over the separate 6-pin cable that doesn’t have to run through the motherboard PCB.

          TL;DR: I can buy it.

            • blahsaysblah
            • 3 years ago

            [url=http://www.tomshardware.com/reviews/amd-radeon-rx-480-polaris-10,4616-9.html<]Tom’s Hardware[/url<], scary:

            “AMD’s Radeon RX 480 draws an average of 164W, which exceeds the company’s target TDP. And it gets worse. The load distribution works out in a way that has the card draw 86W through the motherboard’s PCIe slot. Not only does this exceed the 75W ceiling we typically associate with a 16-lane slot, but that 75W limit covers several rails combined and not just this one interface. …

            With peaks of up to 155W, we have to be thankful they’re brief, and not putting the motherboard in any immediate danger. However, the audio subsystems on cheaper platforms will have a hard time dealing with them. This means that the ‘you can hear what you see’ effect will be in full force during load changes; activities like scrolling may very well result in audible artifacts.

            We’re also left to wonder what we’d see from a CrossFire configuration. Two graphics cards would draw 160W via the motherboard’s 24-pin connector; that’s a tall order. Switching from the bars back to a more detailed curve makes this even more evident.”

            Edit: “We skipped long-term overclocking and overvolting tests, since the Radeon RX 480’s power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. We just didn’t want to do that to our test platform.”

            • dragosmp
            • 3 years ago

            It is scary if this is what it does. However, keep in mind that although the stock PCB design may be flawed, nothing stops the partners from rerouting the power draw to less dangerous places (like an 8-pin PCIe connector).

            I agree with Chris’ conclusion – it looks like they OCed this card at the last moment and accepted the higher-than-normal power draw.

            • blahsaysblah
            • 3 years ago

            They are knowingly voiding your MB’s warranty then?

            Edit, clarification: they are not PCI-E certified and cannot use that term… 150W is the PCI-E limit for a one-6-pin card: 75W bus power (3.3V × 3A + 12V × 5.5A) and 75W (12V × 6.25A) for the 6-pin. /edit

            I would feel very bad recommending that a friend put the reference card in their aging system during the coming summer heat waves.

            The rumor is there are 20x the number of RX 480 reference cards out in stores right now than 1080s. What happens to them?

            I’m willing to keep an open mind and see how this pans out. As it stands, I would not put a reference card in my system.
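
            For anyone following along, here is a minimal sketch of the budget those rail numbers imply, compared against the ~86W average slot draw quoted from Tom’s Hardware above (their measurement, not a spec figure):

                # PCIe power budget for a single-6-pin card, per the rail numbers above
                slot_3v3 = 3.3 * 3.0    # 9.9 W allowed on the slot's 3.3 V rail
                slot_12v = 12.0 * 5.5   # 66 W allowed on the slot's 12 V rail
                six_pin  = 12.0 * 6.25  # 75 W allowed over the 6-pin connector

                slot_budget = slot_3v3 + slot_12v    # ~75.9 W total through the slot
                card_budget = slot_budget + six_pin  # ~150.9 W for the whole card

                measured_slot = 86.0  # W, Tom's Hardware average slot draw for the RX 480
                print(f"slot: {measured_slot:.0f} W drawn vs. {slot_budget:.1f} W budget")
                print(f"whole-card budget: {card_budget:.1f} W")

            In other words, the slot draw alone overshoots its ~76W share by about 10W on average, which is why the 100W-average overclocked figure quoted above looks so alarming.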

      • NeelyCam
      • 3 years ago

      [quote<]What BS is this Polaris? Where is the so called performance per watt gains? Where is the power efficiency? GPU temps are awful. What is most embarrassing in this review is that the competition is all 28nm! WTF AMD![/quote<]

      I guess now we know what the “new devil” talk was all about, with all those flames in the ads.

      • Krogoth
      • 3 years ago

      It is more efficient than the Tonga and Hawaii chips at load. I would say that’s an improvement.

      The temperatures aren’t really that much of a shock. GPUs have been running into a massive thermal wall since moving past 40nm. GP104 and GP100 chips also run very toasty at load with their reference HSFs.

        • madseven7
        • 3 years ago

        It should be more efficient than Tonga and Hawaii at load – it’s at 14nm. You would think so, wouldn’t you?

      • EndlessWaves
      • 3 years ago

      Uh, it’s [b<]half[/b<] the power consumption of the previous Radeon at the same performance level, the 390X.

      • nanoflower
      • 3 years ago

      Their claim of up to 2.8x performance-per-watt improvement was for the RX 470. The 480 is more of a performance card, so the improvement isn’t nearly as good at its current clocks. Undervolt it and you can get the savings at the cost of some performance.

    • unclesharkey
    • 3 years ago

    Waiting for a version with a better cooling solution, and I am sure it will get slight benchmark improvements from better drivers. Good replacement for my 7850 that has held me over since 2013. I think it’s a winner!

    • chuckula
    • 3 years ago

    I know TR doesn’t cover the Linux side of things heavily, so here are some Linux benchmarks of the Rx 480: [url<]http://www.phoronix.com/scan.php?page=article&item=amdgpu-rx480-linux&num=1[/url<]

      • Concupiscence
      • 3 years ago

      Thanks for those. It looks like a pretty decent solution there, even right out of the gate.

      • AnotherReader
      • 3 years ago

      It seems AMD has been making more of an effort on the Linux side. Kudos.

    • ultima_trev
    • 3 years ago

    Bad News: Not the 980/Nano/Fury killer it was hyped to be. Less than optimal reference cooler. Only 32 ROPs.

    Good News: Immature drivers, therefore it should be possible for AMD to extract a few more horses from its engine in future driver revisions.

      • flip-mode
      • 3 years ago

      At $200/$250… hell yeah it lives up to the hype. The reference cooling is not a concern either – everyone and their mom will have a custom cooler. Cards with reference coolers are often difficult to find.

    • Anovoca
    • 3 years ago

    Just to clarify, was that the 2GB or 4GB GTX 960 in your tests?

      • Jeff Kampman
      • 3 years ago

      4GB, sorry.

        • Anovoca
        • 3 years ago

        Thanks.

    • f0d
    • 3 years ago

    Not really impressed.

    It’s around as fast as my 290 and costs about as much as a 290X did last year:
    [url<]https://techreport.com/news/27755/deal-of-the-week-a-radeon-r9-290x-for-233[/url<]

    So: similar performance for a similar price to what they offered before with the 290X.

      • mesyn191
      • 3 years ago

      The 290X was their top-end card, though – it ate more watts and was being fire-saled at that price.

      The RX 480 is their new midrange card; it MSRPs for $239, will eventually go on sale for less, performs a bit better, and uses a lot less power.

      If you want a new high-end card to maybe be impressed by, you’re supposed to wait for Vega.

    • madseven7
    • 3 years ago

    Very disappointed. In Canada the 970 is cheaper than this, and the 480 is not faster.

      • tipoo
      • 3 years ago

      Which Canada?

      [url<]http://www.newegg.ca/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=gtx+970&N=-1&isNodeId=1[/url<]

        • madseven7
        • 3 years ago

        Look at Canada Computers in Toronto.

        [url<]http://www.canadacomputers.com/search_result.php?brand=0&price=1&location=0&checkVal0=1&checkVal1=1&checkVal2=0&subcat21=3&subcat235=40&checkVal3=1&checkVal4=1&pagePos=0&keywords=&manu=0&search=1&ccid=1200&cPath=43_1200[/url<]

          • sweatshopking
          • 3 years ago

          [url<]http://www.memoryexpress.com/Search/Products?Search=radeon+480[/url<]

            • madseven7
            • 3 years ago

            I’m sure you can find better deals for a 970 right now compared to a 480.

            • rechicero
            • 3 years ago

            That would be a mistake. Even at similar prices, or a tad cheaper, a 480 will probably be faster in the long run (that’s the thing with AMD – they need time for the drivers), it will have 2x the memory (unless you find a $200 970), and it won’t tie you to G-Sync (and with mid-level cards, I’d say, is where G-Sync/FreeSync is most useful). IMO, there is no reason to buy a 970 instead of a similarly priced 480 8GB.

            Unless you are already tied to G-Sync, of course.

            • Prestige Worldwide
            • 3 years ago

            Good ol’ ME. Too bad their Uber Price Beat Guarantee is less susceptible to abuse these days 😉

          • tipoo
          • 3 years ago

          That’s great, but Canada isn’t Toronto. By and large the GTX 970 is nearly double the 480’s average price here – and that’s with the 480 offering 8GB of RAM, updated HDMI and DP, and FreeSync. Of course there will be fringe deals on the card that’s been out longer.

          • Prestige Worldwide
          • 3 years ago

          Canadacomputers is a terrible shop that overprices the f*** out of everything

      • NimaV
      • 3 years ago

      And the 480 can’t overclock nearly as much as the 970.

      • slowriot
      • 3 years ago

      This has +3 right now and is dead wrong.

      • Prestige Worldwide
      • 3 years ago

      Wrong, they’re starting at the same price, and you will pay more for a good GTX 970 model with better cooling.

      RX 480 @ NCIX = $329-380 CAD
      [url<]http://www.ncix.com/search/?qcatid=0&q=amd+rx+480[/url<]

      GTX 970 = starting at $329
      [url<]http://ca.pcpartpicker.com/products/video-card/#c=186&sort=a8&page=1[/url<]

      • Vaughn
      • 3 years ago

      I don’t see any 970s in Canada under $400-469 retail; MIRs don’t count.

      The 480 is priced at $359-369 currently.

      • Sammael
      • 3 years ago

      In fairness, it’s about even with the 970 in DX11 and vastly ahead in the more modern, properly coded DX12 titles, on top of having DP 1.4. Even if they were the same price, there would be no reason to pick a 970 over a 4GB 480; anyone recommending such a thing to a kid is guilty of child abuse.

    • Demetri
    • 3 years ago

    Good card for the price, but those power efficiency figures are extremely disappointing. 1060 will crush it in that area. Eagerly anticipating more info on the 470; I think that’s probably the card for me.

      • DreadCthulhu
      • 3 years ago

      I am also really interested in the RX 470; good to know that it has 32 compute units, which makes it 8/9ths of an RX 480. Even if the clocks are a bit lower, it should have 80-85% of the RX 480’s performance. Going by the end graph, that should be comfortably ahead of the 960/380X and not too far behind the 970.

      • blahsaysblah
      • 3 years ago

      You don’t think the 1050 will crush the 470? Might not even require a power pin to do it? Not gonna wait and see before you decide?

        • Demetri
        • 3 years ago

        I’m already in the AMD camp because of the adaptive sync situation. The way I look at it, if you want adaptive sync you have to add $75-100 to the GeForce cards, which throws off the value equation for all but the most expensive ones.

          • blahsaysblah
          • 3 years ago

          Very good point! Especially for the RX 480’s market.

          Please read Tom’s Hardware’s review, especially the power page; other sites’ numbers match it. Don’t risk your PC with a reference RX 480 – wait for a custom card with 8-pin power.

          • Theolendras
          • 3 years ago

          The vendor lock-in of the G-Sync situation is preventing me from switching to the green team’s side. Once GeForce supports FreeSync, I’ll look back to them. Right now VRR improves responsiveness beyond whatever competitive advantage Nvidia has strictly on the GPU side, and the value is competitive, albeit at higher power consumption.

            • Leader952
            • 3 years ago

            [quote<]The vendor locking of the G-sync situation is preventing me to switch to the green team side. [/quote<] Yet you are now vendor locked to AMD because of freesync. How is that better?

            • Theolendras
            • 3 years ago

            Intel and VESA have vowed to support it, effectively making this a standard; it’s a matter of time before NVIDIA bends to it, otherwise they will lose out on pretty much every monitor on the market. Monitors will slowly migrate to this standard over the next few years.

            There “may” even be a way to support FreeSync through an Intel iGPU even with a dGPU at some point (although that’s not a certainty). The only way I see this reasoning failing is if AMD fails to launch competitive designs value-wise, which is a risk I am more willing to take than to encourage a proprietary standard and obviously doomed vendor lock-in going forward.

            • rechicero
            • 3 years ago

            It’s a non-proprietary standard and you don’t pay extra for the monitor. If you are going G-Sync or FreeSync, that means the equivalent card from nVidia is actually $75-100 more expensive because of the monitor. When talking about a $240 card, that’s a BIG difference (almost 40% more expensive in perf per dollar).

            Right now, if you want the best, no matter what: 1080 + G-Sync monitor.

            The best bang per buck: 480 + FreeSync monitor. And nVidia will, at the very best, match the perf/dollar of the card, without the extra cost of the monitor.
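
            The “almost 40%” figure is easy to sanity-check. A minimal sketch, using the thread’s round numbers (a ~$240 card and an assumed ~$100 G-Sync monitor premium, with frame rates taken as roughly equal):

                # Effective cost premium once the monitor is part of the equation
                card_price    = 240  # assumed street price of either card
                gsync_premium = 100  # assumed extra cost of a G-Sync monitor vs. FreeSync

                premium = (card_price + gsync_premium) / card_price - 1
                print(f"effective perf-per-dollar penalty: {premium:.0%}")  # ~42%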

            • Theolendras
            • 3 years ago

            If you have to have 4K HDR-capable graphics, that’s probably the best way to go (GTX 1080 + G-Sync monitor). I prefer staying on a $200-300 GPU and refreshing it every two years (especially with the expected democratization of HBM by that time); it’s less risky and lets me keep a semi-decent secondary PC for family co-op play. Analyzed that way, value is king to me.

      • paulWTAMU
      • 3 years ago

      Are there any price estimates on the 1060? If it’s 200 bucks with similar performance but quieter, it may be an upgrade for me and some of my friends.

        • slowriot
        • 3 years ago

        Expect closer to $300 and closer to GTX 980 performance. It will definitely be much more efficient than the RX 480 though and should therefore be quieter.

        You should also keep in mind that RX 480s with third-party coolers will start showing up in mid July. Those should at minimum be significantly quieter/cooler running cards.

        We’ll have clear answers though in a couple weeks.

    • tipoo
    • 3 years ago

    NDA lift hour 1 TR review, much appreciated!

    Looks decent for the price. People were getting their hopes up way too much for a 200-dollar card, but it does nicely compete with 300-dollar ones at least. It sometimes touches the 970 and sometimes trails it by 15 FPS, on immature drivers; but some games are just biased toward one vendor’s hardware, so I don’t think it’ll fully close the gap.

    But again, everything with the caveat, “for 200 dollars”.

    The stock heatsink looks pretty junky; there’s probably more overclocking potential with third-party coolers.

    [url<]http://i.imgur.com/W6iznQg.jpg[/url<] [url<]https://i.imgsafe.org/ace6945383.jpg[/url<]

    • Jigar
    • 3 years ago

    Great job again guys – excellent review. Thanks.

    • blahsaysblah
    • 3 years ago

    Which GB 970 card? What is the boost clock? There’s a HUUGE range of choices on Newegg.

    The RX 480 is on day-one drivers versus the 970’s very mature drivers, so it can only get better.

    Is OC on the reference edition not worth it?

      • pranav0091
      • 3 years ago

      You have to read the review before asking those questions. Maybe if you had, you’d have known.

      I’m assuming the table I see wasn’t invisible when you commented.

        • blahsaysblah
        • 3 years ago

        There is no such card as “Gigabyte GeForce GTX 970”

    • RdVi
    • 3 years ago

    Looks decent [i<]for the price[/i<]. I really wish it performed more like Pitcairn did at the start of last gen – faster than anything from the past gen and a decent amount cheaper. For anyone close to an enthusiast this is disappointing, as most will already have something close to this powerful. Even if it is cheap and efficient, it is hard to justify a side-grade. Where are the 7950 and 670 of this gen? All we have so far is cheap with no improvement, and expensive with a somewhat decent improvement.

      • EzioAs
      • 3 years ago

      The HD 7870 wasn’t really priced at release for the typical mid-range segment. I believe it was above $300 and performed slightly slower than the GTX 580.

      Though you did say [i<]last gen[/i<] so this may be somewhat moot.

        • RdVi
        • 3 years ago

        It was expensive, $350 actually, but it outperformed the GTX 580 on average, which was damn impressive. I think the 40nm-generation architectures were just so much less efficient that the move to 28nm was actually a big one. Everyone was underwhelmed by Tahiti, but I think that was more of a price/driver issue than a hardware one.

        The problem here, in comparison, is that back then an x800-series card beat out the old x900 series and then some. Now the x800 is undoubtedly failing to do that on performance. The price is good, but there have been fire-sale Hawaii-based cards out there at similar prices for two years now. The only bonuses here are power/heat and, with a decent cooler, noise.

          • EzioAs
          • 3 years ago

          Just for the sake of accuracy, the original Radeon HD 7870 was [i<]slightly slower[/i<] than the GTX 580 at launch. There are tons of reviews out there, but TPU has a more concise chart:

          [url<]https://www.techpowerup.com/reviews/AMD/HD_7850_HD_7870/26.html[/url<]

          I knew my memory wasn’t failing me.

    • Concupiscence
    • 3 years ago

    At $200 it’s a heck of a solution, and even the baseline model comes with 512MB more usable memory than my GTX 970. It wouldn’t be hard to persuade me to replace my venerable 3-gig 7970 with an 8-gig RX 480, but somebody’s [b<]got[/b<] to do something about that reference blower.

      • jokinin
      • 3 years ago

      Indeed, this will be my next GPU upgrade from a four-year-old Radeon HD 7870, but only when custom-cooled 8GB versions become available.

        • Stochastic
        • 3 years ago

        I’m holding onto a mildly OC’d 7850. Looking forward to upgrading in the coming months!

    • chuckula
    • 3 years ago

    [quote<]An improvement that large is impressive until one considers that the GTX 1080s in the TR labs need 265W-300W of total system power to do their thing. To be fair, our power numbers are one measurement taken under one particular load and in one particular testing environment, and modern power management is a complicated thing with many input variables. All that said, our gut impression upon seeing these numbers is that Pascal is frighteningly efficient, more so than the GTX 1080 taken in isolation might have suggested. Our completely wild hunch is that Nvidia has tons of headroom to play with in designing a Pascal GPU to target this price class, if it wants to.[/quote<]

    Huge takeaway that goes beyond the particulars of this particular product, since it speaks volumes about the state of GloFo’s 14nm process vs. TSMC’s 16nm process. Given the power draw and performance levels, AMD has beaten the GTX-970 at power efficiency… but not by a huge margin.

    As to the card itself, the performance of the Rx 480 isn’t bad, but let’s be real here: if you already owned a GTX-970 or an R9-390 (which was absent from this review), would you actually consider this a real upgrade?

      • NTMBK
      • 3 years ago

      It’s the Pitcairn replacement card, don’t think it’s aimed at Hawaii owners.

        • sweatshopking
        • 3 years ago

        It’s roughly the same as my 290, but retails for far less. Going to build a new system I think, maybe. Idk. And this looks like a winning chip imo for the price.

        Power draw is surprisingly high given performance, but the 970 has had 2 years worth of driver tweaks.

          • smilingcrow
          • 3 years ago

          “Power draw is surprisingly high given performance, but the 970 has had 2 years worth of driver tweaks.”

          Pascal kills this on performance per watt, and that is a new card also, so there’s clearly an issue, probably with the fabrication process.

            • sweatshopking
            • 3 years ago

            Almost certainly a fab issue.

            That being said, my point was that performance should improve beyond where it is today, making it more competitive with the 970.

            • Krogoth
            • 3 years ago

            Nah, it is more like a difference in architecture choices. Pascal is more energy efficient but it is almost as toasty when loaded.

            • chuckula
            • 3 years ago

            It’s also “loaded” at substantially higher clockspeeds and with more transistors running at those clockspeeds.

            • Krogoth
            • 3 years ago

            Polaris chips are smaller and denser, though, which makes it more difficult to remove that heat.

            GPU guys have been running into a power consumption and thermal wall throughout the post-40nm era. The GCN family has always been less energy-efficient at gaming than its Kepler, Maxwell, and Pascal counterparts.

            • sweatshopking
            • 3 years ago

            Less efficient at everything

            • Krogoth
            • 3 years ago

            Not in SP compute. GCN in some cases actually beat their Kepler/Maxwell counterparts. That’s part of the reason why GCN chips were sought after during the whole crypto-currency craze (which depended on SP compute).

            • willmore
            • 3 years ago

            Actually it was because GCN cards did 32-bit ints efficiently while nVidia only did 24-bit ints.

            • chuckula
            • 3 years ago

            Smaller yes, but that also means fewer units generating the heat.
            Denser: About 5% if you run the numbers. Not enough to make a substantial difference.

            • Krogoth
            • 3 years ago

            That mere 5% does make a significant difference at the package sizes and transistor densities/clockspeeds we are dealing with.

            • chuckula
            • 3 years ago

            No, it really doesn’t.

            A “small” 232 mm² die is pretty much what you’d expect from a large CPU that clocks to around 4GHz, with small thermal hot-spots that are much harder to cool than the same-sized die with a far more distributed transistor layout that’s only tooling around at 1.2-1.3GHz or so. A 5% change in the transistor density of a GPU is not going to make or break the part, especially when the denser part runs at a substantially lower clock.

            • looncraz
            • 3 years ago

            The effects of density on heat are largely exponential, so density really does have an outsized impact.

            5% higher density means roughly 10% less material absorbing the same amount of heat energy from any given transistor, and it can cause more critical-path issues (resulting in lower clock speeds).

            CPUs are very much less dense than GPUs.

            Intel’s i7-6950X has a die size of 246.3mm², 3.2B transistors, and uses an inherently denser process than 14nm LPP.

            The RX 480 has 5.7B transistors in 232mm². You don’t even have to do the math to see how dramatic a difference that is.
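
            A minimal sketch of that math, using the figures cited in this thread (vendor transistor estimates; densities are die-wide averages):

                # Transistor density in millions of transistors per mm^2
                chips = {
                    "RX 480 (Polaris 10)": (5.7e9, 232.0),   # transistors, die area in mm^2
                    "Core i7-6950X":       (3.2e9, 246.3),
                    "GTX 1080 (GP104)":    (7.2e9, 314.0),
                }
                for name, (transistors, area) in chips.items():
                    print(f"{name}: {transistors / area / 1e6:.1f} MTr/mm^2")

            That works out to roughly 24.6, 13.0, and 22.9 million transistors per mm² respectively – nearly 2x the CPU’s density, and about the ~5% edge over GP104 that chuckula cites above (the review’s lower 5,600M estimate for Polaris 10 gives almost exactly 5%).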

            • smilingcrow
            • 3 years ago

            For a 10% difference to be considered significant it would have to push a non-linear system over a wall.

            • Beelzebubba9
            • 3 years ago

            Also, a huge portion of the i7’s die area is cache, which generally generates lower W/mm².

            • smilingcrow
            • 3 years ago

            But why has the idle power consumption jumped so much? I’ve scanned a few reviews and this stands out.
            I hope that is due to the process used; otherwise AMD really screwed up the design of this chip.
            And it’s consuming 40W for multi-monitor usage or Blu-ray playback versus 7W for Pascal, so they haven’t sorted out that side of the architecture. I don’t think they have the resources to focus on fine details such as these.

            My sense from reading the reviews of Intel’s 14nm performance parts (80-160W, inc. Xeons) is that the power efficiency gains aren’t substantial. So only Nvidia, alongside TSMC, seems to have done particularly well at 1x nm.

            • looncraz
            • 3 years ago

            All of those are driver-controlled issues AMD has never addressed – in the TEN years it’s been a problem. I don’t know why they don’t do the obvious fixes.

            I have much lower multi-monitor and playback power consumption simply because I use a different profile for my GPU and VRAM clocks on the desktop than for gaming. Using AMD’s defaults, my system can pull 186W while watching a 480p HTML5 video (YouTube w/ Firefox). Just by dropping the memory clocks from 1250MHz to 625MHz, I drop to 150W. That’s the entire difference in power consumption between Pascal and the RX 480 (though the RX 480 probably won’t see as large a difference – more like 15W).

            When you use multi-monitor configurations, the memory clocks will often refuse to idle at all, running at full speed. When I use DisplayPort on my R9 290 (which is now all the time), the memory clocks are always maxed. I can save 12W at **IDLE** just by dropping the memory clocks (that would be about 6W on the RX 480).

            With a single monitor, using HDMI or DVI, AMD has no idle power consumption issues – but the memory clocks always jump to full speed when the GPU sees a load, such as when watching a video. So easy to fix… a tiny driver update.

            If AMD would allow user access to the idle clock configurations – including the multi-monitor and high refresh configs – this problem would vanish quickly.

            • smilingcrow
            • 3 years ago

            So they haven’t even bothered to fix a driver issue this significant. Yikes!
            It’s things like this that harm AMD at the platform level. It’s all very well offering good value but if the perception is that your platform is inferior many will be happy to pay more to move to a better platform. In other words it’s not just about raw performance.

            • DoomGuy64
            • 3 years ago

            This. Latest driver maxes out my ram sitting idle at the desktop. Power efficiency will improve as soon as AMD addresses the bug.

            • nanoflower
            • 3 years ago

            I think they are on the right track with the new power control system and their Watt Manager. They just need time to tune it so that it does just what you describe as there is no reason they can’t drop the clock rates when the GPU is basically idle and raise it as needed. It’s probably just a matter of manpower since AMD is working with much less engineering talent so features that I’m sure they saw a need for (and this is an obvious one) had to be sidelined for now to meet their launch goals.

            • DoomGuy64
            • 3 years ago

            I’ve heard at least one other reviewer say this was a driver problem. Considering the issues AMD has had lately with broken low power states, I’d consider this to be the most likely scenario.

            • Freon
            • 3 years ago

            I don’t think it’s fabrication, I think it’s AMD R&D.

            NV showed several iterations of improvements on 28nm, ending in a trouncing of AMD on power and memory bandwidth efficiency that simply extends to 14/16nm.

            Not saying fabrication is completely isolated, but I feel more than comfortable saying simple R&D and design can easily account for why AMD is still behind.

        • NovusBogus
        • 3 years ago

        Exactly, this is for folks like me who are in the Geforce x50-60 / Radeon x70-80 range. It’s an impressive card for 200 bucks, if only because NV has totally failed to offer a compelling $200-300 product for eons.

      • EzioAs
      • 3 years ago

      True, let’s be real here. Why would you consider this card as an upgrade from a GTX 970? You don’t.

      This is priced (at release) for mid-range segment: those people looking to play newer AAA titles at 1080p with good framerate (and acceptable for 1440p).

        • spanner
        • 3 years ago

        Exactly. If you purchased your last card in this same price segment, you bought a R9 380 or R9 380X, a GTX 960, R9 270X or maybe a R9 280, a 760, or perhaps a 7850 or a 660, depending on how long ago your last purchase was. This card has plenty to offer owners of any of those.

      • Concupiscence
      • 3 years ago

      Nah, but I’ve got a 3 gig Radeon 7970 kicking around… If someone fixes the 480’s cooler, an 8 gig model would offer a 40+% improvement to performance and power efficiency with a huge increase in available memory*.

      * Which would immediately be used on the most egregious post-processing available… Has anyone managed to write a TSSAA 8x injector yet?

      • smilingcrow
      • 3 years ago

      Look at the performance-per-watt charts at TechPowerUp; they don’t bode well at all for the high-end Radeon GPU, as Pascal is killing it and it only just beats the 970.
      That’s the only real big surprise here:

      [url<]http://www.techpowerup.com/reviews/AMD/RX_480/25.html[/url<]

        • chuckula
        • 3 years ago

        Yeah, the price is something that is under AMD’s direct control.
        The power efficiency improvements were supposed to be the other huge deal for Polaris.
        Technically they succeeded if you look at older GCN products (especially Hawaii), but the problem is that they have to compete with Nvidia too.

        As for the whole “power efficiency doesn’t matter” argument that was temporarily suppressed but may make a comeback again, let me reiterate: The raw power consumption numbers aren’t the big complaint. However, the power consumption issues lead to this:

        [quote<]We’re not fans of blower-style coolers for the most part, and the RX 480’s isn’t doing anything to change our minds. While the card runs at or below our 40-dBA-ish noise floor at idle, its load noise level climbs to 51 dBA—just short of the triple-fan cooler on our Gigabyte Windforce GTX 980. Despite all the sound and fury, the RX 480’s load temperatures reached 83° C under load, too. One of the characteristics we’ve come to associate with efficient graphics cards is polite manners in the noise department, but the RX 480’s reference design doesn’t deliver. The blower fan isn’t pleasant-sounding, either—it’s grindy and obtrusive. We hope AMD’s board partners have custom coolers in the works that deliver a better experience in the noise, vibration, and harshness department.[/quote<]

        Incidentally, nice use of the word “fury” there, TR.

      • XTF
      • 3 years ago

      The GTX 970 is a 28nm chip and the RX 480 is a 14nm chip, so beating it by a slim margin is a low bar IMO.

      Compared to Tonga (28nm), the die size seems quite large as well.

        • chuckula
        • 3 years ago

        Assuming the estimated transistor counts are relatively accurate, the Polaris 10 die has about a 5% transistor density advantage over GP104 too. So GloFo’s process + the Polaris 10 design are denser, but not by an overwhelming margin.
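
        For reference, here’s a quick back-of-the-envelope check of that density claim, using the estimated transistor counts and die sizes from the spec table earlier in the review (a rough Python sketch; the inputs are themselves estimates):

        # Transistor density from the review's spec-table estimates:
        # (transistor count in millions) / (die size in mm^2).
        polaris10 = 5600 / 232   # ~24.1 Mtransistors/mm^2 on GloFo 14nm
        gp104 = 7200 / 314       # ~22.9 Mtransistors/mm^2 on TSMC 16nm
        print(f"density ratio: {polaris10 / gp104:.3f}")  # ~1.053, about a 5% edge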

          • XTF
          • 3 years ago

          Perhaps, but shouldn’t 14nm compared to 28nm (×0.25 in area) result in even denser chips?
          Something doesn’t seem quite right.

            • raddude9
            • 3 years ago

            Not all features in a chip shrink by the same amount on lower process nodes, and each company’s process shrinks features by differing amounts. The whole “14nm” thing is just a rough guide to what you get.
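
            To put rough numbers on that, here’s a minimal sketch using the review’s own table estimates: naive geometric scaling from 28nm to 14nm would quadruple transistor density, but the actual Tonga-to-Polaris-10 gain falls well short of that.

            # Ideal vs. actual density scaling, 28nm Tonga -> 14nm Polaris 10,
            # using the estimated figures from the review's spec table.
            ideal = (28 / 14) ** 2    # naive geometric scaling: 4.0x
            tonga = 5000 / 355        # ~14.1 Mtransistors/mm^2
            polaris10 = 5600 / 232    # ~24.1 Mtransistors/mm^2
            print(f"ideal: {ideal:.1f}x  actual: {polaris10 / tonga:.2f}x")  # 4.0x vs. ~1.71x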

      • flip-mode
      • 3 years ago

      [quote<]the particulars of this particular[/quote<]

      Nothing to say, just wanted to quote that.

        • chuckula
        • 3 years ago

        I see you are very particular.

      • steelcity_ballin
      • 3 years ago

      I sincerely doubt anyone who already owns a GTX 970 would even consider this a side-grade, let alone an upgrade.

      People who buy video cards for gaming generally do not give a gerbil’s bare ass about power draw, noise, or performance-per-watt ratios. They care about what quality settings they can push at their native resolution.

      For that, it’s hard not to recommend this card to anyone building a new PC on a budget. $200 is less than what I’d spend on RAM. I have a GTX 970 and it’s been good to me, though it’d probably be better for gaming if I didn’t have a multi-monitor setup and if my primary wasn’t a 27″ ROG Swift. The GTX 970 has been struggling lately at native resolution with above-medium graphics settings.

      I think AMD has a potential marketing home run on its hands: a $200 card that lets Joe Gamer run max settings at 1920×1080 addresses a significant portion of the market.

      I am slightly worried that my aging 2500K setup might bottleneck a 1080, but that’s where I’m heading once the price gouging gives way to availability.

      So then, when are the RX 480s hitting the shelves? It’s nice to see some life from the AMD camp; I haven’t recommended a single thing they’ve produced in ages.

        • Freon
        • 3 years ago

        Gamers don’t care?? About a few watts of power draw, no. About 50% more power draw, maybe. About noise, definitely yes.

      • slowriot
      • 3 years ago

      The people who buy $200 graphics cards don’t do it every generation. Maybe you read parts of the web I don’t, but the people I know who are excited about the RX 480 have cards like a 750 Ti or a 7870, or something even older, honestly. 460 Tis and crap like that. GTX 770s, maybe, in the best-case scenarios. Not GTX 970 owners who likely dropped ~$350+ on a card just a year or so ago.

      • tipoo
      • 3 years ago

      Yeah, that’s my impression from reading the reviews, too. Polaris is a big jump, but still not up to new-GTX efficiency.
      Nvidia learned the lesson Intel did a long time ago: efficiency = power in a universe with physics and thermal limits. I do wonder what their $200-ish answer will be (they’ll probably upcharge a bit relative to AMD).

      • NeelyCam
      • 3 years ago

      So, GloFo nukes AMD again. Is Zen going to be on GloFo 14nm?

        • Freon
        • 3 years ago

        AMD was way behind NV on the very same TSMC 28nm process for the past few years (particularly against Maxwell), but now it’s suddenly GloFo’s fault?

        Come on. Think a little harder here.

        • tipoo
        • 3 years ago

        Nope, thankfully they contracted TSMC for Zen over exactly these concerns. Gives me a teeny tiny morsel of hope that they think Zen will be a high-end part.

      • ImSpartacus
      • 3 years ago

      No, but that’s the point.

      From the very start, AMD has said that Polaris 10 was a bargain VR-min-spec GPU. That means it performs like a 970 or a 290, but not a whole lot better, because the whole point is to meet the VR min spec at the lowest possible cost.

      Roy Taylor stood up at VR LA months ago and literally said that. It wasn’t recent, and it wasn’t from a sketchy rumor site. It was a high-level AMD employee literally telling us what to expect from Polaris 10.

      I’m sorry if I’m being a little crass, but I’m just repeatedly amazed that people are even a little surprised by how all of this is shaking out. AMD showed us their hand months ago at various official events. Most of it is on their YouTube channel, ffs.

        • chuckula
        • 3 years ago

        There’s what AMD executives say.
        And then there’s the perception.

        Go into the forums to see what I’m talking about with the nth-degree hype that was generated.

        Additionally, go back and look at a bunch of my comments that got mad downthumbs. You’ll find that my unpopular comments had a tendency to match up pretty well with the results of TR’s review. What’s even funnier is that they tended to match up pretty well with what AMD actually said… just not with what people [b<]wanted[/b<] AMD to say.

      • puppetworx
      • 3 years ago

      Suggestions that AMD was holding off due to process immaturity appear to have been right. That’s likely also why they won’t release a competitor to Nvidia’s high end for many months.

      • anotherengineer
      • 3 years ago

      I wonder if this is a fab issue, a silicon or doping issue, a case of AMD just setting voltages too high in the BIOS, or all of the above?

      I posted this somewhere else, but it’s interesting in that it shows voltages and clocks for different settings, along with dynamic clocks.

      [url<]http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1070/28.html[/url<]
      [url<]http://www.techpowerup.com/reviews/AMD/RX_480/28.html[/url<]

      I wonder if the Radeon would still run and operate at the same voltages as the GeForce, and what its power consumption would be then.

      Edit: looking again, AMD really should be able to run at 1230MHz on 0.80V.

        • AnotherReader
        • 3 years ago

        The difference in idle voltages is stark; it looks like a process issue.

        Global Foundries: screwing AMD since 2011

          • anotherengineer
          • 3 years ago

          It’s too bad AMD couldn’t get a few test wafers of the same silicon made at TSMC on the same node as Nvidia and compare. If it turned out better, then maybe they could leverage a discounted price from GloFo.

          I’d still be interested to see if the Radeon would run without issues at the same voltage/clock ratio as the 1070.

        • Rza79
        • 3 years ago

        Here they test undervolting:
        [url<]https://www.computerbase.de/2016-06/radeon-rx-480-test/12/[/url<]

        Seems there’s a good margin AMD is building into these chips. Lowering the voltage increases performance.
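
        That result makes sense for a power-limited card. Here’s a minimal sketch of the reasoning (the voltages are illustrative guesses, not measured RX 480 values): dynamic power scales roughly with C·V²·f, so a lower voltage at the same clock frees headroom under the board’s power cap for higher sustained boost clocks.

        # Why undervolting can raise performance on a power-limited card.
        # Dynamic power scales roughly with C * V^2 * f. Voltages here are
        # illustrative, not measured RX 480 values.
        def relative_power(v, f, v_ref=1.15, f_ref=1266):
            """Dynamic power relative to a reference voltage/clock point."""
            return (v / v_ref) ** 2 * (f / f_ref)

        print(relative_power(1.15, 1266))  # 1.00: stock operating point
        print(relative_power(1.05, 1266))  # ~0.83: same clock, ~17% less power,
                                           # so the card can hold boost longer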

          • AnotherReader
          • 3 years ago

          Nice. I wonder if their adaptive voltage tech (AVFS) is being too conservative.

      • Tirk
      • 3 years ago

      But to be realistic, it took you 3 years after the 770’s release to upgrade it. Not only did it take you 3 years, but you paid $250 more for the 1080 (going off the 770’s MSRP of $399) [url<]https://techreport.com/forums/viewtopic.php?f=3&t=118108[/url<].

      People are not looking to replace their GPU every year, and I highly doubt AMD was gearing this card to sell to those who just recently bought a new one. The 970 was released in 2014, and AMD has over a year from now to meet YOUR upgrade cycle.

      Most of AMD’s/ATI’s financial successes have come from releasing competitive midrange cards, because when they do release a high-end product, consumers used to buying Intel/Nvidia ask that they release industry-beating products for 3-plus years before they switch, which is a ridiculous standard for any company to have to meet. Polaris is a test to see if they can succeed as they always have in selling competitive midrange products. It’s a risk, but one far cheaper than if they had released a high-end card first and lost.

      I’m not looking to purchase any GPUs this year unless someone asks me to build them a whole new system. And for a whole new system, this card is great if it’s in the budget range you wish to spend. If someone wanted the best of the best out now and didn’t care that it cost $650 or more, I’d refer them to the 1080. Does that shock you?

      Some of your tech info is insightful, but your attitude comes off as a fight between Dr. Jekyll and Mr. Hyde. Can you put Mr. Hyde to bed so that Dr. Jekyll can come to the discussion?

      • Stonebender
      • 3 years ago

      [quote<]Huge takeaway that goes beyond the particulars of this particular product since it speaks volumes about the state of GloFos 14nm process vs. TSMC's 16nm process.[/quote<]

      Is TSMC’s superiority coming at the price of yields? 1080 cards are pretty much unavailable across the board. Of course, we haven’t seen whether this will be an issue with the 480, but I suspect it won’t be.

    • NTMBK
    • 3 years ago

    Great to see AMD’s FIRST 14nm card!

    EDIT: Oh goddammit Stochastic

      • Concupiscence
      • 3 years ago

      Y’know, you tried, and I’m proud of you.

        • Mr Bill
        • 3 years ago

        He did say ‘FIRST’ first.

        • DrDominodog51
        • 3 years ago

        He tried so hard
        And got so far
        But in the end
        He wasn’t even first

      • Redocbew
      • 3 years ago

      Hard to predict when that guy will pop up, isn’t it?

    • Stochastic
    • 3 years ago

    Wow, awesome job on the quick turnaround! Looking forward to reading the review.
