AMD’s Radeon R9 285 graphics card reviewed

As a guy who reviews video cards, it’s pretty easy to become cynical about these things. That’s been especially true during the past couple of years, as we’ve seen the same handful of graphics chips spun into multiple “generations” of products. The underlying GPU technology is a wonder, but the endless re-spins get tiresome.

When AMD revealed the imminent arrival of the Radeon R9 285 recently, I have to admit, I wasn’t exactly thrilled. Yes, the R9 285 would be based on a new chip, code-named Tonga, but that chip just looked to be a cost-reduced and slightly tweaked variant of existing silicon—not quite the stuff of legend.

The Radeon R9 285 by the numbers

Heck, have a look at the specs for the Radeon R9 285 versus the card it replaces, and you’ll see what I mean.

| | GPU boost clock (MHz) | ROP pixels/clock | Textures filtered/clock | Shader processors | Memory interface width (bits) | Memory transfer rate | Board power | Starting price |
|---|---|---|---|---|---|---|---|---|
| Radeon R9 280 | 933 | 32 | 112 | 1792 | 384 | 5 GT/s | 250W | $279 |
| Radeon R9 285 | 918 | 32 | 112 | 1792 | 256 | 5.5 GT/s | 190W | $249 |
| Radeon R9 280X | 1000 | 32 | 128 | 2048 | 384 | 6 GT/s | 250W | $299 |

In terms of key specs, the principal change from the Radeon R9 280 to the R9 285 is the move from a 384-bit memory interface to a 256-bit one. The narrower interface should make the R9 285 cheaper to produce, but it will also mean less memory bandwidth—and memory bandwidth is one of the primary performance constraints in today’s graphics cards. Aside from the reduction in memory throughput, the R9 285 appears to be very similar to the R9 280 card that it replaces (and to the Radeon HD 7950 that came before it and was essentially the same thing).
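The bandwidth tradeoff falls straight out of the spec sheet. As a quick sanity check, here’s the arithmetic behind the figures in the table above (the helper function is just for illustration):

```python
# Peak memory bandwidth follows from interface width and transfer rate:
# width in bits / 8 (bits per byte) * transfer rate in GT/s = GB/s.
def peak_bandwidth_gbps(width_bits: int, transfer_gts: float) -> float:
    return width_bits / 8 * transfer_gts

r9_280 = peak_bandwidth_gbps(384, 5.0)   # 240.0 GB/s
r9_285 = peak_bandwidth_gbps(256, 5.5)   # 176.0 GB/s
```

So even with the faster 5.5 GT/s memory, the narrower bus leaves the R9 285 with roughly three-quarters of its predecessor’s peak bandwidth.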

Ho-hum.

Worse, the $249 starting price for the R9 285 doesn’t seem like much of a bargain, given that the R9 280 is going for $219 at online retailers right now, presumably while they close out stock to make room for the new card. That all felt like kind of a raw deal, frankly. What could AMD be thinking?

My, uh, lack of enthusiasm was dampened somewhat when the first example of the R9 285 arrived in Damage Labs. Behold the MSI Radeon R9 285 Gaming OC Edition:

This puppy is gorgeous, and its twin-fan cooler performs as well as its looks suggest. There’s good news on the performance front, too, since MSI has cranked up the Boost clock to 973MHz, 55MHz above stock.

At least MSI was doing good work with its part of the equation.

Still, I thought, a snazzy cooler and paint job couldn’t fix the basic problem with the R9 285. Although the new Radeon was up to snuff elsewhere, its memory bandwidth just looked a bit anemic.

| | Peak pixel fill rate (Gpixels/s) | Peak bilinear filtering int8/fp16 (Gtexels/s) | Peak shader arithmetic rate (tflops) | Peak rasterization rate (Gtris/s) | Memory bandwidth (GB/s) |
|---|---|---|---|---|---|
| Radeon R9 280 | 30 | 104/52 | 3.3 | 1.9 | 240 |
| Radeon R9 285 | 29 | 103/51 | 3.3 | 3.7 | 176 |
| Radeon R9 280X | 32 | 128/64 | 4.1 | 2.0 | 288 |
| GeForce GTX 760 | 33 | 99/99 | 2.4 | 4.1 | 192 |
| GeForce GTX 770 | 35 | 139/139 | 3.3 | 4.3 | 224 |

The MSI card’s higher boost clocks would give it a bit more oomph in some categories than the stock numbers shown for the R9 285 above, but it wouldn’t do anything to address the biggest issue. The R9 285’s 176 GB/s of memory bandwidth is just a lot less than the R9 280’s 240 GB/s—and quite a bit less than what the competing GeForces have to offer, too.

Tonga’s dilemma

So you can understand my skepticism. Tonga looked to be nothing more than a dreary re-spin of AMD’s existing technology, based on the same Radeon DNA as the Hawaii GPU introduced one year ago—and the Bonaire chip first outed in March of 2013. Those chips weren’t that different from the Tahiti GPU that debuted at the end of 2011.

To be fair, AMD did make some notable improvements in Hawaii and Bonaire. Both of those chips have the TrueAudio DSP block onboard, so that games can offload audio processing to a dedicated hardware unit on the GPU. They also include a new XDMA data transfer mechanism for CrossFire, which allows frame data to be transferred via PCI Express instead of over an external bridge. And both have updated display outputs with support for the latest DisplayPort standards.

In fact, AMD tells us that only Radeon cards based on Bonaire, Hawaii, and Tonga will support the variable refresh displays being enabled by its Project FreeSync initiative. I wasn’t aware that older Radeons would be excluded, but apparently they will.

One other addition in Hawaii—and now Tonga—is a smarter version of AMD’s PowerTune dynamic voltage and frequency scaling (DVFS) scheme. The new PowerTune monitors the GPU’s current state constantly and makes fine-grained adjustments to clock speeds and supplied voltages in order to keep the GPU within its pre-defined thermal and power peaks. The smarter PowerTune algorithm allows the graphics chip to squeeze out every ounce of performance possible within those limits.
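AMD hasn’t published the PowerTune algorithm, but the general shape of such a feedback loop is easy to sketch. Everything below—the step size, the limits, the sampling model—is invented for illustration, not AMD’s actual implementation:

```python
# Illustrative DVFS feedback loop in the spirit of PowerTune: sample the
# board's power draw, then nudge the clock up or down to stay within the
# power limit. Step size and clock limits here are made-up values.
def adjust_clock(clock_mhz: int, power_w: float, power_limit_w: float,
                 step_mhz: int = 13, min_mhz: int = 300,
                 max_mhz: int = 973) -> int:
    if power_w > power_limit_w:
        # Over budget: back the clock off one step.
        return max(min_mhz, clock_mhz - step_mhz)
    # Headroom available: opportunistically raise the clock.
    return min(max_mhz, clock_mhz + step_mhz)
```

A real implementation runs this loop in hardware at a very high rate and adjusts voltage along with frequency, but the principle—constant measurement, fine-grained correction—is the same.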

These tweaks are all well and good, and Hawaii in particular is a truly impressive GPU specimen, but they don’t do that much to improve the GPU’s fundamental performance or efficiency. Hawaii gets its potency from sheer scale more than anything else.

That reality was a problem for Tonga, in my view, because Maxwell is coming. Nvidia has already released a small-scale version of its new GPU architecture aboard the GeForce GTX 750 Ti, and we know the little Maxwell is about twice as power-efficient as the corresponding chip based on the Kepler architecture. Nvidia is widely rumored to be prepping larger Maxwell derivatives for release soon. Those chips are likely to convert this architecture’s increased power efficiency directly into higher performance. If Tonga were just a cost-reduced Tahiti chip with TrueAudio added, AMD could be in for a world of hurt.

Tonga: it’s a magical place

Turns out my worries were misplaced, because Tonga is not just a smaller version of Hawaii. A year after the release of that bigger GPU, AMD has slipstreamed some significant new technology into Tonga—and has done so rather quietly, without a branding change or any of the usual fanfare. In fact, I had to prod AMD a little bit in order to understand what’s new in Tonga. I don’t yet have a clear picture of how everything works, but I’ll share what I know.

By far the most consequential innovation in Tonga is a new form of compression for frame buffer color data. GPUs have long used various forms of compression in order to store color information more efficiently, but evidently, the method Tonga uses for frame buffer data is something novel. AMD says the compression is lossless, so it should have no impact on image quality, and “delta-based.” Tonga’s graphics core knows how to read and write data in this compressed format, and the compression happens transparently, without any special support from applications.

We don’t have many details on exactly how it works, but essentially, “delta-based” means the compression method keys on change. My best bet is that whenever a newly completed frame is written to memory, only the pixels whose colors have changed from the prior frame are updated. ARM does something along those lines with its Mali mobile GPUs, and I expect AMD has taken a similar path.

The payoff is astounding: AMD claims 40% higher memory bandwidth efficiency. I’m not quite sure what the basis of comparison is for that claim, nor am I clear on whether 40% is the best-case scenario or just the general case. But whatever; we can measure these things.
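AMD hasn’t disclosed the actual format, but the basic idea of delta encoding is easy to demonstrate. This toy sketch works on one channel of one scanline; the run-length pass just shows why flat regions of a frame become nearly free to store:

```python
def delta_encode(pixels):
    """Toy delta encoding: keep the first value, then store per-pixel
    differences. Flat spans become runs of zeros."""
    if not pixels:
        return []
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def rle(values):
    """Run-length encode, to show how zero runs shrink flat regions."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

scanline = [200] * 6 + [90, 91, 92]       # a flat span, then a gradient
encoded = rle(delta_encode(scanline))
# -> [[200, 1], [0, 5], [-110, 1], [1, 2]]
```

A real GPU scheme would operate on tiles of the frame buffer and fall back to uncompressed storage when a tile doesn’t compress, but the payoff mechanism is the same: redundant color data costs almost no bandwidth.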

3DMark Vantage’s color fill test has long been gated primarily by memory bandwidth, rather than the GPU’s raw pixel fill rate. Here’s how Tonga fares in it.

Whoa.

Compare the R9 285 to the Radeon HD 7950 Boost, which we used in place of the Radeon R9 280. (Only 8MHz of clock speed separates them.) The 7950 Boost has 240 GB/s of memory bandwidth to Tonga’s 176 GB/s, yet the new Radeon maintains a substantially higher pixel fill rate. That’s Tonga magic in action.

Perhaps my concerns about Tonga’s memory bandwidth were premature. We’ll have to see how well this compression mojo works in real games, but it certainly has my attention.

That’s not all. Tonga has inherited a new front-end and internal organization from Hawaii that grants it more potential for polygon throughput. The triangle setup rate has doubled from two primitives per clock in Tahiti to four per clock in Tonga. Beyond that, Tonga adds some of its own provisions to improve geometry and tessellation performance, including a larger parameter cache that spills into the L2 cache when needed. The division of work between the geometry front-end units has been improved, and these units can better re-use vertices, which AMD says should help performance in cases where “many small triangles” are present.

These architectural modifications more than bring the R9 285 up to par with its nearest rival, the GeForce GTX 760, in terms of geometry throughput. Tonga also surpasses the Hawaii-based Radeon R9 290X in this synthetic test of tessellation performance.

Between the new color compression method and the geometry performance gains, Tonga could plausibly claim to usher in a new generation of Radeon technology. The use of the GCN or “Graphics Core Next” label has proven incredibly flexible inside the halls of AMD, but what we’re seeing here sure feels like a fundamental shift.

That’s not the full extent of the changes, either. AMD has revamped Tonga’s media processing capabilities in order to ensure fluid performance and high-quality images in the era of 4K video. That starts with a new hardware image-scaling block in the display pipeline. This scaler is capable of upscaling to and downscaling from 4K video targets in real time.

In a related move, the graphics core has gained some new instructions for 16-bit integer and floating-point media and compute processing at reduced power levels. Also, both the video decode (UVD) and encode (VCE) engines on Tonga have been upgraded to allow for higher throughput. The UVD block now supports the MJPEG standard and can decode high-frame-rate 4K video compliant with the High Profile Level 5.2 spec. The beefier VCE block can encode 1080p video at 12X the real-time rate and is capable of encoding 4K video, as well.


Source: AMD

We’ve had limited time to test Tonga, so we haven’t been able to scrutinize its video processing chops yet. Above are some encoding performance results that AMD supplied to reviewers showing the R9 285 outperforming the GeForce GTX 760. Make of them what you will.

But wait, there’s more!

One of the strange things about Tonga’s introduction to the world is that it’s debuting in a product where it’s at less than full strength. AMD hasn’t provided a ton of info about the full GPU, perhaps as a result of that fact, but below is my best guess at how Tonga looks from 10,000 feet.

The image above shows eight compute units per shader engine, with four shader engines across the chip. AMD has confirmed to us that Tonga is indeed hiding four more compute units than are active in the R9 285, so the diagram above ought to be accurate in that regard. Here’s my best estimate of how Tonga stacks up in terms of key metrics versus its closest competition.

| | ROP pixels/clock | Texels filtered/clock (int/fp16) | Shader processors | Rasterized triangles/clock | Memory interface width (bits) | Estimated transistor count (millions) | Die size (mm²) | Fabrication process node |
|---|---|---|---|---|---|---|---|---|
| GK104 | 32 | 128/128 | 1536 | 4 | 256 | 3500 | 294 | 28 nm |
| GK110 | 48 | 240/240 | 2880 | 5 | 384 | 7100 | 551 | 28 nm |
| Tahiti | 32 | 128/64 | 2048 | 2 | 384 | 4310 | 365 | 28 nm |
| Tonga | 32 | 128/64 | 2048 | 4 | 256 | 5000 | 359 | 28 nm |
| Hawaii | 64 | 176/88 | 2816 | 4 | 512 | 6200 | 438 | 28 nm |

The die size and transistor count for Tonga above come directly from AMD. What fascinates me about these figures is that Tonga is barely any smaller than Tahiti. The idea that Tonga is a cost-reduced version of Tahiti pretty much goes out the window right there.

Look at the transistor count, though. Tonga packs in roughly five billion transistors, while Tahiti is less complex, at 4.3 billion. Both chips are made at TSMC on a 28-nm process. How is it that Tonga’s not quite as large as Tahiti yet has more transistors?

Since the chips are separated by three years, I suspect GCN compute units in Tonga are more densely packed than those in Tahiti. AMD has had more time to refine them. That said, we know that the two GPUs have the same number of compute units, so presumably Tonga doesn’t get its much higher transistor count from its shader core. All of the other additions we’ve talked about, including the TrueAudio DSP block, the color compression capability, and video block enhancements, add some complexity. I doubt they’re worth another 700 million transistors, though.

My best guess is that most of the additional transistors come from cache, perhaps a larger L2. SRAM arrays can be very dense, and a larger L2 cache would be a natural facilitator for Tonga’s apparently quite efficient use of memory bandwidth. I’ve pinged AMD about the size of Tonga’s L2 cache but haven’t heard back yet.
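As a back-of-the-envelope check on that guess: a standard SRAM cell uses six transistors per bit, so 700 million transistors would buy roughly 14MB of raw cache. This ignores tag arrays and peripheral logic, and it assumes—purely hypothetically—that the entire extra budget went to SRAM:

```python
# Rough SRAM capacity from a transistor budget, assuming the common
# six-transistor (6T) cell. Ignores tags, sense amps, and routing,
# so treat the result as an upper bound.
def sram_capacity_mib(transistors: float, transistors_per_bit: int = 6) -> float:
    bits = transistors / transistors_per_bit
    return bits / 8 / 2**20   # bits -> bytes -> MiB

extra_cache = sram_capacity_mib(700e6)   # ~13.9 MiB
```

In practice only some fraction of those transistors would be cache, but the arithmetic shows a substantially larger L2 is at least plausible within the budget.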

Another question these numbers raise is whether Tonga natively has a 256-bit memory interface. Generally, the size of a chip like this one is dictated by the dimensions of the I/O ring around its perimeter. Since Tonga occupies almost the same area as Tahiti, it’s got to have room to accommodate a 384-bit GDDR5 interface. Surely we’ll see a Radeon R9 285X card eventually with a fully-enabled Tonga GPU clocked at 1GHz or better. If I were betting, I’d put my money on that card having a 384-bit path to memory.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

| Processor | Core i7-3820 |
|---|---|
| Motherboard | Gigabyte X79-UD3 |
| Chipset | Intel X79 Express |
| Memory size | 16GB (4 DIMMs) |
| Memory type | Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz |
| Memory timings | 9-9-9-24 1T |
| Chipset drivers | INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.6.0.1093 |
| Audio | Integrated X79/ALC898 with Realtek 6.0.1.7071 drivers |
| Hard drive | Kingston HyperX 480GB SATA |
| Power supply | Corsair AX850 |
| OS | Windows 8.1 Pro |
| | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|---|---|---|---|---|---|
| Radeon HD 7950 Boost | Catalyst 14.7 beta 2 | — | 925 | 1250 | 3072 |
| Radeon R9 285 | Catalyst 14.7 beta 2 | — | 973 | 1375 | 2048 |
| Radeon R9 280X | Catalyst 14.7 beta 2 | — | 1000 | 1500 | 3072 |
| GeForce GTX 760 | GeForce 340.52 | 980 | 1033 | 1502 | 2048 |
| GeForce GTX 770 | GeForce 340.52 | 1046 | 1085 | 1753 | 2048 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, made up of a pair of 4TB Black hard drives provided by WD.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Watch Dogs


Click on the buttons above to cycle through plots of the frame times from one of our three test runs for each graphics card.

Yep, Tonga’s color compression magic translates pretty well into performance in real games. The R9 285 effortlessly outperforms the Radeon HD 7950 Boost—which, again, is essentially the same thing as the Radeon R9 280. (Sorry, I didn’t realize until too late that Damage Labs didn’t have an R9 280 on hand, so you’ll have to settle for a different name on the graph labels.)

The newest Radeon also outdoes the GeForce GTX 760, its closest competitor, and that card’s bigger brother, the GTX 770.


These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you’re not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33 ms correlates to 30 FPS or a 30Hz refresh rate. Go beyond that with vsync on, and you’re into the bad voodoo of quantization slowdowns. And 16.7 ms correlates to 60 FPS, that golden mark that we’d like to achieve (or surpass) for each and every frame.
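For the curious, this metric is simple to compute from a list of per-frame render times. A minimal sketch (the frame times below are made up for illustration):

```python
# "Time spent beyond X": for every frame that takes longer than the
# threshold, accumulate the portion of its frame time past the threshold.
def time_beyond_ms(frame_times_ms, threshold_ms):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

frames = [16.0, 18.5, 35.0, 16.2, 52.0]    # hypothetical frame times (ms)
spike_time = time_beyond_ms(frames, 33.3)  # (35.0-33.3) + (52.0-33.3) = 20.4
```

Two quick frames can’t make up for one long one, which is exactly why this metric catches stutter that an FPS average hides.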

Not only does the R9 285 perform well in terms of FPS averages, but it also generates new frames with consistent quickness. The new Radeon barely ever surpasses our 33-ms badness threshold. In other words, the frame rate barely ever drops below 30 FPS, even for an instant. For that reason, Watch_Dogs is quite playable on the R9 285 at 2560×1440. In fact, that’s the display resolution AMD has identified as a good target for this card.

Crysis 3


Nvidia has done some nice work with its graphics drivers in the past year or so, cutting out cases where games slow down for whatever reason. That work pays off here, as you can see in the plots, in the two places where I fire exploding arrows at the bad guys during our test run. The Radeons have a couple of big frame-time spikes in their plots, while the GeForces don’t.

Frame times are a little less variable overall on the GeForce cards, and that shows in the 99th percentile results. Even though the R9 285 produces a slightly higher FPS average than the GeForce GTX 760, the GTX 760 comes out ahead in our more time-sensitive metric.

The frame time curve reveals that the R9 285 generally outperforms the GTX 760, but the GeForce is quicker in the most difficult two to three percent of frames.
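The 99th-percentile figure is just the frame time that 99% of frames beat. A minimal sketch using the simple nearest-rank definition (other tools may use a slightly different interpolation):

```python
import math

# 99th-percentile frame time by the nearest-rank method: sort the frame
# times and pick the value at 1-based rank ceil(0.99 * n).
def percentile_99_ms(frame_times_ms):
    s = sorted(frame_times_ms)
    rank = math.ceil(0.99 * len(s))
    return s[rank - 1]
```

Because it ignores the worst 1% of frames, this number tracks the experience of sustained play rather than the single ugliest hitch.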


My sense is that the frame time spikes and “badness” we see from the Radeons here are something AMD needs to fix with driver optimization. I doubt they’re a reflection on the underlying GPU tech. That said, they do reflect on the quality of the experience gamers will have with the product.

Borderlands 2

Since the last couple of games were a bit challenging for this class of GPU at 2560×1440, I thought I’d include my favorite game as an example of how well these cards can handle a fairly typical Unreal Engine 3-based title.


Wow. That Tonga devil magic does work wonders. The Radeon R9 285’s frame time plots illustrate how very consistently the GPU produces frames in this test scenario. Although it doesn’t produce the most total frames (and thus doesn’t have the highest FPS average), the Radeon R9 285 takes the top spot in our 99th percentile frame time metric.


No matter which of our metrics you use, the Radeon R9 285 handles Borderlands 2 flawlessly, ahead of both the GTX 760 and the 770. That latter card is based on a full-blown GK104 graphics processor. In our time-sensitive metrics, the R9 285 also beats the full-fledged Tahiti card, the Radeon R9 280X.

Thief

I decided to just use Thief‘s built-in automated benchmark, since we can’t measure performance with AMD’s Mantle API using Fraps. Unfortunately, this benchmark is pretty simplistic, reporting only average and minimum FPS numbers (as well as a maximum, for whatever that’s worth).

Chalk up another shocking win for Tonga. The R9 285 beats both the R9 280X and the GeForce GTX 770 in the Thief benchmark. Good grief.

Notice that the R9 285 doesn’t fare as well with AMD’s close-to-the-metal Mantle API as it does with the game’s default Direct3D mode. By contrast, the Tahiti-based Radeon HD 7950 benefits a bit from the switch to Mantle. Looks to me like Mantle support for the R9 285 may not quite be ready for prime time.

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

AMD hinted that the R9 285 ought to be more power-efficient than its older Tahiti-based graphics cards. In our tests, under load, that’s not the case. I suspect the cause of the disconnect here may be the fact that MSI has pushed the frequency up—and likely the voltage, as well—on this Radeon R9 285 Gaming OC Edition card. Given the performance that we’ve seen out of this card, though, its power efficiency seems pretty reasonable.

Noise levels and GPU temperatures

This dual-fan MSI cooler is scary good. As I tested, I thought subjectively that the R9 285 card got quieter under load than it was at idle. Then, uh, the meter confirmed it.

MSI has been very aggressive in choosing a 65°C tuning point for its R9 285 card. With the new PowerTune, AMD has typically exploited quite a bit more temperature headroom. In truth, though, the R9 285 doesn’t seem to need it.

Conclusions

I haven’t yet had time to test the Radeon R9 285 as extensively as I’d like. I had to throw together this review in a few days after finishing up the Core i7-5960X, and there just wasn’t time to test more games or display resolutions.

That said, I think we have a sense of how surprisingly potent this video card really is, thanks to a GPU that’s packed with more innovation than we expected. Here’s one more reminder of how wildly this card is overachieving given its memory bandwidth. We’re averaging together the results from all of our game tests here.

Compared to the Radeon HD 7950, the R9 285 is a big move up and to the left, in the direction of goodness. The R9 285 easily makes the most effective use of its memory bandwidth of any of the cards we tested. Like I said earlier, this is generational change on the GPU front.

Our famous value scatter plot tells the rest of the story. In our limited tests, the R9 285 looks to be fast enough to justify its $249 price tag pretty well. I suppose that’s what AMD was thinking, huh? The GeForce GTX 770 sure looks like a raw deal by comparison.

This MSI Gaming OC Edition card is a stone cold killer, too. I’m smitten. MSI has nailed it with this cooler and the board’s overall design. Consider also that AMD is throwing in a trio of games via its Never Settle bundle with the purchase of an R9 285, and you might be tempted to order one right away. I can’t say I’d blame you. R9 285 cards are supposed to be on store shelves today, and AMD tells us it expects an ample initial supply. Most of them will have 2GB of memory, but there will be 4GB variants coming, as well.

Only thing is, I’m pretty sure this is the opening salvo of a protracted battle. If I’m right about Tonga having a 384-bit interface, then the Radeon R9 285X could turn out to be quite the thrill ride. We don’t yet know exactly what Nvidia has cooking, either, but it may well be a Maxwell variant that’s direct competition for Tonga. I suspect we’ll have more reasons to test this magical new GPU in the coming weeks.

Many of my sentences are compressed by 40% in order to fit on Twitter.

Comments closed
    • wiak
    • 5 years ago

    i wonder how delta-based lossless compression will work in newer APU (Kavari) chips?

    • south side sammy
    • 5 years ago

    does this card include predator game capture ?

    and does it support FreeSync………… a couple of things not all cards support. would be nice to know something other than frame rates and latency.

    • WulfTheSaxon
    • 5 years ago

    A dBA vs. fan speed graph would be very interesting for that cooler. Might be worth locking it at a higher speed if it really is quieter.

    • Klimax
    • 5 years ago

    If somebody ever wanted to see perfect example why Mantel is bad idea:
    [url<]http://www.hardocp.com/article/2014/09/02/msi_radeon_r9_285_gaming_oc_video_card_review/5[/url<] [quote<] Right off the bat we had some performance issues with the new MSI Radeon R9 285 GAMING OC video card in BF4 running under AMD Mantle. If you look back on the second page of this evaluation there is a disclaimer warning from AMD that the current version of Mantle may not work as intended on this new GPU. At first, we did not know this information, so we began our evaluation under Mantle as we always do. We discovered abnormally low performance, performance much lower than GTX 760 running under Mantle with the R9 285. We reached out to AMD to see what was going on and were informed to try it in DX11 mode. Running the game in DX11 mode improved performance and brought it up to now be competitive with the GTX 760. This fixed the problem. [/quote<] Incidentally, it looks like there will be necessary table of features for various revisions of GCN cores, because every "iteration" will be under general banner of GCN, no version or other label available. [url<]http://www.hardocp.com/article/2014/09/02/msi_radeon_r9_285_gaming_oc_video_card_review[/url<] [quote<] The first update, or iteration was released on the AMD Radeon 7790 (Bonaire GPU ASIC) video card. Many websites, including us, fell into to referring to this GPU update as GCN 1.1. However, AMD tells us that there really isn't any specific naming scheme associated with the GCN updates that have occurred with new GPU ASICs since the first iteration. AMD has stated that GCN was always meant to evolve, it was designed at its heart to be extensible, upgradable via new GPU ASICs. [/quote<] Talk about confusion. (And I thought it can't get worse then NVidia's three different chips under same name from different eras...)

      • Airmantharp
      • 5 years ago

      Reading the blurb, I didn’t catch what it would take to make BF4 (etc.) run in Mantle mode on the R9 285?- will there need to be a driver update, a game update, or both?

        • Klimax
        • 5 years ago

        I read it as game update, because with Mantle they removed many things driver can affect. Alias cost of eliminating reliance on driver is having game developers to do the work and issue updates.

        As I warned when AMD announced Mantle. There is never free launch, somebody will always have to do the work. Either GPU vendor or game dev.

    • odizzido
    • 5 years ago

    I thought AMD was going to be screwed when nvidia’s product line become maxwell from the top down. Looks like things will be more interesting though.

    • Klimax
    • 5 years ago

    Interesting. Looks like AMD managed to adapt well similar technology NVidia used for Kepler. (sadly, I don’t recall they ever said what they did for K)

    Delta based compression? Larger similar patches of colors would obviously get best hits, which usually goes for benchmarks and “cartoon-like” games. That would be Borderlands. Or graphically simpler games lie Watch Dogs. When you get photorealistic game like Crysis series, it won’t be able to do much due to large changes in colors.

    Card might have some problems with Bioshock Infinite, because of light effects.

    Anyway, it will be very interesting when Maxwell comes.

    • itachi
    • 5 years ago

    Meh, nice review always telling it like it is, I like the improvement parts, but in the end they cut the 2 gbs so it limits you in VRAM I think most people who are “in the know” would take a 7950 over that thing, anytime, and overclock it, just like I did.

    Of course the +5 fps in 1440p is not negligible either.. i guess if you got a or want a 1440p but dont have alot of money that could be an interesting choice, especially since we can expect another +5 fps (from the 7950) when overclocked, but that is only if you don’t plan to play games like skyrim with tons of mod and stuff like that are are very VRAM intensive.

    Also of course 2Gb vram isn’t future proof these days..

      • Damage
      • 5 years ago

      Note that the frame buffer info stored in Tonga’s DRAM is natively compressed, presumably giving it a smaller footprint. I’d like to test the 285 in 4K. I suspect it could get by almost as well as a 3GB Tahiti in some cases.

        • Airmantharp
        • 5 years ago

        Understanding that the frame buffer is compressed would help reduce bandwidth usage, but frame buffers are really, really small, aren’t they? I mean, for 1440p, we’re talking about 2560x1440x32bits, which is all of 14MB?

          • Damage
          • 5 years ago

          Yeah, they’re not huge. You will have several of them going at once, possibly more, and they’ll be larger at higher resolutions. Memory capacity does appear to be a constraint at 4K for some 2GB cards, so every little bit helps. I’m hoping to chat with AMD about how much of the memory contents is compressed via this mechanism in a typical game. Also will be fun to test at 4K!

        • itachi
        • 5 years ago

        yea well then take advantage of that improvement and make it on a 3gb it would make more sense than stripping the VRAM and making it more efficient ! at least to me doesn’t sounds logical.. unless… UNLESSS they’re going the same route as Nvidia and making their card with lower VRAM for the sole purpose of releasing higher VRAM versions later to cash in more ! (brb getting my tinfoil hat)

    • mark625
    • 5 years ago

    So, the MSI cooler is so good that the system is quieter under load (36.1 dBA) than it is at idle (36.5 dBA), or even with the display off (36.3 dBA).

    Not sure that I buy that.

    Other than those dubious stats, great review!

    • lycium
    • 5 years ago

    Some OpenCL tests (LuxMark) would be great to see, especially with all this talk of extra cache!

    That’s a really big deal for GPU rendering engines 🙂

    • anotherengineer
    • 5 years ago

    Looks like a nice replacement for my HD 6850, but since my 6850 is overkill for me, hopefully they will comeout with a cut down version of this guy.

    Always liked how quiet these cards are from MSI.

    • Krogoth
    • 5 years ago

    The only people who should feel so fortunate are the early buyers of 670, 660Ti and 7950. They held their price points and performance delta for so long. They are almost repeats of 5850/5870 in their heyday.

    • Derfer
    • 5 years ago

    This is the most positive review I’ve seen for this card. Seems most would prefer a 770 or a used 670.

    • revcrisis
    • 5 years ago

    I don’t know if you can really call this card overachieving. It has slightly better performance than my GTX 760 I bought in June 2013 for $250. So 13 months later, it comes out at the same price point for ~5% more performance. If anything, I see this as a boring card. True, it is NOW the king of the $250 price point, but for people already at a GTX 760 or 280X level for the last year or so it’s not even worth considering.

      • Krogoth
      • 5 years ago

      You realize that 760 is just a tweaked 660Ti/670 which came out almost two years ago?

        • HisDivineOrder
        • 5 years ago

        Is that supposed to make this card look better or worse for only barely besting a tweaked card from almost two years ago?

      • Phartindust
      • 5 years ago

      I don’t see how you can’t call it an overachiever.

      It beats the GTX 770 on 3 out of the 4 games tested while running cooler and quieter, and it’s only $250. Sounds like a winner to me.

        • Klimax
        • 5 years ago

        Too few games. Nature of compression. We’d need quite bit more to be able to say more. (Bioshock Infinite, Tomb Raider, Metro,…)

          • sschaem
          • 5 years ago

          Tons of review all over the web re-enforce those results (using your quoted games and more).
          Overall the 285 does seem to be a GTX 770 class card.

            • Bubster
            • 5 years ago

            Overall its about 10% slower. Just above the R9 280

            • Klimax
            • 5 years ago

            It looks like it strongly depends on game/benchmark style. (With compression this is obvious, because you can get a mostly white section, like the beginning of Ep2 of Bioshock Infinite BaS.)

            And then we get a game like Tomb Raider, where it apparently doesn’t have much of an effect.
            [url<]http://www.guru3d.com/articles_pages/amd_radeon_r9_285_review,11.html[/url<]

            I'd say when it works, it works well, but you are bound to hit bad situations where it won't do the trick. (For a good example, see the initial release of Kepler.)

            Note: Looks like I didn't estimate some games right. But then, I didn't get to play with them much, so the only familiarity I have with them is through reviews of CPUs/GPUs.

      • travbrad
      • 5 years ago

      You could say the same for pretty much any card released in the last few years (only small performance improvements between generations). It all adds up after a while, though. Most of the people buying this card won’t be GTX 760 owners. They will probably have something more like a GTX 560 or 460, and this card IS a huge step above those.

      I agree the progress is slower than would be ideal, but there is only so much AMD and Nvidia can do while they are stuck at 28nm.

    • Meadows
    • 5 years ago

    Nice Twitter hook.

    • DPete27
    • 5 years ago

    WOO HOOO! They put the aux power clips on the “PCB side” of the card instead of up inside/underneath the heatsink. That means Asus and MSI are thinking about their design, hopefully the other manufacturers will follow suit.

    • ptsant
    • 5 years ago

    The great improvement in performance per unit of bandwidth makes me think that some nice iGPUs are on the way. This technology is ideal for bandwidth-starved APUs that have to make do with ordinary DDR3.

      • Damage
      • 5 years ago

      Yep, and DDR4 and stacked on-package DRAM are coming, too.

        • swaaye
        • 5 years ago

        IGPs are going to have a mini revolution with stacked DRAM methinks. Depending on what it does to costs.

        • deruberhanyok
        • 5 years ago

        I was just thinking this as I read about the efficiency improvements. Combined with DDR4 at system level, IGPs will be getting a big boost. It’s definitely going to change the whole “video cards under $100” market.

        I mean, if you can get a processor with 512 shader things, whatever we’re calling them these days, in a system with >50GB/s of bandwidth (dual channel DDR4-3200, for instance), all of those add-in cards with DDR3 and a handful of shader things would be a downgrade. Even the lower-end ones with GDDR5 (R7 250, for instance) would be questionable.

        There’d have to be a shift in low-end card performance targets, or we’ll just see fewer and fewer sub-$100 parts on the market.

      • Bensam123
      • 5 years ago

      Interesting correlation… Very interesting. I definitely could see some cross development here between improving their APUs and GPUs at the same time.

      • UnfriendlyFire
      • 5 years ago

      Memory bandwidth optimization + DDR4 + stacked DRAM cache = “You really should stop selling those Fermi chips, Nvidia”

        • Theolendras
        • 5 years ago

        This is the kind of equation that sounds fine for the APU. If AMD goes over the top with the scale of the GPU in Excavator, this could finally be a decent entry-level gaming system. Thinking simply about the density improvements that were planned independently of the node, if it is indeed combined with a node switch like 20nm, this might enable either a lot of DRAM cache or a relatively large GPU part.

        This would be a niche, no doubt, but a very differentiating one, and it would hopefully get a lot more interesting if they can at least make a decent improvement in single-thread performance with K12. But I suspect the road to that destination will be kinda bumpy.

          • ImSpartacus
          • 5 years ago

          Let’s hope the next APU GPU goes “over the top.”

          We could use a little of that.

          AMD likes 200+W CPUs, so why not give us a 200+W APU and dedicate most of that extra heat to the GPU?

    • brucethemoose
    • 5 years ago

    Is there a chance that AMD is lying to us, and that the 285 has some disabled graphics cores? That’s a lot of transistors/R&D just to match a 7970.

    • Melanine
    • 5 years ago

    Whatever the gain there might have been is totally negated by the disabled portion of the chip. The card is not even more power efficient than Tahiti-based cards.

    • Freon
    • 5 years ago

    Well, at least there’s some hope they can fix those poor minimum and 99th-percentile frame times in Thief and Crysis 3. I’m slightly worried that the compression tech may have some worst-case scenarios that will be tough. Lossless compression can have a lot of variation in ratio.

    Neat that they get the performance for the bandwidth, but until that translates to lower cost for the consumer it’s trivial information. It seems like it will slot in at its price just fine, but not much more.

    Also, Scott, a bit confused by the commentary about the Thief results. That’s not what I would call a “win” for the 285.

    Still hugging my 7970 that I got for $250 in October, 2013…

      • Damage
      • 5 years ago

      Thief only has a “minimum FPS” number, which I ignored because FPS averaged over a full second doesn’t tell you much. You could have a massive, 300-ms delay “hidden” in a min FPS number that’s *higher* than the min FPS number on another card where frame delivery was relatively consistent and never that bad.

      So… yeah. I didn’t comment on the minimum. You need higher resolution in order to know what’s going on.

      • Klimax
      • 5 years ago

      Since they use compression, there are real, non-trivial corner cases. (All compression schemes have them; it cannot be avoided.)
      Crysis 3 already showed that. Next, I suspect, would be Bioshock Infinite, Tomb Raider, and co.

        • UnfriendlyFire
        • 5 years ago

        Do you know any publishers that write uncompressed movies onto Blu Ray discs?

        Yeah, no. There’s a reason why MPEG2, H.264, and other video formats exist.

          • Freon
          • 5 years ago

          Those are *lossy* compression algorithms. According to TFA, the AMD 285’s frame compression is lossless, so nothing guarantees you won’t get a few frames in a row of worst-case data, with no way to mitigate it, resulting in a few high-render-time frames in that 99th-percentile area.

          With a lossy algorithm you might get a short-term loss in image quality (e.g. macroblocking), or just have to encode with a higher bitrate temporarily, rather than actually dropping frames. 2-pass VBR (i.e. look-ahead) pretty much makes worst-case problems moot, as you only need to average out to a bitrate that fits on a disc and stays within the maximum bandwidth of the disc reader (which I think is well more than sufficient: 54 Mbps is the minimum spec, and a typical movie is only 25-30 Mbps).

          It’s really not a valid comparison since we’re talking lossless versus lossy.

          • Klimax
          • 5 years ago

          I do. I use H.264 and co pretty heavily. But you are talking about lossy, heavily lossy compression, which is not the case here.

          Have you used lossless compression like Huffyuv or Lagarith? They show the limitations perfectly. Or look at the efficiency of WinRAR and 7z (any algo). Why doesn’t one use them on MP3 or Blu-ray video? Because you can’t compress it again. (Mostly.)

          Compression has a hard limit stemming directly from information theory: you go from a large set of symbols to a smaller one, so by definition there will be strings not representable in the new set, which lowers efficiency. (It was proven.)

          And in the case of framebuffer compression, we know what will be best for delta compression: a more uniform image, where changes are small. Cartoon-like graphics, or high-res textures that are mostly bland. Watch_Dogs.

          But when you have large differences between adjacent pixels, like in Crysis 3 or Bioshock Infinite, then compression won’t gain you almost anything. (Too many symbols.)

          ===

          TL;DR: There is no escape from reality and no escape from theory. There is no free lunch to be found. See Kepler and how it benchmarks in some games.
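          Klimax’s information-theory point is easy to demonstrate with any general-purpose lossless compressor. The sketch below uses Python’s zlib as a stand-in; AMD’s actual hardware scheme is of course different, so this only illustrates the general limit, not Tonga’s implementation:

```python
import random
import zlib

random.seed(0)
SIZE = 65536

# A uniform buffer, like a flat-color region of a frame: highly compressible.
flat = bytes([200]) * SIZE

# A high-entropy buffer, like a frame whose adjacent pixels differ wildly:
# a lossless compressor gains almost nothing (and can even add overhead).
noisy = bytes(random.randrange(256) for _ in range(SIZE))

flat_ratio = len(zlib.compress(flat)) / SIZE
noisy_ratio = len(zlib.compress(noisy)) / SIZE

print(f"flat buffer compresses to {flat_ratio:.1%} of its size")
print(f"noisy buffer compresses to {noisy_ratio:.1%} of its size")
```

          The flat buffer shrinks to a fraction of a percent of its original size, while the noisy one stays at essentially 100%: the pigeonhole argument in action.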

        • Freon
        • 5 years ago

        That’s exactly my understanding and worry as well.

          • Klimax
          • 5 years ago

          There’s a reason why small Kepler didn’t fare well in bandwidth-heavy cases when it debuted. We have seen this already with Kepler.

            • Freon
            • 5 years ago

            I’ll hold out at least a bit of hope for improvements in driver updates.

            • Klimax
            • 5 years ago

            Not that optimistic. Most likely per-game tweaks will be required, which might be a bit expensive. (Also, they push Mantle. That creates some conflicts.)

            (Sidenote: Voting here is funny. Just the appearance of being mildly critical or neutral will get you downvotes. My post just above already got -4, yet nobody bothered to even reply. Looks like inconvenient facts are hated. Yet… they will not go away… Funny.)

            • Freon
            • 5 years ago

            I would have to figure the compression results are fairly random, based on the specifics of the benchmark loop and possibly not all that related to the game. When you whip side to side with your mouse (pitch or yaw at a high rate), there could be some really bad cases in almost any game for temporal lossless compression, as the entire image changes drastically. I wonder if this is less an issue with games than with the specifics of the benchmark loop and how the image just happens to change frame to frame. Games with motion blur might actually fare better, though, as that could hide temporal changes to some extent.
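            Freon’s mouse-whip scenario can be illustrated with a toy temporal delta scheme: XOR the new frame against the previous one, then losslessly compress the delta. This is only a sketch of that general idea, not AMD’s actual (undisclosed) algorithm:

```python
import random
import zlib

random.seed(1)
FRAME_BYTES = 256 * 256

def delta_ratio(prev: bytes, cur: bytes) -> float:
    """Compressed size of the frame-to-frame delta, relative to the raw frame."""
    delta = bytes(a ^ b for a, b in zip(prev, cur))
    return len(zlib.compress(delta)) / len(cur)

base = bytes(random.randrange(256) for _ in range(FRAME_BYTES))

# Small temporal change: only a small patch of the frame differs.
patched = bytearray(base)
for i in range(256):
    patched[i] ^= 0xFF

# Whipping the mouse around: essentially every pixel changes.
whipped = bytes(random.randrange(256) for _ in range(FRAME_BYTES))

print(f"small change: delta compresses to {delta_ratio(base, bytes(patched)):.1%}")
print(f"whole-frame change: delta compresses to {delta_ratio(base, whipped):.1%}")
```

            When nearly every pixel changes, the delta is as incompressible as the frame itself, which is exactly the worst case for a bandwidth budget sized around the typical frame.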

            And indeed, voting has all but ruined the article comments experience for me, here. I hope Scott et al. come around some day and just remove it. It’s only encouraging passive aggressive behavior on discussions that should be had with facts, not vote buttons. It’s really abhorrent and needs to go. The sooner the better.

            • swaaye
            • 5 years ago

            I think the voting system should go away. It doesn’t improve the quality of discussion whatsoever.

            • JustAnEngineer
            • 5 years ago

            +1

            • Klimax
            • 5 years ago

            😀

        • UnfriendlyFire
        • 5 years ago

        I remember converting a 2 GB .WAV video to a 200 MB .MP4. There was no noticeable video/audio quality difference, but only one of them was accepted by Facebook for upload because of its size.

          • Freon
          • 5 years ago

          Alt-PrtScn your browser right now and save it as a PNG. Now open a similarly sized photo of an outdoor scene (forest, sunset, etc.) and save that as a PNG.

          There will be a *massive* difference in file size between the two, because PNG is lossless compression, which does very well with vector-like images with lots of areas of flat color (like this webpage with its simple white background, Excel spreadsheets, etc.) vs. complex data (an image of the outdoors, gradients, with almost no two adjacent pixels equal in color).

          Save the two as JPG and they’ll be (more) similarly sized, but the JPG of the browser capture will have artifacts.

        • dragontamer5788
        • 5 years ago

        More like because they use compression [b<]and[/b<] also reduced memory bandwidth by 33%. Hopefully the R9 285X has compression AND the 384-bit memory interface, so we can get the best of both worlds.

        • Jason181
        • 5 years ago

        I just don’t see that being a terrible problem. Corner cases are so called for a reason, and I just don’t think they’re likely to be noticeable above the ambient hitches that sometimes occur regardless of compression. Since it’s a higher-end card, the framerate is likely to be higher during gameplay, which works to temporal compression’s advantage.

        The amount of bandwidth video cards have now is staggering, and I don’t see it as a primary bottleneck much anymore. It used to be much more so, and I see this as a solution that more than addresses the shortfall in bandwidth, because I don’t think this was a bandwidth-starved part to begin with.

          • Klimax
          • 5 years ago

          A corner case might be just 1% of frames, but it ruins the experience all the same. Spread those frames evenly among the rest and they will kill it. We have seen this quite often since people started to pay attention to microstuttering and co.

    • jihadjoe
    • 5 years ago

    PW was right. This definitely should have been called the 275.

    • swaaye
    • 5 years ago

    It looks like a solid refinement, though I think this review was a little more excited about the results than I am. It’s a shame that manufacturing advances come so slowly these days. Imagine if this was also on 16/20nm…

      • HisDivineOrder
      • 5 years ago

      This article reads like somebody who thought they were getting nothing but the same, found out they got a minor update, and was excited that a minor update was a change at all. Joygasming, the article is glad not to be another, “Yeah, this is the same card but worse” article.

      That is, AMD won points for playing it low key until the moment of truth and suddenly instead of being cheaper and sucking a lot, it was not much cheaper but also didn’t suck in some things.

      Typical political strategy. If you think you might lose, downplay your chances and upsell the competition so if you win it’s a real win. “How could you possibly win!?” kinda win. Etc.

      AMD, the Spin Company.

        • Damage
        • 5 years ago

        A 40% savings in memory bandwidth that allows a 172GB/s card to perform nearly as well as a 288GB/s R9 280X card is a very significant innovation.

        Seeing a gimpy Tonga whip a full-blown 290X in geometry throughput ought to give anyone who appreciates GPU tech warm thoughts, too.

        The R9 285 is a solid product at an OK price, but Tonga is really interesting. With this tech, AMD could deliver a lot more performance for the money when needed. And they can maybe compete with Maxwell on all fronts.

        What AMD did not do here is spin. Don’t dismiss the notable new tech just because they didn’t call it AMD DeltaCrunch Technology and make a Ruby demo about it.
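        Damage’s arithmetic can be sanity-checked straight from the spec table at the top of the review (the table’s 256-bit/5.5 GT/s figures give 176 GB/s; the 172 GB/s in the comment presumably reflects the tested card’s exact memory clock):

```python
def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_gts: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times transfer rate."""
    return bus_width_bits / 8 * transfer_rate_gts

r9_285 = peak_bandwidth_gbs(256, 5.5)    # 176.0 GB/s
r9_280x = peak_bandwidth_gbs(384, 6.0)   # 288.0 GB/s

# To keep pace with the 280X, the 285 has to get by on roughly 39% less
# raw bandwidth -- in line with AMD's claimed ~40% savings from compression.
savings = 1 - r9_285 / r9_280x
print(f"{r9_285:.0f} GB/s vs {r9_280x:.0f} GB/s: {savings:.0%} less raw bandwidth")
```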

          • swaaye
          • 5 years ago

          Yeah but think of R600 vs RV670 vs RV730. R600 simply had far more bandwidth than it needed.

          The synthetic results are amazing indeed, but they still don’t make the R9 285 worthy of its name, considering it’s slower than the 280X in games most of the time. Power consumption is certainly not pretty, either.

          There’s no doubt that AMD needs Tonga to be an improvement. The Hawaii and Tahiti GCN tech doesn’t look likely to do well against a big Maxwell chip. It’s going to be another interesting GPU release cycle!

          • MathMan
          • 5 years ago

          Scott,
          Damien at hardware.fr downclocked the memory of an R9 280 to exactly the same memory-controller bandwidth as the R9 285, keeping the same core clock.

          With that done, it’s clear the compression is decidedly less impressive than it first seems. Tonga in that configuration performs at most 15% faster than the 280, and that’s mostly in configurations with lots of tessellation.

          So the performance increase due to the compression alone is likely even less.

          Tonga is a nice chip (used in a boring product), but let’s not go all crazy about the compression.

            • Jason181
            • 5 years ago

            I’d say 15% is actually very impressive. How much extra memory bandwidth would you need to produce a 15% performance improvement? I think the technical term is A LOT.

          • Jason181
          • 5 years ago

          It would be interesting if they could somehow use that framebuffer compression on OpenCL datasets, to see what kind of performance increase it might provide. Since it’s delta-based, I suppose you’d be taking deltas of the data at intervals? Dunno if it’s even possible, but it might be ideal for certain applications.

    • the
    • 5 years ago

    This was a bit of a surprise. Based upon the rumors and leaked specs, I was expecting this card to come in below the R9 280 but it is relatively close to the R9 280X.

    The only negative is a relative wash in power consumption: looks like nothing changed. I’m not predicting the massive power savings everyone else is with nVidia’s Maxwell generation, but I think it is safe to say that nVidia will have the lower-wattage card at a given performance level. (GM108 doesn’t show much architectural performance improvement over Kepler, so I see the power-efficiency gains being eroded by higher clocks/voltages to attain more performance.)

    With regards to the frame buffer compression Tonga implements, is there any noticeable change in VRAM usage during gaming? I wouldn’t expect a massive savings as Tonga’s changes don’t reportedly touch raw texture data.

    [quote<]"Look at the transistor count, though. Tonga packs in roughly five billion transistors, while Tahiti is less complex, at 4.3 billion. Both chips are made at TSMC on a 28-nm process. How is it that Tonga's not quite as large as Tahiti yet has more transistors?"[/quote<]

    One possibility is that AMD used the GDDR5 memory controllers from Hawaii. Remember that [url=https://techreport.com/review/25509/amd-radeon-r9-290x-graphics-card-reviewed<]AMD claims Hawaii's 512-bit memory interface occupies 20% less die area than Tahiti's 384-bit interface.[/url<] Such a reduction would allow Tonga to fit a 384-bit-wide interface in the same area as the 256-bit-wide controller in the Radeon 7850. The only downside is that GDDR5 clock speeds are not as high. The new compression scheme offsets the lack of bandwidth rather well.

    Really, what I'd like to see now is a refresh of the R9 290X (R9 294?) that incorporates Tonga's new features and better memory controller. A 512-bit-wide bus at 7 GHz plus this new compression scheme would go a long way toward enabling playable 4K from a single card. Though I suspect this wishful thinking will come in the form of the Radeon R9 300 series next year.

    In fact, this has me thinking that Tonga will be rebranded as the R9 370 lineup and could genuinely have only a 256-bit-wide memory bus. That'd leave the R9 380 (384-bit wide) and R9 390 (512-bit wide) as options when TSMC gets their 20 nm/16FFT process in order. Though it may also be feasible to start using hyper memory cubes at the high end next year.

      • Jason181
      • 5 years ago

      Hyper memory cubes. Were you testing to see if anyone read your entire post?

      I did.

        • the
        • 5 years ago

        [url=http://en.wikipedia.org/wiki/Hybrid_Memory_Cube<]We are the Borg. Your biological and technological distinctiveness will be added to our own. Resistance is futile.[/url<]

          • travbrad
          • 5 years ago

          Hi Mr. Borg. Can you ask Locutus if he can hurry up and absorb some 20nm GPU manufacturing? Surely there’s a planet somewhere with 20nm GPUs.

      • Freon
      • 5 years ago

      Worth noting 384 and 512 bit memory interfaces can still potentially increase the expense of the PCB. It’s not just the die space.

    • NarwhaleAu
    • 5 years ago

    I love this time of year. More!

      • NarwhaleAu
      • 5 years ago

      PS GREAT review!!!

    • ronch
    • 5 years ago

    If anything, the thing about Tonga that really moves the ball forward for AMD is better utilization of memory bandwidth. I remember reading the HD 7870 review here not too long ago, and for all the compute resources and memory bandwidth available to AMD GPUs, it seemed like Nvidia GPUs with inferior raw specs took better advantage of what they had. Most were quick to blame drivers, but I thought the hardware might be partly to blame. So with Tonga, energy efficiency may not be up there with Maxwell, but it’s good to know AMD’s engineers are still managing to move things forward.

    • ronch
    • 5 years ago

    I wish TR had reviewed a plain vanilla 285 so we’d know the least to expect from a 285 from any board partner. I’d rather know the lowest common denominator than see fattened benchmark numbers and then get something less than I expected.

      • shank15217
      • 5 years ago

      Or you could buy the MSI one.

      • DPete27
      • 5 years ago

      I think they should have at least labeled it “MSI R9-285 OC” in all the charts to reflect that this is NOT a reference card.

    • Longsdivision
    • 5 years ago

    Best I can come up with in terms of sports analogy.

    On paper, this team sux.

    But in reality, it could be the Moneyball team.

      • fredsnotdead
      • 5 years ago

      1987 MN Twins?

    • Ryhadar
    • 5 years ago

    I think the 285 could be a really good value in a few months time, unless you’re really aching for TrueAudio or gaming FreeSync support ([s<]280 and 280X will support variable refresh rates for movies and desktop[/s<] ... maybe not: [url<]https://techreport.com/news/27000/amd-only-certain-new-radeons-will-work-with-freesync-displays)[/url<] in the $250 price point. For now, the 280 and 280X still look like good values in comparison.

      • the
      • 5 years ago

      Indeed. I think launching at $249 is appropriate given the card’s performance between the 280 and 280X. However, the 256-bit-wide bus should enable the designs to slip below $200 come the holiday season and/or competition from nVidia. It wouldn’t surprise me if 4 GB models take the $249 spot in the coming months of competition.

    • Krogoth
    • 5 years ago

    Tonga is just a cheaper version of the Tahiti design that incorporates some elements from Hawaii. It happens to fit perfectly into the current mid-range line-up.

    It is akin to the jump from Cypress (HD 5870/HD 5850) to Barts (HD 6870/6850).

      • Damage
      • 5 years ago

      Yeah, nope.

        • wierdo
        • 5 years ago

        Yeah I came into the article expecting a simple semi-rebadge, was quite surprised by what’s under the hood.

        Thanks for the review, now I’m curious to see what this means for future Radeons.

      • derFunkenstein
      • 5 years ago

      Did you read the article?

        • Krogoth
        • 5 years ago

        That’s exactly what I got from the article.

        It is technically smaller than Tahiti (not by much, just more dense), but the design was clearly meant for the 20nm process, and TSMC isn’t ready yet. AMD, like Nvidia, is forced to make its next-generation stuff on the existing 28nm process.

        Tonga (R9 285) performs like it does on paper. It has almost as many resources as Tahiti, and the architectural improvements (from Hawaii) help bridge the gap. It only loses to the 280X when memory bandwidth is a factor (256-bit versus 384-bit). The 384-bit version will likely be limited by its other resources, pushing only slightly ahead of the 280X in those areas and rivaling the 770.

        The only thing going for the 285 at this time is its MSRP for its level of performance, but Nvidia is about to unleash middle Maxwell in the next fiscal quarter, and it will start a price war with its Kepler line-up to clear it out.

        For all intents and purposes, the 285 is a cheaper 280X replacement. Similar performance for a lower price point, just like the 6850/6870 did for the 5850/5870 back in the day.

          • shank15217
          • 5 years ago

          You can’t gloss over technical details by looking at a few benchmarks. There are some major architectural changes that make it far more interesting.

          • dragontamer5788
          • 5 years ago

          [quote<] It is technically smaller than Tahiti (not by much, just more dense), but the design was clearly meant to go with 20nm process but TSMC isn't ready yet. [/quote<]

          Yeah, because chips route themselves and transistors map 1-to-1 between process nodes. </sarcasm>

          [quote<] For all intends and purposes, 285 is a cheaper 280X replacement. Similar performance for a lower price point just like 6850/6870 did for 5850/5870 back in the day.[/quote<]

          No, it isn't. The R9 285 is an R9 280 replacement and is clearly inferior to the R9 280X. When compared to the R9 280, it looks relatively favorable.

          In any case, AMD is leaving the R9 280X on its roadmaps, and will likely continue to produce it until an R9 285X comes out.

      • USAFTW
      • 5 years ago

      It has around the same die area (Barts was considerably smaller than Cypress and had fewer transistors). Also, if you happen to read page two, it adds some new, not-so-minor tech.

    • jdaven
    • 5 years ago

    On the first table on the first page, you have the R9 280 at 250W.

    Videocardz.com and Wikipedia have it at 200W.

    [url<]http://videocardz.com/amd/radeon-r200/radeon-r9-280[/url<]
    [url<]http://en.wikipedia.org/wiki/List_of_AMD_graphics_processing_units#Volcanic_Islands_.28Rx_200.29_Series[/url<]

    Which is correct?

      • Damage
      • 5 years ago

      AMD specs at launch said 250W:

      [url<]https://techreport.com/news/26106/radeon-hd-7950-gets-second-life-as-radeon-r9-280[/url<]

        • jdaven
        • 5 years ago

        Seems odd that the 280 and 280X share the same TDP given that the 280X has higher core/turbo/memory clocks and higher stream processor and TMU counts. Oh well.

          • the
          • 5 years ago

          There is likely a binning factor at play here. Essentially, there were likely a handful of chips that could have been 280Xs but, due to high power consumption (>300W?), were dropped to 280 speeds. The resulting TDP of this bin was likely set equal to the 280X target. As yields improved (remember that Tahiti launched in December of 2011), this binning became less important. Also of note: this bin would be ideal for overclocked Radeon 7950s and vanilla R9 280s, since those chips already passed the higher clock-speed test, just not the power test. Of course, this isn’t the only possible binning AMD could be doing.

          The other thing to note about the R9 280 is that it comes with an 8+6 pin power configuration. That implies at least 225W of power consumption; otherwise a 6+6 or a single 8-pin configuration would be used.

    • ultima_trev
    • 5 years ago

    I was looking for something to replace my ancient HD 7850 but the power consumption is way too high. Looks like Maxwell is in my future.

    Farewell, AMD/ATI. I have gone from X1300 > HD 5770 > HD 5850 > HD 6850 > HD 7850, but clearly you can no longer compete with nVidia.

      • anotherengineer
      • 5 years ago

      Um, Damage used an overclocked card. Also, I don’t think he checked the voltages against a stock OEM card to see if there was a difference. Lastly, it’s on TSMC. I know Nvidia is as well, and the one and only current Maxwell is great for power efficiency, but that doesn’t mean it will scale linearly for larger Maxwell chips.

      If you’re concerned about the cost of 50W of electricity, then maybe you should reconsider gaming??

        • superjawes
        • 5 years ago

        I have a feeling that ultima_trev is more concerned with having either a small-footprint system or a very cool/quiet one (more power used means more effort to cool it).

        That’s a legitimate concern…not something I’m overly worried about, but I can empathize with the goal.

        • Ryhadar
        • 5 years ago

        Be that as it may, the load power consumption does look disappointing here compared to Tahiti, especially considering that the 280X overtakes it in some metrics.

        It’s a slight difference in wattage, and you’re objectively getting more features for your dollar compared to the 280X, but if load power consumption is a point of concern for you, I wouldn’t discount it.

        With that said, the 280X (being a rebadged 7970 GHz) is about as mature as it can get. It will be interesting to see how much more efficient AMD can make this new architecture.

        • jihadjoe
        • 5 years ago

        > If you’re concerned about the cost of 50W of electricity then maybe you should re-consider gaming??

        Coin mining!

        • maxxcool
        • 5 years ago

        agree.. <=50 watts == meh

      • dragosmp
      • 5 years ago

      Are you sure about this? “HD 5850 > HD 6850”

        • ultima_trev
        • 5 years ago

        Absolutely. The 5850 was slightly faster but the power consumption was too much for me so I was all too happy to replace it. I did briefly have a GTX 470 before getting the HD 7850 and was all too glad to ditch that as well.

        Superjawes is correct, I am only interested in SFF PC gaming. Full, even mid towers do not work for me.

      • ptsant
      • 5 years ago

      [quote<] I was looking for something to replace my ancient HD 7850 but the power consumption is way too high. Looks like Maxwell is in my future. [/quote<]

      The 285 is quieter and gives more FPS for the dollar. If power consumption is all Maxwell has to offer, I don't think it's going to be enough.

        • Firestarter
        • 5 years ago

        Better performance per watt will allow nvidia to create one monster of a GPU that will be the fastest thing ever, but as usual they’ll probably cost an arm and a leg. Their upcoming mid-range parts will be the most interesting.

        • Bensam123
        • 5 years ago

        Aye… power consumption takes a back seat to performance in the desktop market.

      • Bensam123
      • 5 years ago

      Power efficiency matters why?

      How many hours a day are you going to game? What do you think the total cost will be? Does it even matter?

      People pick the weirdest shit to base decisions on. If this was a laptop, definitely. This isn’t a laptop card.

        • HisDivineOrder
        • 5 years ago

        Perhaps he lives in California? Perhaps Tahiti was already hot enough compared to nVidia’s GPU’s and now if this is worse, he’s checking out? Perhaps Maxwell is going to show up and do to Tonga what Kepler did to Tahiti with regards to performance per watt?

        If you’ll recall, AMD was LOVING the performance per watt game of doing marginally better than the competition at less power right up till the moment when nVidia showed up with the Geforce 680. Suddenly, they got destroyed in it and AMD switched gears, along with all the fanboys who had been saying, “Power efficiency matters SO MUCH!” moving to arguments of, “Power efficiency matters why?” and vice-versa.

        Brush it aside if you like, but power efficiency matters: wasting power is pointless when it’s unnecessary.

          • Bensam123
          • 5 years ago

          Power efficiency for the sake of power efficiency doesn’t matter in the PC world unless you’re talking about mobile products, where it equates to battery life. But people would be gaming on an outlet anyway, so it doesn’t matter that much, all things considered.

          20W of energy, or even 100W, doesn’t account for much.

          If you think I’m an argument turncoat, most people who have seen me post here can attest that I’m not, as far as efficiency goes, especially when you start talking about cheaper products that produce similar performance. Efficiency is a secondary attribute as far as weighting goes; price and performance definitely come first. So if you have the same price and the same performance, then things like that start mattering.

      • UnfriendlyFire
      • 5 years ago

      Perhaps you should replace a few incandescent bulbs with LEDs.

      Trust me, going from 60W to less than 10W will provide bigger power savings than fiddling around with desktop components.

      Unless you’re using an OCed Pentium 4 or first-gen Bulldozer with a Fermi GPU…

      • Freon
      • 5 years ago

      Please show your math.

      • Kaleid
      • 5 years ago

      Seems it is better for you to wait for a die-shrink.

    • RdVi
    • 5 years ago

    I can’t say I expected that. I was ready to see the opposite – low power usage but average performance. The power use isn’t bad considering the die size I guess (another surprise). I can’t wait to see if the 285x can actually compete with nvidia’s 970/980. I was expecting it to not stand a chance before reading this review.

    • Pantsu
    • 5 years ago

    The bandwidth efficiency is certainly nice to see, but I was equally surprised by the size of the chip and the lack of power efficiency improvements. I guess it would make sense if the full Tonga ends up with a 384-bit bus, though I’d assume that would guzzle even more power.

    Ultimately while this is a decent refresh for ye olde 7950, I’m more interested in how AMD will bring the improvements to high end. I won’t be upgrading my 280X CF before I get similar performance with a single GPU, unless I really need to update for a freesync monitor before that happens.

    • superjawes
    • 5 years ago

    Wow. That is just a straight-up good deal.

    I would hold off on this card, though. If the 800 series debuts this month, I suspect that 770-level performance will drop to around the 760’s price.

    Still, it’s very nice to see AMD offering strong competition on the graphics front.

    • derFunkenstein
    • 5 years ago

    Wow, makes my shiny new (relatively) GTX 760 look positively ancient for the $260 I spent on it at the start of summer. A nice midrange bump by AMD here.

      • September
      • 5 years ago

      You knew you should have splurged for the GTX 770 before you bought it!

        • derFunkenstein
        • 5 years ago

        Yeah, I guess I kinda did, but holy crap it was $80 more. Couldn’t talk myself into it.

        • Bensam123
        • 5 years ago

        He could’ve also bought a 280x for the same price.

          • derFunkenstein
          • 5 years ago

          OK so I’ve had the card longer than I thought:

          [url<]https://techreport.com/forums/viewtopic.php?f=12&t=81496&p=1201364&hilit=GTX+760#p1201364[/url<]

          In early April (early spring, not the start of summer), we were still in the middle of the crypto mining bubble. That’s my bad, because I didn’t think I’d had the card that long. I knew Radeon prices were through the roof, though. The card I was comparing the 760 to was the 270.

            • Bensam123
            • 5 years ago

            Yeah, a 280x in the springtime was like $400.

        • itachi
        • 5 years ago

        And yeah, when you see the 770 with 2GB of VRAM, I’m personally more tempted by AMD’s 3GB cards, or was. I had ordered an inno3d 770, kind of a “rushed buy”; I didn’t really think VRAM would be so much of a limiting factor. And trust me, whatever people tell me, I did try it, on BF4 runs maxed out as a stress test, and it definitely reaches 2GB+ at 1200p. Some people claim it doesn’t; could it be because of the few extra pixels I have? I don’t know, but I do know that with the 770 I had massive spikes down to like 15 fps. I think that’s when I hit a VRAM wall. Not sure, though, but I think so.

        So I simply returned the card and got an HD 7950 on eBay for $100. Trust me, I’ve been very happy with it, running a mix of high and medium settings on BF4; in the hardest maps like Pearl Market it doesn’t go below 40 fps, and I have OC’ed it too, btw :). Also have an FX-8320 @ 4.7.

    • USAFTW
    • 5 years ago

    I was pleasantly surprised to see how Tonga did in the synthetic tests, as well as Watch_Dogs.
    Would be nice to see how Tonga does with all of its functionality enabled and an unnerfed clock speed.
    This might just make it into my next build, whenever that comes. This or the 285x.

    • Forge
    • 5 years ago

    MSI, I can has? I am oddly stimulated by this new development.

    • Dezeer
    • 5 years ago

    Nice review, as always.

    I was expecting power usage a lot lower than the 280X’s, not above it.

    I wonder how many of those games, and of the upcoming ones, are limited by the 2GB of VRAM at 1440p, and how much 4GB would help in practice.

      • Krogoth
      • 5 years ago

      28nm process is the bottleneck at this point.

      There’s no way that 285 could significantly reduce its loaded power consumption over its spiritual predecessor when it is almost as large and far more dense.

        • pranav0091
        • 5 years ago

        You seem to be forgetting about the 750 Ti, which did the exact same thing. It was even larger than the chip it replaced, right?

    • DragonDaddyBear
    • 5 years ago

    Wow, AMD didn’t completely disappoint. I shared your lack of enthusiasm when the card rumors were floating around.

    I would like to see how two 285s with 4GB of RAM perform in CrossFire, if you get the time. I’ve been noticing multi-GPU reviews not trashing the idea anymore. With XDMA, at $250 each it may be a pretty good value.
