AMD’s Radeon R9 280X and 270X graphics cards

Graphics cards don’t tend to have much of a shelf life. The two major GPU suppliers have typically cranked out a new generation of chips on a more or less yearly cadence. For instance, AMD introduced the Radeon HD 4870 in June of 2008. Although the chip was unquestionably a success, it was scrapped and replaced by the first DirectX 11 GPU, the Radeon HD 5870, in September of 2009. Then came the Radeon HD 6970 in December of 2010. The Radeon HD 7970 followed one year later, at the end of 2011, along with the rest of the HD 7000 series. Each successive generation outpaced the one before, and each brought major architectural enhancements.

After that, a funny thing happened: not much.

In mid-2012, AMD introduced the Radeon HD 7970 GHz Edition, with slightly higher clock speeds than the original 7970. Late 2012 came and went without much fanfare. AMD talked about delivering a new lineup of GPUs code-named Sea Islands in early 2013, but it later backtracked, claiming there wasn’t any such plan after all—or at least not like everyone seemed to think. Instead, the firm introduced the Radeon HD 8000 series for large PC makers, made entirely of the same chips as the 7000 series. Consumers were spared the burden of that re-badging exercise, fortunately.

Finally, a couple of weeks ago at a press event that we live blogged, AMD revealed the first details of a next-gen high-end GPU, code-named “Hawaii.” This big, new GPU sports some novel features, including twice the geometry performance of the prior generation and an integrated DSP block for the acceleration of gaming audio. With over six billion transistors, 4GB of memory, and upwards of 300 GB/s of memory bandwidth, the forthcoming Hawaii-based Radeon R9 290X should provide some much-needed new blood at the top of AMD’s lineup.

To go with the 290X, the firm also announced a top-to-bottom refresh of its Radeon offerings, like so:

Yep, the Radeon HD naming convention is gone, which is probably appropriate since it was running out of numbers—and since “HD” doesn’t feel so shiny and impressive anymore.

What you may not realize by looking at all of those reconfigured numbers and letters is that the bulk of the lineup—everything shown above except for the 290X—is based on the same chips as the Radeon HD 7000 series. They’ve just been given a few tweaks, renamed, and, at least in one case, dramatically reduced in price in order to make room for Hawaii-based cards.

I’d love to tell you more about the Hawaii GPU, but the time isn’t quite right yet. Instead, those re-badged offerings are the reason we’re gathered here today.

Oh, come on, if you sat through the GeForce GTX 760 and 770, surely you can deal with this. That’s what I keep telling myself, at least.

AMD is introducing a host of graphics cards today, including the lowly R7 240 and 250. Cyril is reviewing arguably the most interesting of the new Radeons, the R7 260X. The 260X is a lower-end card based on the same chip as the Radeon HD 7790, but its Bonaire silicon has actually been hiding several features from the same technology generation as the big Hawaii chip. My task is to look at the two higher-end cards, the R9 270X and 280X.

R to the ninth

Here’s the Radeon R9 270X 2GB reference card from AMD, with a handsome cooling shroud design that looks a lot like the R9 290X’s. The 270X is based on the 28-nm “Pitcairn” graphics chip, just like the Radeon HD 7870. This chip has a 256-bit memory interface and a solid collection of graphics resources, including 1280 shader processors. Here are the key specs for the 270X and the card it replaces:

                     GPU base   GPU boost   Shader       Textures    ROP       Memory     Memory
                     clock      clock       processors   filtered/   pixels/   transfer   interface
                     (MHz)      (MHz)                    clock       clock     rate       width (bits)
Radeon HD 7870 GHz   1000       --          1280         80          32        4.8 GT/s   256
Radeon R9 270X       ??         1050        1280         80          32        5.6 GT/s   256

Yeah, I put some question marks into a table. That bugs me, but I don’t know what else to do. You see, AMD has decided that it will only disclose a single clock speed, the peak or “boost” clock, for its GPUs going forward. In the case of its Hawaii GPU, that decision makes some sense, because a new power-saving algorithm will likely make Hawaii’s clock frequency, well, difficult to summarize.

The R9 270X shouldn’t be so complicated, though. AMD tells me the 270X has Boost dynamic voltage and frequency scaling tech, which in this case means two clocks: a base clock and a boost one. Usually they’re not too far apart. On the Radeon HD 7950 Boost, for instance, the base speed is 850MHz and the boost speed is 925MHz. Thing is, as far as I can tell, the 270X pretty much just runs at 1050MHz. It stayed steady at that speed during our power and noise testing, according to our logs from GPU-Z. So… whatever. It’s 1050 frickin’ megahertz, 50MHz faster than the Radeon HD 7870.

The bigger change is the memory clock, which has been bumped up from 4.8 GT/s to 5.6 GT/s, a nearly 17% increase in memory bandwidth. That change ought to translate pretty directly into higher performance in bandwidth-limited cases.

I know, right? Shivers.
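If you want to sanity-check those bandwidth numbers, the relationship is simple: peak bandwidth is the transfer rate multiplied by the interface width. Here's that arithmetic as a quick Python sketch (the helper function is ours, purely for illustration):

```python
# Peak memory bandwidth = transfer rate (GT/s) x interface width (bits) / 8 bits per byte
def peak_bandwidth_gb_s(transfer_rate_gt_s, bus_width_bits):
    """Return peak memory bandwidth in GB/s."""
    return transfer_rate_gt_s * bus_width_bits / 8

print(peak_bandwidth_gb_s(4.8, 256))  # Radeon HD 7870: 153.6 GB/s
print(peak_bandwidth_gb_s(5.6, 256))  # Radeon R9 270X: 179.2 GB/s, ~17% more
```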

Both the 270X and the 280X are slated to arrive at online stores in a few days, on October 11. AMD says the R9 270X 2GB will list for $199.99. Some board makers may offer 4GB variants of the card, as well, and those will start at $229.99. Those prices may be a bit lower than where the Radeon HD 7870 has been recently. You can find some 7870 cards for less right now, but AMD says their availability is limited. Presumably, they’re being cleared out in favor of the new hotness that is the 270X.

One thing that may make grabbing a Radeon HD 7000-series card on clearance more attractive is the Never Settle Forever bundle that lets you pick two or three titles from a list of pretty decent games. The new R-series Radeons won’t take part in this program.

                       Peak pixel    Peak bilinear   Peak shader   Peak            Memory
                       fill rate     filtering       arithmetic    rasterization   bandwidth
                       (Gpixels/s)   int8/fp16       rate          rate            (GB/s)
                                     (Gtexels/s)     (tflops)      (Gtris/s)
Radeon HD 7870 GHz     32            80/40           2.6           2.0             154
Radeon R9 270X         34            84/42           2.7           2.1             179
Radeon HD 7950 Boost   30            104/52          3.3           1.9             240
GeForce GTX 660        25            83/83           2.0           3.1             144
GeForce GTX 760        33            99/99           2.4           3.1 or 4.1      192
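Those peak figures aren't measured, by the way; they fall straight out of the unit counts and the boost clock. Here's a Python sketch deriving the 270X's row, assuming the usual GCN accounting (one pixel per ROP per clock, one int8 bilinear texel per texture unit per clock with fp16 at half rate, two flops per shader ALU per clock, and Pitcairn's two triangles per clock):

```python
# Derive the R9 270X's peak rates from its unit counts and 1050MHz boost clock
clock_ghz = 1.05
rops, texunits, shaders, tris_per_clock = 32, 80, 1280, 2

print(f"Pixel fill:    {rops * clock_ghz:.1f} Gpixels/s")             # 33.6, "34" in the table
print(f"Filtering:     {texunits * clock_ghz:.0f}/{texunits * clock_ghz / 2:.0f} Gtexels/s")  # 84/42
print(f"Arithmetic:    {shaders * 2 * clock_ghz / 1000:.1f} tflops")  # 2.7
print(f"Rasterization: {tris_per_clock * clock_ghz:.1f} Gtris/s")     # 2.1
```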

The 270X’s most direct competition from Nvidia is the GeForce GTX 660, which also lists for $199.99. As you can see in the table above, the 270X has substantially higher rates of ROP throughput, shader arithmetic, and memory transfers than the GTX 660. The 660 is pretty seriously outgunned. Nvidia does have some recourse, if it wants. The GeForce GTX 760 is a much closer match for the 270X, but it’s currently priced at $249.99. Perhaps we’ll see a price cut to counter the new Radeon? We’ve tested both the 660 and 760 against the 270X, just in case.

The Radeon R9 280X

                     GPU base   GPU boost   Shader       Textures    ROP       Memory     Memory
                     clock      clock       processors   filtered/   pixels/   transfer   interface
                     (MHz)      (MHz)                    clock       clock     rate       width (bits)
Radeon HD 7970       925        --          2048         128         32        5.5 GT/s   384
Radeon HD 7970 GHz   1000       1050        2048         128         32        6 GT/s     384
Radeon R9 280X       ??         1000        2048         128         32        6 GT/s     384

The formula for the Radeon R9 280X is simple: it’s very much like the Radeon HD 7970 GHz Edition—the boost clock is actually 50MHz slower—but with a price reduction to $299.99. That’s the most dramatic news of the day, really, in my book. Yeah, the game bundle is gone, but this is a real-money price drop of about a hundred bucks.

                     Peak pixel    Peak bilinear   Peak shader   Peak            Memory
                     fill rate     filtering       arithmetic    rasterization   bandwidth
                     (Gpixels/s)   int8/fp16       rate          rate            (GB/s)
                                   (Gtexels/s)     (tflops)      (Gtris/s)
Radeon HD 7970       30            118/59          3.8           1.9             264
Radeon HD 7970 GHz   34            134/67          4.3           2.1             288
Radeon R9 280X       32            128/64          4.1           2.0             288
GeForce GTX 770      35            139/139         3.3           4.3             224
GeForce GTX 780      43            173/173         4.2           3.6 or 4.5      288

The 280X competes most directly with the GeForce GTX 770, in terms of key graphics rates, but the GeForce is still selling for around 400 bucks. You can imagine how that’s about to play out.

We have a couple of examples of the R9 280X on hand. Pictured above is a stock-clocked version from XFX with a snazzy-looking dual-fan cooler. Like the 7970, 280X cards have 3GB of memory onboard, making them a little more future-proof than the 2GB competition.

This is Asus’ R9 280X DirectCU II TOP, also with a fancy cooler. Since this card was first to arrive in Damage Labs, it was the one on which we focused most of our testing. This baby is clocked up a little bit from stock, with a 1070MHz GPU frequency and 6.4 GT/s memory. You’ll pay for extra juice, though—Asus says the card will list for $309.99 at online stores.

I should note a particular feature of both of these 280X cards: their coolers stick up a long way above the top of the retention bracket at the back of the card. The shroud on the Asus card protrudes about 1.25″ past the bracket, and the heatpipe is another quarter inch taller than that. Clearance may be an issue in some PC enclosures.

Test notes

Ok, look, I just couldn’t do it, all right? I couldn’t bring myself to spend hours testing the 270X and 280X against their not-quite-identical counterparts in the Radeon HD 7000 series in order to show you the small-percentage performance differences involved. Testing like we do takes a lot of work, and we already know the stakes here are pretty darned low.

Rather than look at incredibly minor differences under the microscope, I chose to test the new Radeons against the direct competitors from Nvidia. I’ve also tested against a couple of much older Radeons, the HD 5870 and 6970, in order to show would-be upgraders what they’re missing. I think that will make for a more interesting comparison.

                 Peak pixel    Peak bilinear   Peak shader   Peak            Memory
                 fill rate     filtering       arithmetic    rasterization   bandwidth
                 (Gpixels/s)   int8/fp16       rate          rate            (GB/s)
                               (Gtexels/s)     (tflops)      (Gtris/s)
Radeon HD 5870   27            68/34           2.7           0.9             154
Radeon HD 6970   28            85/43           2.7           1.8             176
Radeon R9 270X   34            84/42           2.7           2.1             179
Radeon R9 280X   32            128/64          4.1           2.0             288

There’s a case to be made that the Pitcairn chip in the R9 270X is pretty much just a newer, smaller version of two legendary Radeons of yore, the HD 5870 and HD 6970. You can see how closely they match up in nearly every key category except for rasterization rate. Thing is, the 270X’s GCN architecture ought to be more efficient, allowing it to achieve higher performance despite similar theoretical peak specs. I’m curious to see how this contest plays out.

The performance results you’ll see on the following pages come from capturing and analyzing the rendering times for every single frame of animation during each test run. For an intro to our frame-time-based testing methods and an explanation of why they’re helpful, you can start here. Please note that, for this review, we’re only reporting results from the FCAT tools developed by Nvidia. We usually also report results from Fraps, since both tools are needed to capture a full picture of animation smoothness. However, testing with both tools can be time-consuming, and our window for work on this review was fairly small. We think sharing just the data from FCAT should suffice for this review, which is generally about incremental differences between video cards based on familiar chips.
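As a rough illustration of how a run's worth of frame times becomes the two headline numbers in our charts, here's a minimal Python sketch with made-up data:

```python
import numpy as np

def summarize_run(frame_times_ms):
    """Reduce one test run's per-frame render times to our two headline metrics."""
    avg_fps = 1000.0 / frame_times_ms.mean()    # the traditional FPS average
    p99_ms = np.percentile(frame_times_ms, 99)  # 99% of frames finish this quickly or quicker
    return avg_fps, p99_ms

# Hypothetical run: mostly ~18ms frames with a couple of spikes
times = np.array([18.2, 17.9, 18.5, 45.0, 18.1, 18.3, 19.0, 18.2, 18.4, 62.0])
avg_fps, p99_ms = summarize_run(times)
print(f"{avg_fps:.1f} FPS average, {p99_ms:.1f} ms 99th percentile")
```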

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor          Core i7-3820
Motherboard        Gigabyte X79-UD3
Chipset            Intel X79 Express
Memory size        16GB (4 DIMMs)
Memory type        Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings     9-9-9-24 1T
Chipset drivers    INF update 9.2.3.1023, Rapid Storage Technology Enterprise 3.5.1.1009
Audio              Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive         OCZ Deneva 2 240GB SATA
Power supply       Corsair AX850
OS                 Windows 7 Service Pack 1
Service Pack 1
                  Driver                GPU base     GPU boost   Memory   Memory
                  revision              core clock   clock       clock    size
                                        (MHz)        (MHz)       (MHz)    (MB)
GeForce GTX 660   GeForce 331.40 beta   980          1033        1502     2048
GeForce GTX 760   GeForce 331.40 beta   980          1033        1502     2048
GeForce GTX 770   GeForce 331.40 beta   1046         1085        1753     2048
Radeon HD 5870    Catalyst 13.11 beta   850          --          1200     2048
Radeon HD 6970    Catalyst 13.11 beta   890          --          1375     2048
Radeon R9 270X    Catalyst 13.11 beta   ?            1050        1400     2048
Radeon R9 280X    Catalyst 13.11 beta   ?            1070        1600     3072

Thanks to Intel, Corsair, Gigabyte, and OCZ for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Also, our FCAT video capture and analysis rig has some pretty demanding storage requirements. For it, Corsair has provided four 256GB Neutron SSDs, which we’ve assembled into a RAID 0 array for our primary capture storage device. When that array fills up, we copy the captured videos to our RAID 1 array, comprised of a pair of 4TB Black hard drives provided by WD.
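To give you a sense of why, consider the sustained write rate for uncompressed capture. A rough estimate, assuming 24-bit frames at the 2560x1440 test resolution and a 60Hz capture rate (the actual FCAT capture format may differ):

```python
# Rough sustained write rate for uncompressed video capture
width, height, bytes_per_pixel, capture_hz = 2560, 1440, 3, 60
mb_per_s = width * height * bytes_per_pixel * capture_hz / 1e6
print(f"~{mb_per_s:.0f} MB/s")  # ~664 MB/s, beyond what a single SATA SSD can sustain
```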

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Crysis 3


Click through the buttons above to see frame-by-frame results from a single test run for each of the graphics cards. You can see how there are occasional spikes on each of the cards. They tend to happen at the very beginning of each test run and a couple of times later when I’m exploding dudes with dynamite arrows like the Duke boys. Yee-haw.

Happily, the average FPS results and our latency-focused 99th-percentile frame time metric tend to agree. That’s a good indicator that we aren’t seeing any major problems with frame delivery. Even the Radeon HD 5870’s weak showing is a matter of general slowness rather than frame-delivery hiccups.

I’ve zoomed in on the very tail of the latency curve in order to show you a problem with the Radeon HD 6970, which is that it very occasionally runs into spots where rendering the next frame takes a quarter of a second or more. This isn’t a horrible problem because it doesn’t happen too often, but it does affect each test run on the 6970, and none of the other cards are affected.


We can add up the time spent working on frames beyond several thresholds in order to get a sense of “badness”—of those episodes where rendering took longer than we’d like. The first threshold is 50 milliseconds, which corresponds to three refresh cycles on a 60Hz display. We expect that you’ll begin to notice an interruption in animation smoothness if it takes longer than 50 ms to render a frame.
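For the curious, here's roughly how that accounting works, sketched in Python with made-up frame times. Note that we count only the portion of each slow frame that spills past the threshold, not the whole frame time:

```python
import numpy as np

def time_beyond(frame_times_ms, threshold_ms):
    """Total time (ms) spent past the threshold across all long-running frames."""
    over = frame_times_ms[frame_times_ms > threshold_ms]
    return float((over - threshold_ms).sum())

times = np.array([18.2, 17.9, 18.5, 45.0, 18.1, 62.0])  # hypothetical run
for threshold in (50.0, 33.3, 16.7):                    # the thresholds we use
    print(f"time beyond {threshold}ms: {time_beyond(times, threshold):.1f}ms")
```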

As you can see, the new Radeons perform reasonably well in each metric. The R9 270X is a bit quicker than the GeForce GTX 660 overall but a bit slower than the GTX 760. The R9 280X just barely outdoes the GeForce GTX 770.

Meanwhile, the 270X proves to be more efficient than the older Radeons with similar graphics rates. The 6970 isn’t far behind, but the 5870 is just overmatched, even though we’re using a 2GB version of that card.

Battlefield 4 beta

Testing a multiplayer-only game online can be fraught with problems. Most notably, I tend to get killed by an instant headshot about six times in a row before I can finish a test run. Repeatability is a challenge. In order to sidestep that problem, I tried out the new BF4 beta on an empty server, using the same map and route each time. Real games will likely be more challenging to render than our simple test loop, but this at least gets us started with BF4 performance testing. I probably set the image quality settings too high, too, as you’ll see.

Oh, also, because of a problem with the FCAT overlay, I had to test this game with Fraps.



Like I said, these image quality settings are a little too strenuous. There are frame time spikes to over 100 ms even on the fastest cards, which isn’t what you’d want for fluid gameplay, particularly in an online shooter. Still, we do have a nice sorting of the cards, with some good news for the new Radeons. The R9 280X is fastest overall, and the R9 270X turns in a lower 99th-percentile frame time than the GeForce GTX 760.

Far Cry 3: Blood Dragon



Despite average frame rates well below 60 FPS, the four fastest cards deliver smooth animation in this test scenario, with nearly every frame generated in 33 milliseconds or less. That’s not bad. The GeForce GTX 660 struggles, though, with general slowness. You can see what I mean about the 270X having it outclassed.

GRID 2


This looks like the same Codemasters engine we’ve seen in a string of DiRT games, back for one more round. We decided not to enable the special “forward+” lighting path developed by AMD, since the performance hit is pretty serious, inordinately so on GeForces. Other than that, though, we have nearly everything cranked to the highest quality level.



Even the old Radeon HD 5870 handles GRID 2 reasonably well at these settings. Still, our latency-focused results offer some nice insights into the relative performance of these cards. For instance, both the GTX 770 and the R9 280X produce about 98% of the frames rendered in 16.7 milliseconds or less. In other words, they almost deliver a steady-state 60 FPS. (Note that they don’t quite achieve that goal in spite of having FPS averages in the 70s.) That’s the benefit of opting for one of these higher-end graphics cards.

Tomb Raider




The Tomb Raider reboot is a great-looking game, but it’s not that hard to render smoothly, so don’t let those relatively pokey FPS averages get you down. Notice that we’re using 2X supersampled antialiasing, which essentially amounts to asking the game to render at twice the monitor’s resolution of 2560×1440. That’s a lot to ask, and we’re only doing it so we can show you the differences between the faster and slower video cards. You could just play with FXAA and get decent quality with half the GPU workload.

Happily, the new Radeons handle our unreasonable demands quite nicely. The top four cards don’t spend any time beyond our 50-ms “badness” threshold. Subjectively, the animation they produce here feels quite smooth. Once again, the 280X leads the pack, and the 270X easily beats the GTX 660.

Guild Wars 2



Once more, we’re using supersampling to increase the graphics workload (and image quality). Even so, most of the cards perform quite well. The lone exception is the Radeon HD 5870, which is just a basket case. Makes me wonder if this 5870 2GB card is really using all of its memory properly.

Power consumption

The Radeons have a unique capability called ZeroCore Power that allows them to spin down all of their fans and drop into a very low-power state whenever the display goes into power-save mode. That’s why they tend to draw less power with the display off.

Please note that our load test isn’t an absolute peak scenario. Instead, we have the cards running a real game, Skyrim, in order to show us power draw with a more typical workload.

That said, the “new” Radeons look to be a pretty close match to their competition in terms of power efficiency.

Noise levels and GPU temperatures

Well, that massive cooler on the Asus R9 280X TOP card certainly pays off handsomely. The 280X produces the lowest noise levels under load—the same decibel level as several other cards at idle—and turns in the lowest GPU temperatures. That’s the card that drew the highest wattage under load, no less. Tons of credit to Asus and the big hunk of metal it strapped on the card for making that happen. These are certainly better results than we’ve seen from the relatively noisy 7970 reference cards we’ve tested in the past.

Speaking of reference cards, the 270X doesn’t perform too poorly, but I’m a bit surprised that it registered a higher noise level on the meter than the GTX 760. The quality of the sound coming from the 270X isn’t terribly grating; the noise coming from the GTX 760 reference cooler, on the other hand, is. It sounds cheap.

Conclusions

These test results have been unusually free of drama. Our latency-oriented metrics haven’t detected any weird, brand-specific hiccups during gameplay, and all of the newer graphics cards have generally been competent. That’s a bit boring, perhaps, but in a good way. We’ll take boring and competent over exciting and busted any day of the week.

Now we’re left to boil down the results to a single visual using one of our price-performance scatter plots. The performance scores are a geometric mean of the results from all the games we tested. We’ve converted our 99th-percentile frame time results into FPS for the sake of readability. As always, the better values will be situated closer to the top left corner of the plot.
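If you'd like to reproduce that boiling-down, here's a small Python sketch with hypothetical per-game results; the geometric mean keeps any single title from dominating the summary:

```python
from math import prod

def fps_equivalent(p99_ms):
    """Convert a 99th-percentile frame time into FPS for readability."""
    return 1000.0 / p99_ms

# Hypothetical 99th-percentile frame times (ms) across the test suite
p99_results_ms = [21.0, 33.5, 18.2, 16.9, 24.3, 15.5]
fps_scores = [fps_equivalent(t) for t in p99_results_ms]
overall = prod(fps_scores) ** (1.0 / len(fps_scores))  # geometric mean across games
print(f"Overall performance score: {overall:.1f} FPS")
```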


Since they’re not selling new anymore, we’ve listed the Radeon HD 5870 and 6970 at their original launch prices. These cards have held up pretty well over time, given their ages, but the more compelling story here is how much GPUs have progressed. Despite fairly similar specs, the 270X easily outruns both the 6970 and the 5870 due to improvements in architectural efficiency. GPU power is much cheaper these days, too. The 270X’s introductory price is dramatically lower than those older cards’. Heck, the 280X costs quite a bit less, too, and it more than doubles the performance of the 5870.

The new Radeon R9 cards match up very well against the competition from Nvidia, too, mostly because they’re priced quite aggressively—especially the 280X. Nvidia needs to lop 50 bucks off the price of the GTX 760 and 100 bucks off the GTX 770 in order to remain competitive. Even then, the R9 280X looks to be slightly faster than the GeForce GTX 770. And I’m taken with the cooler Asus put on its rendition of the 280X; it’s massive but excellent, as long as it’ll fit into your PC’s case.

If you’ve been holding off on buying a new graphics card in order to see what the next generation will bring, well, it seems part of the answer is: the last generation at lower prices. Given everything, that’s really not a bad state of affairs. Still, now that the way has been cleared, I can’t wait to see what that big Hawaii chip has in store for us.

Comments closed
    • Freon
    • 6 years ago

    The last cost/performance graph is very useful. Glad to see some old cards on there to gauge. It’s often hard to get data like that. Sadly, it seems to confirm what I’ve been thinking for the past few months.

    I got my current PC setup well over 2 years ago now. I bought a 6950 2GB for $276.99 at that time (3/3/2011 to be exact for the video card) and unlocked it to be a full 6970, knowing full well when I bought it that it would unlock.

    Now as I eye upgrades, I’d have to spend $300+ on a R280X to get a true leap in performance, which is crazy given how old my hardware is now. We used to get those kind of increases every 12 months, not every 30. The $200 GTX 660 is nothing more than an even match! I guess I’ll chalk up some of that to the rare and opportune unlocking, but, again, 2 1/2 years… Yikes. The jump in performance (~25%?) I’d get now going to a $250 GTX 760 is what I’d have expected every six months at worst.

    Sad state, I’d say.

    • itachi
    • 6 years ago

    When are the R9 290X benchmarks coming? Is it much, much later?

    • chµck
    • 6 years ago

    I don’t care about performance.
    I just want them cause the fan shroud hits my eruption button.

    • alienstorexxx
    • 6 years ago

    we want downsampling back, AMD.. since they changed the ADL it isn't available anymore.

    we want more utility support for the users

    • itachi
    • 6 years ago

    My 5870 does feel outdated when I see this; however, I run a 1900×1200 res.. still playable in most games, and I'm bottlenecked by my CPU (E8500 @ 4.4), though I tried Crysis 3 the other day, and in the first mission with a real open map I started getting 18-20 fps. I tried reducing graphics, resolution, etc., still had crappy fps. Anyone got a clue what it might be? Or do I just need a CPU upgrade.. sounds weird though!

      • auxy
      • 6 years ago

      Not enough cores. Crysis 3 wants >2 cores. Memory performance of those old Core 2 chipsets is not great, either.

        • itachi
        • 6 years ago

        baaaah, it seems my HD5870 is getting outdated too now anyway.. in BF4 I get a horrible 20 (even less sometimes, spikes etc) to 30 fps most of the time. Ridiculously optimised, cause it doesn't look much better than BF3 to me. lol

        I know they're trying to push the latest hardware to the max, but come on, this game needs next year's technology to run smooth at 60 fps, I've seen the benchmarks

          • Airmantharp
          • 6 years ago

          For reference, I get ~25-35 on my gaming laptop- 3.0GHz Ivy with HT and direct mobile equivalent of a GTX560Ti 2GB at 1080p- and it wouldn’t be so bad if the TV weren’t across the room :).

          The game does appear to be smooth- now I just need to learn the maps so that I can start playing on Hardcore, cause this people not dying on easy mode thing is a bit annoying!

            • itachi
            • 6 years ago

            yea it would be playable above 25 fps stable, but that mission I'm at (you know, the first one that's kinda open) is just unplayable, it hits 18 fps lows. I really need an upgrade Lol, this and BF4 will motivate me to get a damn job at least haha, getting 20-30 fps there too in that Shanghai map.

            And the difficulty seemed ok to me, I don't know what you're talking about, I'm playing on normal?

            Maybe you're further in the game and enemies are stronger, they have any armor?

            But yea they do feel a bit bullet proof now that I think bout it.. don't know to be honest, haven't played it in 2-3 weeks

    • tbone8ty
    • 6 years ago

    Why didn't you compare the 7000 series?

      • auxy
      • 6 years ago

      Look at the difference between the 7790 and the R7-260X in the other review. That’s why Damage didn’t do the 7000 series in this review.

    • jessterman21
    • 6 years ago

    Remember when brand-new flagship GPUs were priced at <$400? Those were the days…

      • Star Brood
      • 6 years ago

      What about the GTX9800+ that launched at around $1000?

        • jessterman21
        • 6 years ago

        The 8800 Ultra? Yeah. Those were not the days…

        NV’s next flagship was dual-GPU and $650
        Next flagship was $650
        Next three were $500
        And then the Titan.

          • Meadows
          • 6 years ago

          Wasn’t the 8800 Ultra only $800 at launch?

            • flip-mode
            • 6 years ago

            Yeah. I think $1000+ graphics cards are a relatively recent thing. First one I can think of is the Asus Ares HD 5990 or whatever it was called.

    • swaaye
    • 6 years ago

    Considering I don't play at 2560×1440, the 6970 should hold me over for a while yet. Especially since I've already played most of these games just fine. 🙂 Crysis 3 has bugs on Cayman cards though with Very High object detail.

    The 5870 must have screamed, “it’s a trap!”. It appears to be running out of local memory with these settings.

    • cmrcmk
    • 6 years ago

    Hey Damage, will you be able to delve into True Audio anytime soon? Or is that something that will need support from software developers first? I’m really curious if AMD has figured out a way to make audio interesting again since we seem to have reached Good Enough status several years ago.

      • Airmantharp
      • 6 years ago

      …when those cards, you know, get here… and the drivers get here… and the games get here…

    • superjawes
    • 6 years ago

    I went from a GTX 460 (plugged into an i7-920 system with no SSD) to a 7950 (plugged into an i5-4670K system with SSD) and I am quite happy. The sales from two weeks ago dropped the price down to $200 with Saints Row IV, which I had not picked up yet. If you're building a whole new system and can get a sweet deal on the 7xxx cards (with Never Settle), I would recommend it, but if you're just looking to replace something that's a couple years old, you should wait for the 7xx and R9 2xx cards to really start competing.

      • Airmantharp
      • 6 years ago

      Yes! Now is the time to pick up great cards on the cheap; expect them to sell out soon enough, get them while the getting’s good :D.

        • superjawes
        • 6 years ago

        I would still qualify that with “if the upgrade is worth it.” Like I said, I got a pretty big GPU upgrade, and the card was relatively modest in the total cost of hardware I was purchasing.

        But yeah, if it is worth it, the 7xxx cards are still great.

          • Airmantharp
          • 6 years ago

          I did pick up on that in your last sentence, and I agree- I’d advocate for waiting for cards with more VRAM too, but a 3GB AMD card is still a pretty damn good budget buy, in my opinion :).

    • Chrispy_
    • 6 years ago

    Scott, what’s your opinion on the reference blower that is coming with full-length R9 cards?

    AMD reference coolers have always been a little disappointing compared to the more common Nvidia blowers, and it matters to those of us that want to exhaust hot air out of a case (mITX and compact mATX builds).

    I know you think it sounds better than the 760’s cooler, but having experienced those myself that’s not really saying very much 😉

    • HisDivineOrder
    • 6 years ago

    I hope AMD is not using the same cooler with the same fan profiles on the R9 290/290X as they did on the R9 280X, because that reference design sounds horrible. Especially given they're releasing the R9 290/290X as a "Limited Edition" with Battlefield 4/BF4 Premium at a premium price.

    It’s hard to believe that post-690, post-Titan, post-780… you have AMD releasing a reference cooler that is so, so bad.

    • anotherengineer
    • 6 years ago

    Very nice clean review as usual Scott.

    In the future when testing graphics cards will you be expanding the gauntlet to explore the full potential that the cards can do besides gaming, such as

    -perhaps a few HTPC benchies? (de-interlacing, encoding/decoding, HDMI sound, scaling, etc.)
    -misc tests (AF texture filtering tests, AA quality, etc.)
    -mantle
    -true audio
    -openCL (compute, F@H)

    I think it would help in showing the overall capabilities and qualities of the cards.

    Maybe even a source engine based game??? (pretty please)

      • Fighterpilot
      • 6 years ago

      Yeah, I agree… we used to get cool AF and AA comparison pics… now it's frame times, frame times, frame times… ugh, make it stop.

        • Essence
        • 6 years ago

        Well, Nvidia fans were ecstatic when AMD was losing in frame times. Now that AMD is winning frame times (smoother) and FPS, let's change it to something else that makes Nvidia look good, right? The problem with that is, AMD is better (mostly) in compute also LOL.

        I wonder if we are going to get an outcry of people complaining about Nvidia not being as smooth as AMD, even though the numbers wouldn't make much difference to the play, as was the case when AMD was slightly behind Nvidia… The hypocrisy is mind-boggling, to say the least.

        PS I do agree with “cool AF and AA comparison” or Quality and Sound features in future etc. as an added bonus since AMD are bringing their TrueAudio.

          • Airmantharp
          • 6 years ago

          The ecstatic part, at least for me, was the validation- not the advantage to Nvidia, which was and is quite bad for the market.

          Many of us ran AMD dual-GPU setups and experienced firsthand what AMD was denying, right up until someone shoved it in their face. I was just glad that the ‘truth’ got out, and that fixes are coming.

      • Airmantharp
      • 6 years ago

      So, what you’re saying is, you want them to repeat the tests they did on the same exact GPUs when they were originally released? Because look:

      -These cards have the exact same capabilities as far as HTPC usage goes- they excel at it, just like they did two years ago
      -AF filtering is damn near universally perfect, and ‘AA quality’ is now more dependent on game-specific post-processing, while hardware/driver level AA is still stupidly expensive, and at the same time, hasn’t changed in two years
      -Mantle is still at buzzword status- as in, not a single thing has been released yet, there’s nothing to test!
      -True Audio- reference Mantle above; also, these cards don’t have the True Audio blocks!
      -OpenCL hasn’t changed in two years!

      The only ‘overall capabilities and qualities of the cards’ that they need to show is that AMD didn’t somehow bork them- the cards have been on the market for two years.

      And what Source-engine game would you suggest that would show that these cards are anything other than far faster than is remotely needed?

        • anotherengineer
        • 6 years ago

        Um no, if you read the words, I asked if, in the future, they would be expanding the testing, that is all.

        Edit
        Are you saying you don’t want to see tests like this done?
        http://www.anandtech.com/show/7400/the-radeon-r9-280x-review-feat-asus-xfx/19

          • Airmantharp
          • 6 years ago

          Why would they test them again? I admit that I did miss your ‘in the future’ qualification, but really, they’ve already tested all of this stuff on these cards.

          Expect them to be testing the new R9 290(X) more extensively, as they do for every actually ‘new’ release, but as mentioned in the review, there’s no reason for them to test the same cards all over again.

            • anotherengineer
            • 6 years ago

            I agree, which is why I asked the question. There is no point testing things that would be the same, since the R9 270X is basically an HD 7870, which is why I put ‘in the future’ referring to new cards.

            From the 7870/7850 review (https://techreport.com/review/22573/amd-radeon-hd-7870-ghz-edition), TR did do more testing on a 'new' card; however, the compute testing was limited compared to Anand's, and there wasn't really anything pertaining to the HTPC capabilities of the cards. Hence my question: would they be expanding testing in the future to include more?

            • Airmantharp
            • 6 years ago

            Put that way, I agree- it is a relevant question.

            Of course, I’d like to see how compute actually helps consumer applications, particularly since AMD is stronger in the consumer compute space. But I don’t really see the relevance of HTPC testing, given that it’s been mostly figured out now. I mean, hell, the HD3000 in my beater Toshiba laptop makes a great HTPC GPU.

            But I will absolutely concede that verifying continued support and stability is important, and I do hope to see more of that when time isn’t so limited too!

    • chuckula
    • 6 years ago

    So if I already own a 7970GHz edition or GTX-770, what is the upgrade proposition here?

    Or should these cards only really be considered for people with old systems that want to upgrade to something made in the last 2 years or so?

    [Edit: I'm willing to bet that a large number of downthumbs are from the exact same people who asked the same question (in much snarkier tones) when Haswell came out. Well, guess what: Sandy Bridge –> Haswell has shown a lot more progress in about the same timeframe as these cards have shown.]

      • JustAnEngineer
      • 6 years ago

      R9 290X or R9 290 are your upgrades.

      • jimbo75
      • 6 years ago
        • Airmantharp
        • 6 years ago

        For now- expect Nvidia to respond with lower prices, it’s not as if they’re unaware of AMD’s launch.

          • superjawes
          • 6 years ago

          I know everyone’s pointing it out as well, but since these new cards aren’t coming with Never Settle codes, you might as well wait for AMD to phase these cards into that program, which will probably happen after 7xxx series cards sell out. Hopefully by Christmas shopping season.

        • chuckula
        • 6 years ago

        Or for buying a 7970Ghz edition? Or are you just being a one-sided shill as usual? How come you aren’t singing the praises of the overpriced and under-performing 260X that is losing to the 650 TI there?

      • superjawes
      • 6 years ago

      Waiting is probably best. I think those cards are going to be the top of the line for awhile. The GTX 8xx series will (hopefully) feature new silicon, and the R9 3xx series might reuse the 290X and 290 chips, but maybe with some extra upgrades like speed and/or memory.

    • Krogoth
    • 6 years ago

    AMD's counterattack at the 760 and 770, which are based on refined designs of the previous generation.

    • jimbo75
    • 6 years ago
      • HisDivineOrder
      • 6 years ago

      I’d have said that regardless.

      • ikjadoon
      • 6 years ago

      The upside is that SLI is now a cheaper proposition.

    • bfar
    • 6 years ago

    Looks like a very solid card at that price. I would prefer if you included comparison benchmarks with some of the other higher-end cards, such as the 7970, 7970 GE, and the GTX 780. Their omission is conspicuous, as they'd fall into a reasonably close performance cluster with the R9 280X.

    Can I ask, were you instructed not to include them?

      • Spunjji
      • 6 years ago

      Their omission and the reason for it is not so much conspicuous as blindingly obvious, as it’s specifically addressed several times in the lead-up to the results.

        • HisDivineOrder
        • 6 years ago

        Yet I would have preferred they be included anyway.

      • Damage
      • 6 years ago

      "Can I ask, were you instructed not to include them?"

      I own TR. No one instructs me about what to include or not to include. Although, if you know me, I'd probably make a point of including them if someone attempted to tell me not to. 🙂

      I simply chose to test other cards for this review. Time was limited, and I worked long hours up to the publication of the review. There is, you know, another review coming soon where including the GTX 780 and Titan would be relevant.

      If you pretend very gently, the 7970 GHz will appear on these graphs where the 280X is now since, you know, same thing.

    • Jon1984
    • 6 years ago

    Damn, I was thinking a GTX760 would be a good upgrade this time for 250€ but for 300€ a 280x is simply awesome!!!

    Can someone give me insight about the PSU requirements of this beast? It doesn’t seem to take more juice than a GTX570, which is about the power my overclocked 560TI is consuming.

      • odizzido
      • 6 years ago

      Just look at the total system power draw in the review (280W)… that should give you a pretty good idea of how much you need.

        • Jon1984
        • 6 years ago

        I have a 500W PSU with 44A on the 12V rails and an 8+6-pin setup for the graphics card, I suppose it should be enough?

          • Airmantharp
          • 6 years ago

          I ran two HD6950 2GB cards and an overclocked Intel quad with a 650w Seasonic… so one would hope :).

    • ptsant
    • 6 years ago

    The price is a nice surprise. You can clearly see the cards standing out in the price/performance chart. Considering this is MSRP at launch, without any “special offers” or rebates, it can only go down from here. AMD could have jacked the price up (which would have been hard to swallow for a rebadge) but didn’t.

    • ET3D
    • 6 years ago

    No 50ms and 33ms results for GRID 2? There's time spent beyond 16.7ms, but the other buttons make everything 0, which seems like a glitch rather than the real result.

      • Damage
      • 6 years ago

      Look again at the frame-by-frame plots. Those are real results. The cards just run this game quickly.

    • Pantsu
    • 6 years ago

    Some of the settings were a bit extreme for these cards. Only the 280X and 770 have any business running 2560×1440 and high levels of AA. IMO this skews the results compared to what settings people would be actually running on the lower end. Especially the 5870 seems to completely choke under pressure. 1080p or 2560×1440 without msaa/ssaa would’ve been a better test when the average fps drops <30 for most cards.


    • PopcornMachine
    • 6 years ago

    As underwhelming as a re-badge is, the 280X is still a bit faster than a 7970 GHz and about $100 cheaper.

    Card to get at the moment.

      • ermo
      • 6 years ago

      This is what I don’t get, though.

      They took the 7970 GHz Ed, clocked it down a wee bit, yet it ends up faster?

      Is this a driver thing, so that those of us who own a 7970 GHz Ed can expect to see similar gains? Or is it a minor silicon respin tweak, which has a measurable impact on efficiency in the newer revision hardware?

      Inquiring minds would like to know...

        • jimbo75
        • 6 years ago
        • Krogoth
        • 6 years ago

        The 280X is actually faster than the 7970 GHz Edition (provided that thermals permit it), and it comes with faster GDDR5 chips.

        The 280X uses a similar boosting scheme to the one found in Nvidia's 6xx/7xx chips.

          • HisDivineOrder
          • 6 years ago

          Krogoth is impressed?

          • cynan
          • 6 years ago

          How exactly is the 280x faster than a 7970 at GHz edition clocks? From the info provided in the tables they should be pretty much identical..

            • Krogoth
            • 6 years ago

            The default BIOS's clock-boosting is more aggressive on the 280X than it is on the 7970 GHz Edition. The 280X comes from a later stepping of Tahiti which doesn't require as much power and voltage to achieve the same stock clockspeed. This means that it has more overclocking headroom, and AMD's engineers are confident enough that they are willing to crank up clock-boosting. The 280X also uses faster GDDR5 chips, which means that it has more bandwidth. This clearly shows in 4-megapixel gaming w/ AA/AF. It is really no surprise that it is able to edge out its 7970 predecessor under such conditions.

            Nvidia did exactly the same thing with second-generation GK104 parts (750, 760 and 770), in some cases rebalancing the GPU block's resources. The result is that 7xx parts are usually faster than their predecessors at their given price points. (The 760 is only faster than the 660 Ti where memory bandwidth is concerned.)

            • cynan
            • 6 years ago

            The difference in boost speed ramping algorithm is too esoteric to be of much concern to me. And as far as I can tell, the “stock” memory speed on the 280x is the same as was on the 7970GHz (ie,1500/6000MHz).

            As far as I’m concerned, they’re on par. 7970, 7970GHz, 280x – take your pick. If you look at the benchmarks at Anandtech, most show the 7970 < 280x < 7970GHz for average FPS, differentiated by a couple of FPS.

          • ermo
          • 6 years ago

          I’m just trying to get a handle on the default settings here — my 7970 cards are slightly OCed from the factory, so I may have gotten things confused a little.

          Of the two 7970 GHz Ed cards I own, the oldest runs at 1100 MHz Boost and has 1500 MHz (6GT/s) RAM chips. The newer one runs at 1050 MHz Boost and has 1500 MHz (6GT/s) RAM chips. I just RMAed the newer of the two because it wasn’t stable.

          Over at PC Perspective, Ryan downclocked the same ASUS R280X model as the one Scott tested, to the official 1000 MHz Boost and 1500 MHz RAM settings, which made the (stock) R280X ever so slightly slower than the (stock) GTX 770 in his review.

          In the TR review, the slightly overclocked R280X beat the stock GTX 770.

          What interests me is whether the 280X is a proper respin (aka a new revision) of the core, or if it is the exact same core as the 7970 GHz Ed, just with a few subtle differences in the other components included on the board.

          EDIT: Corrected my numbers. Oops.

          EDIT #2: From the Anand review:

          "Moving on, while the GPUs behind today's cards are unchanged, the cards themselves are not. All of the cards receive new firmware with new functionality that's not present in the 7000 series."

          ... that settles it then.

            • Airmantharp
            • 6 years ago

            It certainly doesn’t look like a re-spin- and AMD could have made use of that here, to include the audio stuff and to possibly tweak the core for either lower power usage or higher clocks. Looks like they just let their partners explore board design and then pushed it out the door with a shiny new label.

          • ermo
          • 6 years ago

          Per the Anand review, the ASUS Factory OCed 280X in the present review has decidedly non-stock base and boost clocks AND it has faster memory. As his numbers show, it is unreasonable to expect a stock 280X to be faster than a stock GHz Edition.

          In other words, the ASUS card as tested here is faster than most stock 280X cards will be. It also happens to have a higher boost clock than a stock GHz Edition, and since it appears to have a higher PowerTune limit baked in from the factory as well, it can run at its boost clock in all tests.

          1070 MHz boost > 1050 MHz boost && 1600 MHz (6.4 GT/s) > 1500 MHz (6 GT/s), which explains the observed higher performance of the ASUS card nicely. But at $309, it’s a steal and it looks like it can be overclocked nicely as well.

      • ermo
      • 6 years ago

      Also, don't forget that the numbers in this review are from a factory overclocked version (+70 MHz core, +100 MHz RAM for an effective 6.0 -> 6.4 GT/s improvement). I'd like to see the GTX 680/770 vs. 7970/7970 GHz Ed/R9 280X charts, in particular with the newest available drivers, in both 1080p and 1440p w/AA. But I understand if this is not something that TR sees any particular value in doing (as per Scott's testing notes). EDIT: Got my stock R280X numbers fudged up. Oops.

        • ermo
        • 6 years ago

        … seems I have a ‘fan’.

      • sschaem
      • 6 years ago

      Check Anand's review, he did include the 7970. And guess what, in many, many tests the overclocked R9 280X is slower than the 7970 GHz…

      Where do you get an R9 280X now, BTW? Newegg doesn't even carry them for pre-order, but does list the R9 290X… so if the 280X gets listed after the 290X, there is 'no card to get'

      oh, and the Asus DirectCU (same cooler as the Asus R9 reviewed) sells for $260 and from what I read overclocks to GHz level. So it will beat the R9 280X…
      If the R9 280X was $100 cheaper they would sell for $160, not going to happen.

        • Airmantharp
        • 6 years ago

        Yeah, I don’t get the ‘card to get right now’ comment. There’s still a week left before they’re really available, and in that time things can change.

      • Bubster
      • 6 years ago

      Uh no, it's slower.

    • Airmantharp
    • 6 years ago

    To all of you insightful commenters wondering why there aren't HD 7000-series cards in the review: it's explained in the review that you're commenting on, that you forgot to read!

      • spuppy
      • 6 years ago

      I usually skip the “test notes” page since it’s usually the same info. Thanks for pointing this out

        • auxy
        • 6 years ago

        It usually isn’t the same info. How would you even know, since you “usually” skip them?

    • sschaem
    • 6 years ago

    “Long story short: fast graphics just got a whole lot cheaper.”

    Those cards are more costly than the 7970 they replace, with the same performance..

    I don't get this comment.

    How is going from $260 to over $300 "a whole lot cheaper"???

      • Stickmansam
      • 6 years ago

      MSRP, not the sales + rebate price

      Cheapest 7970 I see is $280 after a $20 MIR and on sale (Regular is about $350)

      Cheapest 7970 ghz is $335, on sale down from $380

        • superjawes
        • 6 years ago

        This. Since AMD is re-badging everything below the R9 290, all of the current Radeons are in phase-out mode, which explains why the new cards aren't yet bundled with games. After a few weeks, I imagine stocks of old cards will start to dry up, and by the time holiday shopping kicks in, so too will sales and bundles on these new cards.

        • sschaem
        • 6 years ago

        Did you even look?

        Newegg and Amazon carry the 7970 under $300. And many drop to ~$280 after rebate.

        So under $300 without rebate (Fry's does carry the Asus DirectCU2 7970 for $280, $260 AR)

      • cynan
      • 6 years ago

      No HD 7970 sells for $260. I don’t know why you keep reiterating that price. The cheapest I’ve seen after MIR is $270. And with the limitations on today’s MIR (not a cheque, but prepaid credit card that can be a pain to spend due to restrictions and expires in 6 months, etc)… They’re worth about half their value at best as far as I’m concerned.

      That’s to say that, at least to me, the HD 7970 and r280x are more or less the same price. If the 7970 is cheaper, at least for now, it’s by a negligible amount.

        • sschaem
        • 6 years ago

        The Asus DirectCU2, the exact same card as the Asus R9 280X, does sell for $280 at Fry's - $20 rebate.

        "Long story short: fast graphics just got a whole lot cheaper."

        How can a rebadged 7970 selling for >$300 warrant that statement when top-of-the-line Asus 7970s already sell for <$300?

          • cynan
          • 6 years ago

          I concede your overall point.

          Some would argue that as the HD 7970 GHz are still well over $300, the r280x is coming in at slightly cheaper. However:

          1) They are still nowhere near $100 cheaper retail
          2) We’ve yet to see retail pricing for the 280x as far as I’m aware
          3) I personally don’t buy there being much, if any, practical difference between the HD 7970 and HD 7970 GHz.

          • NarwhaleAu
          • 6 years ago

          MIRs are largely a scam in my opinion. I tend to ignore them when doing price comparisons. I’ll actually pay a little more just to avoid a MIR.

            • sschaem
            • 6 years ago

            Then it's $279 at checkout, without the MIR

    • spuppy
    • 6 years ago

    edit not anymore

    • JohnC
    • 6 years ago

    Why didn’t you include their original variants – 7970 GE and 7870? Would be interesting to see how they compare to new rebadged versions.
    Also, what's the point of using the BF4 beta for benchmarking on an empty server? Sure, it's more consistent, but it is absolutely impractical – nobody plays on empty servers. And you should've at least tried to bring down that skyscraper each time – it has a much greater impact on system performance than staring at static walls…

      • BlondIndian
      • 6 years ago

      Please just read the testing methods page.
      Testing multiplayer on a real server is never reproducible. A few random explosions would skew the values drastically, not to mention the network errors, ping variances, and lag introduced by networks. The network problems are variable in that no two runs would be similar. You would have to average out at least 10 games to get a decent idea of performance while discarding outliers (tricky business, that).
      The 7970 GE is the same as the R9 280X. The only thing you would get is +/- 5% at max.
      The R9 270X vs 7870 is a little interesting with a memory boost of 14% (yawn!)

      These are basically OLD WINE IN A NEW BOTTLE AT LOWER PRICES. So stop asking for comparisons to the old bottle.

      • auxy
      • 6 years ago

      JohnC's face when commenting: http://weknowgifs.com/wp-content/uploads/2013/08/didnt-read-lol-gif-4.gif

    • Bensam123
    • 6 years ago

    You guys have been doing some really weird things as far as cherry picking what chips to compare things to in your reviews (since haswell). I don’t necessarily think it’s a bad thing if done right, but I don’t think your matchups are done right. The 7xxx series most definitely should’ve been included and it’s a glaring omission. You’re right it wouldn’t be that much of a difference, but what we would be looking at is not the performance difference, but the price difference.

    You can get a 7950 for $180 w/mir regularly, you can get a 7970 for $280 w/mir, the 7870 isn’t even relevant because it’s around $150. Sale prices drive them down even more. The 7950 out of all the cards is an extremely good deal right now and if it was in the graph, below the 270x in price with infinitely better performance it would vastly change the conclusions of this review and anyone making a purchase based off of the graph at the end. That should’ve been the card that was included in the very least as it’s the most relevant comparison and the one people will probably look for the most.

    I can go back and look at older reviews that include these cards and extrapolate the results against these newer cards, but that's not the point. People who are just recently visiting the page and haven't seen those older reviews will not have the information at hand to make an informed buying decision, and that's the whole point of benchmarking, isn't it? I mean, if you're going to include a 6xxx and a 5xxx card (which I think is good), then skipping the second newest generation just seems… like a bad idea.

    Putting this aside, I thought there were extra features being added with the R9 series? I expected there to be talk of mantle and what not. The features that actually separate this older generation from the newer one. Perhaps that’s still under NDA with the 290x…

    Something else worth considering: actually testing the noise and temps inside a case. The open-air coolers are definitely awesome when you have a whole room full of air available, but putting them in an enclosure would change things. Yes, everyone's case is different, but if TR used the same case for all the noise and temp tests it would be a relevant baseline.

      • BlondIndian
      • 6 years ago

      I agree with testing in cases.
      Open benches would favour open-air coolers. It's not a real-world situation.

      Testing at least the noise and temps inside everyday cases with standard cooling seems more practical!

        • Bensam123
        • 6 years ago

        Yeah, you could even base the cases off of the intake and exhaust fans. Maybe a 120 in front and a 120 in back, maybe one on top too. Either way it’d definitely be more realistic and less synthetic (I thought TR tried to get away from this stuff).

      • HisDivineOrder
      • 6 years ago

      I agree that the last gen should have been included.

      https://techreport.com/review/24996/nvidia-geforce-gtx-760-graphics-card-reviewed/5

      That review included the GeForce 770, too. You tested the current gen and the refreshes then to show the difference in performance along with the new pricing for those product lines. Here you ignore the last gen, which gives off the impression to those who do the quick 'n dirty glance that the R9 280X and 270X are solid upgrades over the generations you're actually showing. I say that's not really a good idea.

      Plus, who knows what kind of performance-per-watt improvements might have been seen versus the cards they're based off of?

      Did you guys enjoy your free trip to Hawaii? Perchance did you guys get some advice not to review the R9 270X/280X against the 7870 and 7970 GHz directly? I don't think you should be taking advice from your local AMD guys, even if they give you free trips to tropical islands. 😉 After all, you didn't do that with the GeForce 760/770 reviews, so what changed?

      • DPete27
      • 6 years ago

      I also agree that AMD 7xxx cards should have been shown at least in the value plots. Unassuming readers can't be expected to sift through the mess of TR GPU review results, compile everything, and update the plots with current pricing.
      If you're going to say the 280X is a great buy, you ought to show the price comparison to a 7970 (https://techreport.com/review/24996/nvidia-geforce-gtx-760-graphics-card-reviewed/10). (The two reviews are so similar, surely there is little effort required to fully combine the two plots?)

        • Bensam123
        • 6 years ago

        Aye, or the 7950, which is the same price as the 270X right now (cheaper with MIRs).

      • sschaem
      • 6 years ago

      Mantle is claimed to be available for the whole GCN architecture, so it’s native to Tahiti (the 7 series and the rebadged models reviewed here).

      I agree on the cooler. Swirling hot air around in a closed case is a bad design.

        • Bensam123
        • 6 years ago

        Yeah, I thought that maybe the ‘big’ article was coming with the 290X, and that’s why they haven’t explained it at all…

    • colinstu12
    • 6 years ago

    No Titan???

      • Damage
      • 6 years ago

      Can’t you just see 780 and Titan results dropping into the next review alongside the 290X? And maybe all of those synthetic GPU test results I’ve been compiling and didn’t even publish yet? 🙂

        • Jigar
        • 6 years ago

        So you have that shiny R9 290X with you, eh?

          • Airmantharp
          • 6 years ago

          I wish that NDAs allowed review houses to announce that they have a new card under review. You don’t see that much anymore.

            • BIF
            • 6 years ago

            NDA may or may not explicitly prevent such a disclosure.

            But once you sign one, it’s easy to err on the side of caution and not talk about ANYTHING. It avoids misunderstandings.

    • JosiahBradley
    • 6 years ago

    Still just want to know if I can Crossfire an R9 280X and a 7970 Lightning. They are the same chip, so it should be technically possible, right?

    *Edit: Brent over at HardOCP confirmed for me you can crossfire 7970 GHz and R9 280X. Great mixes ahead.

      • Airmantharp
      • 6 years ago

      That’s actually pretty cool- and not something I expected, given some of the changes AMD has touted. Props to AMD for keeping compatibility!

        • HisDivineOrder
        • 6 years ago

        It may technically work, but would you really want to complicate already complicated AMD Crossfire snags by doing it?

        Sometimes you have to remind yourself that just because you CAN do a thing it does not mean that you SHOULD do that thing.

          • Airmantharp
          • 6 years ago

          Well, I wouldn’t recommend Crossfire until actual WHQL drivers ship that fix all of the known problems, as much as they can be fixed on current designs. But the fact that upgrades were done to the new cards without breaking Crossfire compatibility is actually quite nice, in my opinion.

    • Airmantharp
    • 6 years ago

    Here’s my pessimistic response:

    The cards are exactly as fast as expected, and the blower still sucks (or blows?).

    And the bundles are gone, which means that Nvidia actually has the advantage here- they just have to lower their prices, and boom.

    Lotta hype for nothing, eh AMD?

      • sschaem
      • 6 years ago

      You can still buy the R9 280X with 3 free games; it’s just called the 7970… and it’s $260 with 3GB.

      Reselling a 2-year-old chip as new, with a higher price… I guess that’s the new ‘innovation’.

        • BlondIndian
        • 6 years ago

        The R9 280X is a rebadged 7970 GE, not a 7970. You could overclock the 7970, but that’s a whole different scenario.
        Basically, with the 280X AMD removed the game bundle and made it CHEAPER!!!
        The unofficial prices of the 7970 and 7970 GE are lower at the moment, so they still make sense, however.

        Stop your hating, mates! I didn’t see you guys crying over the Nvidia 7xx rebadging.

          • HisDivineOrder
          • 6 years ago

          nVidia upped the clocks on the 770 across the board and dropped the price by $100. AMD dropped the clocks and dropped the price by $100.

          In comparison, when nVidia’s refreshes first showed up, they excluded the initial offerings from bundles, just as AMD did.

          So a plus is that nVidia upped the clocks from the prior gen and AMD didn’t, but then again AMD is lowering its prices $100 from where the 770 wound up, so compared to pre-770 pricing, AMD is dropping by $200. I think it’s a shame the bundle is being de-emphasized, because it would have helped AMD sell these refreshes more readily once nVidia makes the price drops you know they will.

          I think AMD really needs to work on getting better reference coolers to sell the product as a premium product, especially for the R9 290/290X.

          • cynan
          • 6 years ago

          People who buy into there being a difference between the 7970 GE and the 7970 “original edition” are fooling themselves. The 7970 GE is just a 7970 with a BIOS that supports boost clocks.

          But if you really want to split hairs over stock clocks, the 280X comes in with a max boost of 1000MHz, 50MHz slower than the 7970 GHz. So that would place it right in the middle of the 7970 and the 7970 GE.

          And about pricing: I’ve yet to see an HD 7970 listed for cheaper than $300 before MIR. So as far as I’m concerned, the 280X and 7970 are more or less the same price. For today, at least.

          • auxy
          • 6 years ago

          The GTX 700 series cards are not rebadges. The GTX 780, 770, and 760 all have different specifications than the Titan, 680, and 670 before them; notably, they each have more memory bandwidth relative to their shader power than their predecessors.

            • Krogoth
            • 6 years ago

            Not exactly.

            760 = rebalanced GK104 silicon that removes a few shaders and a TMU from each block of the standard layout, but it has a wider memory bus than the 660Ti and 660 (256-bit versus 192-bit) and is clocked higher. This shows in benchmarking, where the 760TI pulls ahead when memory bandwidth matters but falls slightly behind the 660Ti and 660 when shading power and GPGPU power are #1.

            770 = same full GK104 silicon as the 670 and 680, just from a later stepping that requires less power and voltage to achieve a given clockspeed. The 770 uses faster, newer GDDR5 chips. Nvidia cranked up the stock clockspeed and made the clock-boosting more aggressive on the 770; this is how it is faster than its 670 and 680 predecessors.

            780 = GK110 (same silicon as Titan/K20) but with 1/4 of the double-precision floating-point performance (Nvidia crippled it mostly for marketing reasons). The 780 is just about as fast as Titan in gaming and single-precision floating-point performance. It is only slightly faster than the 770 and 680 at double-precision floating-point math and is hopelessly outclassed by Titan/K20.

            • auxy
            • 6 years ago

            That’s just saying what I said.

            • Airmantharp
            • 6 years ago

            We need to work on your ‘saying’ skills- I also definitely like Krogoth’s explanation better.

            • auxy
            • 6 years ago

            I’m on a phone, and at work. Unless you’d like me to type while driving?

            Besides, Krogoth is a little off on a few things — there is no 760Ti, and memory bandwidth is almost always more important (that is, having enough, not having more) than shader power, especially with cards that are as bandwidth-starved as the 600 series. Plus, the 670 doesn’t use the full GK104 silicon.

            • Airmantharp
            • 6 years ago

            I’m not responsible for your inability to type while driving without killing yourself or others :).

            And while Krogoth is absolutely wrong on a few details- he almost always is- he STILL said it better.

            • auxy
            • 6 years ago

            He said it more thoroughly. Mine had essentially the same meaning, was factually more accurate, and was much more concise. His is a wall of text to say what I said. I CLEARLY said it better. ヽ(´ー`)┌

            • Diplomacy42
            • 6 years ago

            What you said was intellectually dishonest; therefore, Krogoth’s explanation of the technical refinements between generations was more apt.

            • auxy
            • 6 years ago

            It wasn’t intellectually dishonest! What the hell?

            • superjawes
            • 6 years ago

            auxy: “The GTX 700 series cards are not rebadges. The GTX 780, 770, and 760 all have different specifications than the Titan, 680, and 670 before them; notably, they each have more memory bandwidth relative to their shader power than their predecessors.”

            Krogoth pointed out that the 700 series is all derived from existing/previously released silicon. Basically, the same GPUs from the 600 series (+Titan) were dropped into 700 series cards at different price points than before. That is rebadging silicon.

            Your post suggests that because a 780 =/= 680 or a 770 =/= 680, Nvidia is not rebadging cards, hence the intellectual dishonesty.

            Just use this rule: old silicon in new card = rebadging. New silicon in new card = legitimately new product.

            • auxy
            • 6 years ago

            They’re not the same chips. They’re not a new design, but saying that a new chip using the same design is “old silicon” means the Core i7-3970X was a “rebadge.” It wasn’t. And neither are the new GeForces.

        • Essence
        • 6 years ago

        Stop being such an Nvidia fanboy. The R9 280X is up to 7-10% faster, 3-5 dB quieter, $100+ cheaper, and 3-5°C cooler than its competition (the GTX 770).

        BTW, where were you when the Nvidia re-brands were released? These AMD re-brands (R9 280X/270X) do have some new features and work with the older 7 series (7970/7870) in XFire, unlike Nvidia’s, where the 6 series (670/680) will not work with the 7 series (760/770).

          • Airmantharp
          • 6 years ago

          Please don’t compare a custom aftermarket open-air cooler to an OEM blower. Not only can the aftermarket cooler be affixed just as easily to an Nvidia card, but AMD also doesn’t have a blower that’s nearly as effective as Nvidia’s blowers.

          With Nvidia, you get the choice- with AMD, you don’t. That’s not an advantage.

          As for the ‘new’ features, well, those aren’t actually here yet- notice how they weren’t part of the review!

      • clone
      • 6 years ago

      you call undercutting Nvidia by $100 nothing?

      I rest my case, the reason we all pay quite a bit of coin for gfx cards is because no one really cares and instead just wants to complain.

      forcing Nvidia to drop the price by $100 is not boring and it’s certainly not nothing…. it’s an SSD, a faster CPU, more RAM, a much better case, a much better PSU….. yeah, nothing.

        • Airmantharp
        • 6 years ago

        But that’s normal- that’s what’s supposed to happen. They’re supposed to undercut- otherwise, what would be the point of introducing a new ‘product’ that has fewer features than the product it’s replacing, i.e. no game bundle?

        So no, $100 isn’t ‘nothing’, but it’s not a problem, either, as long as Nvidia responds appropriately.

          • clone
          • 6 years ago

          what? Nvidia apparently didn’t get that memo.

          the new cards have more features than Nvidia’s cards, the new gen is introducing and enabling more features than the previous generation had, they are a little faster, they are cheaper, they use less power than the previous generation, and the drivers are apparently solid…. I mean, really, it’s all gotten so boring?

          seriously, anyone who bought one of the older-gen 7xxx cards should be excited too, as they are getting DX 11.2 support and digital audio as well.

          yes….. nothing to see here, so very…. “boring.” nothing to see at all.

          I’m not saying they moved heaven and earth with this generation, but c’mon already: for all the ppl whining and moaning about how prices have gone north since the HD 4870 release….. you are getting exactly what you deserve.

          you are correct, why reward the company that has consistently undercut Nvidia with a purchase? instead just go buy Nvidia now, or after they price match…. that’ll get AMD to lower prices even further, I’m sure of it?

          it all seems so notably pointless when better in every way is the new “nothing to see”.

          p.s. I’m not trying to rag on you, but when the article says “fast gfx just got a whole lot cheaper”….. nope, nothing to see?

            • Airmantharp
            • 6 years ago

            Well, the MSRPs got cheaper, sure- but the street prices didn’t, and the performance didn’t change.

            And just how many new features are you getting with the re-badged HD7970? How many of those are actually available for testing?

            That’s kind of my point- relative to what’s already on the market, the hype far exceeds what’s actually being delivered.

            • clone
            • 6 years ago

            the review came out yesterday…. my goodness, when were the street prices on anything static over the course of a product launch?…. what a horrible condemnation.

            if you get 1 new feature, it’s worth more than before, but in this case AMD is going to sell it for less. how many new features did Nvidia introduce when they rebadged the GTX 770?…. and how low did that price go, btw?

            you are disappointed by the loss of the gaming bundles; personally it doesn’t bother me, I’d prefer to buy the games I want….. I suspect once the new gfx line has been around for a while they’ll return, but gaming bundles don’t determine the card I buy, price & performance do.

            as mentioned before: when better in every way becomes “nothing to see” it all seems kinda pointless.

          • NarwhaleAu
          • 6 years ago

          Fanboi – you should be celebrating these cards, if only for the simple facts that they force Nvidia to drop prices AND they will force Nvidia to innovate. Even if you emotionally and irrationally hate AMD, you should admire their ability to deliver improved performance at lower prices.

          Just wait until these are $50 cheaper. 7970 performance at $250? $230 with MIR? YES PLEASE!

            • Airmantharp
            • 6 years ago

            I’m a fanboy… oh boy.

            I’m not really celebrating these cards- they’re straight rebadges. And that’s all I’m really saying.

            I’ve owned plenty of AMD cards, along with 3Dfx cards, Trident cards… yeah. Plenty of integrated Intel stuff too, which works great, even for gaming.

            It’s the ‘lower prices’ bit I’m not buying. MSRP is one thing, but MSRP is not what people pay; they pay retail, and these cards aren’t even available at retail.

            When they do hit retail, and we can compare their pricing to Nvidia’s in a true price vs. performance context, then we can talk- but for now, this is a lot of hubbub for a rebadge.

            • clone
            • 6 years ago

            given MSRPs set the pattern, AMD’s being $100 lower than Nvidia’s is compelling.

        • cosminmcm
        • 6 years ago

        Unfortunately, it seems they are not forcing anything, except cheaper 650 Ti and 660 cards ($20 off the MSRP). The more interesting 760 and 770 didn’t move at all. As always, AMD has to be cheaper to even have a chance of staying relevant.

          • clone
          • 6 years ago

          the review came out yesterday, do you believe Nvidia has a guy with his finger on the pricing button just waiting for the bench results to appear on the web?

            • cosminmcm
            • 6 years ago

            They had one for the 650 Ti/660, didn’t they?

        • HisDivineOrder
        • 6 years ago

        I think people are just hungry for some new GPU tech. We’re sitting on refreshes of cards that came out almost two years ago. This release almost guarantees that we’ll be sitting on old tech for 2.5+ years (aside from high-end products most of us won’t buy), and for a group of enthusiasts used to new cards every year, that’s not very interesting. Some of us even remember the days when nVidia and ATI were releasing cards every six months. I remember the fast turnaround from the GeForce to the GF2, or the FX 5800 (haha, vacuum) to the FX 5900. Or the Radeon 9700 to 9800.

        Refreshes used to take six months, not two years.

        It’s just so boring. AMD released the Radeon 7970 in late 2011. The 7950 followed right after, along with the 7870 and 7850. nVidia spent most of 2012 releasing the 600 series, from the 680 through the 660. Aside from egregiously expensive high-end cards, all of 2013 has been 2012 with lower prices and, at times, higher clocks. That’s it.

        That really is boring. It’s hard not to think the cards they’re releasing today, and even next week with the R9 290/290X, are the very products they didn’t release earlier this year, especially given the talk of Bonaire having the audio DSPs built in. Those were the only new cards released around the original timeframe of the entire platform refresh, and they were supposedly a hint of the refreshes to come.

        Hell, the refreshes today aren’t even updates to the existing Radeon 78xx and 79xx series except in name and, in some cases, clocks. They couldn’t even be bothered to respin the chips to give us some GCN 2.0 improvements or audio DSPs. It’s a great way to save money and a great way to repurpose existing 7xxx-series product to sell again as new.

        It’s just not very interesting.

          • Airmantharp
          • 6 years ago

          Blame TSMC (if there’s any blame at all). The current architectures from both vendors are stop-gap hybrids that were originally developed for TSMC’s next node but had to be scaled back and shoehorned into 28nm instead.

          Given that limitation, I think both companies have done pretty well- especially AMD, who has pursued technologies to lower GPU power draw across their product range, and has produced a totally new product at 28nm, the R9 290(X), in order to compete at the high end.

          Of course, that means that the R9 290(X) is literally going to be the only interesting product release until SKUs based on silicon from TSMC’s next node start shipping :).

            • cosminmcm
            • 6 years ago

            If this is all that AMD had planned for 20nm at the high end (which I think it is), then it is really disappointing. When your competitor has 7-billion-plus-transistor products at 28nm and all you can do is 6+ billion at 20nm, it really shows how cheap you can be.
            This means they were relying only on the high clocks of the new node to beat GK110, and that is not something that I like. I want each step to be something bigger, better, fatter, and not just to add a few transistors, pump up the clock, and get a smaller die to get their profits up.
            AMD needs to aim higher to earn my confidence; I don’t like mediocre companies (that is why I have never liked them).

            • tipoo
            • 6 years ago

            He said scaled back. If they had 20nm available they may well have thrown more transistors on there.

          • auxy
          • 6 years ago

          Yep, yep. Still my favorite commenter. After brute, that is.

          • clone
          • 6 years ago

          while it’s true refreshes came faster back then, the FX 5800 to FX 5900 was desperation; Nvidia found out it was impossible to sell vacuum cleaners. the 9700 to 9800 was just a frequency bump, and worse, those cards cost $500 in today’s dollars vs. an R9 280 MSRP of $299.

          if you’d purchased a Radeon 7970 just under 2 years ago, I believe it’d be exciting to find out AMD is going to add new features via drivers for the card, including DX 11.2 support and digital audio, while at the same time lowering prices to make the cards more affordable for new buyers.

            • Airmantharp
            • 6 years ago

            Fragmented DirectX support? How wonderful. Developers are more likely to take advantage of Mantle.

            And NO, these cards don’t get ‘digital audio’; nothing changes there. Good try though!

            • clone
            • 6 years ago

            ahh, limited to Bonaire (HD 7790 and R7 260X), Tahiti doesn’t have it…. my bad.

            DirectX support has always been fragmented; don’t blame AMD for raising the bar higher. and to be clear, if there is a transition to a lower-level API that boosts efficiency over and above DX…. that’s a good thing.

            p.s. why would you be crapping on new features, innovation, and more value anyway? this far into the thread and you are still repeating the mantra that better in every way is bad, now adding updated DX to the growing list of complaints….. yeah, I can see that. I wish we were still using DX 8.1 cards that cost $500; maybe we could limit the amount of RAM as well, to like 64MB of DDR2…… I agree, progress sucks.

            • Airmantharp
            • 6 years ago

            Progress doesn’t suck, but recall DirectX 8.1 vs. 8.4, or 9.0c, etc. That’s the thing; these features will not get used universally because of fragmentation- and this has largely been an ATi/AMD marketing ploy over time. Hell, they were the first with tessellation in a DirectX 8 fragment, yet that didn’t see widespread use until DirectX 11!

            Further, as cool as Mantle is (which I’ve recognized elsewhere; I am genuinely excited), it’s not here yet, and its advantages at this stage extend no further than marketing talking points.

            So no, there’s really nothing new here- the pricing doesn’t matter till they’re actually on the shelf, and the performance and delivered features (other than increased TMDS support, which is nice) remain unchanged.

            If I say that I was hoping for more- like, an actual WHQL driver that fixes their known issues, or better clocks/lower power usage/heat output, or a cooler that doesn’t ‘blow’, does it not sound reasonable for me to be rather unimpressed?

            • clone
            • 6 years ago

            you know features are a chicken-and-egg thing, so I’ll let it go at “progress is good.”

            it’s a given that consoles will support DX 11.2, so there is a very good chance it will be used extensively.

            “If I say that I was hoping for more- like, an actual WHQL driver that fixes their known issues, or better clocks/lower power usage/heat output, or a cooler that doesn’t ‘blow’, does it not sound reasonable for me to be rather unimpressed?”

            given that competition necessitates pushing boundaries like thermals, consumption, and cooling, I suspect your desires weren’t on either company’s DESKTOP roadmap.

      • Fighterpilot
      • 6 years ago

      So, in other words… you failed to read or understand the review numbers?

        • Airmantharp
        • 6 years ago

        Which state that the new cards are magically faster than the cards they’re replacing? They don’t? They’re the exact same speed? Wow! I couldn’t have just said that!

          • clone
          • 6 years ago

          actually, the article states the new gen is notably cheaper and a wee bit faster than the old gen….. and both gens are getting a slew of new features for free, which is a helluva lot more than Nvidia brought to the table.

          Anandtech has a proper comparison review.

            • Airmantharp
            • 6 years ago

            Read comment above concerning MSRP versus market pricing- but you’re absolutely right if comparing MSRP.

      • Chrispy_
      • 6 years ago

      You clearly got out of bed on the wrong side this morning 😉

        • Airmantharp
        • 6 years ago

        This one was before I went to bed- but I did put a warning at the top!

        I just didn’t see a whole lot of positive here- they’re straight-up re-badges. The GPU dies are still larger and more complex than Nvidia’s, which means that Nvidia is still making bank if they drop their prices to compete- if they even feel that they have to, since AMD also dropped the game bundles.

        And where’s the WHQL driver that fixes all of the Crossfire bugs?

        Stuff like that, really. I mean, these are definitely good cards; but they’re the same good cards we’ve always had.

          • Chrispy_
          • 6 years ago

          Yep, it’s a price cut and a rebadge to bring MSRP back into line with street pricing.
          AMD are just doing what everyone’s been doing since forever.

            The only new news is that AMD have released an ALL NEW reference cooler, but nobody seems to be focusing on that as much as I’d like :\

            • Airmantharp
            • 6 years ago

            It’s still loud- that’s all I needed to hear.

            • clone
            • 6 years ago

            don’t typically buy reference-cooler-equipped cards, nor do I typically pay MSRP for the product, so while the OEM cooler might be interesting, the cooler my card comes with will be more important.

            • Airmantharp
            • 6 years ago

            Would you buy an OEM cooler if it didn’t suck? You see the hole you’re digging here?

            And weren’t you just arguing for AMD’s new MSRP schedule, against Nvidia’s older schedule? For cards that aren’t even on the shelf yet?

            • clone
            • 6 years ago

            you seem to be reaching for anything to poop on, so here, I’ll make it easy for you: Nvidia’s products just got overpriced, they just became feature-weak, and compared to the R9 280 they are also louder…. overall they just became less competitive.

            btw: I have a general dislike for all reference coolers from all companies, CPU and GPU. Nvidia had the chance to lower the bar on price; they didn’t, and they typically don’t. it’s why ppl talk in reference to AMD when they mention doing a “4870”….. y’know, part of the review says “graphics cards just got a lot cheaper”. that’s not me saying it, Airmantharp, it’s everyone, and in response you seem to be looking for anything to dispute it.

            • Airmantharp
            • 6 years ago

            Hey now, I started this topic off with negativity, and I put a warning up first! You’d call me a hypocrite if I changed my tune :).

            So yes, Nvidia just became ‘less’ competitive, but remember that they’re the ones with the most leeway here- their GPU is both smaller and more power-efficient compared to the 280 series and below. Not that this has changed for this whole generation, mind you.

            When I say Nvidia can ‘just lower their prices’, I’m referencing the fact that their cost of goods should be a good deal lower, due to the above reasons.

            Now, as to the HD 4870- that phenomenon happened because of the HD 2900, not because AMD was feeling especially generous. It was a business tactic, and it rightly worked; but it’s not sustainable- rather, it was an aberration. It’s not going to happen again unless the market really changes or someone screws up really badly.

            • clone
            • 6 years ago

            you are correct, the HD 2000 and 3000 series motivated the “4870”, but if you look back, AMD for whatever reason has been the company that will compete more aggressively on price… ATI didn’t; they tried to avoid it at the expense of everything. look at the X800 to X1800 transition, where they had to write down all of their unsold inventory.

            AMD, on the other hand, has always been willing to shave a hundred or more off the top just to make Nvidia’s life more painful.

      • itachi
      • 6 years ago

      Well, the hype is gonna be for the R9 290X, my friend. I have a feeling it’s gonna bring Nvidia to its knees.

      As I read on another blog: “Is it perhaps the so-awaited AMD return, with hopefully their next-gen CPU coming out next year?” I certainly hope so!

        • Airmantharp
        • 6 years ago

        See, now that’s something to get excited about. The 290-series is AMD actually trying to compete with Nvidia on the big-GPU stage, in both the gaming and compute arenas.

        And I remain cautiously excited about their CPU technology, too. Bulldozer is a great idea, I think, if they could get it balanced right.

      • pixel_junkie
      • 6 years ago

      Awesome nerd rage thread. I just love lamp.

    • Stickmansam
    • 6 years ago

    Great review

    I wonder how the R9s stack up against their 7000-series counterparts. Seems like the 270X would be a rough competitor to the 660Ti despite having the core of a 7870, which is more comparable to a 660. Seems like the higher memory clocks and slightly higher core clock would account for that.

    I wonder how much the 7000 series’ prices will drop. Great time to upgrade!

      • Porkbelly
      • 6 years ago

      I would have liked to see a 7850 thrown in the mix too. There’s been some great deals on them lately and I almost grabbed another. I’ll have to bide my time and see what happens.

      Great Review

        • Stickmansam
        • 6 years ago

        There be a 7850 in the R7 260X review

    • Commander Octavian
    • 6 years ago

    This is a win-win product from AMD: $150 cheaper than its direct competitor while being faster, and we haven’t seen the Mantle performance yet. Oh, and the 280X is both cooler and quieter than Nvidia’s GTX 770.

      • Airmantharp
      • 6 years ago

      The 280X uses an aftermarket cooler- one that can be affixed (and probably is affixed) to a GTX 770 as well. There’s no point of comparison here, except to say that the blower on the 270X is still pretty fricken’ loud; but we haven’t seen the blowers on the high-end cards yet.

        • Commander Octavian
        • 6 years ago

        Hmm, I didn’t notice that. The ASUS cooler is quite a bit better than the after-market coolers used by both AMD and Nvidia.

          • Airmantharp
          • 6 years ago

          AMD and Nvidia don’t use aftermarket coolers- they are the OEMs!

    • albundy
    • 6 years ago

    they did well on heat and power, considering they are rebadges.

      • Alexko
      • 6 years ago

      But isn’t there something wrong with ZeroCore on the 280X? It just doesn’t seem to be working.

        • clone
        • 6 years ago

        no, it’s fine; TR mentioned their testing didn’t allow it to work.

          • Alexko
          • 6 years ago

          They did? I can’t see it on page 10, so I guess it must have been somewhere else. But the 270X draws very little with the display off, so it seems to be working for that card, at least. This is pretty odd.

            • Damage
            • 6 years ago

            I didn’t mention anything. Looks like the Asus card has a problem with ZeroCore power. I’ve since tested the XFX, which works properly on that front. Will have results from it in my next review!

      • itachi
      • 6 years ago

      yea true, sometimes people have to consider that also, not only the raw power (I mean pixels…)

    • JustAnEngineer
    • 6 years ago

    Where’s the R9-290X?

      • Airmantharp
      • 6 years ago

      I know, right?

      I read the review, of course- usual excellence on the part of TR. But as noted in the review, we know how these cards perform already :).
