AMD’s Radeon R9 380X graphics card reviewed

If you’re shopping for a graphics card in the $200 to $250 range these days, your choice mostly boils down to one question: 2GB or 4GB? Nvidia’s GeForce GTX 960 comes with 2GB of RAM to start, and fancier versions come with four gigs. AMD’s similarly priced Radeon R9 380 performs comparably and can also be had in 2GB and 4GB flavors. Simple enough.

AMD is shaking up that comfortable parallel today with the Radeon R9 380X. This card’s Tonga GPU has more resources enabled than in the familiar Radeon R9 380. On the 380X, all of Tonga’s 32 GCN compute units are turned on, for a total of 2048 shader processors. This card also packs 128 texels per clock of texture-filtering power, versus 112 in the plain 380. The Radeon R9 380X will come with 4GB of GDDR5 RAM clocked at 1425MHz for a theoretical bandwidth peak of 182 GB/s.
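For the curious, that bandwidth figure falls directly out of the memory clock and bus width. Here's a quick back-of-the-envelope sketch of the arithmetic in Python (the 4x multiplier reflects GDDR5's quad-pumped effective data rate):

```python
# Back-of-the-envelope GDDR5 bandwidth estimate for the R9 380X's reference memory config.
memory_clock_mhz = 1425                              # base memory clock
effective_rate_gtps = memory_clock_mhz * 4 / 1000    # GDDR5 moves data four times per clock -> 5.7 GT/s
bus_width_bits = 256

bandwidth_gbps = effective_rate_gtps * bus_width_bits / 8   # bits to bytes across the full bus
print(f"{effective_rate_gtps:.1f} GT/s over a {bus_width_bits}-bit bus = {bandwidth_gbps:.1f} GB/s")
# -> 5.7 GT/s over a 256-bit bus = 182.4 GB/s
```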

Sapphire’s Nitro Radeon R9 380X

Aside from those slightly more generous resource allocations, the R9 380X’s spec sheet looks much the same as the R9 380. This card maintains its counterpart’s 32-pixel-per-clock ROP throughput and 256-bit memory bus. Since Tonga is one of AMD’s newer GPUs, it also gives the R9 380X support for modern AMD features like FreeSync, TrueAudio, Virtual Super Resolution, and Frame Rate Target Control.

We’ve long suspected that a fully enabled Tonga would have a 384-bit memory interface, along with more ROP throughput (48 pixels per clock), and several sources appear to have confirmed as much. However, the 380X has “only” a 256-bit path to memory. We’re not complaining, though. The 380X’s price and likely performance look quite attractive, even if they’re not exactly what we’d expected. Tonga’s color compression capability ought to help wring the best possible performance out of the card’s available memory bandwidth.

Here’s a quick look at the R9 380X’s specs, bracketed by those of the Radeon R9 380 and R9 390 for easy comparison:

|  | Base clock (MHz) | Boost clock (MHz) | ROP pixels/clock | Texels filtered/clock | Stream processors | Memory path (bits) | GDDR5 transfer rate | Memory size | Peak power draw | Price (street) |
|---|---|---|---|---|---|---|---|---|---|---|
| R9 380 | -- | 918 | 32 | 112 | 1792 | 256 | 5.5 GT/s | 2GB/4GB | 190W | $210 |
| R9 380X | -- | 970 | 32 | 128 | 2048 | 256 | 5.7 GT/s | 4GB | 190W | $239 |
| R9 390 | -- | 1000 | 64 | 160 | 2560 | 512 | 6 GT/s | 8GB | 230W | $319 |

Those are AMD’s reference specs, and as you can see, the 380X offers a little more juice than the R9 380 across the board.
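For a rough sense of what that extra juice amounts to in raw throughput, the reference numbers above can be turned into peak shader and texturing rates with a little arithmetic. The Python sketch below assumes the listed 970MHz figure is the peak engine clock and that each stream processor retires one fused multiply-add (two FLOPs) per cycle:

```python
# Peak throughput implied by AMD's reference specs for the R9 380X (assumptions noted above).
boost_clock_ghz = 0.970
stream_processors = 2048
texels_per_clock = 128

fp32_tflops = stream_processors * 2 * boost_clock_ghz / 1000   # 2 FLOPs per SP per clock
texel_rate_gtexels = texels_per_clock * boost_clock_ghz        # filtered texels per second

print(f"~{fp32_tflops:.2f} TFLOPS FP32, ~{texel_rate_gtexels:.0f} Gtexels/s of filtering")
# -> ~3.97 TFLOPS FP32, ~124 Gtexels/s of filtering
```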

What’s more, AMD’s board partners have already worked over the Radeon R9 380X with custom coolers and boosted clock speeds. Cards from those partners are the ones most builders will be using in their systems, and those cards are also the ones we’ll be using to test the Radeon R9 380X today.

We’ve already shown you Sapphire’s Nitro Radeon R9 380X above. This card comes with an eyebrow-raising 1040MHz GPU clock speed out of the box. The company also gooses the card’s memory clock to 1500MHz, for an effective speed of 6 GT/s. Sapphire’s card arrived in our labs first, so it’s the one we’ll use to represent the 380X’s performance in most of our tests.

Sapphire keeps the 380X’s Tonga chip cool with one of its attractive Dual-X heatsinks. This cooler’s twin ball-bearing fans can stop spinning under light loads for silent operation. From this angle, you can see this card’s numerous copper heat pipes, too.

It might be unusual to note for a graphics card, but the Nitro feels hefty and dense in the hand. That weightiness, and the copper on display, suggests a top-shelf cooler under the Nitro’s shroud. At about 9″ long, this card should be able to fit into most cases without a fuss, too.

Sapphire reinforces the Nitro 380X’s PCB with an attractively-finished aluminum backplate. The card draws power through twin six-pin PCIe power connectors. Sapphire tells us this Nitro 380X will carry a suggested price of $239.99. The company will also sell a reference-clocked model for $229.99.

Asus is also getting in on the R9 380X game, and it sent us one of its Strix R9 380X OC4G Gaming cards to put through the wringer. The Strix comes with a 1030MHz clock speed by default, and a setting in Asus’ included software utility can push the clocks all the way to 1050MHz.

This card’s brawny DirectCU II cooler carries heat away from the GPU with massive heat pipes that snake through an equally substantial fin array. Like the Sapphire card, the Strix can stop its fans at idle for silent running. Builders will want to double-check that their cases can swallow this card’s 10.5″ length without issue, though.

From the top down, we get a better look at this card’s attractive backplate, that enormous heat pipe, and the twin six-pin power connectors. You can’t see it in this picture, but Asus helpfully includes an LED near the power connectors that will glow red if you forget to plug in the required cables. The company also throws a one-year premium subscription for Xsplit Gamecaster in the box for the streamers out there.

 The hot-clocked OC4G Gaming card seen above will carry a $259.99 suggested price. Asus will also offer a reference-clocked Strix R9 380X with the same cooler for $239.99.

No, you’re not experiencing deja vu. We’ve also included an Asus Strix Radeon R9 380 card in this review. This card will represent the slightly-less-powerful R9 380 on our bench today. For the unfamiliar, the R9 380 is essentially just a re-badged Radeon R9 285—only this one has 4GB of memory, versus the 2GB on most R9 285 cards.

From the outside, this card looks a lot like the Strix 380X. It’s got a lot of the same perks from its more muscular sibling, like the semi-silent cooler and the Xsplit subscription. This version of the 380 sells for $219.99 on Newegg right now, and Asus is offering a $20 rebate card to sweeten the deal.

 

The GeForce GTX 960 goes 4GB, too

Going by price, the most natural foil for the Radeon R9 380X in Nvidia’s lineup is the GeForce GTX 960. We already know and love Gigabyte’s Windforce GTX 960 2GB from when we first reviewed that GPU, but card makers are now offering versions of the GTX 960 with 4GB of GDDR5 that are closer to the 380X’s sticker price. It seemed only logical to pick up the 4GB version of this card to represent the green team this time around.

This card goes for about $230 right now on Newegg. Larger memory size aside, the Windforce is practically identical to its 2GB cousin. This card gives us higher-than-reference 1216MHz base and 1279MHz boost clocks, and it keeps the hot-clocked GPU cool with a whisper-quiet twin-fan heatsink.

Our testing methods

Most of the numbers you’ll see on the following pages were captured with Fraps, a software tool that can record the rendering time for each frame of animation. We sometimes use a tool called FCAT to capture exactly when each frame was delivered to the display, but that’s usually not necessary in order to get good data with single-GPU setups. We have, however, filtered our Fraps results using a three-frame moving average. This filter should account for the effect of the three-frame submission queue in Direct3D. If you see a frame time spike in our results, it’s likely a delay that would affect when the frame reaches the display.
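Concretely, that filter is just a rolling mean over each frame time and its two predecessors. Here's a minimal Python sketch of the idea; the frame times are invented for illustration, and our actual processing pipeline may differ in its details:

```python
# Minimal sketch of a three-frame moving average applied to Fraps-style frame times (ms).
frame_times_ms = [16.1, 16.4, 15.9, 41.7, 16.2, 16.0, 16.3]   # made-up sample data

def three_frame_average(times):
    """Average each frame time with up to two preceding frames (shorter window at the start)."""
    smoothed = []
    for i in range(len(times)):
        window = times[max(0, i - 2):i + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

print([round(t, 1) for t in three_frame_average(frame_times_ms)])
```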

We didn’t use Fraps with Ashes of the Singularity, Battlefield 4, or the Fable: Legends benchmark. Instead, we captured frame times directly from the game engines using the games’ built-in tools. We didn’t use our low-pass filter on those results.

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor Core i7-5960X
Motherboard Gigabyte X99-UD5 WiFi
Chipset Intel X99
Memory size 16GB (4 DIMMs)
Memory type Corsair Vengeance LPX DDR4 SDRAM at 2133 MT/s
Memory timings 15-15-15-36 1T
Hard drive Kingston SSDNow 310 960GB SATA
Power supply Corsair AX850
OS Windows 10 Pro

Here are the full specs of the cards we used in this review, along with their driver versions:

|  | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
|---|---|---|---|---|---|
| MSI GeForce GTX 960 Gaming 2G | GeForce 358.91 | 1216 | 1279 | 1753 | 2048 |
| Asus Strix Radeon R9 380 4GB | Catalyst 15.11.1 | -- | 990 | 1425 | 4096 |
| Gigabyte GeForce GTX 960 4GB | GeForce 358.91 | 1216 | 1279 | 1753 | 4096 |
| Sapphire Radeon R9 380X | Catalyst 15.11.1 | -- | 1040 | 1500 | 4096 |
| MSI GeForce GTX 970 Gaming 4G | GeForce 358.91 | 1114 | 1253 | 1753 | 4096 |
| XFX Radeon R9 390 | Catalyst 15.11.1 | -- | 1015 | 1500 | 8192 |

Thanks to Intel, Corsair, Kingston, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. Our thanks to Sapphire, Asus, XFX, and MSI for providing the graphics cards we tested in this review, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Fallout 4

Let’s kick off our testing with a brief stint in Bethesda’s newest post-apocalyptic romp. Fallout 4 uses several of the latest rendering techniques to provide players with the prettiest Wasteland yet.


Looking over this plot of the frame times from a single test run, we can see that all of the cards here are generally delivering a smooth experience. None of them produce large or frequent spikes up toward the 50-ms range; big spikes like that would correspond to a frame-rate drop below 20 FPS, which can translate into a noticeable slowdown during gameplay.

The slightly more copious serving of pixel-pushing resources in the R9 380X helps it edge out the GeForce GTX 960s and the Radeon R9 380 when it comes to average FPS numbers. The Radeon R9 390 and GeForce GTX 970 are in a whole different league, though. We’d expect nothing less of those considerably more muscular cards in this measure of potential performance.

As the plots above hinted, the R9 380X is off to a good start in our advanced 99th-percentile frame time measure.  Neither the GeForce GTX 960 cards nor the Radeon R9 380 can deliver frames quite as smoothly as the 380X. To be fair, all of the cards here are hanging pretty close to the 33.3-ms threshold, so they all render the majority of their frames at a rate of 30 FPS or better. Meanwhile, the Radeon R9 390 offers a smoother experience still, while the GeForce GTX 970 rules the roost by a wide margin.

Looking at the “tail” of the frame time distribution for Fallout 4 lets us see how these cards handle the toughest frames they’re tasked with. The R9 380X has a curious hump in the middle of its curve as we move toward the 99th percentile. Even so, its worst frames don’t fall much above the 30-ms mark, so gamers can expect decently smooth frame delivery from the card overall.

Our numbers thus far have already told this tale, but the R9 380 and the GeForce GTX 960s don’t manage quite as low a frame time curve as the 380X here.


In our measures of “badness,” none of the cards spend any time past the 50-ms threshold that tends to produce noticeable drops in smoothness. The 380X doesn’t spend any time beyond the 33.3-ms barrier, either. The R9 380 and the GTX 960s do have a few bad frames past the 33.3-ms mark that might drop the frame rate under 30 FPS briefly, though.
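For readers who want to compute these "badness" numbers from their own frame time logs, the gist is to total up how far each frame overruns a given threshold, alongside the 99th-percentile frame time. A rough Python sketch, using invented frame times rather than our measured data:

```python
import numpy as np

# Toy frame-time log in milliseconds; real data would come from Fraps or a game engine's own capture.
frame_times_ms = np.array([14.2, 16.8, 35.1, 17.0, 52.3, 16.5, 33.9, 15.8])

def time_spent_beyond(times_ms, threshold_ms):
    """Total milliseconds by which frames overran the threshold -- our rough 'badness' measure."""
    overruns = times_ms - threshold_ms
    return overruns[overruns > 0].sum()

print("99th-percentile frame time:", round(np.percentile(frame_times_ms, 99), 1), "ms")
print("Time beyond 50 ms:", round(time_spent_beyond(frame_times_ms, 50.0), 1), "ms")
print("Time beyond 33.3 ms:", round(time_spent_beyond(frame_times_ms, 33.3), 1), "ms")
```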

 

Call of Duty: Black Ops 3

 


With Black Ops 3, our frame time plots continue to show smooth performance from the GTX 960 4GB and the R9 380X. The plots from the GTX 960 2GB and the R9 380 get a little spikier than in Fallout 4, but both cards still manage to stay below the 50-ms mark.

The R9 380X comfortably slots in between the GTX 960 4GB and the GeForce GTX 970 in average FPS, and it delivers lower 99th-percentile frame times than the GTX 960 cards and the R9 380 while doing it. Not bad.

Here’s a weird plot. That stairstep pattern you see in these curves isn’t a product of incorrect data. We think it’s actually an artifact of some kind of internal quantization going on in Black Ops 3‘s engine. Hm. We may have to investigate that phenomenon further. For now, though, you can see that the 380X’s frame time curve sits comfortably lower than the GTX 960s and the 380 once more, and even its worst frames aren’t too far above the crucial 33.3-ms threshold.


In our measures of “badness” for Black Ops 3, none of the cards spend any time past the troublesome 50-ms mark. The R9 380X and 4GB GTX 960 barely spend any time above our critical 33.3-ms mark, too. The Radeon R9 380 spends somewhat more time in this range, while the 2GB GTX 960 really struggles. Perhaps this is one game where the extra memory helps.

 

Battlefield 4


Battlefield 4 is the first game that gives the R9 380X a bit of trouble. The 380X and the GTX 960s are right on top of one another in our average FPS measurements for BF4, but the Radeon is slightly worse off than the 960s in our 99th-percentile frame time metric.


The 380X only spends a tiny bit of time past the 50-ms mark, but it struggles for nearly a second with frames that take longer than 33.3 ms to produce. The Radeon R9 380 has a harder time still. The GTX 960 cards have much less trouble here. The GTX 960 4GB is a little worse off than its 2GB counterpart, but the difference is minuscule.

 

The Witcher 3


Here’s another game that trips up the 380 and 380X a bit. You can see a few spikes toward the 50-ms range from both cards. The GTX 960s don’t appear to have as much trouble with whatever the Radeons were chewing on.

The R9 380X leads its class in the potential measure of average FPS, but as those spikes would suggest, it falls behind the GeForce cards in our 99th-percentile frame time metric.

 


In our badness measures, the lower-end Radeons spend a bit of time past the 50-ms mark, resulting in noticeable slowdowns we could feel while play-testing. They also deliver more than a few frames beyond the 33.3-ms threshold. That all adds up to a less smooth experience than the GeForce cards can deliver in this title.

 

Ashes of the Singularity

Although Ashes of the Singularity‘s built-in benchmark allows us to collect both DirectX 11 and DirectX 12 data, we’ve chosen to collect and crunch numbers in DirectX 12 mode only for the graphs below.


Here’s another test where the GTX 960 4GB and the R9 380X are neck-and-neck in both performance potential and delivered results. If you were hoping for a test that demonstrates the advantages of one card over another, keep reading. This ain’t it.


As the frame time plot above hinted at, Ashes of the Singularity may be the hardest game in our test suite for these cards to run smoothly. Save for the heavyweights, all of the cards tested spend a lot of time past the 50-ms mark, and even the 33.3-ms threshold is a major challenge with our test settings.

 

Fable: Legends

We ran the Fable: Legends DirectX 12 benchmark at its 1080p preset. For more information about this benchmark, take a look at our full Fable: Legends rundown.

 


The R9 380X has a better time of it in the Fable: Legends benchmark. Our frame time plot doesn’t reveal any major spikes in the 380X’s graph, and the card takes the top of its class in both potential and delivered performance.


Though the R9 380X delivers generally smooth performance in Fable: Legends, this benchmark appears to give the GeForce GTX 960 2GB a pretty hard time. That card spends quite a bit more time than we prefer to see past the 50-ms and 33.3-ms thresholds. The GTX 960 4GB is much closer to the rest of the pack.

 

Power consumption

Please note that our “under load” tests aren’t conducted in an absolute peak scenario. Instead, we have the cards running a real game, Crysis 3, in order to show us power draw with a more typical workload.

No great surprises here. The Radeon R9 380X needs about 50 watts more juice under load to do its thing than the Nvidia cards or the Radeon R9 380. Cards based on Nvidia’s Maxwell silicon continue to be paragons of power efficiency. Let’s see how the extra power used by the Radeons translates into noise and heat.

Noise levels

At idle, these cards are all about as quiet as can be. Any variations in noise levels at idle are likely attributable to changes in the noise floor in our testing environment, not the cards themselves. We’d expect any of them to be inaudible inside a PC case at idle.

Under load, Asus’ Strix R9 380X “beats” the Sapphire card by one decibel. Both cards hang right with the custom-cooled GTX 960s from MSI and Gigabyte, too, despite their bigger appetites for electricity and the corresponding increase in heat production that brings. At least the Asus and Sapphire cards won’t transmit that fact to your ears when they’re running all-out.

Load temperatures attest to the effectiveness of the aftermarket coolers on the Asus and Sapphire cards, too. All of the R9 380X and GTX 960 cards we tested are within a degree or two of one another under load, even with their factory-boosted clocks. Nothing to complain about here.

 

Conclusions

As usual, we’ll wrap up our tests with a couple of value scatter plots. To make our graphs work, we’ve converted our 99th-percentile numbers into FPS. The best values tend toward the upper left of the plot, where performance is highest and prices are lowest. We use a geometric mean to limit the impact of any outliers on the final score.
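To make that conversion and averaging concrete, here's a small Python sketch. The per-game numbers are placeholders rather than our actual results; the point is simply the 1000/ms conversion and the geometric mean:

```python
import math

# Hypothetical 99th-percentile frame times (ms) across a test suite -- placeholders only.
p99_frame_times_ms = {"Game A": 28.4, "Game B": 30.1, "Game C": 36.7}

# Convert each 99th-percentile frame time into an equivalent FPS figure.
fps_equivalents = {game: 1000.0 / ms for game, ms in p99_frame_times_ms.items()}

# The geometric mean keeps a single outlier title from dominating the overall score.
geomean_fps = math.exp(sum(math.log(v) for v in fps_equivalents.values()) / len(fps_equivalents))

print({game: round(v, 1) for game, v in fps_equivalents.items()})
print("Overall 99th-percentile FPS score:", round(geomean_fps, 1))
```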


And there you have it. Whether you measure the R9 380X by our preferred 99th-percentile method or in traditional average FPS, this card performs better than the GeForce GTX 960 4GB. It also costs slightly more. All told, that means these cards offer similar value propositions. As AMD hoped, the 380X delivers solid performance in the games we tested, even at 2560×1440 with considerable amounts of eye candy turned up.

It’s true that the Maxwell-based GeForce GTX 960 is more power-efficient than the R9 380X, but the aftermarket coolers on the Sapphire and Asus Radeons we tested are more than up to the task of keeping the underlying silicon cool without making any more noise than the competing GTX 960 cards. Builders might notice a bit more heat from their PCs with an R9 380X pushing pixels, but that’s mostly hair-splitting. The R9 380X is a very solid option for its price.

The biggest problem for the R9 380X might be those two dots in the upper-right corner of our scatter plot. Some variants of the GeForce GTX 970 can be had for under $300 right now, and the R9 390 is right there with it. If you’ve been paying attention to our test results, you know that the extra cash buys you a lot more graphics card. Unless the expected $240 street price of an R9 380X is the absolute top of your budget, we think it’s worth saving up the cost of one or two PC games and getting into an R9 390 or a GTX 970.

Comments closed
    • Rageypoo
    • 4 years ago

    These numbers seem insignificant with Pascal coming out next year.

      • derFunkenstein
      • 4 years ago

      Personally I don’t see Pascal launching anywhere near this price range until much later. Nvidia will replace the GTX 970 and 980, I would think. That’s where the margins are and this new process will be expensive for a while, I think.

        • NoOne ButMe
        • 4 years ago

        The new process will be more expensive forever based on current information.

        22FDX should bring slightly denser and better-performing transistors in 2017 for commercial products. But it likely will cost the same as 28nm.

        Expect the mid and lower end to stay 28nm for a while.

        Cost per transistor goes up slightly, cost to design and validate goes up DRASTICALLY. If you don’t get better performance per transistor you can burn an extra 50-100+ million for one card. By some counts the cost is over 200 million more.

      • ptsant
      • 4 years ago

      This card is loaded with all the new technologies. At around $160-180, it would make for very good competition and a very solid choice for 1080p. Pascal is not likely to compete in this price bracket. Unless, of course, there are yield questions and they launch medium or low-end components first, as with the 750 Ti.

      Don’t forget that AMD is also launching new GPUs in 2016. All HBM parts will have a price premium, though, and I don’t expect them to be placed under $300.

        • NoOne ButMe
        • 4 years ago

        At $200 it would be good. At 160-180 it would be a steal.

        I wouldn’t say loaded with new tech though.

    • ColdMist
    • 4 years ago

    On video card reviews, it would be really nice to have (maybe not always, but on a few games where it makes sense) the top-of-the-line card from the same company, like a Fury X, and a price-comparable-at-release card from the previous architecture (not numbering scheme).

    With that, it would help a lot to know when upgrading from a $200 card from 2 years ago to a current $200 card, how much extra performance am I going to get. Drivers change over time, so trying to compare apples to oranges with benchmarks from 2 years ago is hard to quantify.

    And, showing a top-of-the-line card shows what ‘can’ be done today, just to know how it fits in the grand scheme of things. Is this card 50%? 95%? etc. Showing a 970 helps, but just go a bit farther on 1 or 2 games 😉

    • ish718
    • 4 years ago

    Weak.
    R9 290 for $240 on newegg.
    512-bit 4GB GDDR5
    2560 Stream processors

      • f0d
      • 4 years ago

      i wish we got those prices in australia 🙁
      i would buy 2

      the 960/380x is actually about $200 cheaper than a 970/390 (cant get 290/290x’s at all anymore) here in australia ($289 vs $479 at the cheapest)

      would i stretch my budget for something faster than a 960/380x? for sure i would
      but for some people in other countries the price difference is pretty big and they cant afford to

      edit: examples
      960: https://www.pccasegear.com/index.php?main_page=index&cPath=193_1732&vk_sort=1
      970: https://www.pccasegear.com/index.php?main_page=index&cPath=193_1692&vk_sort=1
      390: https://www.pccasegear.com/index.php?main_page=index&cPath=193_1769&vk_sort=1

    • TopHatKiller
    • 4 years ago

    Just had a ‘orrible thought. Tonga IS 48ROP/384bit i/o…. 380x is still fused off, because in 2016, it will still be here….Two discrete gpus from amd on 14/16 finfet, according to statements made by amd, are not enough to the replace the whole product stack…. so tonga re-re-resurfaces next year at a fully enabled & lower price.
    BUGGER! i JUST FOUND OUT I WAS RIGHT! i hate it when that happens. [it’s never happened before]

      • Freon
      • 4 years ago

      If it’s that easy I don’t know why AMD is letting the 970/980 pillage their coffers when the 390/390X in the same price bracket are probably significantly more expensive to manufacture than a Tonga.

      • NoOne ButMe
      • 4 years ago

      Do you know what enabling the rest of Tonga will do to its performance? Almost nothing.

      But, you have fewer bins available to sell. Meaning you have to artificially cut down more chips to the lower Tonga bin. Meaning you get a lower ASP for your dice.

        • TopHatKiller
        • 4 years ago

        Performance: yeh, except maybe at higher resolution. Then a little bit…
        But, crap, the more you make, the cheaper they become. Yields no doubt are fantastic already.
        But quantity reduces unit price, and AMD doesn’t need to spend the large sums on a new discrete design for whatever segment they slot this thing into. I hope I just had a bad dream. At least bloody Pitcairn will be killed next year?

          • NoOne ButMe
          • 4 years ago

          Bandwidth for the specs doesn’t help more than 5% on average at 1440p.

          When you have a fixed wafer cost and packaging costs, etc. your costs don’t go down.

          The *reduced average selling* price per dice that AMD would get in your idea isn’t worth the reduction in R&D + other fixed costs per card.

            • TopHatKiller
            • 4 years ago

            You’d think your comments and costs would be right: except they can’t be. That’s why amd have continued to use the same chips for the past sixty or seventy years or so [ feels like] at some point you’d think enough would be enough…?

            • NoOne ButMe
            • 4 years ago

            what even are you trying to say? I’m curious.

            • Redocbew
            • 4 years ago

            Rule #1. You don’t ask what THK is talking about.
            Rule #2. You don’t ask what THK is talking about.

            • NoOne ButMe
            • 4 years ago

            What is rule #3? 😉

            • sweatshopking
            • 4 years ago

            6

            • Redocbew
            • 4 years ago

            42

            • TopHatKiller
            • 4 years ago

            TopHatKiller turning into ‘Hulk Smash’ can occur with appalling alacrity.

            • TopHatKiller
            • 4 years ago

            that’s funny. Stupid, but funny.

            • TopHatKiller
            • 4 years ago

            Sorry for the delay. Simply this: amd is desperate to cut costs throughout the whole business. If continually re-using the same chips didn’t save them a reasonable amount of money they wouldn’t do it. Now, if take this saving but then subtract the loss of sales by not introducing superior designs you’d probably come up with a very different financial equation, but amd seem unable to see past their own nose* at this point
            {* Zen! Their nose is called Zen!. Ha.]

            • NoOne ButMe
            • 4 years ago

            Which has no relations to Tonga. The cost of making a Tonga/Tahiti equivalent chip with less die area at this point is overall less total income per card sold.

            now, if Tonga had been designed to be smaller/with fewer transistors when it was made (presumably) for the iMac 5K, then things would be different.

            • TopHatKiller
            • 4 years ago

            i didn’t say they were smart! a bit desperate, perhaps…

            • NoOne ButMe
            • 4 years ago

            The cost to make a superior design isn’t worth it. Either you make a shrunken Tahiti (should end up about 300mm^2) or you make a Hawaii with the FP64 rate set to 1/16 and a new 384b bus w/compression. Probably dropping a bit below 400mm^2.

            No option will really increase sales there. And the savings on production will be minuscule compared to the cost of even just the masks.

            Making a 24CU part with a 192b bus might be a valid chip. Replacing Pitcairn. Shouldn’t be too much larger thanks to the loss of a bus and process advances. That would be an interesting part.

            I don’t think the increased sales over Pitcairn plus the increased sale price would be high enough to be a positive return.

            • TopHatKiller
            • 4 years ago

            Well… you’re assuming amd didn’t have an improved gpu design fit for the current node.
            Logically, I think they did. But, accountants / mangers said… no.
            “Save money, keep it as pocket-change, look!: stock-price, cash-reserve, mi! lunch allowance! Mi! Bonus! on share-price! Mi! Money and attachments mi liker.”
            Or. whatever. Saved for 14/16nm next year.

            • NoOne ButMe
            • 4 years ago

            AMD’s woes have little to do with GPUs. They have made improved GPUs. They refuse to move off of GCN for good reasons. Mainly GCN everywhere. I wouldn’t be too shocked if on 14nm they licence it to a company making an SoC. Phone, Tablet, Console, Laptop, desktop, workstation, server, HPC. All GCN. GCN everywhere.

            The year delay of Llano followed by almost a year delay of Kaveri along with terrible handling of the channel inventory is what has really hurt them.

            Well, Bulldozer being crap also, but, well. When you introduce an architecture 2-3 years late below clockspeed targets that happens.

            AMD’s issues are larger CPU based. Their GPU troubles wouldn’t be “trouble” if the CPU side wasn’t f-cked.

            • TopHatKiller
            • 4 years ago

            Allegedly MediaTek have already done that. Also, even more allegedly Samsung were sniffing around.

      • Chrispy_
      • 4 years ago

      I don’t think Tonga has 48 ROPs

      ROPs are part of the shader engines, and Tonga has four of those, each with 8 CUs of 64 SPs, giving 512 shaders per engine.
      If Tonga had hidden ROPs, it would also have hidden shaders, so a 48 ROP part would assume that Tonga was actually a 3072-shader part with 6 geometry engines and that’s just crazy.

      Less scientifically, AMD seems to be geometry-limited and ROP-limited a lot of the time; GCN 1.2 has very capable shaders indeed, so if unlocking more was truly that simple they’d be doing it to push their products into a higher price bracket and making more profit. Enabling any disabled ROPs would be easy, since they are internal to the shader engines – there’d be no need for a different package or increase in complexity – it would be as trivial as enabling the disabled CUs that differentiated the 7950/7970, 280/280X, 380/380X. No matter how many variants AMD make of a chip, they never change the Geometry or ROP setup. That stuff is baked in and too integral to the larger shader engine to be easily enabled or disabled.

      The extra two memory controllers on the Tonga die appear to be unused, but enabling them would cost a lot; it would require a larger, more complex package for the die, it would need a more expensive board with more layers, it would cost clockspeed because more of the TDP would need to be allocated to the memory controllers, and lastly, it would add the cost of extra GDDR5 modules. Granted, you’d get more memory and more bandwidth, but Fiji has proved that in common scenarios that doesn’t really matter at the moment.

        • NoOne ButMe
        • 4 years ago

        I believe GCN ties ROPs to the bus. For GDDR5 chips. Each 64b bus has 8.

          • Chrispy_
          • 4 years ago

          Then Tahiti should have had 48 ROPs, not 32 and Oland should have had 16 ROPs not 8.

          If you look at the functional diagrams of GCN 1.2 parts like Fiji, Hawaii and Tonga, the ROPs are integral to the shader engines, and all three models have 4 shader engines.

          When those models lose shaders (specifically 1CU per engine) for the transition from XT to Pro variants, they lose TMUs, but not ROPs. I know function diagrams aren’t 100% accurate (like for example, ignoring the missing two memory controllers from Tonga) but if the ROPs were tied to the memory bus they would be drawn separately to the shader engines between them and the memory controllers. They’re not, though. They’re the resource through which the whole shader engine outputs.

            • NoOne ButMe
            • 4 years ago

            I stand corrected! 🙂
            Oh, is it Nvidia where they’re tied to the controller, then? I thought it was one of them.

            • Chrispy_
            • 4 years ago

            Yep, that was part of the 3.5GB + 0.5GB GTX970 debacle.
            Nvidia’s ROPs are tied to the L2 cache between the SMM’s and the memory controllers.

            Severing one of the memory controllers also culled 8 ROPs and 256KB of valuable L2 cache, which had some people upset because Nvidia mistakenly claimed 64 ROPs at first when it actually only had 56.

            • namae nanka
            • 4 years ago

            Tahiti was an exception(not sure of Oland but that doesn’t look like the same case anyway) with a crossbar for the mismatch. It went away with Hawaii and most probably hasn’t returned with Tonga.

        • namae nanka
        • 4 years ago

        Hawaii has double of them without doubling the shader count. Though having 3 of them instead of 2 or 4 does seem to be out of the picture.

        Besides AMD haven’t got an arrangement like nvidia’s where shader engines seem to be directly connected to ROPs leading to a difference in how their cut down chips compare in pixel fillrate tests with their full fledged siblings.

          • Chrispy_
          • 4 years ago

          In the Fiji launch article, AMD explained why Hawaii had an oddball number of CUs per engine (11, rather than a power-of-two).

          Hawaii hit the limit of physical size that could be fabbed. If the 28nm lithography process and fabs had been able to make a physically larger die, Hawaii would have been a 4096-shader part. Fiji’s HBM was not just a solution to memory bandwidth, it was also a way to free up die area driving high-speed GDDR5 controllers.

          As for why they added more ROPs to the shader engines, rather than increasing the number of shader engines to eight, that’s because GCN scalability has a maximum of four shader engines, and also why we now know Fiji to be geometry limited, because there are still only four geometry engines, just like Tonga, though they are “beefed up” over Hawaii’s GCN 1.1 geometry engines.

            • NoOne ButMe
            • 4 years ago

            AMD could have made Hawaii larger with the physical process.

            Nvidia was doing so. Of course AMD likes to not screw their own yields.

            • namae nanka
            • 4 years ago

            The point was that they aren’t tied in to number of shaders. And the oddball number here would be of ROPs and not CUs with 3 of them for a shader engine.

            “Hawaii hit the limit of physical size that could be fabbed”

            Strange then that nvidia could get a 100mm2 bigger behemoth out before Hawaii.

            “rather than increasing the number of shader engines to eight”

            You are going on a different tangent altogether.

            • Chrispy_
            • 4 years ago

            Yes, Nvidia were making 20% larger chips than Hawaii, but in the super-niche $650+ low-volume, high-profit market. As NoOne ButMe hinted, yields were so low at this size that TSMC was reporting 85% scrap rates on initial GK110 runs; 10 usable dice from a $6000 wafer is woeful, which is why the 780 was so cut down - because so much of it was defective and had to be disabled. AMD chose well to avoid making a supersized die back then.

            And I’m not saying that ROPs are tied to the number of shaders, I’m saying that they’re NOT tied to the memory bus. I think when I say “shader engine” (SE) you’re confusing that for shaders, but they are TOTALLY different terms. A Tonga SE contains:

            1 Geometry Processor
            1 Rasterizer
            7 or 8 CUs (depends on model), which contain 4*16=64 shaders each
            8 ROPs

            An SE is the AMD equivalent of an Nvidia GPC. The 8 ROPs per SE are separate from the CUs but internal to the SE, and not on the outside bus. The reason I mentioned GCN’s four-SE limit is because AMD had to find a way to increase raster throughput for Fiji and Hawaii, and they did this by doubling the number of ROPs *for those specific chip designs*. In an ideal world, GCN would scale to 8 SEs, allowing for 8 geometry processors and 8 rasterisers too, but it doesn’t - for whatever reason that only AMD can answer.

            • NoOne ButMe
            • 4 years ago

            Spot on. And it makes me wonder how long until the 17b tranny Pascal comes.

            Nvidia has had very bad yields compared to everyone else for two nodes already at the start.

            Even cost-insensitive FPGA parts on 20nm (cannot remember who made it) did 4 5b tranny dice to get one 20b tranny part. And they can afford to get one good dice off of a $6000 wafer. (Actually that’s probably not quite true).

            And, once the 780 Ti was released, yields should have been viable for the price point with healthy margins. It took them over 2 years from first tape-out to that part though, if memory serves.

            • Mr Bill
            • 4 years ago

            +++ Because he is right.

            • Chrispy_
            • 4 years ago

            Thanks. I’m always right, even when I’m wrong:

            It’s my prerogative as a grumpy old bastard 😉

            • Mr Bill
            • 4 years ago

            How much of a bump might Tonga get with 4GB of HBM instead of GDDR5? I wonder if they can get that close to this price point.

      • Mr Bill
      • 4 years ago

      28 nm Antigua based GPU (full Tonga)
      256 bit bus
      2,048 stream processors
      128 texture mapping units
      32 raster operation units
      4GB GDDR5
      The 380X gives 4.0 TFLOPs (FP32) and has a 16:1 FP32 to FP64 ratio for 0.25 TFLOPS (FP64)
      The GTX 960 gives 2.4 TFLOPS (FP32) and has a 32:1 FP32 to FP64 ratio for 0.075 TFLOPS (FP64)
      The GTX 970 gives 3.5 TFLOPS (FP32) and has a 32:1 FP32 to FP64 ratio for 0.109 TFLOPS (FP64)

    • ermo
    • 4 years ago

    @JeffK: Do you know how the R9 280X/7970 GHz Ed. 3GB cards compare to the 380X 4GB?

      • NoOne ButMe
      • 4 years ago

      The 280x and the 380x are neck and neck for performance.
      So are the 7970Ghz and 280x.

      • jihadjoe
      • 4 years ago

      Ditto. I wish they included the 280X so we could see how the new card compares to the previous generation.

      Edit: TPU has a 280X in their review (http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/23.html), and it appears the 380X is actually 5-10% slower than the 280X, and the factory OC’d ASUS 380X Strix version they had just about matches the stock 280X.

        • NoOne ButMe
        • 4 years ago

        Um, where did that 5-10% slower come from? Are you reading your own link?

        The 280x has about 3% more GFLOPS and a lot more effective bandwidth.
        And according to TechPowerUp it’s about 1% faster than the 380x at 900/1080p and about 4% faster at 1440/2160p.
        Or, the 380x is 1-3.5% slower than the 280x.

          • jihadjoe
          • 4 years ago

          Err sorry I was looking at the individual games, not the averaged score even though I did link to their summary page. My bad.

          BF4 in particular since it was mentioned elsewhere in this thread has the old 280X with a huge advantage.

            • NoOne ButMe
            • 4 years ago

            Yeah, the individual games on TechPowerUp and how they do their summary can be quite misleading at times. I’ve not heard of it being too bad with cards of the same vendor, but in some reviews one vendor leads by 5-10% for 15 games and the other leads by 50%+ for 2. And they end up being “the same performance”.

            All cool, thanks for the explanation! 🙂

    • Pettytheft
    • 4 years ago

    Considering I’ve seen 970s as low as $260 and custom-cooled 390s at $280, I would skip this unless it was discounted.

    • tuxroller
    • 4 years ago

    Although it may not look it, the 280X and 290 have nearly identical slopes, so fps/$ is identical.
    Interestingly the card with the highest slope is the 970.

      • Freon
      • 4 years ago

      Well, as the conclusion points out, the 970 is realistically very available under $300. They marked the original MSRP on the graph, but I check /r/buildapcsales daily and I’d guess on 80% of days you can get a good 970 for $280 or less (Asus Strix, eVGA ACX2.0, etc), and the Zotac 970 has been $250 many times if you don’t mind what is probably a bit more of a cutrate base speed card.

      Feel free to F5 that subreddit for a few days and see.

      If you shift the 970 point on the graph to a more street price value you can see there’s a huge ramp from the $200-250 cards up to a 970.

      If the 380X starts selling for a similar percentage discount off its MSRP then of course the calculation changes. I.e. if we see them for $199-209, ok, $80 more for a 970 is a stretch for someone trying to hold to a specific budget.

    • sweatshopking
    • 4 years ago

    MY 4850 WENT TO SILICON HEAVEN

      • anotherengineer
      • 4 years ago

      MINE STILL LIVES!!!

        • ermo
        • 4 years ago

        Mine too. And it’s running Linux OSS drivers now, using an aftermarket dual fan Arctic Cooling heatpipe cooler and custom BIOS power saving settings (low: 250 MHz/mid: 500 MHz/high: 690 MHz). Pretty impressive run all things considered.

        *knock on wood*

      • christos_thski
      • 4 years ago

      Mine too. Damn things kept burning out, so far as I can tell, lots of people lost their 4850/4870 cards. 😉 I do have a 7870 that’s going strong, though.

      • Jigar
      • 4 years ago

      Mine still lives, and i use it as my back card – Unfortunately the same cannot be said for my HD5850.

      • HisDivineOrder
      • 4 years ago

      In Silicon Heaven, the Green and the Red and, yes, even the Blue dance together. Their long wars forgotten, their biting words and complaints about drivers a distant memory.

      In Silicon Heaven, there is only dust and the loud whine of the FX5800’s all in the corner.

    • DPete27
    • 4 years ago

    How are we determining pricing here? I get that some people don’t like mail in rebates, but I’m not sure that justifies ignoring them altogether. The vast majority of GTX 960 2GB cards on newegg are averaging $170 (some after MIR, some without), GTX 960 4GB cards are averaging $210, and the 380X’s are going for average of $230. I get that readers can adjust the pricing to fit current market trends, but…

    I’m saying averages because, as I mentioned before, there are/have been numerous sales on GTX 960 4GBs for $160-175 after MIR.
    (also, I’m not a fanboy of either side)

      • Demetri
      • 4 years ago

      MSRP is probably the only fair way to do it, at least from a review standpoint. The prices are in too much flux. The 380 4GB is already in the same range you mentioned for the 960 4GB. You’ll see rebates and sales on the 380X soon enough, we just have to wait a little while. It was just released today.

    • anotherengineer
    • 4 years ago

    On a side note, has it been officially confirmed if these chips have a 384-bit bus?

    • Bensam123
    • 4 years ago

    Battlefield 4 with no Mantle benchmarks. 🙁

    Although this may not apply to most consumers, you can increase performance with AMD cards in FO4 by adding it as a custom title in CCC and then setting the tessellation to 8x or lower. FO4 overuses tessellation on some things, and it doesn’t really change much of anything.

      • BobbinThreadbare
      • 4 years ago

      Techreport showed Mantle is slower with new AMD cards than DirectX, so they dropped it.

        • Bensam123
        • 4 years ago

        It’s slower on Fury… Why wouldn’t you use it on cards it’s faster on? The option is there, they aren’t doing anything special.

          • NoOne ButMe
          • 4 years ago

          The Mantle implementation on BF4 was pretty much a card by card basis.

          I doubt it is faster on the 380x under that game. Games since that have been more generalized in how support is done to my knowledge.

            • Bensam123
            • 4 years ago

            It was an architecture-by-architecture basis. The only card it’s proven to be slower on is Fury, and that was at the launch of Fury, which was odd to say the least when it happened.

            • BobbinThreadbare
            • 4 years ago

            Fury is the same architecture as the 285/380/380x. It’s all GCN 1.2

            • Bensam123
            • 4 years ago

            Hmmm… Chip difference then? The Tom’s Hardware review that Jihad posted said that they have problems with less than 3GB of memory, and the ones TR tested have 4.

            For some reason I thought the 380/x were still based off of Tahiti and not Tonga.

            • Chrispy_
            • 4 years ago

            You could be forgiven for thinking that, given how the rest of the entire 300-series was a huge re-brand-a-thon.

            But no, Tahiti was dropped for the 300 series whilst the 7870 lives on for its second rebrand (and on a tangential note, both AMD and Nvidia need something modern down at the 370/750Ti price point!)

      • jihadjoe
      • 4 years ago

      Check out the R9-285 numbers in these BF4 results from Tom’s: http://www.tomshardware.com/reviews/amd-radeon-R9-285-tonga,3925-6.html. No reason why scaling would be any different with the 380X since it’s the same architecture.

        • NoOne ButMe
        • 4 years ago

        It would be equal to the 380 ;). And the 380x should be about 10% faster.

          • atari030
          • 4 years ago

          The 380 is actually a bit faster than the 285 due to some uprated specs. It wasn’t completely untouched when it went from a 200 series to a 300 series card. At least in terms of what the card vendors have done in tweaking their versions of it.

          • jihadjoe
          • 4 years ago

          Good point lol. Fixed.

        • Bensam123
        • 4 years ago

        I think you missed the point… I can extrapolate all I want, that doesn’t mean the readers do that. That’s the whole reason the cards are tested in the first place. You could’ve said the same thing about this whole review.

      • arbiter9605
      • 4 years ago

      So you want to do custom settings in the drivers to cheat the results to benefit AMD? That would be a seriously unethical thing for a reviewer to do.

        • DoomGuy64
        • 4 years ago

        You should be telling the developer that. FO4 has a lot of issues that require editing .ini files to fix. People have been comparing it to Batman:AK, in terms of how buggy the engine is. Not really a good benchmark, unless you’re trying to find a “gotcha” example, which doesn’t work because it runs equally bad on most hardware. I’ll be waiting for patches before buying this title.

        https://www.reddit.com/r/pcmasterrace/comments/3seb6u/fallout_4_performance_and_assistance_megathread_2/
        https://www.reddit.com/r/pcmasterrace/comments/3sa5gu/why_does_clunkout_4_get_a_pass_w_bugs_but/
        https://www.reddit.com/r/pcmasterrace/comments/3sgrto/bethesda_should_not_be_allowed_to_get_away_with/

        • Bensam123
        • 4 years ago

        Not sure how it’s cheating. They didn’t optimize their game properly. If you’ve played Fallout 4 for any length of time you know it needs a healthy dose of this.

        This reminds me of back when Crytek tessellated the fuck out of road dividers in Crysis 2 for the sake of having tessellation.

          • erwendigo
          • 4 years ago

          Ok, but:

          Why aren’t you complaining about the use of those two DX12 benchmarks in this test?

          One is directly sponsored by AMD; hell, everything that comes from Oxide Games was and is sponsored by AMD (first the fake game that was only a benchmark, Star Swarm, and now Ashes of the Singularity).

          And the other comes indirectly from Microsoft to AMD: a DX12 test that, along with AoS, is the testbed for a very recently added and optional DX12 feature, async shaders.

          So you have two tests that you could argue are biased or untrusted, synthetic ones. But no, you need to complain about a AAA+++ game, a bestseller, that had one of the best launches with the BEST version of the game on PC (the best visuals and the smoothest experience compared to the consoles). It’s laughable that the argument against Fallout 4 is “lack of optimization” when it’s the game with the best launch state on PC in recent months.

          Or you’re also complaining about the only AMD-sponsored game (yes, BF4 is indeed a sponsored game, with 5-8 million dollars from AMD to EA for this) because the test doesn’t use the eternal beta state of Mantle. The last time this was tested with Tonga chips, the Mantle version was slower and stuttered more than the D3D one, so why are you complaining about it?

          It’s a deprecated version of the game, forget about better support now, with a deprecated API, on a GPU that never ran well with the Mantle version.

          For all your complaints, there are much stronger ones in the other direction. Accept the test, like I accept the DX12 synthetic tests, despite my reluctance about their origins (Oxide Games and its heavy sponsorship from AMD) and their use of questionable techniques that don’t make any difference in graphics and are only useful for ONE architecture, GCN.

            • Bensam123
            • 4 years ago

            I don’t believe your baselines directly translate to mine. I am not you. I can think in whatever way I want to, it doesn’t need to be logically or illogically dictated by you.

            That aside, the DX12 games currently in use are there because there is nothing else to test. It doesn’t matter who they’re sponsored by. Nvidia performs well in them, so it’s not completely skewed… There isn’t a game rendering bug in them that makes the tessellation go off the charts and causes one side to perform poorly until the developer patches them.

            Nothing here is questionable. FO4 is a buggy mess and that’s one of the ways to help fix it, same thing goes with all the other .ini tweaks you can do.

      • K-L-Waster
      • 4 years ago

      AMD has… shall we say, “repurposed” Mantle. Why should reviewers still test it?

      Let it go.

    • GeForce6200
    • 4 years ago

    Performs about where one expected it to. The power consumption jump is expected but unwelcome. Surprised how close in consumption the 380/960 are. With how cheap 380s are I would personally go that route. As for the $200-250 bracket mentioned, Zotac had some new $235 shipped GTX 970s not long ago. I’d try and squeeze that or a 390 into the budget.

    • NovusBogus
    • 4 years ago

    This would be a good choice for folks running 1920×1200. The 960 is too weak to go full speed ahead and 970/390 are overkill. It looks like they’re getting the efficiency situation under control, too.

    • travbrad
    • 4 years ago

    It’s a nice card but I agree the 390/970 seem like better buys right now since they have been on sale for <$300 for the past couple months and give you a lot more performance (40-50% more performance for 20-25% more money).

    I’d certainly get a 380X over a GTX960 , but the 960 has never been a great value IMO. I’m glad I got a 970 a couple months ago, especially since I’m playing Fallout 4 nonstop right now.

      • ultima_trev
      • 4 years ago

      Indeed, though I’d say even at the $330 MSRP, the 970 and R9 390 make way more sense than the 960/380/380X. Originally I was going to wait for the fully fledged Tonga for my new build, but honestly now I’m relieved I decided to go over budget and get the R9 390 instead.

    • meerkt
    • 4 years ago

    -1 for no HDMI 2.
    +0.5 for idle power lower than Nvidia (which is a surprise).

      • brucethemoose
      • 4 years ago

      AMD’s been advertising that ever since Tahiti came out.

        • jessterman21
        • 4 years ago

        ZERO CORE POWAAAAAAAAAA!

        • meerkt
        • 4 years ago

        Lower power with display off is one thing (which I wonder why Nvidia didn’t implement), but it’s also lower when active but without much load.

          • brucethemoose
          • 4 years ago

          Maybe the 960 can’t downclock all the way at 1440p.

          My 7950 clocks pretty low at 2560×1440/110hz, but the memory speed goes back up if I add a 2nd monitor. Meanwhile, I’ve heard people complain about Kepler cards staying at high clocks with high-res single displays like that.

      • xeridea
      • 4 years ago

      +2 for DP

        • meerkt
        • 4 years ago

        Doesn’t help with TVs.

          • xeridea
          • 4 years ago

          No, but you can get a $12 converter that will handle 4K if using for an HTPC. For computer purposes DP is superior.

            • BobbinThreadbare
            • 4 years ago

            TVs will start shipping with DP too, it’s cheaper than HDMI so I expect it to take over as the dominant port at some point.

            Also, Apple is pushing it, and they have a mini version going right now, don’t think mini-HDMI exists.

            • frumper15
            • 4 years ago

            mini-HDMI is definitely a thing, but I don’t anticipate there will be widespread use outside of mobile-type applications. My Canon DSLR has a Mini-HDMI output on it, for example. Monoprice has a selection of cables/adapters as well.
            [url<]https://www.monoprice.com/Category?c_id=102&cp_id=10242[/url<] I imagine it has the same licensing costs as HDMI. Given HDMI's foothold in the livingroom I don't think we'll see it being replaced by DP any time soon, it's just got too much inertia IMHO.

            • meerkt
            • 4 years ago

            There’s mini and micro:
            [url<]https://en.wikipedia.org/wiki/HDMI#Connectors[/url<] As usual, the non-standard types are used also where it makes no sense. Like the mini on my desktop graphics card, which I hate. I don't think DP will be coming to TVs any time soon, if it hasn't until now. And when/if it does, it will definitely not supplant HDMI. At the very least, too much existing gear needs it, and I think DP is also limited in terms of cable length. In terms of costs, if the Wikipedia article is correct, DP is more expensive to license than HDMI: [url<]https://en.wikipedia.org/wiki/HDMI#Relationship_with_DisplayPort[/url<]

            • meerkt
            • 4 years ago

            Can you point me to that $12 DP->HDMI2 adapter? I believe most are just HDMI 1.4. And I’d expect to be hit with compatibility problems in some cases.

            Any idea if these things add latency?

      • Leader952
      • 4 years ago

      If your computer is idle for any length of time why does it not go into S3 sleep mode where everything except the memory (which is in refresh only mode) is powered off.

      In S3 sleep mode the GPUs from AMD and Nvidia are at ZERO watts.

        • meerkt
        • 4 years ago

        Because it’s downloading or uploading (if you mean monitor-off).

    • Mat3
    • 4 years ago

    Awesome, another fully enabled card to pick from, no disabled or defective parts. Most people will say only price/performance matter, but for me I like my GPUs to be all they can be.

      • brucethemoose
      • 4 years ago

      Partially disabled GPUs are almost always the ones to get, because:

      1: They perform similarly to the fully-enabled big brother for much less $$$
      2: They overclock better because they have lower stock clocks.
      3: They have the potential to be unlocked. It’s not common these days, but it’s a big deal with Hawaii and Fiji.
      4: They usually get the same aftermarket coolers as their more expensive brethren, vs. cheaper coolers for the next fully unlocked card down the chain. It’s also cheap for manufacturers to use the more expensive PCB/memory ICs of other cards for “premium” versions.

    • BobbinThreadbare
    • 4 years ago

    There is an r9 285 for $160 on newegg right now ($140 after mir): http://www.newegg.com/Product/Product.aspx?Item=N82E16814202128

    Seems like a very good deal if you're looking for a card in this price range.

      • Demetri
      • 4 years ago

      Good price. I’m thinking about getting this one:

      http://www.newegg.com/Product/Product.aspx?Item=N82E16814150730

      $30 more but you get 4GB instead of 2, and a dual-fan cooler. Also has a $20 rebate. Newegg sent me a 10% off coupon and a $10 promotional gift card that expires on Tuesday, so I've got a hole burning in my pocket.

      They've also got this 290 for $200 ($180 AR), but it's got that awful reference cooler on it.

      http://flash.newegg.com/Product/N82E16814202043

      (Looks like the auto-linking doesn't work for NeweggFlash; you'll have to copy-paste the url)

        • NoOne ButMe
        • 4 years ago

        Adjust powertune. You probably can hit near the level of performance you would normally while lowering heat and power (and noise) quite a bit. There is a thread about it somewhere on the forums.

          • Demetri
          • 4 years ago

          It’s mainly the noise I’m concerned with; I remember hearing a lot of complaints on that reference design when Hawaii first came out. You may be right though; it could potentially be de-tuned a bit to keep it quiet, and at that price it’s probably worth a shot. With the rebate and the coupons they gave me, it essentially knocks it down to $150, so worst case scenario it’s too loud and I unload it on Ebay and get my money back.

            • Chrispy_
            • 4 years ago

            Yeah, the reference cooler isn’t a complete waste of space; it’s a decent vapor-chamber design with a reasonably well-made fin stack on top. It just can’t handle more than 250W without making a lot of noise – I think the default clocks and power draw are just a bit too much for it.

            You can probably get your clocks pretty stable between 800 and 900MHz whilst dropping powertune down by 30% or so, reducing your power draw by 90W and allowing you to drop fan speeds by 20%.

            Fan noise is perceived on a logarithmic scale, so dropping the fan speed by a seemingly small amount (say from 60% to 45%) can be the difference between having your own private hurricane and an unobtrusive hum.
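
            To put rough numbers on that, here's a minimal sketch using the common fan affinity rule of thumb (sound level changes by roughly 50·log10 of the speed ratio) and the usual approximation that perceived loudness halves per 10 dB. The percentages and constants are illustrative assumptions, not measurements from the review:

```python
# Rough illustration of the comment above, not data from the review. The fan
# affinity rule of thumb puts the change in sound level at about
# 50 * log10(new_speed / old_speed) dB; treating fan percentage as
# proportional to RPM is an assumption.
import math

def fan_noise_delta_db(old_pct: float, new_pct: float) -> float:
    """Approximate change in sound pressure level (dB) for a fan speed change."""
    return 50.0 * math.log10(new_pct / old_pct)

delta_db = fan_noise_delta_db(60, 45)
# Perceived loudness roughly halves for every -10 dB.
loudness_ratio = 2 ** (delta_db / 10.0)

print(f"60% -> 45% fan speed: {delta_db:.1f} dB, "
      f"~{loudness_ratio:.0%} of the original perceived loudness")
# Prints roughly -6.2 dB, i.e. about 65% of the original perceived loudness.
```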

        • Bensam123
        • 4 years ago

        And this just poops all over any other deal out there. I have a R9-290 and bought it when it first came out with the stock cooler. It never runs into throttling issues even with powertune turned up. Then again I haven’t played Crysis 3, so maybe…

        Nice AD links BTW, that would be why the hot linking doesn’t work.

      • jessterman21
      • 4 years ago

      Even better is the [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814161459<]R9 290 for $229[/url<]

        • Bensam123
        • 4 years ago

        Demetri linked one for $180.

          • jessterman21
          • 4 years ago

          Yeah but with the reference cooler… bleh

    • Meadows
    • 4 years ago

    That difference of 50 W is still a sore point in my opinion, even considering they’re overclocked (slightly). Although it’s not as bad as it used to be.

    This is a card I might actually not recommend against, but I’d still favour the GTX 960 overall.

      • auxy
      • 4 years ago

      Why? (´・ω・`)

        • Meadows
        • 4 years ago

        Cheaper, cooler, has better OC headroom, and it probably still uses less power even if you do OC.

          • NeelyCam
          • 4 years ago

          Does running cooler or having lower power consumption matter to you if the card is just about as quiet as the other guy’s?

            • NovusBogus
            • 4 years ago

            Power consumption is part of the total cost of ownership, so it definitely does matter. This one is less egregious than recent Radeons though, especially since you’re getting slightly more performance than a 960.

            • Demetri
            • 4 years ago

            Also consider how much time your GPU spends at full load, which is where the 50W gap comes into play. At idle, it’s 10W more efficient than the 960. I know my GPU definitely spends at least 5X the amount of time at or near idle than it does at full blast.

            • NeelyCam
            • 4 years ago

            [quote<]Power consumption is part of the total cost of ownership[/quote<] Sure, but like Demetri pointed out, even gaming PCs tend to be idling most of the time. For me, the power consumption concerns are usually focused on cooling noise. The cost of extra electricity doesn't really add much to the total cost, especially when you consider that during the winter the PC will double as an extra heater, reducing other heating costs. (Of course, during the summer, it might increase cooling costs, so it depends on your climate.)
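
            To put the idle-versus-load trade-off in concrete terms, here's a back-of-the-envelope sketch. The usage split, the 50W load gap, the 10W idle advantage, and the electricity rate are all illustrative assumptions, not figures from the review:

```python
# Back-of-the-envelope annual electricity cost difference between two cards.
# Every input below is an illustrative assumption, not a measurement.
LOAD_DELTA_W = 50          # extra draw under load for the hungrier card (assumed)
IDLE_DELTA_W = -10         # the same card's advantage at idle (assumed)
GAMING_HOURS_PER_DAY = 2   # assumed
IDLE_HOURS_PER_DAY = 10    # monitor-off / light desktop use (assumed)
PRICE_PER_KWH = 0.12       # USD, assumed

def annual_cost(delta_w: float, hours_per_day: float) -> float:
    """Yearly cost (USD) of an extra constant power draw for the given daily hours."""
    kwh_per_year = delta_w * hours_per_day * 365 / 1000
    return kwh_per_year * PRICE_PER_KWH

load_penalty = annual_cost(LOAD_DELTA_W, GAMING_HOURS_PER_DAY)
idle_savings = -annual_cost(IDLE_DELTA_W, IDLE_HOURS_PER_DAY)
print(f"Load penalty: ${load_penalty:.2f}/yr, idle savings: ${idle_savings:.2f}/yr, "
      f"net: ${load_penalty - idle_savings:.2f}/yr")
# With these assumptions the two roughly cancel out -- a few dollars a year either way.
```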

            • anotherengineer
            • 4 years ago

            Indeed

            I shut my pc down completely when I am not using it. Sometimes it goes a day or two without even getting turned on. Sometimes it doesn’t see more than 3 hrs a week of gaming.

            If it uses 100W more I don’t really care.
            I look for quality of card, such as regulators, cooler, warranty and noise.

            My old HD 6850 is still going strong. I wouldn't mind upgrading it, but I can't find anything appealing, or the need. I might use it until it dies, or until Windows 10 ends up going on the SSD; then a DX12 card will replace it.

            • jihadjoe
            • 4 years ago

            One cold winter many years ago I had a Pentium 4 tower that sat under my desk.

            I turned it sideways so the side exit exhaust fan pointed at my feet, making them warm and comfy.

            Ah, the small joys of computing!

          • auxy
          • 4 years ago

          (´・ω・`) You saw the Radeon uses less power at idle, right?

      • DoomGuy64
      • 4 years ago

      Considering the 380X beat the 960 in GameWorks titles like FO4 and TW3, I'd say it's quite obvious which is the better card. Imagine how much faster the 380X would be in normal games, not to mention with VSR or at 1440p. No contest. Nvidia's only saving grace would be price cuts to the 960, which AMD would answer by cutting the 380, and then Nvidia would have no cards left in this performance/price range.

      Conclusion: After a token price cut, and saving less than a light-bulb’s worth of electricity at load, the “best” choice would be to waste your money on a 960, because GameWorks titles perform better on Nvidia cards. Except in this case, they don't. Nice try, though. The 380X is the red pill that broke the illusion. Now let's see how many people still choose the illusion.

        • Meadows
        • 4 years ago

        Sure, but at $50 more it had better be at least slightly faster.

          • DoomGuy64
          • 4 years ago

          It is. The 380X is in a performance class between the 960 and 970. AMD can easily justify their price point because Nvidia has nothing to offer in this segment. Don’t forget the 380 is the direct competitor with the 960, not the 380X, and AMD will price it accordingly.

            • MathMan
            • 4 years ago

            One almost has to feel sorry for AMD, that they need a 380mm2 die to marginally outperform one that’s 225mm2…

            • AnotherReader
            • 4 years ago

            Yeah that has to hurt. In late 2013, the PS4’s APU die was estimated to cost [url=http://press.ihs.com/press-release/design-supply-chain-media/sony-nears-breakeven-point-playstation-4-hardware-costs<]$100[/url<]. The die size of [url=http://www.chipworks.com/about-chipworks/overview/blog/inside-sony-ps4<]348 mm2[/url<] is almost the same as [url=http://www.anandtech.com/show/8460/amd-radeon-r9-285-review<]Tonga[/url<]. Even if we assume that cost increases linearly with die size, the cost differential between the GTX 960 and the 380X turns out to be close to $35. The cost difference may be lower than that, given that the 28 nm process is very mature now. Of course, I have ignored yields as I have no way of estimating them. In the end though, since this will be priced higher than the 960, AMD should be able to make a profit on the 380X. It has also come out at a good time when many might upgrade their GPU.
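
            For what it's worth, the linear-scaling estimate above can be reproduced in a few lines. The die sizes and the $100 reference cost come from the sources quoted in the comment; treating cost as strictly proportional to area (and ignoring yields) is the stated simplification:

```python
# Reproducing the back-of-the-envelope cost math from the comment above.
# Die sizes and the $100 PS4 APU cost come from the cited sources; assuming
# cost scales linearly with area is the stated simplification (yields ignored).
PS4_APU_MM2 = 348        # Chipworks figure cited above
PS4_APU_COST = 100.0     # IHS estimate cited above, in USD
TONGA_MM2 = 348          # "almost the same as" the PS4 APU, per the comment
GM206_MM2 = 225          # GTX 960 die size as quoted earlier in the thread

cost_per_mm2 = PS4_APU_COST / PS4_APU_MM2            # ~$0.29 per mm^2
tonga_cost = TONGA_MM2 * cost_per_mm2
gm206_cost = GM206_MM2 * cost_per_mm2

print(f"Tonga: ~${tonga_cost:.0f}, GM206: ~${gm206_cost:.0f}, "
      f"difference: ~${tonga_cost - gm206_cost:.0f}")
# -> a difference of roughly $35, matching the figure quoted above.
```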

            • NoOne ButMe
            • 4 years ago

            It costs $100 to Sony, and AMD makes about $15 on that die.

            So $85. Although that seems expensive to me for the area, especially given that the die has extra CUs for yield. On 28nm, the cost per mm^2 should be far under the ~24 cents that this implied production cost works out to. The Xbox One has cost issues due to its SRAM (it costs about 10-20% more than Sony's APU) despite being nearly the same die size, so I guess yields are enough of an issue. I supposed the XBO APU had no extra CUs to harvest, which could explain it. Update: the Xbox One does have extra CUs. My bad.

            • NoOne ButMe
            • 4 years ago

            One should almost feel sorry for Nvidia/AMD: they needed a 562/365mm2 die to compete with a 438/294mm2 one (GK110 vs. Hawaii, and Tahiti vs. GK104).

            Blah blah. In the end, Kepler, GCN, and Maxwell all have pretty similar performance per mm^2 once you focus on gaming, with the order roughly being Kepler <= GCN < Maxwell. Maxwell isn't that far ahead, though. That said, Maxwell did increase it slightly while drawing much less power, and that is impressive.

            • anubis44
            • 4 years ago

            The Radeons have hardware schedulers. That's partly why they're using more die space and more power. The Nvidia cards lack a hardware-based scheduler, which is really going to kill them in DX12 when context switching between compute and rendering. So you're not getting higher power consumption with the Radeons for no reason: just the opposite. You're getting better DX12 capability than Maxwell.

            • NoOne ButMe
            • 4 years ago

            And you get downvoted for stating facts. Well, primarily facts. Tonga shouldn't be close to the size it is, and it's the chip being discussed here. Every other GCN part outside of Tahiti has a far better ratio of die area to hardware specs than Tonga.

            Otherwise, the DX12 stuff is spot on. And part of Nvidia's lower power draw is scheduling on the CPU.

            • namae nanka
            • 4 years ago

            AMD is feeling the sting of being on the losing side of the clockspeed wars, where earlier they would do this to Nvidia.

            TPU does, however, put the 380X around 30% faster than the 960 at 1080p. Tonga has a smaller die, and effectively smaller still considering the gaming-redundant hardware and the 384-bit bus that it's carrying. AMD didn't really 'need' to bring out this card, or else they'd have done it sooner.

      • Generic
      • 4 years ago

      I agree.

      All other things being close to equal, I always go with the less wasteful product.

        • rxc6
        • 4 years ago

        Why would you go with the product that uses more power at idle (which is where most computers spend most of their time)?

          • Generic
          • 4 years ago

          I wouldn’t. That’s why I included the caveat, “all other things being equal”.

          I was agreeing with Meadows' sentiment within the context of his post.

          I didn't go review the article to vet his comment, but I'm happy to learn that AMD can at least dial it back on this particular card.

      • xeridea
      • 4 years ago

      It uses more power, but it has higher performance… If there were an equivalent 900-series card, the performance delta would be smaller. Also, it has 10W lower idle power, so the overall power usage would be similar.

      • Bensam123
      • 4 years ago

      I'd look at the 10W difference between the cards when the monitor is off. How often do you leave your computer on and have the monitor shut off? How long do you play games for?

      I think there is a point to be made for the ultra low power mode considering the amount of time people have their computer just idling. So much so I’d say this is a complete wash at this point unless you’re doing something like high performance computing.

        • Leader952
        • 4 years ago

        If you have the monitor OFF and your computer is doing NOTHING then enable S3 Sleep and use ZERO watts instead of burning watts on the CPU/GPU.

          • Bensam123
          • 4 years ago

          If your computer is doing NOTHING, then turn it off or unplug it so it doesn't leech watts from the wall!

          People have their own quirks and things they do. You can micro your own life bro. That doesn’t fix this usage scenario.

      • Mikael33
      • 4 years ago

      If the 380X were around the same speed on average you would have a point. However, it's measurably faster, and the 960, even with 4GB, has trouble with a few games, as you can see from the frame-time plots. That does affect playability and will be noticeable even without an FPS/frame-time overlay enabled in your game.
      Also, unless electricity is incredibly expensive where you live, you won't be able to see the difference on your bill. 50 watts is less than the average non-energy-efficient light bulb…

      • EndlessWaves
      • 4 years ago

      The GTX 960 still gobbles up the same power as an FX-8350, so it's not exactly a low-power component. Your choice at the moment is between excessive power consumption and even more excessive power consumption.

      Plus there's the G-Sync conundrum. Unless Nvidia continues to support G-Sync on future cards for the useful life of the monitor, you're looking at the energy cost of manufacturing a new monitor sooner than would otherwise be required.

      Not to mention the G-sync price premium makes the 960 & 970 look substantially worse value when considered as part of a complete build.

      nVidia cards have their good points, particularly as upgrades, but AMD does seem to be holding the stronger hand at the moment.

    • Kretschmer
    • 4 years ago

    I bought a 290X for $270, which makes this GPU look like a really, really bad deal. Nothing in the 3XX series looks compelling.

      • Kretschmer
      • 4 years ago

      Why the downvotes? Is there anything with a “3” in front that compares favorably to the R9 290 and 290X?

        • brucethemoose
        • 4 years ago

        I think it's a good deal, but others would say the 290(X) is technically phased out.

        • travbrad
        • 4 years ago

        I didn’t downvote you but based on current prices yes the 3xx compares favorably to the 290X. On Newegg and Amazon right now the 390 is $30 cheaper than a 290X and performs nearly identically. You got a good deal on that 290X for $270 though, no doubt about that.

        I do wish the 390/390X had 4GB versions for slightly cheaper though, as that would make them a much more compelling option. 8GB is pointless until you go above 4K, and even at 4K these cards don’t provide playable framerates.

        • NoOne ButMe
        • 4 years ago

        well, if you run 3x4K displays [A bunch of shit that exceeds 4GB of VRAM here while being unplayable no matter the card]

        • K-L-Waster
        • 4 years ago

        Don’t let the downvotes bug you – just remember how happy your bank account is while you’re playing….

    • DPete27
    • 4 years ago

    So since you can get [url=http://www.newegg.com/Product/Product.aspx?Item=N82E16814127898&cm_re=GTX_960-_-14-127-898-_-Product<]4GB GTX960s for $170ish on sale after MIR[/url<] these days, the 380x is going to need to settle in comfortably below $200 under similar sales in order to be attractive.

    • anotherengineer
    • 4 years ago

    Just curious why these cards weren’t tested at 1920×1080?

    I thought that these cards were more targeted for that resolution?

    And AMD, it would be nice to see three DisplayPort 1.3 outputs and one HDMI 2.0 port on future cards.

      • MathMan
      • 4 years ago

      Why DP 1.3, especially on a relatively wimpy chip like Tonga?

      DP 1.2 can go up to 2560×1440 @ 165Hz or 4K @ 60Hz. Is that not enough for this chip?
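
      As a sanity check on those figures: DP 1.2's HBR2 link carries about 17.28 Gbit/s of video data after 8b/10b coding (four lanes at 5.4 Gbit/s). A rough sketch, ignoring blanking intervals and assuming 24-bit color, shows both modes fit, though real reduced-blanking timings add a few percent of overhead:

```python
# Rough bandwidth check for the DP 1.2 figures above. Blanking intervals are
# ignored (real reduced-blanking timings add a few percent) and 24-bit color
# is assumed.
DP12_DATA_RATE_GBPS = 4 * 5.4 * 8 / 10   # 4 lanes x 5.4 Gbit/s, 8b/10b coding = 17.28

def video_gbps(width: int, height: int, hz: int, bpp: int = 24) -> float:
    """Raw pixel data rate in Gbit/s, ignoring blanking."""
    return width * height * hz * bpp / 1e9

for name, mode in [("2560x1440 @ 165Hz", (2560, 1440, 165)),
                   ("3840x2160 @ 60Hz", (3840, 2160, 60))]:
    need = video_gbps(*mode)
    print(f"{name}: ~{need:.1f} Gbit/s of {DP12_DATA_RATE_GBPS:.2f} Gbit/s available")
# Both come in under DP 1.2's ~17.28 Gbit/s, consistent with the claim above.
```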

        • anotherengineer
        • 4 years ago

        Just more for features/future-proofing, and the same goes for HDMI 2.0; it's not so much about the resolution capabilities.

          • MathMan
          • 4 years ago

          HDMI, I understand, since it allows 4K@60.
          DP1.3 doesn’t have any major new features other than a higher link rate and (optional) lossy compression (which would allow for even more ridiculous resolution/refresh rate.)

            • anotherengineer
            • 4 years ago

            DP 1.2a has adaptive sync.
            DP 1.3 adds an HDMI 2.0 compatibility mode with HDCP 2.2 content protection.

            Plus the extra resolution you mentioned:
            [url<]https://en.wikipedia.org/wiki/DisplayPort#1.2a[/url<] Also, DP 1.3 was approved over a year ago, back in Sept 2014, so why not start supporting it and making it a standard feature in video cards?

        • BobbinThreadbare
        • 4 years ago

        People use cards for things besides 3d rendering. Also at 4k you can run games at half resolution and still get 1080p.

          • auxy
          • 4 years ago

          1/4 resolution*

            • TruthSerum
            • 4 years ago

            (1/2) ^2

        • Mr Bill
        • 4 years ago

        Tonga and Fiji are the newest-generation designs; that's why.

      • Damage
      • 4 years ago

      We did test at 1920×1080 some, and some at 2560×1440. See the settings used for each game. AMD has positioned this card for 2560×1440 and/or 1080p, and as the results indicate, it can perform pretty well at that resolution–especially with 4GB. You might not need this much power for 1080p, generally.

        • anotherengineer
        • 4 years ago

          Yes, it just makes it a bit harder to follow, though: 2560×1440 for some games and 1920×1080 for others. It doesn't really make sense to me why one or the other, or both, resolutions weren't used consistently throughout, that's all.

          • NoOne ButMe
          • 4 years ago

          Look at the FPS. I assume they used 1440p if it was over ~30-40fps and otherwise dropped down to 1080p.

          The only concern I have with this is that they don't clearly state in the article that they used different resolutions, or which resolution was used on a per-game basis. Leaving it to the screenshots is not quite enough when using mixed resolutions.

          I would like Damage to update the article with a note of the resolution, in writing, per game. Or, for the future, have it added to the graphs in the next review. Or both 🙂

        • NovusBogus
        • 4 years ago

        You should do some tests at 1920×1200. Come to the dark side Damage–we’re taller. 🙂

      • Krogoth
      • 4 years ago

      To see if 4GiB of VRAM made any significant difference.

      Unfortunately, it doesn't quite pan out that way. Just save the $10-40 and stick with the 2GiB versions if budget is a serious concern.

        • NoOne ButMe
        • 4 years ago

        4GB of VRAM versus 2GB won’t make a difference in the majority of games at 1080p versus 1440p when considering playable framerates.

        That’s pretty well known.

    • NoOne ButMe
    • 4 years ago

    Eh. Too expensive. Although it does rule the “you don't have enough money to buy a 970/390” slot.

    After the 4GB 380 cards disappear it will be much more viable. It enables higher settings than the 960. Still, I don't think it's worth the extra over a 4GB 380/960 right now. If it drops to $200…

    • brucethemoose
    • 4 years ago

    I’m very happy with my 7950 I got for $200 in 2013. 2 years later, the performance in this price bracket hasn’t budged an inch.

      • mesyn191
      • 4 years ago

      GCN AMD cards tend to have pretty good value over time even if at launch they seem not so great.

        • ptsant
        • 4 years ago

        Exactly, the cumulative benefits of incremental driver updates are not negligible.

        GCN is a very solid architecture, very flexible and designed for computing. Nvidia has gotten very good P/w by simplifying their design and focusing on gaming pathways. They compensate with stellar pro drivers, but the raw hardware potential is not necessarily as good.

      • auxy
      • 4 years ago

      The people who bought Radeon HD 7950s and 7970s back in 2013 are the real winners.

        • brucethemoose
        • 4 years ago

        Non-ref 7950s were down to $180 before the cryptocurrency craze.

        The 380X/960 might be overpriced, but that was also an insane bargain back then.

          • auxy
          • 4 years ago

          Yah I remember that, that was back when I was still working at the hospital & I had a fair amount of disposable income. I was still rocking a single GTX 460 at that time and I was strongly considering a move to a 7950.

          In the end I decided to wait and picked up a second 1GB GTX 460 for $50 used on eBay and after cleaning/repasting I was able to run them both at 822Mhz! They served me well until a user here gifted me a GTX TITAN (!!!).

            • sweatshopking
            • 4 years ago

            Why in the world would anybody on TR gift you a Titan? That seems like a pretty pricey gift to accept from a stranger.

            • auxy
            • 4 years ago

            That’s not very nice.

            • Chrispy_
            • 4 years ago

            I’m also curious about how you came by a Titan!

            I get a whole lot of stuff as cast-offs from work so I’ve gifted loads of surplus cards to friends – mostly 7850’s and 7970s in recent years but as you say, they’re still relevant even today.

            I’ve never stumbled across a spare $999 card though, and if I did I’d be thinking about flipping it on eBay if I wasn’t going to use it myself. That’s a pretty generous gift!

            • auxy
            • 4 years ago

            [url=https://techreport.com/forums/viewtopic.php?f=3&t=86657<]It was right here on the forums.[/url<] I was trying not to brag about it too much or anything but I guess that guy doesn't post here on the forums anymore (I think he got banned?) so whatever.

            • Mr Bill
            • 4 years ago

            Interesting thread, a generous fellow.

        • Firestarter
        • 4 years ago

        Bought mine at launch, still worth it!

        • orik
        • 4 years ago

        The people who bought the 7950’s back when you could unlock the shaders and make it a 7970 are the real winners.

          • Firestarter
          • 4 years ago

          When did that happen? AFAIK this can’t be done with any 7950, even though it did work with the 6950 => 6970

            • DrDominodog51
            • 4 years ago

            This is doable on quite a few AMD cards, though I'm not sure about 7950 to 7970. Fury to half Fury X (and Fury X), 290 to 290X, and 290X (8 GB) to 390X are all doable. The only one of those unlocks that is unlikely to happen is Fury to Fury X: most Fiji dies have at least one thing wrong with them. By "half Fury X", I mean that of the 8 disabled units (compute units?), only 4 have been re-enabled by a modded BIOS.

          • MEATLOAF2
          • 4 years ago

          I got a 290 for $250 and unlocked it to 290X, and it holds a solid 14% overclock at stock voltage. I’d say people buying 390/x cards are the losers based on that graph (since performance is largely the same).

          I’d be curious to know if you can unlock 390s though, haven’t looked into it since I modded mine.

        • ptsant
        • 4 years ago

        I've been so happy with my 7950 and 7970 that I still can't find an excuse to upgrade. I sure would like FreeSync, but not enough to pay $200+. Hawaii is faster, but too hot. Fury is too expensive.

          • auxy
          • 4 years ago

          Too hot in terms of TDP? Fury’s is higher… as is 980Ti.

          My 290X @1111Mhz (a small but significant overclock) never breaches 70C even when playing games in 4K, and that’s without raising the fan speed manually.

          I’m really not suggesting that you need to upgrade; if you’re still happy with your Tahiti cards, then by all means carry on. However I do want you to be aware that all of the fuss over Hawaii overheating was due to the overly conservative fan curve that AMD implemented to prevent the reference cooler from being ludicrously loud. With a non-reference cooler the GPU is perfectly reasonable in terms of heat output, at least given its performance envelope.

            • ptsant
            • 4 years ago

            The TDP is practically the same between the 290X and the Fury X, but it’s ~80W more than the 280X. Although this can be reasonably managed by a good cooler, it does make a lot of heat. If price weren’t an issue, the perfect card for me would be the Fury Nano: a decent step up from the 280X and just a little bit more TDP.

            Anyway, I’ll get my 1440p 144Hz FreeSync monitor first and if I can’t settle with medium settings, I’ll consider an upgrade in the beginning of 2016.

            • NoOne ButMe
            • 4 years ago

            You can always power tune the 290x down about 25%. Performance should drop under 10% based on a forum thread here and you’ll be at around the same power as the 280x.

            • ptsant
            • 4 years ago

            That is good advice. However, I’m already undervolting the 280X at the BIOS plus running it at powertune -10%. I’ll see what sort of performance I get at 1440p and by then I hope the roadmap for new GPUs (especially 14nm) will be clearer.

      • Srsly_Bro
      • 4 years ago

      I bought my 7950 Oct of 2012. It’s slowly dying 🙁

        • Firestarter
        • 4 years ago

        How? Lower overclocks, or artifacts, or something?

        • mesyn191
        • 4 years ago

        If it's artifacting like Firestarter mentions, then check the HSF mount and fan. Fans tend to go bad, and mediocre thermal paste jobs can cause lots of problems down the road.

        If it's something else, I'd look to the drivers first rather than the hardware. Hardware tends to last a long time if it doesn't break within the first 30 days or so of being bought. Not saying it can't happen, just that it's unusual, and if that's your case it sounds like bad luck.

          • TruthSerum
          • 4 years ago

          I bet his fan is dusty and he hasn’t cracked it open to clean it out..

        • ptsant
        • 4 years ago

        It pays to clean the heatsink from time to time. A lot of dust will greatly affect performance. Obviously, capacitors can also fail, but the most recent failure I had was on my 4850 (6 months ago), so I’d guess it’s too early for the 7950. Finally, a very good PSU is crucial for the long life of all components. Never underestimate the effect of ripple current and noise on your components.

          • Srsly_Bro
          • 4 years ago

          Thanks for the tips, nerds. I’ll check my HSF.

      • Mr Bill
      • 4 years ago

      Hmm, the R9 380X is ~35% faster than the XFX HD 7870 Double D Black Edition (~ R9 270) that I bought for $200 (after rebate) in Sept 2012. That is… 35% according to this link… [url=http://www.game-debate.com/gpu/index.php?gid=3130&gid2=1235&compare=radeon-r9-380x-vs-radeon-hd-7870-double-d-black-edition<]game debate radeon-r9-380x-vs-radeon-hd-7870-double-d-black-edition[/url<] [url=https://techreport.com/review/25642/amd-radeon-r9-270-graphics-card-reviewed<]proof HD 7870 is approximately equal to an R9 270[/url<] I was wrong to think the R9 380X was twice as fast.

    • Ninjitsu
    • 4 years ago

    Well, nice effort from AMD. May need to drop the price a bit, but I guess they can always do that for Christmas.

    I'd argue that idle power is more important than load power in most cases – so I think AMD wins the power consumption contest.

    Maxwell is of course much more efficient under load, but I'm not sure how much that actually matters given the differences involved.

      • AnotherReader
      • 4 years ago

      It is interesting to see that the power consumption difference between the 4 GB GTX 960 and the 4GB 380 is only 17 W at the wall. Idle power favours the Radeon by 10 W. I remember the difference in power being much more in [url=https://techreport.com/review/27702/nvidia-geforce-gtx-960-graphics-card-reviewed/11<]Nvidia's favour at the time of the 960's introduction[/url<].

    • PrincipalSkinner
    • 4 years ago

    Ah, there it is.

    • Srsly_Bro
    • 4 years ago

    I got a GTX 960 4GB FTW off Newegg for $189.99 a few weeks ago. This card looks quite overpriced, comparatively.

      • auxy
      • 4 years ago

      Does it? It performs better in most of the games.

      The raw specs are way better, too. Maxwell relies really heavily on the driver (which is how they got such good power savings in Kepler and Maxwell, by moving hardware bits to software) and so if you don’t have a game profile the performance is going to suffer a lot.

      There’s also that whole async compute boogeyman in the future…

        • MathMan
        • 4 years ago

        These comments about ‘saving power by moving stuff from hardware to software’ are really weird, especially since it was largely a move that was more towards the AMD architecture.

        In Fermi, there was a scoreboard for individual registers that allowed for very flexible scheduling of warps. It was also total overkill. In Kepler and Maxwell, they switched to a simpler system, like GCN.

        Similarly, in Kepler and Maxwell, they switched to a reference counted system for texture accesses, just like in GCN.

        Furthermore, going from Fermi to Kepler and Maxwell, the instruction set added hints about the pipeline depth of each instruction, which makes it much easier for the scheduler to plan how long a warp needs to be stalled. I don't think GCN has that, though.

        Another thing that was added in Maxwell is a temporary register store, which heavily reduced power-hungry banked register accesses. (Google 'maxas' for details.)

        Yes, that means that the compiler needs to work a bit harder in choosing instructions, but that’s just a one time thing.

        It's not as if the driver is tweaking, at the warp level, what the GPU needs to do. When you look at Maxwell specifically, there are a ton of architectural changes that make the hardware more power efficient that have nothing to do with software.
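
        To illustrate the 'compiler works a bit harder' point in the abstract, here's a toy sketch of compile-time stall scheduling. This is not Nvidia's actual ISA, control-word format, or toolchain; the opcodes and latencies are invented purely for illustration. The idea is just that if the compiler knows each instruction's pipeline latency, it can emit explicit stall counts instead of relying on a hardware scoreboard to discover dependencies at run time:

```python
# Toy illustration of compile-time stall scheduling. This is NOT Nvidia's real
# ISA or control-word encoding; opcodes and latencies are invented for the
# example. Each instruction writes one register, and a consumer must wait
# until the producer's result is ready; the compiler emits that wait as an
# explicit stall count so no run-time scoreboard is needed.
LATENCY = {"FMUL": 6, "FADD": 6, "LDG": 20}   # assumed pipeline depths, in cycles

def schedule(program):
    """Return (instruction, stall_before) pairs for (op, dest, sources) tuples."""
    ready_at = {}   # register -> cycle at which its value becomes available
    cycle = 0
    scheduled = []
    for op, dest, srcs in program:
        # Stall until every source register has been produced.
        need = max((ready_at.get(r, 0) for r in srcs), default=0)
        stall = max(0, need - cycle)
        cycle += stall + 1                     # issue takes one cycle after stalling
        ready_at[dest] = cycle + LATENCY[op]   # result lands LATENCY cycles later
        scheduled.append(((op, dest, srcs), stall))
    return scheduled

prog = [("LDG", "r0", ["r10"]),        # load from memory
        ("FMUL", "r1", ["r0", "r2"]),  # depends on the load -> long stall
        ("FADD", "r3", ["r1", "r4"])]  # depends on the multiply -> short stall
for (op, dest, srcs), stall in schedule(prog):
    print(f"stall {stall:2d} cycles, then {op} {dest}, {', '.join(srcs)}")
```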

          • AnotherReader
          • 4 years ago

          Thanks for this informative snippet of Maxwell’s innovations

          • namae nanka
          • 4 years ago

          “In Kepler and Maxwell, they switched to a simpler system, like GCN.”

          AMD went to Fermi-like hardware scheduling with GCN, or so I read. Both sides swapped positions in that regard.

          And Nvidia themselves proclaimed that it would cut down on power; AMD's cards did sip less of it before GCN.

    • auxy
    • 4 years ago

    I like this GPU, but it’s kinda pointless as the article itself even points out. You can find R9 290s for $200-$240 now, brand new, and while they’re older GCN 1.1 stuff, they’ll still play the same games and they’ll still run rings around this thing. I dunno who this is for… ;つД`)

    Nice review tho Jeff. I saw you used the 15.11.1 driver for the Radeons, nice catch! It makes a big difference in Fallout 4.

    • flip-mode
    • 4 years ago

    28 nano rides again. When will that horse finally be put out to pasture?

      • DancinJack
      • 4 years ago

      1H 2016

      • UnfriendlyFire
      • 4 years ago

      When Apple stops hogging all of the sub-28nm supply.

        • BobbinThreadbare
        • 4 years ago

        Isn’t that going to be when sub-28nm supply increases sufficiently?

          • NoOne ButMe
          • 4 years ago

          Depends on yields, and on where Apple is making the A10/A10X.

          Assuming there are no issues with yields and the A10/A10X is split or made only at Samsung, then Nvidia should have some consumer cards in 1H2016. AMD should have consumer cards out in 1H2016 no matter what, but AMD manages to surprise (in the bad way) often.

        • nanoflower
        • 4 years ago

        Is that really a problem? I was under the impression that the Apple ARM CPUs are made using the low-power process, so that would be a separate production line from what AMD and Nvidia would be using for their high-end GPUs.

          • NoOne ButMe
          • 4 years ago

          It’s basically one process that has tweaks to get the higher performance variants. It’s all the same tools, etc.

    • Krogoth
    • 4 years ago

    960 4GiB is the new 256MiB Geforce 4 MX 440. While 380X and 380 4GiB are X1300 512MiB.

    The GPU simply isn't powerful enough to keep up in situations where it would need 4GiB of VRAM. If you are on a budget, just stick with the 2GiB 960 or 2GiB 380.

    The 380X only looks good when compared to the 4GiB 960, but the 2GiB 960 and 380 are better deals on a tight budget. If you want a GPU with 4GiB of VRAM, don't settle for anything less than a 970 or 290.

    In all, the 380X does little to change the market landscape. The ~$200-299 tier is pretty much 2-megapixel land, while the $300-$499 tier is 4-megapixel land. It will remain this way until 16nm chips come along.

      • derFunkenstein
      • 4 years ago

      No, but thanks for playing. The GeForce4 MX was missing an entire DX feature set. That's not true of any of these cards.

        • Krogoth
        • 4 years ago

        They are underwhelming parts that try to upsell themselves with more VRAM to people who still think that more VRAM = better!

        It costs Nvidia/AMD a few bucks to slap extra DRAM chips onto the PCB, and they charge customers an extra $40-50 for it.

          • derFunkenstein
          • 4 years ago

          $10. Cheapest GTX 960 2GB is $179 on Newegg. Cheapest GTX 960 4GB is $189. Both of them are factory-OC models made by MSI.

          AOTS and Fable Legends show that, in upcoming games, that extra RAM will help reduce spikes, which are just the worst part about playing PC games. Seems like ten bucks well spent.

            • Krogoth
            • 4 years ago

            Cherry-picking examples.

            The difference between 2GiB and 4GiB for the majority of the models/vendors out there is closer to $20-$50.

            • auxy
            • 4 years ago

            I would argue you’re the one cherry-picking. The fact is that as a user I can buy a 4GB 960 for $10 more than a 2GB card, and that’s the important thing to the argument. What you said is irrelevant to me as a purchaser.

            • Krogoth
            • 4 years ago

            >implying that selecting a single model from one etailer isn’t cherry picking

            • auxy
            • 4 years ago

            Are you okay, man? You seem to be scheisse-posting a lot lately. (´・ω・`)

            • Krogoth
            • 4 years ago

            Just pointing out that the $10 difference for that model is the exception, not the rule.

            Just look at other vendors, models, and etailers if you want a more accurate assessment. Not everybody shops at the same etailer, and some prefer other vendors for whatever reason.

      • Krogoth
      • 4 years ago

      I see that the fanboys and people with buyer’s remorse are coming in fast.

        • auxy
        • 4 years ago

        Just people downvoting you for making more statements with no basis or justification. You can easily write a program which will run on a Geforce 2 (much less anything newer with some actual fillrate) that will eat up 4GB of video memory. There are games out there which run just fine on older/slower GPUs but which can eat up tons of VRAM. 64-bit Second Life clients are one such thing.

        Just because the usecase isn’t yours doesn’t mean that it doesn’t exist.
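
        To make that concrete, here's a minimal sketch of a program that requests roughly 4GiB of texture memory while doing essentially no rendering work. It assumes PyOpenGL and glfw are installed, and note that some drivers defer committing VRAM until a texture is actually sampled, so treat it as an illustration of the idea rather than a guaranteed measurement:

```python
# Minimal sketch: request ~4 GiB of textures without needing any shading power.
# Assumes PyOpenGL and glfw are available; some drivers only commit VRAM once a
# texture is actually used, so this illustrates the idea rather than proving it.
import glfw
from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D,
                       GL_TEXTURE_2D, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE)

def main():
    if not glfw.init():
        raise RuntimeError("glfw init failed")
    glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # no visible window needed
    win = glfw.create_window(64, 64, "vram-fill", None, None)
    glfw.make_context_current(win)

    # 256 textures x 2048 x 2048 x 4 bytes (RGBA8) = 4 GiB requested.
    for tex in glGenTextures(256):
        glBindTexture(GL_TEXTURE_2D, tex)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, None)

    input("~4 GiB of textures requested; check VRAM usage, then press Enter.")
    glfw.terminate()

if __name__ == "__main__":
    main()
```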

          • Krogoth
          • 4 years ago

          Programs that use tons of VRAM (GPGPU) aren't aimed at the demographic that any of these cards cater to. Games that use tons of VRAM also need a ton of shading power and fill rate, and the GM206 and partially crippled Tonga chips simply don't have enough to keep up without becoming a stutter-fest. The 8GiB Hawaii cards (390/390X) have the same issue in situations where 4GiB of VRAM doesn't cut it.

          I'm just saying that 4GiB of VRAM on a GPU with last-generation performance is just a waste of time and money.

            • derFunkenstein
            • 4 years ago

            All it takes is high-res textures. Fire up GTA5 with max textures on a card with only 2GB of memory sometime. Max textures turned the Magnus into a stuttery mess (with 3GB of memory) without changing anything else, which is why we tested it with the same settings as the GTX 950 review.

            • Krogoth
            • 4 years ago

            The benefits are there, but they are rather small. You are just better off spending an extra $100 for a 970/290 (if you can still find them), which not only have 4GiB of VRAM but yield a far greater performance gain than a simple memory bump on the same GPU for $20-40 more.

            • derFunkenstein
            • 4 years ago

            Better off? Yes. But that isn’t what you said. You’ve changed the argument because you can’t support the original hypothesis.

            • Krogoth
            • 4 years ago

            I didn’t change the argument.

            I originally said that 4GiB makes no sense on the 960 and 380 chips. They don't have the shading and texture-filtering power for situations where 2GiB of VRAM doesn't cut it and you need 3-4GiB while maintaining a smooth, playable framerate. The performance delta between the 2GiB and 4GiB models in the article's benchmark suite makes that rather apparent. You are better off getting something far more powerful if you want a 4GiB board.

            The only reason such cards exist is that Nvidia/AMD want to upsell 960 and 380 chips to people who think that more VRAM = better! It costs their vendors little to add more memory chips, while they can easily throw in a $20-50 premium for it. It is one of the oldest selling tricks in the book, and it isn't the first nor the last time they will try such marketing shenanigans.

            The 256MiB GeForce4 MX 440 is just among the most egregious examples of it.

            • xeridea
            • 4 years ago

            So you argue that $10 more to go to 4GB isn't worth it, and then you suggest the cash-strapped consumer in this price range spend an extra $100 on a card that may have less than 4GB?

            • Krogoth
            • 4 years ago

            It is closer to a $20-40 difference for most models/vendors at other etailers out there.

            That $100 difference yields a far larger jump than a small memory bump for $20-40 more. If you are really that cash-strapped, then chances are you are running a 1920×1080 monitor, where that 4GiB will go mostly underutilized. The 2GiB version will work just fine. Save yourself that $20-40 for whatever.

            • DPete27
            • 4 years ago

            My coworker just got a GTX960 4GB for $160 after MIR less than a month ago… = $0 price difference

            • xeridea
            • 4 years ago

            Multiple people have pointed out the $10 or less price difference exists, yet you refuse to acknowledge it.

            • Krogoth
            • 4 years ago

            It exists on one model at a certain vendor/etailer at this time. It is not commonplace.

            Until you see it elsewhere on a variety of models and vendors, it is a weak point at best.

            • xeridea
            • 4 years ago

            Say that to all the BitCoin miners…. and protein folders.

            Edit: I forgot I switched to Litecoins. Bitcoins took virtually 0 memory, Litecoins took about 1.5GB on my 7950s.

            • f0d
            • 4 years ago

            When I mined bitcoins with my 6950, it used pretty much zero memory.

            • xeridea
            • 4 years ago

            Oh, I forgot, I switched to Litecoins. My 7950 took about 1.5 GB of VRAM to mine. Bitcoins take like 0 memory; that's why they were the first to have ASICs.

            • Krogoth
            • 4 years ago

            Bitcoin mining has been dead on GPUs for almost two years. It is either ASIC hardware or bust at this point.

            There are far better GPGPU options out there if you care about that stuff. The 960 and 380 are catered to gamers on a budget who don't already have last-generation performance GPUs on hand.

            • xeridea
            • 4 years ago

            Bitcoin and Litecoin mining on GPUs, yes, but there are some other altcoins with more ASIC resistance.

            Not everyone doing GPGPU demands a high-end workstation card. There are many things that can be done with midrange cards, and some apps use GPU acceleration very well. Compute is becoming more of a thing in games (the ones that aren't crappy console ports and are from quality developers).

            • Krogoth
            • 4 years ago

            *sigh*

            The crowd that uses GPGPU for real-world workloads isn't going to be looking at the 960 and 380. They are going to get something a lot more potent and probably *need* ECC support. At minimum, they will get the Quadro/FireGL versions of the silicon.

            • xeridea
            • 4 years ago

            So everyone doing protein folding, Photoshop, ray tracing, and other common GPGPU tasks is looking at investing tons of money? I personally know several people who use their low-to-midrange GPUs for GPGPU tasks for work. It may not be the primary selling point of the GPU, but you can't just shrug it off as not a thing. You ever wonder why GPGPU tasks are commonly included in reviews? It is because these cards are actually used for that purpose.

            • MathMan
            • 4 years ago

            I can see how more RAM can be a waste of my money.
            But how can it be a waste of my time???

          • Meadows
          • 4 years ago

          I’m not sure what’s worse, his statements or the fact he complains every time people dare downvote him.

        • flip-mode
        • 4 years ago

        No. What you are witnessing is a fairly normal reaction to a ridiculous post.

        [quote<]960 4GiB is the new 256MiB Geforce 4 MX 440. While 380X and 380 4GiB are X1300 512MiB.[/quote<] Krogoth... WHY do you have to think this way? Your mind only seems to work by substituting an old thing with a new thing. I've seen you post "X is the new Y" countless times, and every time you do it you are just being a lazy thinker, taking mental shortcuts in situations when it's not even much work to take the long route. Then there's the fatal comparison fail that the MX 440 and X1300 never sold for $250. Awful comparison dude. The rest of your post is fairly worthless. It's repeating what others have already said or simply stating the obvious.

          • Krogoth
          • 4 years ago

          They're still playing the same old trick on ignorant buyers who don't know any better: GPUs with way more memory than they really need.

          It is sleazy marketing at worst. I’m surprised that more people aren’t annoyed by it.

            • speely
            • 4 years ago

            I’m more surprised that you’re annoyed by it at all. Let people buy what they want to buy. If it’s not what you would buy, wtf do you care? It’s not your money, your time, or your life. You’re only wasting your own personal time by spending it on TR comment threads complaining about it.

      • bwcbiz
      • 4 years ago

      I think the best news about this release is that a GTX 960 Ti can’t be far behind at a similar price point. There’s a big gap between the 960 and the 970 in terms of both price and performance.

        • NoOne ButMe
        • 4 years ago

        Why sell the cut down GM204 parts to consumers and make a mere 60% margin when they can sell them in laptops for 2-3x the price? Although TSMC should have plenty of capacity 🙂

        • NovusBogus
        • 4 years ago

        I would really, really like such a thing. More than Pascal, to be honest.

      • nanoflower
      • 4 years ago

      It all depends upon your needs. For me, 1080p at 60FPS is fine, and for that the 380X makes much more sense than the 970. The 2GB 960s are out because there are already some games (Assassin's Creed Syndicate) which really NEED that extra memory if you enable all the graphics options, according to many benchmarks I've seen. The price difference between the 4GB 960s and the 380X is so small that it would be foolish to go for the 960 unless you really need ShadowPlay or some other Nvidia feature. Paying an extra $100 over the 380X (at current Newegg prices) for the 970 only makes sense if you are planning to go to 1440p or higher resolution in the near future.

    • ultima_trev
    • 4 years ago

    Appears that this performs WORSE than the 7970 GHz/280X unless geometry heavy scenarios come into play. I wonder why they only clocked the reference at 970 MHz.

    In any case, Radeon fans still rocking the Tahiti XT incarnations have no incentive to upgrade if $250 is their budget.

      • NoOne ButMe
      • 4 years ago

      GCN's ideal clock is in the range of 800-950MHz. 970MHz is probably as high as they can clock all of the good dies without inflating the TDP.

    • shank15217
    • 4 years ago

    I think in a month or two these cards will easily drop down to $210-215. It would also be nice if a shorter board was released like with the fury nano.

      • auxy
      • 4 years ago

      Problem is that the Tonga XT GPU + 4GB GDDR5 takes up a lot more room on a board than the Fiji XT GPU + 4GB HBM (which is part of the Fiji XT GPU) used in the Fury Nano. Someone COULD make a shorter board for it, but it’d take a significant amount of rejiggering and this is probably not going to be a super high volume part so I don’t know if it’s worth it. 220mm isn’t super long for a GPU; that’ll easily fit in even most ITX enclosures (at least, ones that are designed to accept a graphics card), so I think it’s fine as is really. (´・ω・`)

    • DrCR
    • 4 years ago

    [Insert fanboyish comment before having even read the entire article]

    There, done. Just wanted to save some the trouble of posting.

    • f0d
    • 4 years ago

    Nice card that clearly bests the 960 (which is in the same price range), even though I would probably spend the extra for a 970 or 390.

    The big question is… why didn't AMD release this fully enabled GPU right at the beginning instead of crippling it for so long? It's clearly a great card for the money, and IMO AMD made a big mistake waiting so long to release it in its fully enabled form.

    Silly AMD…..

      • Voldenuit
      • 4 years ago

      [quote<]nice card that clearly bests the 960 (which is in the same price range) even though i would probably spend the extra for a 970 or 390[/quote<] Pretty much. 970 cards are going for $269 on newegg (ars had a deal with a Zotac 970 for $249 the other day). An extra $30-40 for 60% more performance? You'd be crazy not to!

        • nanoflower
        • 4 years ago

        Would be great if that were really the case, but I think those prices were a short-term deal for the holidays. They may be repeated at Christmas, but for now the 970s are back up above $300 on Newegg and the 380X tends to be around $220.

      • Chrispy_
      • 4 years ago

      There’s already a thread discussing this, but my guess is that when Tonga was designed, AMD thought that memory bandwidth was a big thing (Tonga taped-out about 18 months ago and was probably in final design phase almost two years ago).

      The gaming scene has changed a lot since then, and it has become apparent that memory bandwidth has not been as necessary as the designers predicted it would be. It's really expensive to change a design, and rather than re-spin it and add 4-6 months of delay to Tonga's release, AMD chose to simply disable two of the memory controllers and release a 256-bit product with the silicon they had.

      Yes, a re-spun design would be marginally cheaper to produce – about 7-8% cheaper in terms of die area, but it would have to claw back the re-spin costs and further development. Even though it’s a 384-bit die, limiting it to a 256-bit allows use of a cheaper package, and allows board partners to use fewer layers in their board, a simpler design, and of course, they only have to buy 2/3rds of the GDDR5 modules.

      Another issue that came up with Hawaii’s rebranding from the 290-series to the 390-series is that GDDR5 is expensive to drive, in terms of TDP and power. By not using two of the memory controllers built into Tonga’s design, AMD can use that spare thermal headroom in a combination of ways – to allow more power draw and higher clocks for the rest of the chip, and also to reduce the TDP overall. It looks like Tonga’s TDP is 35W lower than Tahiti’s and the clocks are also 10% higher, on average. Let’s chalk that up to a combination of fewer memory modules to drive, and better yields on 28nm.
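
      For a rough sense of what that 7-8% means in absolute terms, the figure can be turned into per-channel area with a couple of lines. The ~366 mm² Tonga die size used below is the commonly reported figure, not something measured here, so treat the result as a ballpark:

```python
# Turning the 7-8% die-area figure above into absolute numbers. The ~366 mm^2
# Tonga die size is the commonly reported figure, not a measurement from the
# review, so this is only a ballpark.
TONGA_DIE_MM2 = 366
SAVINGS_RANGE = (0.07, 0.08)   # per the estimate in the comment above
DROPPED_CHANNELS = 2           # two unused 64-bit GDDR5 memory controllers

for frac in SAVINGS_RANGE:
    total = TONGA_DIE_MM2 * frac
    per_channel = total / DROPPED_CHANNELS
    print(f"{frac:.0%} of the die: ~{total:.0f} mm^2 total, "
          f"~{per_channel:.0f} mm^2 per 64-bit GDDR5 channel")
# -> roughly 26-29 mm^2 in total, or about 13-15 mm^2 per dropped channel.
```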

    • tbone8ty
    • 4 years ago

    So your conclusion is to recommend the 970?

      • Chrispy_
      • 4 years ago

      The conclusion is that this is the best card in its price bracket, and that the whole price bracket is lousy value compared to the next bracket up.

        • southrncomfortjm
        • 4 years ago

        Especially when the next price bracket up has been seeing some pretty nice discounts lately. I have no need for a 970 at the moment, but that doesn’t mean I haven’t been tempted.

          • NoOne ButMe
          • 4 years ago

          Add in that the price bracket below it is fast enough for 1080p. And if you can buy a 1440p monitor you likely can buy the price bracket above this.

      • Srsly_Bro
      • 4 years ago

      so much contempt loll

      • tbone8ty
      • 4 years ago

      So what you're saying is I should forgo the 380X and wait for 14/16nm?

        • nanoflower
        • 4 years ago

        Depending on what you want, yes. For 60FPS at 1080p, the 380X looks to be a great choice. At 4K, it's a poor choice. In between, it will depend upon the game you are looking at and what quality options you feel you must have enabled, as it can slow down in some games if you want all the graphics options enabled at their highest values.
