Reconsidering the overall index in our Radeon R9 Fury review

I've been pretty active over the weekend responding to questions in the comments section for our Radeon R9 Fury review.

As you may know, our value scatter plot puts the R9 Fury just behind the GeForce GTX 980 in our overall index of average FPS scores across our test suite. Some of you have expressed surprise at this outcome given the numbers you've seen in other reviews, and others have zeroed in on our inclusion of Project Cars as a potential problem, since that game runs noticeably better on GeForces than Radeons for whatever reason.

I've explained in the comments that we use a geometric mean to calculate our overall performance score rather than a simple average specifically so that outliers—that is, games that behave very differently from most others—won't have too big an impact. That said, the geomean doesn't always filter outlier results as effectively as one might wish. A really skewed single result can have a noticeable impact on the final average. For that reason, in the rush to prepare my Fury review, I briefly looked at the impact of excluding Project Cars as a component of the overall score. My recollection is that it didn't seem to matter much.
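
To make the distinction concrete, here's a minimal sketch, using made-up per-game FPS numbers rather than our actual test data, of how a geometric mean blunts the pull of a single outlier title without eliminating it entirely:

```python
import math

def arithmetic_mean(fps):
    return sum(fps) / len(fps)

def geometric_mean(fps):
    # n-th root of the product, computed in log space for numerical stability
    return math.exp(sum(math.log(x) for x in fps) / len(fps))

# Hypothetical per-game FPS scores for one card; the last entry stands in
# for an outlier title that runs unusually well on this hardware.
fps_scores = [45, 52, 48, 50, 47, 95]

print(round(arithmetic_mean(fps_scores), 1))       # ~56.2 -- the outlier pulls hard
print(round(geometric_mean(fps_scores), 1))        # ~54.1 -- pulled less, but still pulled
print(round(arithmetic_mean(fps_scores[:-1]), 1))  # ~48.4 with the outlier dropped
print(round(geometric_mean(fps_scores[:-1]), 1))   # ~48.3 with the outlier dropped
```

The gap between the last two lines and the first two is the sort of residual skew described above: the geomean dampens the outlier's influence, but dropping the outlier still moves the overall score.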

However, prompted by your questions, I went back to the numbers this morning and poked around some. Turns out the impact of that change may be worthy of note. With Cars out of the picture, the overall FPS average for the R9 Fury drops by 1.2 FPS and the score for the GeForce GTX 980 drops by 2.8 FPS. The net result shifts from a 0.6-FPS margin of victory for the GTX 980 to a win for the R9 Fury by a margin of 1.1 FPS.

Things are really close. This is why I said in my analysis: "That's essentially a tie, folks."

But I know some of you place a lot of weight on the race to achieve the highest FPS averages. I also think the requests to exclude the Project Cars results from the index are sensible, given how different they are from everything else. So here is the original FPS value scatter plot:

And here's the revised FPS-per-dollar scatter plot without the Cars component.

Some folks will take solace in this symbolic victory for AMD in terms of overall FPS averages. Do note that the price-performance landscape isn't substantially altered by this shift on the Y axis, though.

We have long championed better metrics for measuring gaming smoothness, and our 99th-percentile FPS plot is also altered by the removal of Cars from the results. I think this metric is a much more reliable indicator of delivered performance in games than an FPS average. Here's the original one:

And here it is without Project Cars:

The picture shifts again with Cars out of the mix—and in a favorable direction for the Radeons—yet the R9 Fury and Fury X still trail the less expensive GeForce GTX 980 in terms of general animation smoothness. I believe this result is much more meaningful to PC gamers who want to understand the real-world performance of these products. AMD still has work to do in order to ensure better experiences for Radeon buyers in everyday gaming.
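
For readers curious how a 99th-percentile figure of this sort is produced, here's a minimal sketch using invented frame-time data rather than our actual captures; the exact capture and summarization details in our reviews differ, but the principle is the same:

```python
def percentile_99_fps(frame_times_ms):
    # frame_times_ms: per-frame render times in milliseconds for one test run
    ordered = sorted(frame_times_ms)
    # The frame time that 99% of frames come in at or under
    idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
    threshold_ms = ordered[idx]
    # Express it as an FPS-equivalent rate, as in the plots above
    return 1000.0 / threshold_ms

# A mostly smooth run with a handful of slow frames mixed in (hypothetical data)
times = [16.7] * 980 + [33.3] * 15 + [70.0] * 5
print(round(percentile_99_fps(times), 1))  # ~30.0 -- the slow frames dominate the tail
```

The toy run averages close to 60 FPS, yet its 99th-percentile figure lands near 30 FPS, because the slowest one percent of frames is what you actually feel as roughness.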

Then there's the power consumption picture, which looks like so:

I didn't have time to include this plot in the review, although all of the data are there in other forms. I think it's a helpful reminder of another dynamic at play when you're choosing among these cards.

At the end of the day, I think the Cars-free value scatter plots are probably a more faithful reflection of the overall performance picture than our original ones, so I'm going to update the final page of our Fury review with the revised plots. I've looked over the text that will need to change given the shifts in the plot positions. The required edits amount to just a few words, since the revised scores don't change anything substantial in our assessments of these products.

Still, it's always our intention to provide a clear sense of the overall picture in our reviews. In this case, I'm happy to make a change in light of some reader concerns.

Comments closed
    • gigafinger
    • 4 years ago

    How do I thumb up the entire article? I really appreciate the follow-up work, even if it didn’t cry victory for the underdog.

    • Westbrook348
    • 4 years ago

    Interesting article at PC Perspective that compares frame times between 980Ti and Fury X in single GPU, SLI/CF x2 and x3 (http://www.pcper.com/reviews/Graphics-Cards/AMD-Fury-X-vs-NVIDIA-GTX-980-Ti-2-and-3-Way-Multi-GPU-Performance). The type of analysis is similar to Scott’s, though not quite as good in my opinion.

    Takeaways: 980Ti is better than Fury X on frame time analysis in most games, but in dual GPU setups most of the time Fury X is actually superior. The author didn’t delve into why, but I’m wondering if it’s partly due to Fury X’s water cooling. Two 980Ti’s spit out a lot of heat despite Maxwell’s efficiency, and were thus probably throttling somewhat.

    Most games scaled really well in SLI/CF, with 50-90% gains at 4K even if you look at the end of the frame time graphs. The exceptions were GTA5 and Bioshock Infinite, which appeared to be CPU limited at 1440p: it was better to play GTA5 with only one 980Ti instead of two, and in BI a second card made little difference.

    I love to see frame time analysis like this using new situations. Thanks Scott for starting the revolution.

    • ultima_trev
    • 4 years ago

    AMD have released a great competitor to the nearly year-old GM204, albeit a bit overpriced and with supply issues.

    Now, where is AMD’s answer to GM200?

    • kamikaziechameleon
    • 4 years ago

    As an AMD owner, till they fix their darn drivers, most of these stats are superfluous.

    They have so many random issues popping up from drivers versus Nvidia that it's rather silly. My next card will be the green team no matter what.

      • akaronin
      • 4 years ago

      Come on ! What random issues !?!?!?!?!?!?!?

      • SubSeven
      • 4 years ago

      Ditto

      • Nevermind
      • 4 years ago

      You are officially branded a liar.

    • beck2448
    • 4 years ago

    ” the R9 Fury and Fury X still trail the less expensive GeForce GTX 980 in terms of general animation smoothness. ”
    As highly overclocked and custom-cooled 980s are easily available at competitive prices, this is not an easy situation for AMD, since Fury's OC potential has been shown to be quite limited.

    • pandemonium
    • 4 years ago

    In the context of discussing value-based plots, I'd also like the TR staff to consider frame-time adjustments. The difficulty with that is, how does one scale this alongside framerates?

    I was thinking: use the value plot and reduce it to a non-linear scale based on frame times, going beyond the 99th-percentile metric, since that seems to capture stuttering and the experience being diminished the most. Or possibly just use the percentage of frame times under the 60-FPS threshold for higher-end cards, and 30 FPS for mid- to lower-end cards.

    It's difficult to pin down the metric and how to display it, but hopefully it's something you guys can consider as an all-inclusive metric to base the "value" of these cards on.

    Thanks for all of your excellent reviews and community involvement!

    • novv
    • 4 years ago

    Well that's nice, but something really interesting you should do is test how high-end video cards perform after 2 or 3 years of intense gaming. What I mean is to stress test these GPUs and tell people that those temperatures under load really do matter after a couple of years. Why is nobody buying an HD 7970 or R9 290 if they know it was used for bitcoin mining? Why are there a lot of owners of the 9800GX2, GTX280, GTX295, 4870X2, 3870X2 and so on who had problems after a year or so with their graphics cards? Strictly because of a 50-degree-Celsius difference between idle and load? Maybe. And don't forget to tell people that every bench made here is run OPEN, not inside a case, as most of us use our computers. If there is something really good about the Fury X, it's the temperature. That will keep the card running nice and smooth for several years, and after that you can sell it and not have to reflow (reball) it.

    • Voldenuit
    • 4 years ago

    I can’t say I agree with the decision to remove Project Cars from the average.

    If a game is performing badly with a given vendor, the discrepancy should be highlighted instead of obscured/swept under the rug.

    That’s the only way to ensure that:
    a. The GPU maker will be motivated to improve their drivers to avoid the bad press (and lost sales)
    b. The game developer will be motivated to improve their game to avoid lost sales to half the GPU users out there.

      • MEATLOAF2
      • 4 years ago

      It's still in the review; people can still see how poorly AMD cards perform in that game. It's just that the "how good all these cards really are" chart shouldn't punish one vendor over one game, and Project Cars does not demonstrate what the average AMD/Nvidia user will experience. That chart is intended to demonstrate, on average, what kind of performance you will get out of the cards for the given price, relative to the other cards, not "this one game didn't work that great, so this entire brand of cards kinda sucks".

      Also, I honestly don’t believe AMD needs more reasons to improve their driver performance, they already have a long list, and that list includes pretty much every game released in recent memory, and everyone knows it, including them.

      • JustAnEngineer
      • 4 years ago

      The issue to consider is that Damage doesn’t have the time to go back and re-do the entire review each time that the drivers are updated to fix issues and improve performance. The initial negative review would stay out there forever.

        • BIF
        • 4 years ago

        So that’s just a little more incentive for hardware makers to get it right the first time, yes?

    • ermo
    • 4 years ago

    Project CARS is a [b<][i<]VERY[/i<][/b<] CPU intensive game. It is essentially exposing a CPU bottleneck in the WDDM 1.x version (Win 7/8/8.x) of AMD's DX11 driver.

    It is worth noting that this bottleneck is nowhere to be seen when using Project CARS with AMD's WDDM 2.x DX11 driver on Win 10, because WDDM 2.x is designed to yield much lower CPU driver overhead in the kernel-mode part of the driver and to give better support for multithreading in the user-mode part of the driver.

    The reason Project CARS is so CPU intensive is partly down to the complexity of its tyre model and partly down to the fact that movement speeds are much higher than in your typical FPS. When the physics engine downcasts to scan the track geometry in front of and under the car at 600Hz, it gets a lot of cache misses because the track geometry changes so quickly at higher speeds, which means that the processing time of the tyre physics threads (2 by default) goes up and leaves less time for everything else. At the same time, PhysX is run at 50Hz on the CPU to handle car-to-car and car-to-world collisions in CCD mode.

    So when you turn everything up to Ultra on an AMD GPU in Win 8.x, you are much more likely to see these GPU driver bottlenecks, because the engine has to spend more CPU time calculating the Ultra effects, leaving less CPU time for actually getting the draw calls processed by the GPU driver and sent to the GPU. In contrast, NVidia's DX11 driver is apparently much more efficient and doesn't suffer from such a driver overhead bottleneck, making Ultra a much more viable proposition on NVidia hardware. And yes, the PhysX collision detection runs on the CPU even if you use an NVidia card.

      • Westbrook348
      • 4 years ago

      When you spend $500-650 on a GPU, even if it’s an awesome card with new tech and record transistor count, don’t you want the drivers to work as well as the competitor’s? Great explanation for the Project Cars problems. I’m glad Win10 is helping lower driver overhead; AMD will surely benefit. But the AMD driver inadequacies have been a recurring theme for years now.

      • BobbinThreadbare
      • 4 years ago

      I feel like there shouldn’t be many cache misses on a race track regardless of how fast you are going because it’s still a set level and you have a *pretty* good idea of what is coming next.

      I suppose it could simply be trying to process more data than can be fed into the cache, but if that’s the case it would affect Nvidia and AMD equally. That’s clearly not the case.

        • ermo
        • 4 years ago

        I’m reporting what I was told by the lead physics developer in response to one of my many questions on the inner workings of the engine on the development forum.

        Other WMD members here should feel free to search the forums for the posts in question and back me up. As I wrote in reply to auxy, links won’t do you much good unless you’re already a WMD member yourself.

        • Klimax
        • 4 years ago

        Too big dataset for given frame-frame change.

      • auxy
      • 4 years ago

      You made essentially this same remark in a reply to me, and yet you don’t provide any links. Can you provide any sources?

        • ermo
        • 4 years ago

        Not trying to be facetious here, but unless you have access to the Project CARS development forums, the links to the posts from the lead PC graphics developer and the lead physics developer where I ask them about the engine and the AMD issues won't do you much good. And sharing their quotes here directly would be both unfair and in breach of the terms and conditions to which I agreed when I joined.

        The only thing I can offer is anecdotal evidence that we have a compulsive benchmarker onboard who has done benchmarks for hundreds of builds on a pair of AMD HD 7970 cards.

        He benchmarked the exact same hardware in Win 8.1 and various Win 10 prerelease versions using the exact same Project CARS in-game settings and saw a 20-40% performance discrepancy in favour of Win 10 with much smoother frametimes on Win 10 to boot (the results vary across 15.3, 15.4, 15.5 beta and 15.7 on 8.x). On Win 10, the HD 7970 more or less reaches parity with the GTX 770 (just like in most other games). On Win 8.x … see Scott’s results.

        All I can say is that once Scott begins testing on Win 10, you’ll get to see it for yourself. Feel free to disbelieve me until then.

          • auxy
          • 4 years ago

          You’re awfully defensive! At no point did I call you a liar or even remark on implied disbelief. (゚∀゚)

          Actually, I'm interested in your claims about improvements in AMD GPU performance resulting from the move to WDDM 2. (´・ω・`) I am ever wary of claims made on the internet (especially by essentially anonymous commenters) and curious to see the data for myself.

            • torquer
            • 4 years ago

            You should consider making cute ASCII pictures in your post to defuse heated exchanges like this.

            • auxy
            • 4 years ago

            (・へ・)

            • BIF
            • 4 years ago

            Hmmm, funny you should say that. I’ve started downvoting all ASCII pictures that are not easy-to-understand emoticons.

            At first it felt like mild elitism to me, but now, after months, I find that I either STILL don't get the meaning, or I feel like I'm being talked down to. Or cussed at in French. If I'm going to learn a new language, I would rather it be Spanish or French. If I'm going to learn a new alphabet, I'd rather it be Japanese or Chinese, not ASCII or Unicode.

            🙂

            • TopHatKiller
            • 4 years ago

            Dearest BIF; I totally agree. In my utter shame I had no idea what the ASCII was about. I thought it was just an 'in' thing around here – hence something I was excluded from.
            Personally, though, I would hesitate to downvote unless the post was really dumb, stupid, wrong or offensive.
            I should end this with ASCII code, as irony, but I don't know any and could not give a………………….. Cheers!

            • ermo
            • 4 years ago

            I completely understand your curiosity. Unfortunately, I haven't got the hardware to spare to run Win 10 and Win 8.x tests side by side on the same system.

            I’ve shot Scott an e-mail in which I ask if he would be interested in doing a WDDM 1.x vs WDDM 2.x comparison using the R9 series cards with Project CARS.

            Not sure if he will find it worthwhile, but I guess we’ll see. 🙂

      • dragosmp
      • 4 years ago

      ermo, thanks for the detailed explanation. It is a well-known fact that AMD GPUs are more sensitive to CPU performance; hell, even AMD launched the 290X (the demo rigs) on Intel CPUs to show off the less-CPU-bottlenecked performance.

      As resource-limited as AMD is today, I doubt it'll invest a lot of time in WDDM 1.x.

      WDDM 2.x is the future, as is DX12, and the few people giving their best on the driver team are better used on those projects. Glad to hear things are working better on Win10!

    • Krogoth
    • 4 years ago

    970 is the only game changing GPU so far.

    The other pieces of silicon are either underwhelming or just overpriced for their performance. It is a sad state of affairs when Kepler and Tahiti-based parts are still viable for the majority of games out there at 2- to 4-megapixel gaming, unless you want loads of AA/AF or are trying to run a poorly coded POS (see AC: Unity and Batman: Arkham Knight).

      • JustAnEngineer
      • 4 years ago

      [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20600536049&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$310[/url<] GeForce GTX970 3½+½ GiB
      [url=http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20600565504%20600473871%20600565674%20600473877&IsNodeId=1&bop=And&Order=PRICE&PageSize=30<]$270 -30MIR[/url<] Radeon R9-290 4GiB

      Radeon R9-290 launched on November 5, 2013. GeForce GTX970 launched 10½ months later, performs the same and costs more than the Radeon. What makes that "game changing?"

        • cobalt
        • 4 years ago

        Can't speak for Krogoth, but I doubt he's saying the 970 is a game changer TODAY. I personally think it was when it was released, though.

        (You could almost make the case for the 290X when it was released as well. It didn’t have the 970’s trifecta of price, performance, and power consumption, but it did undercut the 780 on price by a good bit and beat it on performance, causing NVIDIA to make some big price adjustments.)

          • JustAnEngineer
          • 4 years ago

          I believe that ridiculously overpriced cards like the $1060 GeForce GTX Titan X exist for the sole purpose of making one feel better about spending $670 on a GeForce GTX980Ti. “I got essentially the same performance and saved $400!”. LOL

            • cobalt
            • 4 years ago

            The original Titan had the full DP speed, so it was a steal for a CUDA workstation card at $1k. For the Titan X, it’s hard to argue with your assessment.

        • Westbrook348
        • 4 years ago

        He means at release. The 970 ~ 290X on frame time analysis. When the 970 was released, the 290X was over $500, so getting a $339 card that essentially tied AMD's flagship WAS game changing. Nvidia forced AMD to cut prices substantially. Even now the 290X is about the same price as a 970, but it doesn't have the support of the Nvidia driver team, it doesn't support 3D Vision, and it is a huge energy hog. AMD fans may choose the 390 over the 970, but these new releases by AMD are underwhelming.

        Krogoth: I'd add that the 980Ti has been pretty impressive, if not quite game-changing. It has Titan X-level performance since it overclocks so well, yet it didn't cost much more than the 980 at launch. It competes with dual-GPU setups like the 295X2 or SLI 970s despite requiring half the power and half the driver headaches.

        • Freon
        • 4 years ago

        The 290 has just had its price slashed over and over because AMD has nothing else to release to fill the need. I imagine they're not making much on them due to the massive 512-bit bus and high power requirements.

        So congrats on losing money quarter after quarter, AMD!

          • ImSpartacus
          • 4 years ago

          Yeah, the 970 was kickass when it released, but its price hasn’t dropped while the 290 nose dived from $400 at release to nearly half that.

            • cobalt
            • 4 years ago

            Yes, but it was basically the release of the 970 that caused the major 290 price cut (September was the 970 release, October was the 290 price cut). There’s no reason to expect a significant price change again until the landscape changes in that price performance bracket.

            That said, I’m a little surprised the 970 hasn’t come down at all, but it was so well priced when it was released, I guess it has to hang on to that price for longer. (And can do so if it’s still selling well.)

      • geekl33tgamer
      • 4 years ago

      Uhhh, no need to post it [url=https://techreport.com/discussion/28612/asus-strix-radeon-r9-fury-graphics-card-reviewed?post=921857<]twice[/url<]?

    • madgun
    • 4 years ago

    For the last 3 years AMD has been lagging behind in the latency-based metrics. God knows how long it's really been, since TechReport introduced this method of testing only 3 years ago. For all I know, AMD could have been lagging behind forever.

    And we keep praying that AMD would serve us better drivers.

    • xand
    • 4 years ago

    -deleted silly question-

      • Melvar
      • 4 years ago

      -deleted snarky response-

        • xand
        • 4 years ago

        You mean you didn’t click “post” on your reply for 4+ hours?!

    • Smeghead
    • 4 years ago

    From the paragraph immediately following the revised FPS-per-dollar plot:

    [quote<]Do note that the price-performance landscape isn't substantially altered by this shift on the X axis, though.[/quote<]

    Y axis, surely? The cards' points on the plot didn't shift left or right as their prices weren't altered; only the reported average FPS changed. Anyway, thanks for the update!

      • Damage
      • 4 years ago

      Yeah.

    • Anovoca
    • 4 years ago

    What, no scatter plot with arkham knight?

      • JustAnEngineer
      • 4 years ago

      How much would you have to mark off for the cards that came with [i<]Batman: Arkham Knight[/i<] in a bundle?

      • Milo Burke
      • 4 years ago

      Actually, I think it’d be pretty funny to see the frame time distribution curve for Arkham Knight on a couple of cards. TR should do it just for the humor, preferably in a Batmobile scene. =]

    • auxy
    • 4 years ago

    As an AMD fan and as someone who doesn't play racing games (much), it heartens me to see the difference removing [i<]Project Cars[/i<] makes.

    With that said, as an egalitarian (in all senses of the word), it really is important to note that the [i<]Project Cars[/i<] results illustrate something that potential Radeon buyers just have to be aware of -- sometimes games are just not going to run very well on your hardware, or may require a driver update (which may be a long time coming.) I say this as someone with a 290X.

    It's not just Project Cars, or The Witcher 3, or Batman -- it's Rage, and Battlefield 3, and a huge mess of indie titles which will probably never get the driver optimizations or tweaks they need. I love my Radeon, but it's real frustrating when a game which used to start right up on my GTX TITAN (or which starts up fine on my wife's GTX 750 Ti) stutters or crashes on my 290X. (ノД`)・゜・。

      • ermo
      • 4 years ago

      [i<](moved to its own comment thread)[/i<]

      • Westbrook348
      • 4 years ago

      So I guess my question is, if your GTX cards work so well, and you have stuttering, crashing, and frustration with your 290X because of inadequate driver optimizations, WHY exactly do you love your Radeon and call yourself an AMD fan? Sounds masochistic

        • auxy
        • 4 years ago

        Because in plenty of other games — and even in the above-mentioned games, after driver optimizations or game tweaks — it performs fantastically, and there are various other things I like about it:
        [list<]
        [*<]RadeonPro SweetFX integration which doesn't work properly on Geforce cards[/*<]
        [*<]VSR which is higher quality than blurry DSR (even if I had to hack my driver to make it work properly)[/*<]
        [*<]The fact that I sold my TITAN and bought a 290X which was faster even after the former's 50% overclock, and ended up with a faster GPU and $350 in my pocket[/*<]
        [*<]No issues with sending full-range RGB over HDMI connections unlike Geforces[/*<]
        [/list<]
        Don't get me wrong, I like Geforce cards too, and if NVIDIA wasn't so hell-bent on charging a premium for their technology I might still be using one. I'm not a fanatic, though, so you may have a hard time understanding my point of view. (*‘∀‘)

      • NoOne ButMe
      • 4 years ago

      Just a small note, without changing the API, in Windows 10 beta last time I checked PCars runs up to and over 30% faster on AMD’s GCN cards.

    • Ninjitsu
    • 4 years ago

    Very impressed that you went back and re-checked this, and published your findings. Doesn’t change much, I suppose, but at least those up in arms over the PCars stuff should be satisfied (if they can be at all).

    • torquer
    • 4 years ago

    Summary: “Buy what makes you happy then STFU about it.”

    No one makes a “bad” card these days

      • nexxcat
      • 4 years ago

      Unless you’re into Project Cars 🙂

        • torquer
        • 4 years ago

        Then may God have mercy on your soul :p

    • HisDivineOrder
    • 4 years ago

    I think any game that favors another GPU vendor should be excluded. Naturally, that means any game supporting any kind of advanced technology from nVidia, AMD, or Intel should be removed from benchmarks. Because obviously those games were at least in part tested and made to work best on the hardware of the very GPU company that is advertising them as part of their stable of "optimized games." Naturally, then, Gaming Evolved and The Way It's Meant to Be Played games should all be excluded.

    Now, get back to me on those Plants vs Zombies (not Garden Warfare! It had a Mantle version, so it’s clearly favored by AMD) benchmarks.

    I think that’ll be all that’s left after The Great GPU Optimization of Games Purge.

      • NoOne ButMe
      • 4 years ago

      Anything with software that is shown to run much worse on one vendor than the other should be excluded. If card A from Nvidia is 10% faster than card B from AMD excluding any sponsored games, and an Nvidia-sponsored game comes along where it is still 10% faster, that game can be used.

      Same for AMD-sponsored games.

      It is games like Project Cars, many games with GameWorks [i<]when[/i<] the GameWorks features are turned on, Tomb Raider with TressFX before AMD opened the code for NVidia to optimize, etc. that should not be used. At least, not to conclude the overall performance. It merely should be "if you want to play racing games/GameWorks games with GameWorks on/games in Mantle/etc., don't buy AMD/AMD/Nvidia/etc."

        • HisDivineOrder
        • 4 years ago

        But how do you judge that only 10% over is the limit? What about 5%? Or 3%? What about 1%? Why are you judging what one card should do over another in a given game? And why do you think, in a two-horse race, that just because one horse is limping over the ground at one particular spot, the ground is at fault? Maybe the horse was incompetently shoed?

        The case is made that certain games are going to run poorly on certain hardware, and excluding those games simply because you don't like the results or how they skew things in one direction is ignoring the very real fact that the real world is full of games that do just this on AMD hardware. You don't have to look very hard to find them, either.

        When it’s true of nVidia, keep the results. When it’s true of AMD, keep the results. Factor them in because the end user is going to be doing that when they buy their card. That said, maybe one shouldn’t be generalizing at all in the first place.

        Just give the individual results, let the end user decide without the averages or means and this’ll never be an issue.

      • f0d
      • 4 years ago

      so exclude all popular games?
      every game in the review (with the exception of GTA-V) was either an amd-sponsored title or an nvidia-sponsored title

      in fact there were more amd games in the review than nvidia games

      excluding ALL sponsored games will actually be difficult and won't be representative of the games most people play, as you would have to start using games that are not popular or free-to-play games, which usually are not very good at testing video cards

      i DO agree that it would be good if TR could do what you say and it would be the right thing to do - i just can't see how it would be any good at testing the cards themselves because, as i said before, those non-popular games and free-to-play style games don't test cards very well

        • HisDivineOrder
        • 4 years ago

        Thus, it’s silly to remove a game simply because it didn’t run as well on one brand of hardware because that’s the reality of the market and the games people like to play. One year, it’ll favor nVidia. The next year, it’ll favor AMD.

        Removing one to make things have parity is just cooking the results.

          • f0d
          • 4 years ago

          i totally agree
          at first i thought you were serious (and thats when i posted my reply) but i must have mentally skipped over….
          [quote<]Now, get back to me on those Plants vs Zombies (not Garden Warfare! It had a Mantle version, so it's clearly favored by AMD) benchmarks. I think that'll be all that's left after The Great GPU Optimization of Games Purge.[/quote<]

          i laughed a little and agreed 🙂

          • DoomGuy64
          • 4 years ago

          No, cooking the results is NV coercing developers to cripple AMD performance, then benchmarking that game like it’s impartial while there’s NV ads plastered over every wall in the game. NV has been doing this for too long for anyone to accept that their sponsored titles are legitimate benchmarks.

            • HisDivineOrder
            • 4 years ago

            Or AMD providing a Mantle version of a game so that that version has low-level access, which in theory should provide an advantage. And one really should include that version, if it's fully functional, in a review, because that's the end-user experience one would get. Not to mention the fact that if AMD paid money for that company to make a Mantle version, they're probably testing DX11 on their hardware extensively, too.

            I don’t think it’s worse of nVidia to provide game publishers with SDK’s to make certain effects more readily available on DX11 versions of games in ways that favor their own hardware than AMD providing them an API to make a completely different version of said game and split their resources.

            I think they’re equally horrendous.

            • DoomGuy64
            • 4 years ago

            Mantle doesn’t affect dx11 whatsoever, and it’s discontinued. It’s the same as providing an OpenGL and DX path for the game, like the old UT used to. NV on the other hand, is actively crippling dx11 performance on the competition. It’s completely one sided.

      • mcnabney
      • 4 years ago

      Or do what actual scientists do – throw out the high outlier and the low outlier and average the tests in the middle.

        • HisDivineOrder
        • 4 years ago

        Sometimes. Sometimes, they keep the outlier if the outlier proves something about the scenario in question.

      • decoy11
      • 4 years ago

      I disagree; the benchmarks should be of what is currently popular at the time of the review. That is the best way to judge, as those will most likely be the games that consumers will be using the video cards on. Removing games like Project Cars because they favor Nvidia isn't the right choice, because while the game might favor Nvidia, it is popular and relevant.

      • Pancake
      • 4 years ago

      It’s a nice idea but some people like me buy graphics cards solely to play a particular game. In my case it’s all about GTA V performance. I don’t care about any other game.

        • f0d
        • 4 years ago

        im the same way with project cars and planetside 2 performance
        planetside 2 will never get benchmarked so its all about project cars performance for me

    • Milo Burke
    • 4 years ago

    Thanks, Scott. Your commitment to accuracy and fairness is what keeps us coming back.

    • Westbrook348
    • 4 years ago

    Thanks for your hard work Scott. Great work. I love that you address your readers’ comments. I’m looking forward to the next podcast!

    • anotherengineer
    • 4 years ago

    So fury ~ 980, now just have to see if the price ~ price?!
    edit
    [url<]http://www.ncix.com/detail/asus-geforce-gtx-980-strix-cc-102671-1371.htm[/url<]

    Can't find a Fury price yet. See that the GTX 980 is $100 off, nice!

    One thing I do like about TPU is the power measurements: they are strictly the card and not the system. Makes things easier to put into perspective.

    [url<]http://www.techpowerup.com/reviews/ASUS/R9_Fury_Strix/29.html[/url<]

    • tbone8ty
    • 4 years ago

    295×2 is still a great performance per dollar card… #1 nice! Amazing how far it’s come in terms of frame time delivery.

      • Westbrook348
      • 4 years ago

      Yeah I’m impressed by the smooth, narrow frame time graphs for a dual GPU card from AMD. That was never the case for CrossFire a few years ago. Now beating the Fury X and my 980Ti. Impressive driver optimizations from a team that gets a ton of (mostly deserved) criticism.

        • Disco
        • 4 years ago

        I agree. This is what really sticks out for me in the Fury reviews. Just how good the 295X2 has become with various driver tweaks.

        Although I’m fairly disappointed in the overall performance of both Furys(ies?) for the price, I’m really looking forward to what they will do with a FuryX2. Lower power envelope and seemingly improved dual gpu drivers can only bring on some amazing performance.

        • Nevermind
        • 4 years ago

        Yeah no more Bus-Bus-Bus-Bus-Back to GPU for crossfire, it’s on the card.

    • TwoEars
    • 4 years ago

    IMHO the real problem is that the GeForce cards are much better overclockers.

    If you compare fully overclocked 980, 980Ti, Fury and Fury X cards the picture is quite different than if you’re comparing stock frequencies.

    Take for instance the Gigabyte 980 Ti G1 Gaming card. If you compare a fully OC’d G1 Gaming to a fully OC’d Fury X the difference in 3dMark Firestrike 1080p is 23% in favor of the 980Ti. 23% !!!

    If you instead compare a 980 card to the Fury it’s a much closer match-up, they are basically neck and neck overall when OC’d. One card wins some games and the other card wins some games. But of course… the 980 is cheaper. So Team Green wins this round as well.

      • ImSpartacus
      • 4 years ago

      Yeah, I hate reading reviews that compare the overclocked product against stock competition.

      I want to see an "apples to apples" comparison of overclocked to overclocked.

      • Prestige Worldwide
      • 4 years ago

      Indeed. If you are running a Maxwell card at stock frequencies, you are doing yourself a disservice.

      Even my voltage locked Asus 970 Strix can OC to 1500 MHz on air…..

    • tsk
    • 4 years ago

    Sites like Guru3D found the Fury to have really good frametimes, so I’m still a bit confused.

      • cobalt
      • 4 years ago

      Can you be specific? I just looked at their review where they show FCAT frame time results, starting on this page:

      [url<]http://www.guru3d.com/articles_pages/asus_radeon_r9_fury_strix_review,27.html[/url<]

      I don't see anything obviously inconsistent with TR. Guru3D is only showing frame times versus the 980Ti, so it's hard to judge, but the Fury looks like it's generally putting out frame times at least as large as, and often spikier than, the 980Ti's. They don't look bad or anything, just not as good. They also don't have a summary about the frame time results except "Looking at FCAT frametime analysis, well we see nothing that worries us." Again, not exactly inconsistent with TR's measurements. What are you seeing that I'm overlooking?

        • tsk
        • 4 years ago

        Excluding The Witcher 3, the Fury frametimes on Guru3D are almost on par with the 980Ti, and at the same time smoother than TechReport's graphs. At least that's how I see it.

          • cobalt
          • 4 years ago

          "Frametimes … almost on par?" I don't know — across all those benchmarks, ignoring any spikes, the frame times for the Fury look like they're generally about 20% higher than the 980Ti's. That's within spitting distance of the TR results even including CARS (estimating 57 FPS for the 980Ti and 47 FPS for the Fury = 21%), and it may even be worse than the TR results without CARS (54 FPS vs 46 FPS = 17%). I mean, yeah, it's hard to judge looking at only those plots, but I'm trying to make some effort to quantify it and I'm really not seeing any discrepancies.

        • tsk
        • 4 years ago

        [url<]http://www.pcper.com/reviews/Graphics-Cards/Sapphire-Radeon-R9-Fury-4GB-Review-CrossFire-Results/Battlefield-4[/url<]

        PCPer too seems to have similar findings, reporting frametimes equal to or better than the 980's.

          • DPete27
          • 4 years ago

          Call me a jaded TR fanboy, but an alarm went off in the back of my mind when I glanced over those PCPer graphs. All of the graphs for the different cards just look too….similar in shape, especially with the inclusion of the CrossFire setups. I question the frequency of their data collection (too coarse?). I'm tempted to just disregard that entire article's results/findings.

          • Damage
          • 4 years ago

          I don’t see how you compare our results to those based on the plots alone.

          Not to be snarky, but I see graph axes that are clipped and don’t show the full magnitude of frame time spikes. I see the stretching and compression of frame time distributions with different quantities of frames into the same length plots, with time as the axis.

          I do not see a time-sensitive quantitative summation of the frame time distribution like a percentile or a time beyond X threshold.

          I don’t understand that variability curve or its qualitative meaning at all. I suspect its utility is very limited, since with variability, you’re comparing the distribution to itself rather than an outside reference.

          I think it’s incredibly tricky to compare across different test systems, settings, test hardware, and methods. But with an incomplete visual representation of the data and without a summation of the data beyond FPS averages, how can you know?

            • tsk
            • 4 years ago

            I'm just a peasant trying to learn. I go to Guru3D, PCPer and TechReport for graphics card reviews. Usually I compare results and findings between the sites, and they regularly match. This time I felt the two other sites in question had come to a somewhat different conclusion, so I questioned why.
            I had not at all thought about the differences in the graphs and plots as you explained them; thanks for your answer.

            PS! Techreport is where I learned about frametimes, thank you for your awesome work Damage.

        • derFunkenstein
        • 4 years ago

        The first page shows a couple of spikes on Witcher 3 and one giant one in Thief, but they miraculously got no spikes anywhere else. Contrast that with PCPer, which got very similar results to TR, and Guru3D is the one that I think looks weird, not TR.

          • Damage
          • 4 years ago

          Remember, though, that they were not testing the same actions in the same areas of the game, presumably. Of course the distributions will look different.

          I could make practically any GPU show a totally flat frame time distribution with consistently low frame times by pointing the view up to the sky and not moving. Wouldn’t tell you much, but it’s not hard to do.

          We aim to capture a portion of a game where something dynamic and potentially challenging is happening, something that represents real gameplay. Another site could pursue the same goal and still end up testing something that produces a very different looking distribution.

            • derFunkenstein
            • 4 years ago

            That’s true, but what I was really getting at is that in Guru3d’s review, the vast majority of games had no spikes. Like this perfectly flat GTA5 test. The only way I can do that on my PC is by standing still.

            [url<]http://www.guru3d.com/articles_pages/asus_radeon_r9_fury_strix_review,31.html[/url<]

            The tests are also only 30 seconds. I don't see how tsk can look at that review and then look at TR's review and say that theirs is the "normal" result.

            • Damage
            • 4 years ago

            Yeah, a lot depends on how they tested.

    • WaltC
    • 4 years ago

    The only “victory” in this contest is that of the new technology over the old (and tired) technologies…I mean, the choice is simple if you plan to sink $550-$650 in a new 3d card–are you going to go for the newer, as-yet-unoptimized technology that is the way forward from now on out (nVidia would be more than happy to field an HBM card if it could–and it shall, never fear), or do you want to drop the same on yesterday’s optimized-to-the-hilt, thoroughly wrung out technologies…?

    AMD is taking the risks and forging ahead, leading the way, and I’m going to reward that behavior…

      • chuckula
      • 4 years ago

      [quote<]AMD is taking the risks and forging ahead, leading the way, and I'm going to reward that behavior...[/quote<] Just like you bought a Pentium 4 followed by an Itanium to reward Intel for coming up with a truly unique microarchitecture and taking risks. AMIRIGHT??

        • maxxcool
        • 4 years ago

        Or a Pentium-D ? 😀

          • chuckula
          • 4 years ago

          The Pentium-D represented a breakthrough in materials science since Intel managed to come up with superglue that could hold both cores together AND not melt when they were turned on!

        • TopHatKiller
        • 4 years ago

        The P6 architecture was, apparently, foisted on Intel engineers by their marketing 'team'.
        Itanic [thanks s/a] was their engineers' attempt to get rid of the x86 baggage.

        HBM though, designed by AMD, is the way forward. Nvidia, because of the failure of their own attempt, is already on board.

        I, personally, would never spend "$550+" on a GPU – in a year or so that dear GPU would be outpaced. So why would you spend that much money? I prefer the idea of buying something cheaper and replacing it in a year or so. Preferably with a really good air cooler. Sadly, though, neither the Arctic nor the Prolimatech coolers are compatible with Fiji. I guess that's okay, though; I'm waiting for the next 14/16nm gen.

      • ImSpartacus
      • 4 years ago

      I think you have to reward results above all else.

        • K-L-Waster
        • 4 years ago

        “Reward”?? What, are we raising cocker spaniels here?

        The only rational reason to buy a particular video card — from either vendor — is because it will meet your needs and budget better than anything else on the market.

      • Kretschmer
      • 4 years ago

      Performance > Missed Aspirations
      Mature Tech > Teething Issues

      Any questions?

      • HisDivineOrder
      • 4 years ago

      I tend to reward the best value for the money, depending on what elements I need at the time. I value performance per watt, performance per dollar, and perhaps even straight-up performance.

      But never once do I think, “Well, it performs less well, but it’s advancing technology without giving me any gain in performance so I must reward it!”

        • K-L-Waster
        • 4 years ago

        I’m assuming you also don’t think “I want company X to get some revenue so I’ll buy their products regardless of whether they fit my needs.”

          • HisDivineOrder
          • 4 years ago

          Yeah, that’d be pretty silly.

            • Ninjitsu
            • 4 years ago

            Lol one should probably just become an investor/shareholder instead.

      • steelcity_ballin
      • 4 years ago

      Could you be any less objective?

        • TopHatKiller
        • 4 years ago

        Why should ‘ee? No one else seems to.

          • TopHatKiller
          • 4 years ago

          Replying to myself. Wonder if I can downvote myself? It seems a craze.
          Edit:
          No!!!!! Damn I can’t downvote myself – I guess all you balanced and fair people will have to do it for me? Mhmmm?

            • f0d
            • 4 years ago

            done
            i balanced it out by upvoting this comment i just replied to though

            • TopHatKiller
            • 4 years ago

            Tar, Love!

    • DragonDaddyBear
    • 4 years ago

    I really, REALLY wanted AMD to take home the crown, but even the chart manipulation isn't going to change the truth: the Green Team remains the better option in all but a few niche scenarios.

    I'm still going to buy AMD and pray they don't go completely bankrupt before Zen and a FinFET die shrink in 2016 can hopefully save them.

      • wimpishsundew
      • 4 years ago

      They can't go broke in 2016 unless they're dumping all their production into the river and keeping everybody on the payroll.

        • Kretschmer
        • 4 years ago

        AMD Management Vision 2016; you heard it here first!

        • maxxcool
        • 4 years ago

        “Keeps everyone on the payroll” .. lol no fear of that 😛

        edit if – of

      • Kretschmer
      • 4 years ago

      Zen is probably not going to save AMD for the enthusiast, as AMD’s CPU execution record is bleak. At this point, they really only have the resources to focus on a few big areas. GPUs/x86/ARM/APUs/Integrated is spreading the firm too thin.

        • DragonDaddyBear
        • 4 years ago

        Zen won’t be coming to enthusiasts, not at first. But they need revenue. Zen will do that at the enterprise, where margins are big.

        I just don’t want a monopoly. It’s bad enough having a noncompetitive oligopoly in the CPU space. At least AMD is close to Nvidia in the GPU area.

          • homerdog
          • 4 years ago

          They really should release Zen to consumers first. Let us find any potential errata (all CPUs have them) before they go into “important” stuff. Intel has been doing this for a while now and it works very well for them.

        • raddude9
        • 4 years ago

        Zen could do well with enthusiasts if they give us an 8-core for cheap with reasonable IPC. That’s what I’m hoping for.

      • HERETIC
      • 4 years ago

      "the Green Team remains the better option in all but a few niche scenarios."
      One nice little scenario-
      The R9-290 would look magnificent on that chart as the best-value card for 1440……
      Pity everyone knows that, and they're disappearing fast with prices going up…..

      • Demetri
      • 4 years ago

      I actually think AMD’s cards are still a good choice, solely based on the Freesync factor. If you want adaptive sync, those $ per performance ratios look a lot better for AMD when you factor in the cost of a DP adaptive sync monitor vs gsync. Looks like there’s going to be a lot better selection of DP adaptive sync monitors too. Granted Nvidia could change that by simply supporting the standard, but as it stands now, I would personally still go with the red team.

    • BobbinThreadbare
    • 4 years ago

    Glad to see you address this. It doesn't really change things substantially, especially if someone *does* want to play Project Cars.

    Hopefully, AMD can address the frame times with drivers though and really unlock this card’s potential.

      • K-L-Waster
      • 4 years ago

      Curious: is that kind of charitable donation tax deductible?

      EDIT: Ooops – replied to the wrong post…..

    • NoOne ButMe
    • 4 years ago

    Thank you very much

      • auxy
      • 4 years ago

      I think this is the best post I’ve seen you make. (´▽`)

      • TopHatKiller
      • 4 years ago

      Agreed. Correcting errors / re-assessing a review is something few sites will do. Yet another reason why TR is an excellent site.

    • wimpishsundew
    • 4 years ago

    Better, but it still needs work on the latency spikes. I hope DX12 will help AMD with this problem.

    I hope you do another review in August to compare DX11 and DX12.

    thanks for putting in extra work btw.

      • chuckula
      • 4 years ago

      I think the DX12 benchmarks have the potential to be both the most & least interesting material on here for the rest of the year*.

      Most interesting: Major new API features and driver models from both AMD and Nvidia that will play to the strengths and weaknesses of their hardware. I’m sure there will be some canned demos that really show DX12 as being leagues ahead of DX11.

      Least interesting: It’s not like DX12 is going to be everywhere at launch. Even the games that claim to support DX12 this year (and I don’t know that number) are likely not going to have anywhere near the level of optimization that will really showcase DX12’s purported benefits.

      * Skylake is already pre-Krogothed.

        • renz496
        • 4 years ago

        [quote<]Least interesting: It's not like DX12 is going to be everywhere at launch. Even the games that claim to support DX12 this year (and I don't know that number) are likely not going to have anywhere near the level of optimization that will really showcase DX12's purported benefits.[/quote<]

        Worry not. Mantle will save the day, because Richard Huddy said last year that almost 100 developers had signed up for the Mantle program. So there must be at least 80-90 games using Mantle by the end of this year, right? Haha.

        Joking aside, I remember that some people were using this reasoning to argue that more games would be available with Mantle and that Mantle was not dead despite the existence of DX12.

      • Kretschmer
      • 4 years ago

      Will DX12 really do much for benchmarks done with Core i7s? My understanding is that it would enable better use of x86 resources (irrelevant for those of us with i5+ CPUs) and allow the game developer to code at a lower level for given hardware (bad for AMD as the market trailer).

        • wimpishsundew
        • 4 years ago

        DX12 uses up to 6 cores. I have no idea what you’re talking about. Core i5 are 4 cores.

        If the CPU can feed the GPU faster and reduce latency, why wouldn’t GPUs benefit from it? We’re hitting draw call limits with some games already.

        EDIT: Forget it. Glancing through his short post history already tells me why he’s saying all these things.

          • anotherengineer
          • 4 years ago

          Didn’t AMD have an affordable 6 core like over 5 years ago?
          “AMD Thuban core was launched in April 2010 in Phenom II X6 1055T and 1090T microprocessors”
          [url<]http://www.cpu-world.com/Cores/Thuban.html[/url<]

          Hey MS, 2010 called, says it had 6 cores back then. 2016 is calling, asking if DX will support 8+ cores??

        • Ninjitsu
        • 4 years ago

        Well, yeah, potentially.

        • BobbinThreadbare
        • 4 years ago

        Yes, you get more draw calls dispatched faster, something the AMD drivers struggle with more than Nvidia. It was ~10-15% improvement when TR tested it.

        Edit: Here's the link: [url<]https://techreport.com/review/25995/first-look-amd-mantle-cpu-performance-in-battlefield-4/2[/url<]

        The CPU is capable of at least 145 FPS as seen with the GeForce, but switching to Mantle improves the 290X from 118 to 130.

      • DPete27
      • 4 years ago

      You assume DX12 won’t also help Nvidia?

        • BobbinThreadbare
        • 4 years ago

        I think it will help Nvidia less because their drivers are currently more efficient.

      • Klimax
      • 4 years ago

      "Easily gained, easily lost" is the (translated) saying here. What DX12 gives you, the next GPU architecture will take away. Mantle already taught us that (for those who paid attention).

      No low-level API is immune to this, nor can it fix it. At best you will get a temporary gain which won't last at all, and at worst even buggier games than we are getting.

      Simply put, low-level APIs were never a good idea for PCs, and so far there is nothing to fix that.

    • chuckula
    • 4 years ago

    Reiterated from the review thread:

    That is helpful in clearing the air. However, there’s a caveat: Nothing is ever going to be good enough until you copy-n-paste AMD’s own pre-launch press kit and then spend 25,000 words on conspiracy theories about how AMD’s own press kit is a huge, evil anti-AMD lie and that their products are really better than what AMD says they are.

      • Welch
      • 4 years ago

      I'm sure glad Scott spent the time to read and acknowledge TR members' comments that he clearly thought were warranted, just so that you could post a random smartass comment that brings nothing whatsoever useful to the conversation.

      Come on man, stop trying to bring down every single comment section with useless comments. It’s funny at times but too much of the same thing gets sickening.

        • maxxcool
        • 4 years ago

          … hmmm I think you missed the 'casm

          • chuckula
          • 4 years ago

          I wasn’t even being that sarcastic, I was just pointing out (maybe not that well) an issue with human nature: When you are dealing with a fanatic, attempts at compromise won’t cut it. A fanatic isn’t interested in being reasonable and finding middle ground, he just wants to push you off a cliff, so don’t expect attempts at compromise to work very well.

            • maxxcool
            • 4 years ago

            😉

            • Welch
            • 4 years ago

            Fair enough… and yeah, the fanatics are always just that, fanatical. You'll never convince them of a damn thing. It just felt like you were saying that the people who pointed out that Project Cars is terribly optimized, and still not fixed, which makes it a horrible addition to the metrics… were whining fanatics. I myself am NOT someone who pointed it out, but I feel it was worth mentioning. Enough that Scott figured he would update the review to reflect the new information. I mean, that is what is amazing about TechReport… the staff really gives a damn about unbiased data and interpreting that data accurately. Members generally care just as much, making it a place where we all learn something. If I didn't want that sort of intellectual atmosphere there are millions of places on the internet that I could escape to and allow my brain to rot a little bit.

            I didn’t mean any disrespect Chuckla, I’ve seen you make plenty of good post in comments and the forums. The comment just rubbed me the wrong way, perhaps I misunderstood it like you said.

            • TopHatKiller
            • 4 years ago

            Sir, I don't believe you were being sarcastic at all. I believe your olfactory sense has malfunctioned, and you cannot smell the offal you're serving:
            I have not noted pro-AMD fanatics on TR. [I might be wrong…] Your exception appears to be your fanatical subservience to Nv/Intel… hence you interpret any post that doesn't praise them as 'AMD fanaticism.'
            Not being here for too long, I admit, my judgement may be inaccurate. What, however, is not inaccurate is the 'scaredy-cat' syndrome that appears prevalent here when anyone attacks or disagrees with your posts. That's a lot more worrying.
            [PS Earlier I called you a 'moron': I retract that, and apologise. You're idiotical, not actually moronic. Sorry, Chuckles…]

            • DoomGuy64
            • 4 years ago

            Oh, the irony. Good self-description, btw.

            • chuckula
            • 4 years ago

            Says the guy who obviously never saw my forum posts.
            P.S. –> You seem AWFUL new and you are an AMD fanboy. So uh, who did you used to be before you got the banhammer and registered with a different account?

          • Welch
          • 4 years ago

          It's Chuckla, it's ALWAYS sarcasm, that was my point. Believe me, I love me some sarcasm sandwiches, but not at the expense of real conversation all of the time.

          I genuinely enjoy reading chuckla's random smartass comments when they are witty, but this wasn't such a comment /shrug, sorry.

      • Westbrook348
      • 4 years ago

      [url<]https://techreport.com/r.x/fury-x-architecture/perf-results.jpg[/url<]

      Looking back at this chart after reading Scott's extensive review and analysis (and now blog update) makes me rage at AMD. Nvidia got criticism for releasing inaccurate specs for the 970, and it was deserved, but that was about the GPU's design more than it was about performance. Performance is what matters, and AMD pretended their Fury X was better than the 980Ti in literally every game mentioned. Where's the FURY at AMD over this BS chart?

        • NoOne ButMe
        • 4 years ago

        Technically they didn't lie. The card, when used with settings that put all the stress onto the shaders, is much faster. Same for the Fury X.

        Nvidia lied about the amount of bandwidth that can be used at once, as well as a few other specs of the card.

        You can upgrade drivers to fully utilize a card over time; it is much harder to make use of hardware that is missing, by any means.

        If you read the small print from AMD, you can see how they set up the tests and the problems with it. What small print did Nvidia include with their 970 launch talking about it missing stuff they claimed was there?

        NONE OF IT. Either Nvidia is incompetent or they deliberately chose to lie at a high corporate level. Or do you think that higher-ups at Nvidia don't know the specs of their cards and don't look at AIBs' boxes/etc.? I don't think Nvidia is incompetent. I feel their upper management does everything possible to look the best/make their competitors look worse, even if it hurts their own side. GameWorks sadly is an example of that in many cases (hurting their side also).

          • cynan
          • 4 years ago

          Neither of them lied. The GTX 970 really does have 4GB of VRAM. The Fury, under specific settings, can be faster than even a 980Ti in many games. But both of them certainly didn’t go out of their way to provide the consumer with full disclosure. Par for the course when it comes to marketing.

            • auxy
            • 4 years ago

            NVIDIA did lie though. It does not have a 256-bit memory interface as claimed.

            It also has less memory bandwidth than claimed. Neither of these things has been updated on [url=http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-970/specifications<]NVIDIA's website.[/url<]

            • f0d
            • 4 years ago

            it has 224bit+32bit which pretty much is 256bit

            about the same as saying your dual gpu 512bit card has 1024bit bus
            [url<]http://www.newegg.com/Product/Product.aspx?Item=N82E16814202108[/url<]

            just because its separate doesnt mean its not part of the memory system

            • auxy
            • 4 years ago

            I don’t think anyone would agree that the 295X2 has a “1024-bit bus” aside from marketing idiots.

            And “224bit+32bit” isn’t “pretty much 256-bit” in this case because if you actually try to USE that little 32-bit bus your performance tanks miserably.

            • Waco
            • 4 years ago

            Still better than overflowing onto the PCIe bus. This argument is pointless.

            • auxy
            • 4 years ago

            Still not as good as a 256-bit bus, fan-boy. (´▽`)

            • BobbinThreadbare
            • 4 years ago

            So as long as it’s better than hitting the PCIe bus it’s ok?

            Yeah we’re calling this 128bit memory interface 1024bit. It’s still better than hitting the PCIe bus. Stop being so entitled.

            • Waco
            • 4 years ago

            No, that makes no sense.

            There’s a 256 bit bus split between two partitions. Both are faster than the PCIe bus. How, exactly, am I “being so entitled”?

            • Krogoth
            • 4 years ago

            Not true

            The GTX 970 has its memory pool partitioned into 3.5GiB and 0.5GiB portions due to how the silicon was binned. The drivers force the 970 to use a maximum of 3.5GiB of VRAM unless you disable that via registry tweaks. Usage of the last 0.5GiB pool causes micro-stuttering issues, and this has been demonstrated a number of times in real-world applications.

            Nvidia's PR dropped the ball on how the 970 was represented at launch, unlike the GK106-based stuff, where its asymmetric memory setup was known since day 1. The engineers knew about this problem in-house, and Nvidia had to later rectify it with a post-launch press release.

            Despite this known caveat, the 970 is still an excellent deal for what it is.

            • Nevermind
            • 4 years ago

            “Despite this known caveat, the 970 is still an excellent deal for what it is.”

            Except it’s not really, if you can get a 290x for the same money or less, and you can.

            • f0d
            • 4 years ago

            that depends on the country

            in australia the 290X was much more expensive than the 970 and you cant even buy them anymore, we only have the 390X

            cheapest 970 $439 (but one with a non stock cooler is $479)
            [url<]http://www.pccasegear.com/index.php?main_page=index&cPath=193_1692&vk_sort=1[/url<]

            cheapest 390 $499
            [url<]http://www.pccasegear.com/index.php?main_page=index&cPath=193_1769&vk_sort=1[/url<]

            cheapest 390X $599
            [url<]http://www.pccasegear.com/index.php?main_page=index&cPath=193_1768&vk_sort=1[/url<]

            i know most people here only care about what happens in the USA but the prices of graphics cards are different in different countries

            for some reason AMD doesnt want to give us the nice prices of radeons like they did in the USA

      • f0d
      • 4 years ago

      [quote<]copy-n-paste AMD's own pre-launch press kit[/quote<]

      im guessing this is referring to the settings AMD "suggested" reviewers use to test the fury card?

      those settings were some of the most stupid settings i have ever seen - i hope no reviewers actually used them
