A driver update to reduce Radeon frame times

Hmm. Where to begin? Probably early last month, when we discovered some performance problems with the Radeon HD 7950 in recent games using our newfangled testing methods, which focus on frame rendering times rather than simple FPS averages. Eventually, AMD acknowledged the problem and pledged to address the issues of high-latency frames in a series of driver updates.

Happily, we didn’t have to wait long for the first update in that series. Within a day or two, AMD provided us with a Catalyst 13.2 beta driver that includes fixes intended to improve frame rendering times in several of the DirectX 9-based games in our test suite: Skyrim, Borderlands 2, and Guild Wars 2. Our report on this driver was delayed by a couple of factors, including our attendance at CES and an apparent incompatibility between this beta driver and our Sapphire 7950 card.

We still haven’t figured out the problem with the Sapphire card, but we ultimately switched to a different 7950, the MSI R7950 OC, which allowed us to test the new driver. The results on the following pages come from the MSI card. As you’ll see, its performance under the Catalyst 12.11 beta drivers is very similar to what we saw from the Sapphire, with the same latency profile and the same intermittent spikes caused by high-latency frames.


MSI’s take on the Radeon HD 7950

We have several interesting developments to discuss, including the nature of the changes AMD has made to the Cat 13.2 beta driver, but first, let’s take a look at our test results, which should help illustrate some of our points.

Since it’s been a while and one of the cards has changed, we’ll do a quick recap of our test configs before moving on.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Our test systems were configured like so:

Processor         Core i7-3820
Motherboard       Gigabyte X79-UD3
Chipset           Intel X79 Express
Memory size       16GB (4 DIMMs)
Memory type       Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings    9-9-11-24 1T
Chipset drivers   INF update 9.3.0.1021, Rapid Storage Technology Enterprise 3.5.0.1101
Audio             Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive        Corsair F240 240GB SATA
Power supply      Corsair AX850
OS                Windows 8

Graphics card           Driver revision         GPU base core    GPU boost      Memory clock   Memory size
                                                clock (MHz)      clock (MHz)    (MHz)          (MB)
Zotac GTX 660 Ti AMP!   GeForce 310.54 beta     1033             1111           1652           2048
MSI R7950 OC            Catalyst 12.11 beta 8   880              --             1250           3072
MSI R7950 OC            Catalyst 13.2 beta      880              --             1250           3072

Thanks to Intel, Corsair, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.


We used the Fraps utility to record frame rates while playing either a 60- or 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
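For readers who want to reproduce this sort of analysis at home, here’s a rough sketch, in Python, of how a Fraps frametimes log can be turned into per-frame rendering times. It assumes the usual Fraps CSV layout of a header row followed by cumulative timestamps in milliseconds, and the file name is purely hypothetical.

```python
# A minimal sketch of turning a Fraps "frametimes" log into per-frame
# rendering times. Assumes the usual Fraps CSV layout: a header row, then
# one line per frame with a frame number and a cumulative timestamp in ms.
import csv

def frame_times_ms(path):
    """Return per-frame rendering times in milliseconds."""
    timestamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                       # skip the "Frame, Time (ms)" header
        for row in reader:
            timestamps.append(float(row[1]))
    # A frame's rendering time is the gap between successive timestamps.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

if __name__ == "__main__":
    times = frame_times_ms("skyrim_frametimes.csv")   # hypothetical file name
    print(len(times), "frames, average", round(sum(times) / len(times), 1), "ms")
```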

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The Elder Scrolls V: Skyrim

We’ll start with Skyrim since the outdoor area we tested proved to be particularly difficult for the Radeon, with a number of hiccups disrupting the flow of the animation. This test scenario was the subject of our slow-motion video comparison illustrating the problem. Below is a video showing the route we took during each test run.


Above are plots of the frame rendering times for each card throughout one of our five test runs. You can click on the buttons to switch between the Radeon HD 7950 with the two driver revisions and the GeForce GTX 660 Ti. (And yes, I have changed the look of our plots a bit. Some folks liked the idea of thicker lines, but I worry that they visually overstate the presence of latency spikes. Squint if you must, but I think this is a better way.)

You can see the difference between Cat 12.11 and 13.2 quite easily in these plots. With Cat 12.11, the Radeon’s frame times look more like a cloud than a line, and there are intermittent spikes to 50 milliseconds or more. Switch over to 13.2, and the line becomes much tighter, with less overall variance and only an occasional spike above 20 ms. The GTX 660 Ti’s line looks tighter still, but it also includes a handful of higher-latency frames.
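As an aside, for anyone who wants to draw similar frame-time plots from their own captures, here’s a rough matplotlib sketch in the same thin-line spirit. The labels and file names are hypothetical, and frame_times_ms() is the helper sketched earlier.

```python
# A rough sketch of a frame-time plot: frame number on the x-axis, frame
# rendering time in milliseconds on the y-axis, one thin line per config.
import matplotlib.pyplot as plt

def plot_frame_times(series, title, out_path="frame_times.png"):
    fig, ax = plt.subplots(figsize=(10, 3))
    for label, times_ms in series.items():
        # Thin lines keep occasional latency spikes from being overstated.
        ax.plot(range(len(times_ms)), times_ms, linewidth=0.8, label=label)
    ax.set_xlabel("Frame number")
    ax.set_ylabel("Frame time (ms)")
    ax.set_ylim(0, 60)          # spikes toward 50 ms and beyond stand out
    ax.set_title(title)
    ax.legend()
    fig.tight_layout()
    fig.savefig(out_path, dpi=150)

# Example usage (hypothetical capture files):
# plot_frame_times({"7950, Cat 12.11": frame_times_ms("7950_cat1211.csv"),
#                   "7950, Cat 13.2":  frame_times_ms("7950_cat132.csv")},
#                  "Skyrim frame times")
```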


We can zoom in on a small portion of the test run in order to get a closer look at those frame rendering times. You can see how the 7950’s frame times have grown more consistent—and, notably, the high-latency frames have been squelched—with the driver update.

The improvement here is easily perceptible while play-testing the two driver revs. The motion feels much smoother overall with Catalyst 13.2, and little things, like the plants swaying as you walk, become appreciably more fluid.

Interestingly enough, Cat 13.2’s improvements don’t move the FPS average even a single frame per second. Look at the frame time plots and you can see why: with both drivers, the Radeon HD 7950 produces about 4250 frames over the course of our 60-second test run. Thus, they both average out to the same number of frames produced per second. That fact may tell you all you need to know about the value of FPS averages.

For what it’s worth, the “minimum FPS” results that some benchmarks report aren’t much help, either, because they average frame times over one-second intervals, and that’s just too long a time window to capture important differences. In this test, for instance, the median FPS minimum from five runs with Cat 13.2 is 59 FPS. The same figure for Cat 12.11 is 58 FPS. Yet the slowdowns with Cat 12.11 are very real and perceptible.
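To illustrate why one-second averaging hides this sort of thing, here’s a small, made-up example: a single 80-ms hitch in a stream of 16-ms frames barely dents the worst one-second FPS figure, even though the hitch is plainly visible in the raw frame times.

```python
# Made-up illustration: a single 80 ms hitch in a run of ~16 ms frames.
frame_times_ms = [16.0] * 55 + [80.0] + [16.0] * 120

# Bucket the frames into one-second windows and compute FPS per window,
# the way a "minimum FPS" counter effectively does.
windows = []
count, elapsed = 0, 0.0
for t in frame_times_ms:
    count += 1
    elapsed += t
    if elapsed >= 1000.0:
        windows.append(count / (elapsed / 1000.0))
        count, elapsed = 0, 0.0

print("worst one-second FPS:", round(min(windows)))        # about 59 FPS
print("worst single frame:", max(frame_times_ms), "ms")    # the 80 ms hitch
```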

Happily, we can capture the impact of the improvements in Cat 13.2 with our latency-focused metrics, including the 99th percentile frame time. This number is just the cutoff point below which 99% of all frames were rendered. The lower the number, the better the overall frame rendering picture for the solution being tested. With the new driver, the Radeon HD 7950 comes very close to matching the GeForce GTX 660 Ti.
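For the curious, here’s a minimal sketch of how such a 99th percentile figure can be computed. The two runs below are made up; they have similar average frame times but very different tails.

```python
# A minimal sketch of the 99th-percentile frame time: the cutoff below
# which 99% of the frames in a run were rendered (times in milliseconds).
import statistics

def percentile_99(frame_times_ms):
    # statistics.quantiles with n=100 yields 99 cut points; the last one
    # is the 99th percentile.
    return statistics.quantiles(frame_times_ms, n=100)[-1]

# Two made-up runs with similar average frame times but different tails:
steady = [17.0] * 99 + [20.0]                  # tight, consistent frame times
spiky  = [16.0] * 97 + [50.0, 60.0, 70.0]      # similar average, ugly tail
print(round(percentile_99(steady), 1))         # roughly 20 ms
print(round(percentile_99(spiky), 1))          # roughly 70 ms
```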

A look at the “tail” of the overall latency curve demonstrates the improvement with Catalyst 13.2 even more clearly. The new driver is quicker for the final 25% of the frames rendered, and it’s substantially better for the last 5-7% of frames that prove most time-consuming to render. As a result, the GeForce’s advantage in this test has essentially vanished.

Our final latency-sensitive metric tracks frames that take an especially long time to produce. The goal is to get a sense of “badness,” of the severity of any slowdowns encountered during the test session. We add up any time spent rendering beyond a threshold of 50 milliseconds. (Frame times of 50 ms are equivalent to a frame rate of 20 FPS, which is awfully slow.) For instance, if a frame takes 70 milliseconds to render, it will contribute 20 milliseconds to our “badness” index. The higher this index goes, the more time we’ve spent waiting on especially high-latency frames, and the less fluid the game animation has been.
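A quick sketch of that accounting, with made-up numbers: only the portion of each frame time beyond the 50-ms threshold counts toward the index.

```python
# A minimal sketch of the "time spent beyond 50 ms" index: only the portion
# of each frame time past the threshold is counted.
def time_beyond_threshold(frame_times_ms, threshold_ms=50.0):
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# A 70 ms frame contributes 20 ms; frames under the threshold contribute nothing.
print(time_beyond_threshold([16.0, 70.0, 48.0, 55.0]))   # 20 + 0 + 5 = 25.0 ms
```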

With Cat 13.2, the Radeon HD 7950 delivers fluid animation throughout the course of our test scenario, with only a tiny 10-millisecond blip spent beyond our threshold. That outcome tracks well with our subjective sense that Skyrim smoothness has increased substantially.

Borderlands 2

As you’ll note, this session involves lots of fighting, so it’s not exactly repeatable from one test run to the next. However, we took the same path and fought the same basic contingent of foes each time through. The results were pretty consistent from one run to the next, and the final numbers we’ve reported are the medians from five test runs.

We used the game’s highest image quality settings at the 27″ Korean monitor resolution of 2560×1440.


Again, the improvement from Cat 12.11 to Cat 13.2 is easily discernible in the raw frame time plots. Those quasi-regular frame time spikes with Cat 12.11 don’t mean Borderlands 2 is unplayable at these settings on the Radeon HD 7950. The spikes are generally no larger than 40 ms, so they’re not a huge hindrance to fluidity. Subjectively, however, those spikes contribute an unsettled feeling to the gameplay, a certain strangeness to the movement in this game. The new driver eliminates that pattern of quasi-regular spikes.

Although the change barely registers on the FPS average, Cat 13.2 fares better in our 99th percentile frame time metric.

The 7950 hasn’t quite caught the GTX 660 Ti overall, but it has improved greatly with the new beta driver, particularly in the last 5-7% of frames rendered.

Since the 7950 didn’t spend much time beyond our 50-ms “badness” threshold before, there’s not much improvement in this number with Catalyst 13.2.

Guild Wars 2
Guild Wars 2 has a snazzy new game engine that will stress even the latest graphics cards, and I think we can get reasonably reliable results if we’re careful. My test run consisted of a simple stroll through the countryside, which is fairly repeatable. I didn’t join any parties, fight any bandits, or try anything elaborate like that, as you can see in the video below.


The overall latency picture improves nicely from Cat 12.11 to Cat 13.2 once more. With the new driver, the 7950’s performance becomes incredibly similar to the GTX 660 Ti’s.

Amazingly, the FPS average for Catalyst 13.2 is lower than for 12.11, even though the newer driver’s latency profile has obviously improved. That’s an unusual outcome; we’d generally expect latency-focused improvements to yield slight gains in FPS averages, as well. Given the choice, though, we’d take the more consistent frame times of the new driver over the higher FPS average of the older one.

Our 99th percentile frame time result puts things right: the 13.2 beta is clearly a better performer than 12.11, and the Radeon and the GeForce have become very evenly matched thanks to the updated driver.

The rest of our latency-oriented metrics agree. With Cat 13.2, the 7950 essentially ties the GeForce and provides smoother, more consistent frame rendering times.

What’s changed in Catalyst 13.2 beta—and what hasn’t

AMD appears to be making good on its promise to address frame latency issues via driver updates. Andrew Dodd, AMD’s Catalyst guru, tells us a variant of the Catalyst 13.2 beta driver we tested will be released via AMD’s website next week. That should be a nice first step toward shoring up Radeon performance compared to the competition.

We asked Mr. Dodd whether the changes included in this beta driver would impact performance generally in DirectX 9 applications or only in the three DX9-based games we tested. We also inquired about whether the previously mentioned buffer size tweak for Borderlands 2 was included. Here’s his answer:

Basically the fix was different per application (for the DX9 applications) – each fix involved tweaking various driver parameters. In the case of Borderlands 2, yes it did involve tweaking the buffer size.

So what we have in Cat 13.2 is a series of targeted tweaks that appear to work quite well for the games in question. However, Dodd says additional improvements are coming down the pike, including a rewrite of the software memory manager for GPUs based on the Graphics Core Next architecture that should bring a more general improvement:

The driver does not yet contain the new video memory manager. Our intention is release a new driver in a few weeks, which does include the new Video memory manager, which will help resolve latency issues for DX11/DX10 applications.

We look forward to the updates and to the improved gaming experience that Radeon users should be able to enjoy as a result.

Frame latencies: a new frontier

One of the tougher questions we had for AMD, in the wake of our discovery of these latency issues and their subsequent move to fix them, was simply this: how can we know that we won’t see similar problems in the future? Dodd addressed this question directly in our correspondence, noting that AMD will be changing its testing procedures in the future in order to catch frame latency problems and prevent them:

Up until this point we had mostly assumed that there were occasional flickers in frame rate, but we had thought these were related to the fact that modern games mostly have streaming architectures and limitations of scheduling in the OS. We definitely will start regular measurements to ensure we track improvements, and stop regressions. Long term, we want to work with game developers and Microsoft to ensure these kinds of latency issues don’t keep cropping up.

That’s exactly the sort of answer we want to hear, and we’ll be watching and testing future Radeon drivers and GPUs in order to see how well AMD executes on that plan.

That answer also blows up one of the assumptions that we’ve held since we published our first Inside the second article. We’d assumed that, although we were among the first to conduct a frame-by-frame analysis of game performance in public, such analyses had been happening behind the scenes at the big GPU makers as a matter of course for a long time. Our interactions with AMD, Nvidia, and others in the industry have since changed our view.

In fact, at CES last week, I was discussing the latest developments with Nvidia’s Tom Petersen, and he told me there was one question I failed to ask in my investigation of Radeon-versus-GeForce frame latencies: why did Nvidia do so well? Turns out, he said, Nvidia has started engineering its drivers with an eye toward smooth and consistent frame rendering and delivery. I believe that effort began at some point during the Fermi generation of GPUs, so roughly two years ago, max. Clearly, that focus paid dividends in our comparison last month of the GTX 660 Ti and the Radeon HD 7950.

From what I’ve gathered, in the past, developers have used nifty tools like the one we used to dissect Crysis 2 tessellation weirdness. These tools can show you the time required by each stage of the process of rendering a frame. Reducing those time slices has often been the focus of optimization efforts. Meanwhile, the performance labs at GPU makers and elsewhere have largely focused on FPS-based benchmarks to provide a sense of overall comparative performance. It seems efforts to bridge the gap between these two domains, to look at the overall frame latency picture and to ensure consistency there, have only recently ramped up.

Of course, AMD’s participation is crucial to the success of such efforts. We look forward to seeing what sort of benefits the next round of Catalyst driver updates can provide—and to an ongoing conversation about how best to handle the complex collection of issues this new focus has unearthed.

You can measure my response times on Twitter.

Comments closed
    • l33t-g4m3r
    • 7 years ago

    Remember what I said about PhysX? Well, here it is:
    https://techreport.com/news/24291/particle-effects-swirl-in-hawken-physx-demo

    lol. TR isn't nvidia biased. Right. How did I predict this? Easy. This is just how nvidia's marketing works. Rip on AMD's drivers / promote TWIMTBP, then peddle PhysX when you have nothing else. This is exactly what Nvidia did back when AMD had dx11, and they were still stuck on dx10. They peddled the hell out of Physx because they could force it into games faster than natural dx11 adoption. Completely predictable shenanigans. AMD fixed the driver, point now null, peddle Physx. Predictable.

    Whatever happened to the neutral integrity of journalism, that you rig testing scenarios to promote one brand over another? Whatever. Do what you gotta do to feed the family, I guess. Screw the ethics of recommending a crippled card over an uncrippled card because of a driver bug. Facts don't fit the theory, change the facts, argue semantics.

    Do I think TR was right to expose the bug? Yes. Do I think it was done poorly? Yes. Combine that with questionable testing procedures that have been going on since the 460, and there appears to be bias, whether or not bias exists. I think TR need to make their reviews more neutral, consistent, and balanced. Why did the 7870 review not include a 7950? Why are games like Batman and Metro not being tested consistently across the board with dx11? Why are overclocked cards being used instead of stock? Perhaps if little things like these were done better, I wouldn't be complaining about this stuff.

      • farmpuma
      • 7 years ago

      Dude! Perhaps if your tinfoil hat wasn’t on too tight you might not be out in the weeds like this!

      edit: It had to be said. It’s happened to me more than a few times.

      • havok
      • 7 years ago

      Do u think we care what u say?

      No.

    • ultima_trev
    • 7 years ago

    Should have thrown an Intel Graphics HD 4000 in the mix so we can which of the big three PC graphics chip makers truly has the best frame times. 😛

    • alienstorexxx
    • 7 years ago

    hi everyone, i have a 6770 and just downloaded 13.2beta unofficial from guru3d.

    this far, the only game (also the only dx11) i had problems is bf3, it brokes frame latency when vsync is enabled (triple buffering is enabled in case you ask)

    take a look
    vsync enabled
    http://imageshack.us/a/img542/2690/framelatencycanalsbf3vs.png

    vsync disabled
    http://imageshack.us/a/img692/3874/framelatencycanalsbf3no.png

    closer look on vsync enabled
    http://imageshack.us/a/img853/2690/framelatencycanalsbf3vs.png

      • derFunkenstein
      • 7 years ago

      That’s more to do with vsync than a real issue. It’s fluctuating between two distinct times because it’s not fast enough to feed every frame at 16.67ms. With vsync off, your frame times hover around 20-25ms, whcih is 40-50fps. There’s no problem here, other than for vsync enabled you’ll want to turn down the details more.

        • alienstorexxx
        • 7 years ago

        you’re wrong my friend and thank god i’ve managed to solve it because it was driving me crazy.

        if you played bf3 you may know it has a command named “forcerenderaheadlimit”. some time ago i changed it to 3 as i noticed more fluid gameplay, everything changed with this driver.
        check this out:

        force render ahead limit 0 (no limit)
        http://imageshack.us/a/img690/4338/canalsrenderahead0.png

        force render ahead limit 1
        http://imageshack.us/a/img844/4114/canalsrenderahead1.png

        force render ahead limit 3
        http://imageshack.us/a/img202/3673/canalsrenderahead3.png

        force render ahead limit 5
        http://imageshack.us/a/img217/5238/canalsrenderahead5.png

        my cpu is a core i5 2320 in case you ask.

        off-t: too bad tr forum is so dead, i like this community

          • derFunkenstein
          • 7 years ago

          Well it’s certainly weird. I don’t play BF3, but the first two graphs don’t look like vsync is enabled at all.

            • alienstorexxx
            • 7 years ago

            it’s enabled. i would notice. also all runs are made just modifying that commandline value in-game. it has nothing to do with vsync.
            vsync triple buffering has never been a reason for stuttering/microstuttering. only problem was in times of double buffering, but that was like from 60fps to 30fps and so.

            i can make another run without vsync if you want. it already has taken me some time… and i think you should belive me as i will won nothing just talking nonsense.
            Maybe some of you have demonized vsync because of old problems, but nowdays, the performance gap is unnoticeable.

            edit: i have to say that images from first post weren’t from same “run”. but for the second one, i made some kind of a custom run to test it more accurately.

    • Krogoth
    • 7 years ago

    This entire thing was blown out of proportion. It is a silly driver issue that was exposed under controlled testing conditions that would otherwise go unnoticed under normal gameplay. I don’t see why the fanboys need to be outrage over this. At least AMD has made the effort to address it.

    Stupid trolls and fanboys turned a purely academic exercise into a massive flamewar over silly pieces of silicon.

      • derFunkenstein
      • 7 years ago

      What have you done with the real Krogoth?

      • jihadjoe
      • 7 years ago

      “Unnoticed” for some, may be “glaring” for others.

        • Krogoth
        • 7 years ago

        Under normal conditions, you will not noticed any difference at all. To see any difference require the use of specialized tools and tests that are meant to draw out the said difference. Otherwise, we would seen this problem crop up with 7950 users complaining about it in various forums and websites.

        The only reason this purely academic exercise done out of curiosity turned into a massive flamewar is because of silly fanboys hating/routing for their favorite team.

          • Firestarter
          • 7 years ago

          [quote]Under normal conditions, you will not noticed any difference at all.[/quote]

          I don't agree, especially when you're just moving inside the game world at a constant speed with high framerates like in Skyrim when you're just walking around, high latency frames can be very noticable. It can make racing games nigh unplayable. The problem so far has always been that not [i]everyone[/i] notices it, and that there was no metric that effectively shows the problem. People might feel that the game is not running smoothly, but then they look at the (running average) FPS display and they think that they must be seeing things because according to the computer everything is ok.

          With Skyrim specifically there's the question of VSync as well. The only ones likely to run into this problem are people that have a way overpowered GPU for this game, and the only way the game runs properly is if the framerate is kept to 60fps or less. The mechanism with which the Skyrim engine caps that framerate could very well have been hiding this problem for most.

            • jihadjoe
            • 7 years ago

            +1, my point was that some people can, and do notice those spiky frames.

            I recommended a 7770 to a friend because it was the best card in his budget range, but then he was complaining about spikyness in Skyrim. I went over to look at his rig, and to me everything looked completely smooth. So there we were, arguing about “obvious” and each of us having a completely different perception about what was going on.

            I’m sure he will be quite happy with this driver update, and I will now be on the receiving end of much snark.

    • Haqqelbaqqer
    • 7 years ago

    Kudos to Mr.Dodd for taking the matter seriously and in such short time. 😀
    Yet it is kind of amusing to see TR taking AMD to school on how to test their own cards…
    A great read as usual, keep it up Scott!

    • DeadOfKnight
    • 7 years ago

    If there was any doubt about how good this testing method really is, that about kills it. It may not be perfect, but it lies on the same exact curve as smoothness.

    • l33t-g4m3r
    • 7 years ago

    Why am I not surprised? Thanks for writing that “article” right before the holiday though. Aside from the questionable timing, games used, wording, and trolls insisting it was a massive problem despite not actually owning a 79 card or caring about it before the article was written, let’s look at some other interesting tidbits:
    [quote]Those quasi-regular frame time spikes with Cat 12.11 don't mean Borderlands 2 is unplayable at these settings on the Radeon HD 7950. The spikes are generally no larger than 40 ms, so they're not a huge hindrance to fluidity.[/quote]

    lol, and yet it was the end of the world before.

    [quote]Nvidia has started engineering its drivers with an eye toward smooth and consistent frame rendering and delivery. I believe that effort began at some point during the Fermi generation of GPUs, so roughly two years ago, max.[/quote]

    Early Fermi cards had some serious stuttering issues in games, to the point of making unoptimized games unplayable. I know from experience, games like Darksiders would stutter uncontrollably bad, especially when you were walking on floating floor tiles, like each tile was being loaded into memory as they rose, and yes there has been massive improvements which were sorely needed, but Fermi cards still have occasional issues. For example, I'm sticking with 306.97, because the latest driver is too unstable with the games I'm playing. I've been replaying Rage since the DLC to get all the achievements, and I reverted since the latest driver randomly crashes the game. I suppose I should be thankful that issues involving Nvidia got a vague two sentence mention though.

    Drivers have problems. Period. I have never owned a card that didn't at some point have a driver issue, but it was always fixed down the road. I don't make purchase decisions based on the state of cherry picked games, but on what a card is actually capable of doing. Frame times are an issue, but not when they're due to a easily fixable bug. This issue has been trumped up beyond reason or rationality. The only positive here is that all the drummed up controversy forced AMD to fast-track the bug fix which had already been in the works, and more actively test for frame latency. Tessellation rigging is done, and frame latency is done. I suppose the only trick left is to go back to promoting PhysX, so I'm waiting to see when that article comes out.

      • Damage
      • 7 years ago

      Wow, wait. So you’re the one who’s vindicated?

        • l33t-g4m3r
        • 7 years ago

        Actually, I am. Why wouldn’t I be? I rationally and objectively looked at what was going on, and voila. AMD even said they had a working fix, so there really was no call for all the demonization. Totally blown out of proportion.

        I guess now it’s only a matter of time before nvidia is using their massive influence to make every game overuse FP16.

          • superjawes
          • 7 years ago

          [quote]Actually, I am. Why wouldn't I be? I rationally and [b]subjectively[/b] looked at what was going on,[/quote]

          I'd say you meant to say "objectively," but that Freudian slip speaks volumes.

            • l33t-g4m3r
            • 7 years ago

            Either or. I won’t deny that I have positive bias of my own opinions. Who doesn’t?

            • superjawes
            • 7 years ago

            But in the end, that’s all you have. Opinions. Not really facts or evidence, just claims and an attitude against things you don’t agree with.

      • UltimateImperative
      • 7 years ago

      How lucky you are, O l33t one, to be able to distinguish driver problems from hardware problems without access to the source code.

      You’re like all these people who’ve never seen the source to a modern game complaining about the “lack of optimization”, Crysis tessellation weirdness notwithstanding.

        • l33t-g4m3r
        • 7 years ago

        AMD SAID it was a driver problem, and it would be fixed. You’re a moron.

          • theonespork
          • 7 years ago

          Ridiculous. Stop, you are embarrassing yourself…you wanna stroke yourself in private, b/c in the real world people laugh at you when you get all self-lovey. Ick.

          So in the real world, this web site published an article, that existed as part of an ongoing series of articles, that were reflective of a long and growing database of records that revealed both a new area of game analysis and as possible sore spot for some gamers in real world conditions.

          Conspiracy theorists, riff raff (after a jump to the left), and regular ole fanboy nutties (you-ish? or am I profiling? Meh, it matters not, we are not familiars) come crawling out of your lairs of fortitude to decry honest journalistic integrity that defies your ability to process that something you decided was awesome might not be as awesome (not un-awesome, just not as awesome, but that distinction gets lost in the clutter of your general crazy malaise) as a farce.

          In doing so, you defame said honest journalism, make yourself look silly, and incite verbal bloodshed to accomplish little more than muddying the waters of reality so that you can declare vindication in the face of obvious defeat and then get in a tither when you are called on it.

          Let it go. What has occurred here, what TR did for all of us, perhaps on accident, certainly not in attempt to defame your beloved video card of choice, was to improve or lay a foundation for the improvement of all our gaming experiences. This is, simply, a good thing. Deal with it.

          Now, get all angsty, right your snarky-ish reply and get upset when I ignore it b/c I have no desire to get in a flame war with a twit.

            • Shobai
            • 7 years ago

            [quote]"your beloved video card of choice"[/quote]

            I may be misreading his post, but I think he's saying he runs a Fermi card

            • l33t-g4m3r
            • 7 years ago

            Exactly. Sporky doesn’t know WTF he’s talking about. I haven’t been a fan of AMD’s cards since dx9. The older dx10 cards were crap, aside from the 4870 and it’s dx10.1 fiasco, then the 5x and 6x series had horrible tessellation performance. I haven’t even been much of a fan of the 7x series because of the lousy drivers, although I do know they are more powerful than the 66x series cards in hardware. The only thing I’ve done here is voice my dissatisfaction with the cherry picked benchmarking and the over-hyping of a now fixed bug.

            Fact: AMD admitted it was a bug, and that they had a working fix. I cannot stress enough that they publicly stated that they had a working fix and were fast-tracking it to release. It wasn’t high priority because AMD wasn’t testing frametimes, but now they are.

            TR irresponsibly demonized the 7950 right before the holidays, and quite possibly threw nvidia a bunch of sales based off what is now a non-issue. Nvidia has admittedly been working on frametimes ever since Fermi (and for good reason), and I’m sure they were fully aware of AMD’s issues. Knowing how nvidia operates, I wouldn’t doubt they pressured TR to write an expose before the holidays to throw sales their way with TR getting some commission. I’m not saying that’s what actually happened, but the circumstantial evidence is pretty glaring. TR did not offer any perspective in their article, and it was extremely one sided. Perhaps because of trinity, I dunno. That’s the only reason I’ve said anything about it, and I’d say the same stuff if it was the other way around for nvidia. Shady is Shady is Shady, and I’m just voicing my opinion on what I’ve seen go down here.

            TR’s been noticeably tilting toward Nvidia ever since the 460. The 460 was obsolete ON release, and they played the readers like chumps using overclocked models and benchmark settings that painted the 460 in a better light than it deserved. That’s when I really started questioning what I was reading here. Statistics can be easily gamed when you cherry pick your variables.

            I would also like to point out that most of my neigh-sayers appear to be hardcore nvidiots, and it’s quite obvious with all the vitriol they’re spewing like sporky here. Anyone with a level head who steps back and looks at what happened objectively should realize that this issue was massively over-hyped and irresponsibly demonized, and still is. The demonization is ongoing, even though the fix is out, and no 7950 owner is disagreeing with me. These people are trolls and shills to say the least. Total braindead scum.

            Also, the charges about me not appreciating the new testing method is a lie and a straw man. I absolutely appreciate any new method of testing that helps inform people of the performance of a card, and I’m not absolving AMD of their poor drivers either. I’m merely dissatisfied with the lack of objectivity, neutrality, and perspective with TR’s testing, not to mention questionable timing. I just feel bad for AMD getting blindsided by some new testing method, and the total lack of respect or acknowledgement when they publicly stated a fix was available, and here it is not even a month later. AMD’s the Rodney Dangerfield of graphics companies here.

            • Rza79
            • 7 years ago

            [s]I wish I could give you two thumbs up![/s]

            Ever since TR started using OC'd NV cards to compare to stock AMD cards, I've taken their point of view with a grain of salt. But it seems that even after so many years and complaints, their bias has not changed one bit.

            Edit: l33t-g4m3r's rant is becoming incoherent and inconsistent. Therefore I'm withdrawing my two thumbs up. I agree with most that this has been blown out of water by the readers themselves and not TR (even though I still feel that TR has a slight nV bias but I don't think there's one human on this planet that isn't biased).

            • JMccovery
            • 7 years ago

            I’m not sure what article you read, but the 7950 vs 660Ti article basically ended saying: “The 7950 is more powerful on paper, but with the frame issues, the 660Ti is the better perofrming card.” I can’t see where any ‘demonizing’ is coming from, unless you’re specifically looking for it.

            No, I’m not some ‘Nvidia fanboy’, as I have a 7750, but accusing TR of some ‘anti-AMD’ agenda is seriously pathetic. It is troubling than GCN has been out for this long, and AMD is just getting around to fixing this issue.

            On the ‘questionable timing’ issue you bring up: How many 7950s do you think AMD has sold? Here’s a better one: How well do you think the 7950 has sold since the 660Ti launched? Another one: Why do you feel that after trying to figure out a better way to benchmark GPUs, TR just released frame-latecy testing just to ‘damage’ AMD?

            • TREE
            • 7 years ago

            I can see your point of view and in someways I kind of agree with it. I don’t think others should be so quick to thumb down your comments because you’ve made some good points and it is after all only your opinion.

            While I agree with a lot of what you say, especially the part about the story being completely overblown, as both AMD and Nvidia have had these problems multiple times over various GPU architectures, I disagree with the notion that TR is being secretly paid or swayed by Nvidia. My opinion on the matter is that TechReport has just done what the media always does, be it in politics, technology or any other news source. They’ve found a story based on their results which makes it unique to them and they’ve made it somewhat controversial. This brings in readers, plenty of debate and boosts TechReport’s status as a news site. It’s what some would call “good journalism”. I guess I can say that because I have good friends who work as reporters and I see that sort of thing everyday.

            • Voldenuit
            • 7 years ago

            [quote]Fact: AMD admitted it was a bug, and that they had a working fix. I cannot stress enough that they publicly stated that they had a working fix and were fast-tracking it to release. It wasn't high priority because AMD wasn't testing frametimes, but now they are.[/quote]

            How is TR demonizing AMD by bringing to light a real issue that AMD admitted to? If anything AMD is the guilty party here, because they admitted that they had known about the issue for a while and done nothing because it was not a priority for them.

            Also, the fix is:
            a. Not available to the public yet (latest beta driver available is the 13.1)
            b. Does not fix the root cause, and only targets a handful of games

            TR has repeatedly responded to reader concerns with objective and informative responses. When ppl complained about win7 vs win8 drivers, TR retested with Win7. When ppl questioned how much impact the stuttering had in the real world, TR released high speed videos comparing the two cards.

            • raghu78
            • 7 years ago

            there are people who have had a GTX 670 and moved to a HD 7950 and said the HD 7950 when on a decent overclock (1150 mhz) beats a GTX 670(1250 mhz) in majority of games. a few months back hardocp tested a HD 7950 OC vs GTX 670 OC vs GTX 660 Ti OC. even on older drivers the HD 7950(1.2 ghz) beat a GTX 670(1.3 Ghz).

            [url<]http://www.hardocp.com/article/2012/08/23/galaxy_gtx_660_ti_gc_oc_vs_670_hd_7950/[/url<] on the newer 12.11 beta 11 or 13.1 whql drivers its a no contest. with 13.2 beta and the upcoming new memory manager driver things get ugly. the HD 7950 boost easily overclocks to 1150 mhz on average and can match or exceed GTX 670(1.3 Ghz) in majority of games. this review has made AMD pay attention to frame latency testing. so good. but only the most biased of people can even say that GTX 660 Ti competes with HD 7950 boost. as for the timing of the article and the noise that TR created people can draw their own conclusions. nvidia's 310.90 drivers are a stutterfest in BF3 according to many users on ocn. [url<]http://www.overclock.net/t/1346657/battlefield-3-unplayable-on-r310-drivers[/url<] [url<]http://www.overclock.net/t/1339698/the-eternal-question-which-is-better-hd-7870-vs-660-ti-benchmarks-inside[/url<] lets see if TR has the balls to check on nvidia's driver quality. here is hd 7870 tahiti le vs gtx 660 ti comparison [url<]http://www.hardocp.com/article/2013/01/01/powercolor_radeon_hd_7870_myst_edition_review/11[/url<] " Compared to the reference Radeon HD 7870 GHz edition, the PowerColor HD 7870 MYST represents a step up in performance from both the Radeon HD 7870 GHz edition and the GTX 660 Ti. At the same time, it is very near the performance level of a Radeon HD 7950, which start at $274.99 after a $20 mail in rebate. You can achieve almost comparable performance for a lower price, and definitely faster than a stock GTX 660 Ti for the same price with this new video card. If it was the intent for this card to be a "GTX 660 Ti Killer" then it has been successful at carrying that out. " also here is the opinion of hardocp GPU editor on GTX 660 Ti's value [url<]http://hardforum.com/showpost.php?p=1039522632&postcount=3[/url<] [url<]http://hardforum.com/showpost.php?p=1039524658&postcount=17[/url<] and before you guys go off saying hardocp is biased, the gtx 660 ti at launch was considered by them to be an excellent value and given a gold rating. [url<]http://www.hardocp.com/article/2012/08/16/galaxy_geforce_gtx_660_ti_gc_3gb_video_card_review/14[/url<]

            • MrJP
            • 7 years ago

            I’m a 7950 owner, and I disagree with you.

            TR didn’t “demonize” anything. They ran a set of tests, published the results, and drew sensible conclusions. It was the response of the crazy fanboys who demonized TR and blew this all up out of all proportion that prompted the immediate series of follow-up articles. If TR are biased, why even test the updated AMD driver?

            Furthermore, if you believe TR are biased, why are you still here reading the articles and ranting in the comments?

            • tfp
            • 7 years ago

            It’s a hobby, gotta do something with your free time right?

            • l33t-g4m3r
            • 7 years ago

            No, the trolls have taken up that task, obviously. The tests weren’t in any way comprehensive, which was admitted, and therefore the conclusion was skewed towards the point TR was trying to make. This was not an article designed to be neutral, it was an “expose” using cherry picked games to show a worst case scenario. TR doesn’t have to openly admit preference to have a bias, cherry picking the test data does that for them, and is much more effective.

            Why test the updated driver? Duh, not doing so would destroy the facade of being a neutral site. I don’t know why so many of you drink this kool-aid. Nobody is 100% neutral, but I don’t like it when people try to be sneaky about it.

            Here's the problem: the 660 has crippled pixel fill rate, due to missing a ROP. Then, TR deliberately avoids testing games that make use of heavy fill rate. [b]Any game that uses tessellation will perform poorly with this card.[/b]

            Lets start with some history first: The 460 outperformed AMD's cards in tessellation, but clearly had terrible shader performance. How did TR benchmark the 460? They disabled dx11 shaders in Metro 2033 while using tessellation. READ IT FOR YOURSELF:

            https://techreport.com/review/19242/nvidia-geforce-gtx-460-graphics-processor/10

            People complained about that and TR changed the testing method later:

            https://techreport.com/review/20126/amd-radeon-hd-6950-and-6970-graphics-processors/12

            [quote]Dude, so, yeah. At the lower quality settings, the GeForces' higher geometry throughput with tessellation like totally puts them on top of the older Radeons. The situation evens out with higher-quality pixel shaders.[/quote]

            There you have it. I would also like to point out some quotes about the 460 768 model:

            [quote]The more accessible version is the 768MB card. Since it's down one memory interface/ROP partition, it has 24 ROPs, a 192-bit memory path, and 384KB of L2 cache. You're giving up a lot more than 256MB of memory by going with the 768MB version, and we understand that "GeForce GTX 455" is available. You know there will be folks who pick up a GTX 460 768MB without realizing it's a lesser product. The GTX 460 768MB isn't bad, but such confusion isn't good for consumers.[/quote]

            That said, the 660 Ti has 24 ROPs, and a 192-bit memory path. So where is the lesser product advice? On to the 660 now:

            https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/8

            Did you see that? TR DISABLED tessellation. Why? Because the 660 is fill rate limited, and would do horribly with tessellation on. LOOK:

            http://hexus.net/tech/reviews/graphics/43797-kfa2-geforce-gtx-660-ti-ex-oc-3gb/?page=6

            The 7870 BEATS the 660 Ti 3 GB Super Overclocked card because the 660 Ti is simply too crippled to handle tessellation. TR EVEN ADMITS THIS in their original hit-piece with the one game that has tessellation on:

            https://techreport.com/review/23981/radeon-hd-7950-vs-geforce-gtx-660-ti-revisited/7

            39 FPS Vs 61! It's not an optimization issue, since AMD doesn't sabotage games like nvidia does. The 660 Ti simply can't handle tessellation, and this pisses me off when TR recommends the card, especially when they just got through calling out the 460-768mb, and the 660 Ti IS THE 460-768 of today! Sure AMD has some driver problems, but WTF-ING HELL are you doing recommending a castrated card like that over a 7950? The 660 Ti should be priced @ 7870 levels, because that's what it's competing with. It does have a few benefits, such as drivers and power usage over the 7950, but it's certainly not in the same tier in terms of performance capability, nor should it be priced as such. Anyone who buys this card will have to disable tessellation in every game.

            • Spunjji
            • 7 years ago

            Thanks for bringing these matters to light. I genuinely wasn’t aware of them and it does put things in a somewhat different perspective.

            Edit: Down-voted for polite thoughtfulness. Awesome! 😀

            • TREE
            • 7 years ago

            You get a thanks from me too. I need to look over some older reviews – something doesn’t seem right from where I’m sitting.

            Graphics Core Next is without doubt a superior architecture to GK1XX in terms of complexity and general compute capability. It would be detrimental to AMD to show games played in settings that favor Nvidia’s greater linear, never happens in 90% of code, number crunching power. If TR has skewed game settings to show only the strengths of Nvidia’s cards then there is indeed a serious issue here.

            • Damage
            • 7 years ago

            Man, not only are your accusations of bias after all that has happened shocking, but they brazenly ignore facts.

            First, your main accusation appears to be based on a conflation of tessellation performance with pixel fill rate prowess. In truth, tessellation is about increasing polygon counts, so its performance is gated by things like polygon setup rates and dealing with the data flow issues caused by geometry expansion. Pixel fill rates are another matter entirely and become a constraint with higher display resolutions, not with higher degrees of tessellation. (The two things, or at least ROP rates and polygon throughput, can become entangled when very high degrees of tessellation are combined with higher levels of multisampled AA. Multiple polygons per pixel can be vexing for ROP rates. But I believe there are few examples of such problems in current games, whose subdivision levels tend to be fairly conservative.)

            At any rate, high degrees of tessellation are not a problem for the GK104, like you assert. In fact, the higher the degree of tessellation, the larger the Kepler architecture’s advantage over GCN:

            https://techreport.com/r.x/geforce-gtx-680/tessmark-x64.gif

            Yes, the GTX 660 Ti is somewhat trimmed down versus the 680, but not by half. And even at half the polygon throughput, it would soundly outdo any Tahiti-based graphics card if tessellation is the primary performance gater. Given that, I'd really like to see you explain how your assertions above make any logical sense.

            Second, I'm amazed you keep repeating the assertion that our game selection was "clearly biased." Or I would be, if it didn't appear that repeated assertions of falsehoods is a part of your strategy. Let's review. The games we tested are:

            - Borderlands 2 (TWIMTBP)
            - Guild Wars 2
            - Sleeping Dogs (Gaming Evolved)
            - Assassin's Creed III (TWIMTBP)
            - Hitman: Absolution (Gaming Evolved)
            - Medal of Honor: Warfighter (Gaming Evolved)
            - Skyrim

            Gaming Evolved games: 3
            TWIMTBP games: 2
            Non-affiliated games: 2

            Explain for us all how that selection is problematic. I think folks would like to know.

            • l33t-g4m3r
            • 7 years ago

            That graph is off. First, no 660 Ti listed, so we’re left to extrapolate that. Second, the 7970 is behind the 7870. Something is wrong there, probably drivers. This is exactly the type of funny numbers that I’ve been complaining about. How does the 7870 beat the 7970 in tessellation? LOL!

            Here’s how the 660 Ti actually performs with tessellation:
            http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/17

            The 680/670 cards clearly cream AMD's cards, I agree. However! The 660 Ti does poorly on EVERY OTHER SITE that tests with tessellation. You guys didn't test the 660 Ti with tessellation on, which skewed the performance metric to shader performance, which the Ti has gobs of. It doesn't matter what game you use, TWIMTBP or G-E, when you're adjusting the quality settings to fit the best case scenario for the hardware capabilities of a particular card.

            All this unequal benchmarking reminds me of HardOCP, where you can barely get a feel for how a card performs because they directly compare different resolutions and AA settings. If you want to say the 660 Ti beats the 7950 without tessellation, and you think it's a gimmick, then do so, out in the open. Don't skew the benchmarks to make that point. I want to know how a card performs under the conditions I'd be using at, with full dx11.

            • Damage
            • 7 years ago

            The numbers in the graph are correct. The 7870 outperforms the 7970 in that test because performance there is gated by triangle setup rate, and Tahiti and Pitcairn share the same front-end hardware. The Pitcairn card has a higher clock rate, ergo, higher throughput.

            You see bias and shenanigans everywhere, when reality and math easily explain the facts.

            Anyhow, you’ve still not explained how tessellation throughput equates to fill rate. That is a cornerstone of your argument above. Lay it on us. How does fill rate impact tessellation performance?

            If the two aren’t tightly related, your argument falls apart, so this is important.

            Also, you are the one who suggested our selection of games was biased, not me. Don’t shift your position on me there. Please, make it clear to us all how the selection of seven games we used was, in your words, “questionable.” You’ve repeated it enough. We’ve allowed you the platform to repeat the assertion many times. Now explain, for all to see.

            • l33t-g4m3r
            • 7 years ago

            LoL. Your benchmark settings are what’s biased, and that’s what I’m complaining about. The only problem I had about the games you selected for the article was that the majority of them were too new for AMD to have optimized their drivers for them. Now that AMD has released updated drivers, that entire hit-piece is debunked. The only point that you made here was that AMD needed to work on their drivers, and then they did. Point absolved.

            You’ve still not explained why you’ve consistently avoided and minimized the use of tessellation in benchmarking the 660 Ti. Perhaps because it would show the Ti can’t handle it, unlike your false assertions.
            https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/9

            54 fps to 64 fps. The Ti can't handle it. I don't claim to know anything other than that. The benchmarks don't lie, only the benchmarkers. Since you know it all, why don't you explain to me why the Ti can't beat the 7950 using tessellation, because it CAN'T and that is a FACT.

            http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/17

            I'm not a professional reviewer who understands every little detail, but I know a sham when I see it. Every other site shows the 660 Ti cannot handle tessellation, except TR, but that's because they avoid using it in their game benchmarks.

            • Damage
            • 7 years ago

            We’ll let the wider world decide whether we should wait weeks and months after a game’s release before testing it in order to be fair to AMD. Were we not equally “unfair” to Nvidia by testing when we did? If not, why not? Are the standards for timely drivers supposed to be different for AMD?

            Also, you ask…

            [quote]The benchmarks don't lie, only the benchmarkers. Explain to me why the Ti can't beat the 7950 using tessellation.[/quote]

            And you provide a link. In the linked article, the numbers are:

            99th percentile frame time (lower is better)
            GTX 660 Ti: 22.7 ms
            7950 Boost: 23.8 ms

            Time beyond 50 ms (lower is better)
            GTX 660 Ti: 0 ms
            7950 Boost: 26 ms

            So here is your explanation: the 660 Ti's measured performance was higher in the metrics that matter. It can and did beat the 7950 in that game, with tessellation enabled.

            • l33t-g4m3r
            • 7 years ago

            Straw Man. You’re ignoring AMD’s recent driver and using old stats.

            The 660 Ti is performance limited by it’s crippled hardware, the 7950 is limited by drivers. Driver updates invalidate your argument.

            You still haven’t commented on why you’re avoiding games with tessellation.

            This blatant denial is ridiculous, btw. This is the only site that recommends the 660 Ti over the 7950 based on benchmarks without tessellation, and driver bugs. Whatever. You obviously have a bias, and are using every trick in the book to justify it. Like I said earlier, WHEN’S THE PHYSX BENCHMARKS COMING?

            Seriously though, neutrality is appreciated when you’re supposedly a neutral site. If you’re going to start being pro-nvidia, then do it openly instead of sneakily disabling tessellation in the benchmarks, and complaining about now fixed driver bugs.

            I personally like nvidia’s drivers and hardware, but I don’t like their constant crippling of their mid-range cards. It’s not worth spending money on a card that’s been deliberately outdated on release, like the 460-768, and IMO it’s unethical to recommend crippled cards like that to people.

            • Damage
            • 7 years ago

            You: Explain these linked results from a game with tessellation!

            Me: Ok, here’s what they mean. You read it wrong.

            You: Invalid! You use old stats because driver updates!

            Me: What the…? I don’t even…

            You: “You still haven’t commented on why you’re avoiding games with tessellation.”

            Me: Like the one you just linked?

            • Voldenuit
            • 7 years ago

            Scott, that exchange should be an xkcd comic!

            • l33t-g4m3r
            • 7 years ago

            Your original piece on the 660 Ti vs 7950 had only one game using tessellation. Well, I guess MoH does use tess, but I don’t consider that a game.

            • Cyril
            • 7 years ago

            [quote]You still haven't commented on why you're avoiding games with tessellation.[/quote]

            In the GTX 660 Ti review you linked:

            https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/4
            "Mesh quality: Ultra"
            "Terrain quality: Ultra"

            https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/5
            "Tessellation: Very high"

            https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/6
            "Crowd: Ultra"

            Plenty of tessellated games in later articles, as well. Not that it matters, because your argument—that the GTX 660 Ti only ever falls behind the 7950 because of lackluster tessellation performance—is fallacious. The Kepler architecture doesn't have poor tessellation performance, as Scott attempted to explain. And even if it did, drawing a causal relationship between lower tessellation throughput and lower performance in one game or another would be a shaky proposition at best, unless you tested at different settings and could prove that changing the tessellation level turned the tables. But methodically verifying things doesn't seem to be your strong suit. 😉

            [quote]This is the only site that recommends the 660 Ti over the 7950[/quote]

            ...based on benchmarking methodology that other sites are only beginning to adopt. Benchmarking methodology that showed a clear problem with the 7950, which its direct competitor, the GTX 660 Ti, appeared to be immune from. How is that evidence of bias?

            • Firestarter
            • 7 years ago

            [quote]Benchmarking methodology that showed a clear problem with the 7950, which its direct competitor, the GTX 660 Ti, appeared to be immune from. How is that evidence of bias?[/quote]

            well duh, you're using the wrong benchmarks! :p

            • l33t-g4m3r
            • 7 years ago

            BF3: 7950 loses, but old stats are now invalid with driver update.
            http://www.anandtech.com/show/6393/amds-holiday-plans-cat1211-new-bundle

            Max Payne: 7950 beat the 660 Ti in fps and 99%
            Dirt Showdown: 7950 wins, but I'm not interested in that game.
            Skyrim: 7950 wins, ties 99%. Has a driver update
            Batman: No tessellation, and the 7950 still wins
            Crysis 2: 7950 wins, barely loses in 99%

            My argument that the 660 Ti falls behind the 7950 with tessellation isn't fallacious. It does fall behind the 7950 using tessellation, and the benchmarks prove it. What Scott explained was the tessellation performance of the 680, not the 660 Ti. They're not the same chip, as the 660 Ti is horribly crippled.

            http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/17

            Have you guys read anand's review? There is a clear performance hit with tessellation compared to the 670! Ignoring this fact doesn't look good.

            Now, your benchmarking methodology is interesting, but it's also now invalid, as you just got through reporting in this article. The driver update reduced the Radeon's frame times, meaning that there is no longer a reason to recommend the 660 Ti over the 7950.

            [quote]With Cat 13.2, the 7950 essentially ties the GeForce and provides smoother, more consistent frame rendering times.[/quote]

            This whole thing was probably a publicity stunt to drum up hits with false controversy. W/E.

            • Cyril
            • 7 years ago

            [quote<]http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/17 Have you guys read anand's review? There is a clear performance hit with tessellation compared to the 670! Ignoring this fact doesn't look good.[/quote<]

            Have you? 😉 Especially this part:

            "To be honest we're not quite sure why there's such a performance drop here relative to the GTX 670. On paper the geometry performance of the two should be identical. Either we're ROP limited (this test does draw a lot of pixels at those framerates), or it really likes memory bandwidth."

            [quote<]Now, your benchmarking methodology is interesting, but it's also now invalid, as you just got through reporting in this article. The driver update reduced the Radeon's frame times, meaning that there is no longer a reason to recommend the 660 Ti over the 7950.[/quote<]

            So, our methodology is invalid because it highlighted a problem that AMD went on to fix? Yep. Totally invalid. We should just quit now and go back to reporting FPS averages.

            • l33t-g4m3r
            • 7 years ago

            [quote<]Have you? 😉 Especially this part:[/quote<]

            Like I said, I'm not an expert reviewer, I'm just looking at the numbers, and the numbers say it's a poor performer. Perhaps I'd understand it better if review sites would better explain the technical details and consequences of being ROP limited.

            [quote<]So, our methodology is invalid because it highlighted a problem that AMD went on to fix?[/quote<]

            No, but you over-hyped the problem, which was fixable, and made a premature recommendation based on it. It doesn't help things when your reviews are always missing something and you randomly mix in overclocked models. That's why I second-guess this stuff. If you're going to use an OC card, then also show the card at its normal speeds, and include the other cards in the same performance range. Batman should have been tested with tessellation, given that all the other instances used tessellation, and it would have been nice to see 7950 numbers with the 7870 review for comparison. It's like you guys deliberately leave things out to force us to read other reviews to understand the big picture, and get more page hits, and that can be easily confused with bias.

            • Damage
            • 7 years ago

            Seems to me you easily confuse just about anything with bias.

            • l33t-g4m3r
            • 7 years ago

            Maybe because you actually are biased? I kid. Not that you would admit it anyway, but it would be nice to see more consistent reviews. It sucks to be looking through 3 or 4 different articles just to find out how the 660 Ti performs in Batman with tessellation, or how the 7870 compares with the 7950 and 660 Ti.

            • Spunjji
            • 7 years ago

            Your discussion has been interesting to me, but your repeated assertions of conspiracy are getting… well, over the top. You might want to back off this a little.

            • Cyril
            • 7 years ago

            [quote<]Second, the 7970 is behind the 7870. Something is wrong there, probably drivers. This is exactly the type of funny numbers that I've been complaining about. How does the 7870 beat the 7970 in tessellation? LOL![/quote<]

            The Radeon HD 7870 has a higher clock speed than the 7970 and rasterizes the same number of triangles per clock. Its peak rasterization rate is indeed higher. See the figures in the tables [url=https://techreport.com/review/22573/amd-radeon-hd-7870-ghz-edition<]here[/url<] and [url=https://techreport.com/review/22573/amd-radeon-hd-7870-ghz-edition/2<]here[/url<].

            I think this is a pretty good example of what's wrong with your assertions. You seem to keep constructing arguments based on hunches, assumptions, or what you perceive to be common sense, yet you never seem to take the time to research things more fully. I get the sense that you're more concerned about being right and proving yourself than being rigorous and finding out what's actually true.

            • clone
            • 7 years ago

            interesting.

            with “stutter gate” I knew the issue was hugely overblown but was that TR or the readers?

            kinda sad if true that TR is deliberately shifting the bar as required but then again wasn’t it TR a while back that exposed Nvidia’s skullduggery in Crysis 3?

            • clone
            • 7 years ago

            hehe I used Skullduggery.

            🙂

            • Cyril
            • 7 years ago

            [quote<]but wasn't it TR a while back that exposed Nvidia's skullduggery in Crysis 3?[/quote<] Sure was. And back then, folks in the comments were accusing us of unfairly penalizing Nvidia, blowing the issue out of proportion, and sweeping problems with AMD hardware/drivers under the rug: [url<]https://techreport.com/discussion/21404/crysis-2-tessellation-too-much-of-a-good-thing?post=574386[/url<] Funny how some things never change. 😉

            • jonjonjon
            • 7 years ago

            this is pure comedy and i couldn't stop reading. not sure why you even waste your time responding. i realize it must be annoying to have people make up ridiculous claims about your integrity. pretty much every one of his points was either some twisted logic that didn't make sense and/or just flat out wrong. he keeps saying the games are unfair, then he says it's not the games, it's the settings. well, which is it? it's not the 660 Ti's fault if a game doesn't use tessellation or some other feature. what he is basically asking is for you to "cherry pick" games and settings with tessellation to show the 7950 is better. isn't that exactly the same thing he is claiming you do and why you are biased? does he really expect you to test the top-end video cards with games from 2009? if a card can play the newest games with no issues, i'm guessing you will be fine with older games. just a guess. if you take all the newest games and turn the settings to ultra/max or whatever you want to call it, how is that biased? who the hell cares how a card performs on some synthetic benchmark that doesn't translate to actual games. i want to know what card is going to give me the best price/performance for the latest games. isn't that what you are buying it for? there is literally nothing you could do or say that would convince him otherwise, because he has clearly proved that he is clueless.

            also it's funny how amd only let you test 3 of the games. why? i'm guessing the issue is not fixed, but they were able to find some other workaround for those specific games.

      • derFunkenstein
      • 7 years ago

      edit: meh. not worth my time.

      • cynan
      • 7 years ago

      [quote<] The only positive here is that all the drummed up controversy forced AMD to fast-track the bug fix which had already been in the works, [b<]and more actively test for frame latency[/b<].[/quote<]

      That last bit - getting a major GPU hardware company to be more vigilant about latency, thereby ensuring a noticeably better gaming experience across the board in future - is no small feat for a single journalism outlet (TR) to have almost single-handedly accomplished.

      [quote<]Frame times are an issue, but not when they're due to a easily fixable bug. This issue has been trumped up beyond reason or rationality.[/quote<]

      I can't help but sort of agree here, especially with the last part. The timing of AMD's driver bug, insofar as the latency issues on the HD 7950, coinciding with TR rolling out more extensive latency testing (with the slow-motion video analysis of that one Skyrim level - obviously cherry-picked to highlight a worst-case scenario, to boot) was rather unfortunate for AMD. Added to this is the apparent prevailing bias in the marketplace that Nvidia systematically >>> AMD, which is similar in my mind to the mass-market perception of Apple vs Microsoft (if perhaps not quite so skewed). For some reason, Nvidia has come off as the trendy young metrosexual enlightened one, while AMD is the aging, overweight, uptight prisoner of its own misfortune. Perhaps their recent CPU development reputation has bled into the public's perception of their gaming GPUs...

      On the other hand, maybe this is somewhat deserved, given how Nvidia has seemed to be a bit more on the ball regarding the total gaming experience insofar as their driver development (as backed up anecdotally in this very article), at least in this generation. Regardless, as always happens when you get a bunch of people who live vicariously through inanimate objects, or I suppose the companies that manufacture them, the discrepancy between the two brands was blown out of proportion. At the end of the day, both Nvidia and AMD offer excellent gaming graphics solutions, and both generally come with fewer bugs and compatibility issues than the gaming hardware of even a few short years ago.

      The good news, as others have been stating, is that AMD (and probably also Nvidia, for that matter) will be more vigilant about letting bugs that significantly impact the gaming experience slip past QC. A win either way for the consumer - which is really the important thing for most of the readership of sites like this.

      • havok
      • 7 years ago

      U point out a console-ported game and u think your statement is valid?

      No.

      Did u look at your “approval” rate of actually hitting home base, aka thumbs down ?

      No.

    • ptsant
    • 7 years ago

    AMD r0x0rz…

    • clone
    • 7 years ago

    did anyone really think this wasn’t going to be fixed….. quickly?

    so many were willing to condemn it as systemic when the news broke; hopefully at least those rants are done with now.

    • cynan
    • 7 years ago

    Like others have stated, it's great that TR's tremendous recent contribution to GPU benchmarking looks like it will ultimately keep hardware developers more vigilant about the quality (smoothness) of the actual gaming experience. That is a substantial accomplishment for a journalism outlet in this field. That aside, as a tech enthusiast and consumer, I greatly appreciate the wealth of information provided by these frame time metrics.

    That said, I still can't help but feel that this whole situation with the HD 7950 vs the GTX 660 Ti was perhaps at least a tad overblown. First, there was no such large discrepancy in frame times between the two in prior reviews using older drivers. Second, this issue seems to be limited to the HD 7950 (as far as I am aware, even the HD 7970 did not have these issues, or at least nowhere near to the same extent).

    This looks to me like one of AMD's competing driver development groups had a temporary brain fart and messed something up that was already working more or less fine in a recent driver. Part of the reason so much of the issue was addressed so quickly is probably that the developers only had to examine why the problem didn't exist before, compare that to the current driver, and make the appropriate changes (pure speculation on my part, I know).

    Anyway, bottom line, it's a good thing that TR is around to keep these people on their toes and minimize repeats of such issues in the future, ensuring better PC gaming experiences for all (even if this particular issue was perhaps blown out of proportion).

    • panthal
    • 7 years ago

    Part of me didn't even want to post this, but out of scientific curiosity I thought I would.

    Remember the old Killer K1 (lol) network cards? About two years ago I did a Fraps comparison of the same run through an empty part of a city in World of Warcraft, using a K1 and an Intel onboard NIC. I had seen Bigfoot (the maker of the card) do something similar but in a different game. And it did indeed make a difference in how quickly the character crossed the finish line. I know that probably came down to lag, and how the Killer drivers were skipping the Windows network stack. And I also realize this is different from the testing TR is doing, as network latency isn't involved in their testing.

    As far as multiplayer games go, do you guys think different network cards could influence frame latency at all?

    Funny fact from an insider: you would be surprised how many stock traders used those K1 cards for a small advantage in stock trading. I can verify that somewhat; when I was still using the card I was ALWAYS the first one to pop into any instance or battleground I queued up for in WoW 🙂

    More than a few requested the source code to work into their stock trading apps.

      • moose17145
      • 7 years ago

      I actually think you brought up some good points. I know a lot of people made fun of the Killer NICs, but everyone who actually had one said that the thing actually worked... (obviously only to an extent, as it can't control what's being sent to the computer and when, or what traffic is happening out on the internet... but outbound traffic it definitely has a great deal of control over). I was also reading that a lot of people were using them for other things because they essentially had most of the features of an enterprise-grade NIC for a much lower price.

    • willmore
    • 7 years ago

    Has this problem been seen on any other 7xxx series cards than the 7950? As an HD7850 owner, I’m curious. (also an HD7770 owner)

      • Farting Bob
      • 7 years ago

      It was first mentioned here in the 7950 article, but I never fully understood whether it mainly affected that card or all 7-series chips. Probably all, but to varying degrees.

      I've got a 7850 as well; I'll probably update when 13.2 goes official.

        • willmore
        • 7 years ago

        I updated, but I don't run Fraps, nor do I graph my frame latencies, so I don't have numbers to substantiate this, but BL2 seems smoother. There used to be 'hitching' when I would jump or turn quickly. That's gone, even in wide open spaces where it used to be really bad. Sorry, no hard numbers, just seat-of-the-pants measurements.

    • tbone8ty
    • 7 years ago

    It's nice u have collected data to compare and contrast with. Other websites will have a hard time showing a difference, or at least catching up to you guys. Now you can add more variables to your testing, like memory speed, different resolutions, CPU and GPU overclocks, SSD vs. hard drive, etc... heck, maybe even a last-generation graphics benchmark test... like maybe a Radeon HD 6970 vs. GTX 570 would be neat.

    • tbone8ty
    • 7 years ago

    Glad u switched to the MSI 7950 Hawk... edit: Twin Frozr III. Now overclock it to 1000/1500, so u can test if overclocking affects frame times lol

      • Srsly_Bro
      • 7 years ago

      The picture shown and the core clock reported point toward it being a Twin Frozr III OC. It’s also the same card I own.

    • slaimus
    • 7 years ago

    What I don’t understand is why are drivers being optimized for specific games, when it should really be game makers optimizing their games on their target hardware? What happened to the days where the game engine would actually check how much buffer is available and change the amount of data it loads to minimize swapping to main memory?

      • Arclight
      • 7 years ago

      I have 2 ideas:
      1. Cross-platform programming made devs lazier and now they don't give 2 sh*ts about proper optimization for the PC
      2. The ill effect of programs like TWIMTBP and AMD's equivalent (I can't remember what it's called), which made game developers "accomplices in crime" by tweaking the game engine to work better with a specific graphics architecture.

    • Zarf
    • 7 years ago

    This in-depth coverage is wonderful! And far too difficult to get elsewhere. I’d like to suggest once again that if you guys are using That-Browser-Addon-That-Must-Not-Be-Named, to add Techreport to your whitelist.

    I do have a suggestion: Would you guys be willing to put a line in at 16.5 milliseconds? If the graph line touches that line and stays there or goes below, you will be getting smooth, 60 FPS gameplay, and each frame time will be the same length as your monitor's refresh interval (assuming you have a 60Hz monitor; if you have a 120Hz monitor, you'd need frames to take no more than 8.3ms to render in order to match the max performance that your monitor can give) - in essence, all of the frames will stay on screen for the same amount of time, providing the absolute smoothest gameplay.

    On an unrelated note: I have long wondered why Markarth performs so poorly compared to other cities. I toggled wireframe mode and noted that if you stand in the market and look at the temple, the things behind the temple are not rendered, which is good. So why am I getting 25 FPS there and ~50-60 just about everywhere else? I wonder if it would be possible to make a benchmarking tool that would color objects depending on how long they took to render.

    [EDIT: Removed reference to That-Browser-Addon-That-Must-Not-Be-Named because I like it here and don’t want to get banned! Sorry folks!]

      • Flying Fox
      • 7 years ago

      TR members are not supposed to use or mention/recommend/suggest the use of Ad-blocking software, it’s in the rules.

      Regular GPU reviews usually contain a “time spent in frames >x ms”. That is essentially what you are talking about. Example:
      [url<]https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/4[/url<] (near the end of that page)

        • GrimDanfango
        • 7 years ago

        [quote<]TR members are not supposed to use or mention/recommend/suggest the use of Ad-blocking software, it's in the rules.[/quote<] You might want to re-check his wording - he's actually pointing out that people should allow the techreport ads through, assuming they are using adblock already, to show appreciation for TR.

          • Srsly_Bro
          • 7 years ago

          The first rule of ad blocking is to not talk about it.

            • BoBzeBuilder
            • 7 years ago

            What’s the second rule?

            • Srsly_Bro
            • 7 years ago

            Add BoBzeBuilder to the black list. 🙂

        • Zarf
        • 7 years ago

        I was not aware of that rule. Sorry! I’ve amended my post. DEAR MODS HAVE MERCY ON MY SOUL!

        Thanks for that link – that table with the FPS-to-MS conversion is handy. I just figured that a 16.5 MS line (16.7 on their box) might make it a little easier for people to come to whatever conclusion about whatever card. It might also be absurdly difficult to add it in to Excel, Excel being Excel and all. As long as they include that conversion chart, I’m happy with it.

          • derFunkenstein
          • 7 years ago

          Is cool, brother. FWIW I agree with the [redacted] sentiment. 🙂

      • lilbuddhaman
      • 7 years ago

      That 16.5ms line will effectively be the "Oculus Rift" line, because the Rift requires 60 fps with vsync at all times to maintain the effect.

      • jihadjoe
      • 7 years ago

      I agree with a 16.5ms line.
      Why? Because fighting games!

      Street Fighter 4 is a good example of one such game on the PC, and that game has several “one-frame links”, meaning the input timing is exactly that, 1 frame, 1/60th of a second, or 16.5ms. Basically it means the input timing on certain moves has to be exactly right, or your combos won’t connect.

      If your system has any sort of stutter those links can become nearly impossible to get, and they’re hard enough when everything is running smoothly. Links and strict input timing aren’t exclusive to the Street Fighter series. They are present in almost every fighting game.

        • Firestarter
        • 7 years ago

        Having your game mechanics be based on the framerate is pretty poor design though.

          • jihadjoe
          • 7 years ago

          To an extent all games are based on framerate; it's just emphasized more in fighting games. It comes down to animation. Each animation has startup, active, and recovery phases, and these are equally applicable to a game like SC2 or LoL.

          SC2's marine shuffle is much like a fighting game combo. When pros play, you see marines (and other ranged units) attack, fire, and then very quickly move away before attacking again. Since the recovery frames, or the backswing animation of the marine attack, are cancellable, you want to begin moving as soon as the active frames are done. Too soon and the marines don't attack at all; too late and that baneling gets to move a few pixels closer to the marines than if the timing was just right.

          Now if the computer in use was dropping frames of animation, then these movements aren’t as efficient because the visual cues for timing those inputs disappear.

            • Firestarter
            • 7 years ago

            [quote<]To an extent all games are based on framerate[/quote<]

            Well, no, there are plenty of games around that have the game world simulation (physics, AI, networking) running at a fixed rate, uncoupled from the framerate with which the game world is being displayed. IIRC, Quake originally did this at a rate of 20 or 40 ticks per second, where for each tick the state of the game world is computed based upon inputs. As such, players in SC2 for example would still be able to do the marine shuffle with their PCs running the animation at 10FPS, [i<]if[/i<] the input is also being captured at a fixed (or at least fixed minimum) rate.

            However, you're right in the sense that the rate at which the player input is captured is almost always linked to the frame rate, or animation rate as you put it. The reason of course is input lag: if you decouple input capture from the animation, the latency between capturing the input and displaying its results is always at least one animation frame, and most of the time more. When input is captured every animation frame (therefore tightly coupled), the latency is at most one animation frame, assuming no other buffering or shenanigans like that.

            Now, given that the game world is often very dependent on the input of the player, it makes sense to calculate the world state after getting the user input and before displaying it. That is the way many games worked (and many probably still work that way), and it can work pretty well if the extremes of very high FPS and very low FPS are considered and accounted for. You can see what happens if the extremes are not properly accounted for when you disable the framerate cap in Bethesda games: suddenly docile kitchen utensils are launched across the dungeon at lethal speeds while your whole house full of carefully placed artifacts and doodads starts to shake itself apart. All because the game world and physics simulation is tied to the framerate and flips out above 60 FPS.

            Now, I have some limited experience with the Quake III Arena engine, as I made a modification called Corkscrew. The way Quake 3 solved this problem is, again, by decoupling the game world from the animation. More specifically, the player merely controlled a game client in his own copy of the game world, where the world (players, projectiles) was updated for each animation frame. The game server controlled the world based upon the clients' inputs, regardless of whether there were 1 or 20 of them, or at what framerate they were animating their version of the world, and then sent the state to the clients, which synced their own copy to it. Game engines have since evolved further (I hope), but as far as I know this kind of decoupling between the game world and the player's animation framerate is still pretty essential.

            Now, seasoned Quake players will chime in and say that there was still an optimal framerate and that it was 125 FPS, but that had more to do with the quirkiness that was still built into the player movement code, and how that interacted with the netcode.

            edit: I started 3 paragraphs in a row with "Now, ", shame on me 🙁
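
            [A minimal sketch of the decoupling described above, for readers who want to see it concretely: a fixed-rate simulation that keeps stepping in constant ticks no matter how long frames take to render. This is generic Python pseudocode, not code from any engine mentioned here; the world, render, poll_input, and running callables are hypothetical stand-ins, and the 20-tick rate is just the Quake-style figure from the comment.]

import time

TICK_RATE = 20            # fixed simulation rate (ticks per second), Quake-style
TICK_DT = 1.0 / TICK_RATE

def game_loop(world, render, poll_input, running):
    # Fixed-timestep simulation decoupled from rendering: the world advances
    # in constant TICK_DT steps regardless of how long each frame takes to
    # draw, and rendering interpolates between simulated states.
    previous = time.perf_counter()
    accumulator = 0.0
    while running():
        now = time.perf_counter()
        accumulator += now - previous
        previous = now

        inputs = poll_input()          # input sampled once per rendered frame here

        # A slow frame runs several ticks to catch up; a very fast frame may run none.
        while accumulator >= TICK_DT:
            world.step(TICK_DT, inputs)
            accumulator -= TICK_DT

        # Blend factor between the last two world states keeps motion smooth
        # even when the render rate does not match the tick rate.
        alpha = accumulator / TICK_DT
        render(world, alpha)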

            • jihadjoe
            • 7 years ago

            I agree with you here, and I do believe almost every modern game engine does decouple the game state from the display rate (pretty much necessary for net play), but it is still very hard to play certain games unless the GPU can maintain a constant 60fps.

            It’s not just the input capture, but the ability to time your inputs as well. Cues are important, and the frame rate has everything to do with your visual cues.

            Actually SF4 does allow the frame rate to be decoupled, but it is quite unique in that it allows the player to choose whether or not to do so. Very annoying when I play someone online that has his framerate set to “variable” and obviously his rig isn’t able to maintain 60fps because then his gameplay experience (and slowdowns) carry over to me.

      • Deo Domuique
      • 7 years ago

      I'll do it when I see a single article about the broken DX9 support on AMD 7000 series cards. Nothing, so far, despite many users having reported it, and despite AMD themselves certainly having experienced the issue. Why not a single mention? Many DX9 games have flickering/flashing black lines/artifacts or corruption - you name it.

      One year after the cards came out, and nothing. Wouldn't this be reason for a recall? AMD devs said they will focus on it in 13.2... It's the last hope for a general fix; otherwise it will be proven that the cards are defective. So far they supposedly fixed it once, just before Christmas, but nothing actually has been fixed. In the 13.1 notes they mention a Skyrim fix individually, but nothing...

      Man, these cards are the most problematic hardware I've ever had.

    • anotherengineer
    • 7 years ago

    Damage/TR Nice work.

    I have a question.

    Looking at your tests, it appears you used a 2560×1440 resolution. I know that when you lower the resolution, you can increase your framerate (depending on whether the CPU or GPU is the bottleneck).

    Now, if you dropped the resolution to 1680×1050 or 1920×1080 and the frame rates increased, then the ms time per frame would also go down.

    Would this reduce the spikes as well, or leave them unchanged?

    I'm thinking maybe a 1920×1080 resolution may give results that more people can relate to than 2560×1440, but I don't know, so I am asking.

    Edit - or if the spikes remained the same and the avg. ms per frame was lowered, I suppose the stutter effect would be even greater?

    Thanks

      • superjawes
      • 7 years ago

      I don’t have any numbers to verify this or back it up, but because those resolutions have less information to render, ALL frame times should be lower, spikes and all. Reducing other settings should also lower frame times.

      Smoothness is an effect at the monitor, where frame times above 16.7 ms (for 60 Hz monitors) mean that frames are shown more than once (and time appears frozen). A spike causes a frame to be shown several times.

      To get smooth performance, you want to get frame times down, so lowering resolution and detail should help. However, TR will continue to test at high resolutions because that gives you the best idea of what happens when the system is pushed hard. If a card can stay under 16.7 ms at ultra settings and 2560×1440, then it will surely do so at lower settings as well.
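
      [To make the arithmetic above concrete, here is a small Python sketch with made-up frame times: it counts how many 60Hz refresh intervals each Fraps-style frame time spans. Anything above 1 means the previous image stayed on screen for extra refreshes, which is what reads as a stutter. The 16.7 ms figure is just 1000/60; the frame times are invented.]

import math

REFRESH_MS = 1000.0 / 60   # ~16.7 ms per refresh on a 60 Hz monitor

def refresh_slots(frame_times_ms):
    # For each frame time, how many refresh intervals it occupies.
    # 1 = delivered within a single refresh; 2+ = the old frame was repeated.
    return [math.ceil(t / REFRESH_MS) for t in frame_times_ms]

# Invented frame times: mostly smooth, with one 55 ms spike.
times = [14.2, 15.8, 16.1, 55.0, 15.5]
print(refresh_slots(times))   # -> [1, 1, 1, 4, 1]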

      • Damage
      • 7 years ago

      My sense is that the answer depends a lot on whether the application is primarily CPU bound or GPU bound, and that includes a lot of moving pieces that affect both chips. I think we may have to do some multi-resolution testing to explore this issue soon.

        • anotherengineer
        • 7 years ago

        Thumbs up to that, could be interesting to see, *waits patiently*

        • MrJP
        • 7 years ago

        Yes please.

        P.S. Heartfelt thanks for the free upgrade to my 7950. 🙂

    • Tamale
    • 7 years ago

    So great to see such a rapid response and validation of all the work you’ve been doing lately! Nice!!!

    • superjawes
    • 7 years ago

    What, didn’t want to end the article with a snarky “you’re welcome” to all the people who lashed out over the 660 Ti vs. 7950 review? 😛

    Excellent work by AMD, though. To see that “time spent beyond 50 ms” go from a few dozen to zero is an excellent sign. They obviously have good people working on those drivers to get such a turnaround and do it quickly.

      • moose17145
      • 7 years ago

      AMD has always had a large amount of talent working for them. In fact many people would argue that AMD has had more talented people working for them than Intel does. Look at what they have been able to do. Stand Toe to Toe with the 800 pound gorilla on only a fraction of a budget. Takes some serious talent to do something like that. Although in recent years they have lost a lot of that talent sadly.

      Also AMD makes some very excellent products… the only issue is that lately it seems like either Intel or NVidia has a ever so slightly better product out. The FX processors are not bad chips by any means. For productivity (actually using them for real work) tasks they easily hold their own against even the Intel i7’s in terms of raw performance. But the Intels are just as fast, for almost the same price, or not much more, and consume half the power. And yes the Intel chips are better at gaming too… but really gaming is a very small slice of the market compared to the enterprise world.

      The Radeon 7 series were good chips from day one. In fact, AMD really should have been beating Nvidia on the graphics front this cycle, considering they had a 6-month lead on them... But AMD kinda fails at marketing and just doesn't have the brand recognition that Intel and Nvidia both enjoy. Anyways, their GPUs were still good... but as we saw, the 660 Ti was just better (at least until this driver update). I fully expect the 7950 to way outperform the 660 Ti by the time they both reach EOL, considering the 7950 still has WAY more headroom for performance gains than the 660 does (judging this based purely upon the fact that, hardware-wise, the 7950 SHOULD be able to spank the 660). So it's nice to see AMD still pulling performance optimizations and gains from their cards just with driver updates as they truly learn more about how to fully utilize the hardware they invented. But at any rate, it's not like the 7950 was really a BAD card by any means. It was just outshined by the 660.

      Anyways, where I was going with this: AMD has good products, and I know several people who have an all-AMD machine (CPU and GPU), and they say that the thing is lightning fast and handles everything they throw at it with ease and room to spare (even games). So they make good stuff... they just can't quite seem to capture that performance crown, and when they do, they can't figure out how to properly market it (like when the Radeon 7k series was released and Nvidia had no answer to it for 6 months). In fact, the more I look at AMD the more impressed I am with them. They are standing toe to toe against both Intel AND Nvidia. Not a very friendly place to be in. And yet their products are competitive against both companies. And they don't have the financial resources of either company right now and were still able to produce what they did. It takes some pretty innovative minds to accomplish so much with so little. Probably why so many other companies are willing to snatch up AMD talent right now. They all saw what they were able to do with so little. Even if they couldn't capture the performance crown... the fact that they can get to 95% of the performance of Intel or Nvidia on like 1/20th the budget is pretty impressive. At least I think it is.

    • geekl33tgamer
    • 7 years ago

    Nice that AMD’s looking at this now, but I bet there’s ZERO chance of these improvements being rolled out for my Radeon 6950’s, or any other Radeon 5xxx and 6xxx cards out there that are still more than able to play today’s games…?

      • willmore
      • 7 years ago

      You mean cards that didn’t have the problem?

        • geekl33tgamer
        • 7 years ago

        Disagree. My 6950s don't appear to render frames smoothly, but it may be down to Crossfire, so I'm not sure. Even though in (most) games the cards hold a solid 60 FPS with vsync on, it still jitters and jumps around.

        I would say there’s a problem, but maybe not the same one???

          • Spunjji
          • 7 years ago

          What willmore means is that the problem you’re seeing has entirely different root causes to the ones behind the 7000 series issues resulting in this update. So, your cards do not have “the” problem. They have a different one that is very difficult to resolve. Multi-GPU rendering systems all suffer from that problem when the frame-rate of each individual card is getting close to what you’d perceive as not being a smooth frame rate.

          So, yeah, it’s down to Crossfire. Or your CPU. Or your PCIe bus. Or background processes. etc. 😉

            • willmore
            • 7 years ago

            Spunjji is correct, that’s what I meant. No need to -1 rcs2k4, though. It’s a valid question.

            • jensend
            • 7 years ago

            Minusing would not have been for the question, but rather for the snark.

            • willmore
            • 7 years ago

            If there are minuses for snark, it’s amazing I ever get above zero!

      • DaveBaumann
      • 7 years ago

      The alterations in place that affect the titles here are generic changes and not limited to any series of GPU. The GCN-specific element that has been discussed is a rewrite of the GCN memory management code, which is not in this driver and is seen more as an "improvement" for GCN in a number of cases.

    • derFunkenstein
    • 7 years ago

    And that’s how it’s supposed to look. Back to what I wrote when you started addressing this and AMD said they’d fix it, this has to be simultaneously frustrating and gratifying. Glad to see they put forth the effort; their customers deserve it.

    • lilbuddhaman
    • 7 years ago

    About Borderlands 2: I'm running 6870×2 and an i7 @ 3.5GHz. Running maxed settings, MSAA 2x, 1920×1200, I see GPU usage floating around 70% for each card, CPU usage ~75% on one core and the rest at ~25-50%.

    I'll get fluid 60fps most of the time, but will occasionally just drop in fps inexplicably, then just stay there at 20-30fps. (I'm thinking this is a game-side bug?)

    Anyways, are there any benchmarks out there to see if this positively affects 6xxx cards? The earlier story mentioned that they were rewriting memory handling specific to the 7xxx series; I wonder if the older tech was touched? (probably not?)

      • Dingmatt
      • 7 years ago

      Wow remind me never to get an AMD card, I was getting better performance on my single 470 GTX and my new 680 GTX is out of this world.

        • Spunjji
        • 7 years ago

        Wow remind me not to bother reading pointless posts like that one.

          • Dingmatt
          • 7 years ago

            Well, if you want, then feel free to attach a semi-full system config log such as a dxdiag and I'll attempt to give you some tips on how to increase your performance; until then, you're stuck with the usual "my card's better than yours". Your choice really.

            • Spunjji
            • 7 years ago

            So wait, are you now admitting that your post was pointless because of the lack of information and subsequent generalisation to a manufacturer’s entire product range? If so, great, thanks. I’m baffled as to why you think I want performance tips, though. Nothing I said indicated that.

            • Dingmatt
            • 7 years ago

            Yes, I thought that would be obvious from the ambiguous and more than slightly obtuse response; you could liken it to a troll comment.

            • Srsly_Bro
            • 7 years ago

            Only the best trolls lack in the grammerz dept. Continue forth, spreading ignorance around the internet!

      • mkk
      • 7 years ago

      The Borderlands 2 game engine does not support triple buffering, so if you run with vsync on, you'd better use in-game settings that give you a more or less constant 60fps. Otherwise the next step below 60 will be 30, with nothing in between. Any modern game supports triple buffering and has vsync on by default today, but Borderlands 2 was a hack.

        • alienstorexxx
        • 7 years ago

        it was supported until the first update (someone broke it on purpose, i think, as nvidia still has triple buffering support). also, bl2 isn't the only game with that problem, i had the same problem with nfs mw 2012, painkiller h&d and darksiders 2.

        amd users can use [b<]"RadeonPro"[/b<] to force triple buffering. it's an amazing tool, give it a try. it also has [b<]sweetfx[/b<] and an [b<]ambient occlusion[/b<] injector. it's posted here [url<]http://forums.guru3d.com/showthread.php?t=322031[/url<] you can download the preview there, but if you want the direct link, here it is: [url<]http://www.radeonpro.info/en-US/Downloads/Preview.aspx[/url<] the "stable version" is too outdated

    • kristi_johnny
    • 7 years ago

    From where can we download these drivers? They're not on AMD's drivers page; or were they released just for tech sites to benchmark?

      • alienstorexxx
      • 7 years ago

      read please, it will be released next week.

        • kristi_johnny
        • 7 years ago

        Oh, sorry, I skipped that. Thanks for the eye opener 🙂

        • kristi_johnny
        • 7 years ago

        Well, still no 13.2 beta on AMD's site, and the [b]next week[/b] has almost ended. Pity, no AMD video card in the future for me.

          • alienstorexxx
          • 7 years ago

          last week, an update from the catalyst creator on twitter said that it will be released this week

    • alienstorexxx
    • 7 years ago

    nice work guys! as an AMD user i thank you for doing these tests and for your commitment to the amd community. also, amd has behaved very correctly with this problem, addressing it as fast as they could.

    thank you.

    • HisDivineOrder
    • 7 years ago

    Looks like once the frame latency is smoothed out, the 7950 really is equivalent to the 660 Ti. Uh oh, spaghetti-Oh!

      • BestJinjo
      • 7 years ago

      And they should be as they are price competitors. The real story behind HD7950 is its overclocking:

      [url<]http://www.legionhardware.com/articles_pages/his_7970_iceq_xsup2_ghz_edition_7950_iceq_xsup2_boost_clock,13.html[/url<]

      On price in the US/Canada, the HD7950 competes against the GTX660Ti, the HD7970 against the GTX670, and the HD7970GE against the GTX680. This has been the case since AMD launched the HD7970GE in June 2012 and dropped prices on the HD7950/7970. It's not a surprise that the HD7950 and GTX660Ti are roughly equivalent at stock speeds. The key difference is that the HD7950 overclocks from 800MHz to 1100-1200MHz, at which point its performance is similar to the HD7970GE/GTX680, not the 660Ti. An overclocked 660Ti is much slower than an overclocked HD7950, which is why the 7950 has been the enthusiast choice for overclockers choosing between the 660Ti and 7950:

      [url<]http://www.hardocp.com/article/2012/08/23/galaxy_gtx_660_ti_gc_oc_vs_670_hd_7950/2[/url<]

        • derFunkenstein
        • 7 years ago

        I’ve had lots of different GPUs from both vendors and I’ve never had good OC results. Not saying other people don’t, but for me, OC does not factor at all. If it’s factory OC that’s fine – and I’ll pay a few bucks extra for a factory OC. But I just won’t do it myself.

        The last great GPU OC I had was an ASUS GeForce DDR on a Slot A Athlon 600MHz system. It did great til I killed it. I’ve been cursed by poor-overclocking GPUs ever since.

          • Firestarter
          • 7 years ago

          I had great success with my launch-day HD7950, which is humming along nicely at 1100MHz/1600MHz, a 37.5% overclock on the GPU. From what I read on the internet, others have theirs running even higher, up to 1200MHz (a 50% overclock); mine needs too much voltage for those speeds. TR got theirs to 1175MHz in the review: [url<]https://techreport.com/review/22384/amd-radeon-hd-7950-graphics-processor/10[/url<]

          It was pretty clear at launch that the stock clocks of the HD7950 were very conservative.

          • BestJinjo
          • 7 years ago

          That’s because most GPUs were lousy overclockers. Pushing the sliders from 800-925mhz on the HD7950 to 1100mhz is a 5 min exercise. Finalizing your overclock beyond 1100mhz requires a bit more work. There has never been a high-end GPU that overclocked on air as well as HD7950. If you purchased the MSI TwinFrozr III 7950 or Gigabyte Windforce 3x 7950, or Sapphire Dual-X 7950, then hitting 1100mhz was almost a given. I can understand why overclocking is not a factor for most people but in this case it’s the exception to the rule because a $280-300 HD7950 can surpass a $450 HD7970GE/GTX680 in performance when overclocked. Also, if you know how to safely overclock CPUs like 2500K, then overclocking GPUs is just as easy.

          For example, an overclocked HD7950 is 42% faster than a GTX660Ti in BF3:
          [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/31.html[/url<]

          The HD7950 scales almost linearly with increased GPU clock. The GTX660Ti can never touch this level of performance; it only competes with the 7950 at stock speeds. Once the 7950 is overclocked, it can beat a GTX680/HD7970GE.

        • MadManOriginal
        • 7 years ago

        Too bad AMD’s long-term driver support is worse.

          • BestJinjo
          • 7 years ago

          Sorry, as covered already, that’s irrelevant for enthusiasts who buy $280+ GPUs like HD7950 because they will have upgraded in 2-3 years because they play modern games at high IQ settings. If you buy an HD7950 in hopes of keeping it for 5-6 years to play games, that’s your problem. Learn how to upgrade. I’ll make sure to keep a mental note and come back in the year 2018-2019 and let you know how GTX660Ti plays modern PC games by then. My guess is it’ll be a complete and utter slideshow at 1080P. You keep dropping the straw man “long-term” driver support but so far provided 0 evidence how this actually matters in the context of HD7950 vs GTX660Ti. Oh ya, let me know how DX11.1 support will be working out for you in Windows 8, 9 and 10 as GTX660Ti is not DX11.1 compliant, which means it doesn’t support Target Independent Rasterization (http://blogs.amd.com/play/2012/12/14/gcn-architecture-full-dx111/):

          – accelerates rendering of 2D vector graphics (used by the Windows Modern UI, HTML5 web pages, and .SVG image files) by up to 500% or more:

          [url<]http://blogs.msdn.com/b/b8/archive/2012/07/23/hardware-accelerating-everything-windows-8-graphics.aspx?Redirected=true[/url<]

          So much for using the GTX660Ti long-term for even basic 2D work in later MS OS versions. In summary, the HD7950 will be better than the GTX660Ti during the next 2-3 years (its expected useful life) due to its 30-50% overclocking headroom, 3GB of VRAM for texture mods, and DX11.1 support for 2D acceleration in later MS OS versions. So much for your argument regarding the "long-term" usefulness of the GPU. AMD should provide drivers for this GPU for at least another 4 years, which is enough before it's worthless anyway.

          Did you know an overclocked 7950 >>> GTX680/HD7970GE and is 42% faster than a GTX660Ti in BF3 at 1080P?
          [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/31.html[/url<]

          I guess next you are going to tell me that overclocking doesn't count also? You also apparently do not own at least a Sandy Bridge CPU, because obviously using an Intel GPU for 2D work seems like a foreign concept to you. Let me know how your Kentsfield/Westmere/Nehalem/Lynnfield/Phenom II/Bulldozer/Vishera is working out for you in modern games in 4-5 years as well. I guess you won't be buying any of Intel's CPUs that have an integrated GPU in them for the next 4-5 years either? Skipping Haswell, Broadwell, and Skylake, are we? Can't possibly think outside the box, like buying a $20 GPU for 2D OS support? (http://www.newegg.com/Product/Product.aspx?Item=N82E16814150600)

          Yup, you sound like quite the “PC hardware enthusiast” alright….planning to use parts for more than half a decade for PC gaming, have no plans to upgrade to Intel’s next gen of CPUs in the next 4-5 years and can’t possibly consider buying a way more power efficient $20 GPU for 2D OS work but instead prefer to stick with power hungry GeForce 6-8 GPUs. I guess there is no point in me mentioning things like BitCoin mining that can be done on AMD’s GPUs. Xbox 720/PS4 are calling your name.

    • uartin
    • 7 years ago

    I don't want to belittle the work that's being done here and elsewhere, but why does everybody keep saying that techreport started a new way to benchmark, when microstutter has been talked about on other review sites (PCGH and CB now come to my mind) for ages (we are talking 2008, maybe even earlier)?

    I can also remember interminable discussions in ancient threads on the Nvidia forums where people kept posting their Unreal Tournament Fraps frametime dumps, shouting about how slow frames would break the smoothness of gameplay in multi-GPU configurations...

    Had this microstutter been advertised more by review sites, probably other ways to handle multi-GPU frame output (like checkerboard), even if not as efficient as AFR, would not have been completely abandoned...
    Ok, I stop ranting.

      • gamerk2
      • 7 years ago

      No one bothered to actually INVESTIGATE the problem, though; most simply assumed the PCI-E bus was getting oversaturated. I always personally suspected latency, but had no way to test it myself. Glad TR has actually done what they're supposed to do and investigated performance anomalies rather than explaining them away.

        • superjawes
        • 7 years ago

        Yeah, TR developed the tools to capture the microstuttering and analyze it in a useful way. On top of that, they’ve done a few writeups explaining what the metrics are and why they are important.

      • Spunjji
      • 7 years ago

      Most of the microstutter chatter revolved around multi-GPU implementations and was confined to testing in that area. The testing methodology used and the methods of displaying the results are hardly trivial, either. 🙂

      • kuraegomon
      • 7 years ago

      For someone who "didn't want to belittle the work…", you sure did a great job of exactly that. And, as the other responders pointed out, developing a methodology and actually benchmarking to isolate the issue is far removed from just whingeing about it in forums. The reason this issue is now being substantively addressed by GPU makers is _precisely_ because of the work done by Scott and the TR team.

      One of the most important rules of debugging is “Quit thinking, and look” – i.e. don’t just sit there and pontificate about what you think might be causing an issue, but instead _measure_ the system behaviour to quantify the problem. TR’s work is the pioneering review-site effort in this regard, and appears to be a groundbreaking approach even for the manufacturers. Once again, well-done TR!

      • Arclight
      • 7 years ago

      It seems it was indeed discussed well before TR implemented the more frametime-oriented approach, but until they did, all the major review sites were afraid to make the jump completely. I'm certain many of the TR regulars had doubts when TR made the full switch to frame time metrics, leaving only a single familiar measurement, average fps.

      But after TR made the commitment, look, all the other "brave" sites are taking notice and gearing themselves toward a similar kind of analysis.

      I understand you are ranting because this wasn't done sooner en masse, and I agree, but the fact of the matter is that before TR made the commitment, many were either oblivious to the issue or had allergic reactions when they saw that their familiar way of measuring fps had disappeared from TR reviews.

      • sschaem
      • 7 years ago

      Yes, this has always been an issue, but TR made it matter to the point where the IHVs took action.
      Not the first time TR has done this.

      Anyone remember this?
      [url<]https://techreport.com/review/3977/on-cinematic-rendering-and-agp-downloads[/url<]

      Before TR was involved, graphics cards could only get about 10MB transferred (ATI and Nvidia would not budge and fix their drivers). With ATI's new driver, over a 10x improvement:
      [url<]https://techreport.com/news/4943/new-ati-driver-speeds-agp-texture-uploads[/url<]

      TR is on the right path by measuring frame render times alongside avg FPS.

    • puppetworx
    • 7 years ago

    Are these driver optimisations specific to the 7950 or do other models benefit? Also what about older 5 and 6 series cards, do they benefit? Just curious.

      • eofpi
      • 7 years ago

      5 and 6 series cards were VLIW5 or, in the case of the 69xx, VLIW4. They won't benefit from things specifically aimed at GCN, but their performance is already rather closer to their theoretical capabilities.

    • Rigel84
    • 7 years ago

    [url<]http://www.digital-daily.com/video/vga_testing_2007/[/url<]
    [url<]http://www.digital-daily.com/video/vga_testing_2007/index2.htm[/url<]
    [url<]http://www.digital-daily.com/video/vga_testing_2007/index3.htm[/url<]

    An article from 2006 with various ways of showing latency; a shame it didn't catch on...

    • rechicero
    • 7 years ago

    It must be great to know your work as a humble journalist made a difference. Maybe it's not the end of hunger in the world, but it's something measurable that will benefit thousands of Radeon users who will never know they should thank you for this.

    Great job!!!!

    (and kudos to AMD for the fast reaction)

    • ronch
    • 7 years ago

    I’m playing TES IV: Oblivion lately and I upgraded last month from a Phenom II X4 925 (unlocked X3 720) + AMD HD5670 to an FX-8350 + HD7770 1GB. Not sure, but gameplay seems to be choppier than usual. I’m not sure if this is a problem with the FX or the HD7770. Hopefully the HD7770 will perform better with these new driver updates.

    Edit – I was using Catalyst ver. 12.8, the one that came with my graphics card. I downloaded Catalyst 12.10 and installed it, which, I think, automatically updates my PC from 12.8 to 12.10. No improvement. Then now I did a complete reformat and installed 12.10 from scratch. Now Oblivion runs about as well as it did on my Phenom II. Not sure if the AMD installer didn’t do a good job updating the driver or something was messing with Oblivion before the reformat. I hope this somewhat vindicates AMD from the idea that the FX-8350 is slower than the Phenom II X4 925.

      • Arclight
      • 7 years ago

      If you haven’t sold the parts yet you could replace one at a time to find the culprit.

        • ronch
        • 7 years ago

        That’s what I’m thinking. I gave the new PC enough tossing around already (bringing it to several shops where I bought the diff. parts from because I was troubleshooting something right after I bought it) that I’m just not up to giving it more. I’m thinking about trying out newer drivers first.

      • Blibbax
      • 7 years ago

      Could well be your CPU, as Skyrim only uses two threads.

        • ronch
        • 7 years ago

        Still, that doesn’t explain why it seems to run slower on the FX than on the Phenom II X4.

          • Spunjji
          • 7 years ago

          The Phenom CPUs have better single-thread performance than the FX series. This does differ depending on the exact code running, though. What OS are you using?

            • ronch
            • 7 years ago

            Clock for clock the Phenoms do have better IPC than the FX chips. However, at 4.0GHz, the FX-8350 should deliver [i<]at least[/i<] equal performance to a Phenom II X4 running at 2.8GHz, if not better. Both the X3 720 (turned X4 925) and FX-8350 should have everything Oblivion needs, which is at least two cores dedicated to the game.

      • jihadjoe
      • 7 years ago

      That's not an uncommon problem with AMD drivers.

      Most people recommend completely uninstalling the old drivers and running some sort of driver cleaner before installing new drivers.

      There’s actually quite a number of third party programs that make their living off of cleaning up the mess left by AMD drivers. (atimantool, driver sweeper, driver fusion, etc).

        • Spunjji
        • 7 years ago

        This problem isn’t entirely AMD specific. It seems to be a “feature” of VGA drivers in general and manufacturer-specific variants in particular.

    • Arclight
    • 7 years ago

    That's a remarkable change.

    BTW, i still hope that you guys will take a look at Planetside 2 at some point in the future.

      • Firestarter
      • 7 years ago

      A closer look at Planetside 2 performance would be great, but also very, very hard I think. For example, I've noticed that my system produces good framerates (100+) in the starting area when I'm stationary and not looking around, and it looks pretty smooth. However, if I do a quick 180, the framerate takes a nose-dive to 20 FPS during the turn and goes back up to 100+ as soon as I'm done turning. The faster I turn, the lower the framerate dips during the turn. It [i<]looks[/i<] slow as well, in the sense that I can see that the framerate is low while turning, but it doesn't [i<]feel[/i<] slow for some reason.

      With an eye on the GPU RAM usage (using the awesome GPU-Z tool), it looks as though the Planetside 2 engine goes to great lengths to keep only the textures/models in GPU RAM that are actually currently being displayed, and actively removes textures from RAM if they aren't used for even a single frame (all this is pure speculation of course). Any performance testing of Planetside 2 would probably bump into these kinds of issues (seeing as how the engine is still not very mature), which would make it very hard to draw any meaningful conclusions. And that's even before you consider the fact that you can't possibly reproduce a single playing session.

        • brucethemoose
        • 7 years ago

        [quote<] And that's even before you consider the fact that [b<] you can't possibly reproduce a single playing session. [/b<] [/quote<]

        This^^ We need some sort of standardized benchmark run from SOE ASAP, as PS2 is easily the most demanding game I've played in a while: it needs some testing with TR's special sauce.

        Also, I assume you TR guys would play as... TR.

          • Arclight
          • 7 years ago

          Strength in unity.

          • Firestarter
          • 7 years ago

          Well, it can be done but you need 2 benchmark setups and a 3rd system to help. You’d pack the 2 players on the benchmark systems in a sunderer or galaxy, and use the 3rd system to drive/fly around and find a big battle. It’s pretty much a one-shot, non-repeatable way of benchmarking, but at least it should be a fair comparison between the 2 systems as the 2 players should see the same exact thing during the whole session.

        • kn00tcn
        • 7 years ago

        yes, motion blur hides low framerates, that’s why it still looks pretty smooth (just like crysis or gta4)

          • Firestarter
          • 7 years ago

          Motion blur? Surely you jest, no player with even a hint of competitiveness would use that.

    • raghu78
    • 7 years ago

    in borderlands 2 time spent beyond 50ms for cat 13.2 does not agree visually with the data in the frametime chart. there are 2 spikes above 50ms. one goes above 60ms (approx 67 – 68 ms) and the other is below 60 ms ( approx 57ms). the time spent above 50ms looks to be 25ms ((68 – 50) + (57 – 50) ) . it definitely can’t be more than 30ms. please verify if there is any error in posting the correct time spent beyond 50ms for cat 13.2.

      • Damage
      • 7 years ago

      The plot comes from just one of five test runs, and time beyond 50 ms varies from run to run. The value we report in the bar chart is the median of five runs.

      The numbers for the individual runs with Cat 13.2 were 38, 37, 11, 23, and 55. (Yes, they do vary some, which is why we report the median.)

      The plot comes from the middle run, whose time beyond 50 ms was just 11 ms.
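
      [For readers following the arithmetic, a small Python sketch of how a "time spent beyond 50 ms" number could be computed from a run's frame times, and how a median is then taken across runs. The long frame-time list is invented for illustration; only the five per-run totals are the ones quoted above.]

import statistics

def time_beyond(frame_times_ms, threshold_ms=50.0):
    # Sum only the portion of each frame time that exceeds the threshold.
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Invented run: many quick frames plus the two spikes raghu78 read off the plot.
run = [15.0] * 100 + [68.0, 57.0]
print(time_beyond(run))                   # (68-50) + (57-50) = 25.0 ms

# The bar charts report the median across the five runs:
per_run_totals = [38, 37, 11, 23, 55]     # the values quoted above
print(statistics.median(per_run_totals))  # 37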

    • Bensam123
    • 7 years ago

    Nice Scott… Definitely glad to hear you’re changing things for the better… Have to give kudos to AMD for being quite willing to help fix this right away too. They were probably just about as excited figuring it out as you were… Just something else that could be hammered out to make the end product so much better.

    Hopefully you guys will take a look at core parking next, and perhaps even PowerTune. I’m still unsure how to capture the behavior core parking is exhibiting; hopefully you guys can come up with something. It’s pretty easy to notice right away, though.

      • Bensam123
      • 7 years ago

      Oh and any and all 3DMark tests in the future should be replaced with Catzilla…

      [url]http://www.allbenchmark.com/details[/url]

    • Dwarden
    • 7 years ago

    Can’t wait for more DX9 games to be tested (PlanetSide 2, ARMA 2: OA) with this new driver. Anyway, good job on pinpointing the issue and sharing it with AMD so they could fix it.

    • TrantaLocked
    • 7 years ago

    I think you wrote 12.8 beta11 by accident. You meant 12.11 beta11, right?

      • TrantaLocked
      • 7 years ago

      Yeah, on the Skyrim page you used “12.8 beta11” instead of 12.11 beta 8 🙂

      • Damage
      • 7 years ago

      Doh. Fixed!

        • TrantaLocked
        • 7 years ago

        BTW, I see it on a few other pages too. I’d suggest going through each page and doing a Ctrl-F search for 12.8; it shows up in the text and in the graphs as well.

        Great review, though. This has me super excited, as I game with a 7970M. This is the thing everyone has been saying about Nvidia: how it just feels smoother.

        Quick question: does it feel like there is any extra input lag with the 13.2 beta? I wonder if AMD got this fix out so fast by simply lining up more frames by default.

          • Damage
          • 7 years ago

          Should be fixed now.

          No, I didn’t notice any weird input lag. Really don’t think that’s what’s going on here.

          • anotherengineer
          • 7 years ago

          I thought input lag was a monitor issue only?

          [url]http://www.tftcentral.co.uk/articles/input_lag.htm[/url]

            • Firestarter
            • 7 years ago

            The input lag discussed at TFT Central is the latency between the graphics card sending the frame to the monitor and the monitor actually displaying it. But before the graphics card is done creating the frame, a lot of things can happen. Triple buffering, for example, is a technique that can prevent tearing without artificially capping the framerate the way vertical sync would. To achieve that, though, the graphics card buffers an extra frame (for a total of 3 buffers instead of 2): it keeps sending the frame that was already being sent to the monitor and postpones the fresh new frame even though it’s ready. That is one of the things that can cause lag between [i]your[/i] input (your mouse movement/key presses) and the output of the monitor, which is also called input lag.

            The reason TrantaLocked asked whether Damage observed any extra input lag is that you could smooth over inconsistencies in frame delivery by doing something similar to triple buffering. That would produce some pretty graphs and make the game feel smoother, but it would also cause input lag.

            edit: -1? My feelings are hurt!
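            To make the buffering point concrete, here’s a tiny, purely illustrative sketch (the queue model, buffer counts, and numbers are hypothetical, not how any actual driver is implemented) showing that each extra buffer stage shifts the displayed image one more frame away from the input that produced it:

            [code]
from collections import deque

def display_latency_frames(num_buffers, frames_to_simulate=10):
    # Model a chain of frame buffers: each new frame is rendered from the
    # current input, and the oldest finished frame reaches the screen once
    # the chain is full (one buffer is always being scanned out).
    queue = deque()
    latencies = []
    for frame_index in range(frames_to_simulate):
        queue.append(frame_index)            # frame rendered from this frame's input
        if len(queue) > num_buffers - 1:
            shown = queue.popleft()          # oldest frame is displayed
            latencies.append(frame_index - shown)
    return latencies

print(display_latency_frames(2))  # double buffering: input is on screen ~1 frame later
print(display_latency_frames(3))  # triple buffering: input is on screen ~2 frames later
            [/code]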

            • Spunjji
            • 7 years ago

            Who gave you -1 for that?! “Corrected”.

    • Novuake
    • 7 years ago

    TR, thanks for bringing this to light. Thanks to you guys, my HD 7950 is about to become a lot smoother…

    • Voldenuit
    • 7 years ago

    Good job on AMD’s part to address this issue so promptly.

    I do have to wonder, though: since all the games tested here were in the original 7950 vs. 650 Ti article, and the “fixes” are application-targeted, what are the chances that a random game not covered in the original article has been targeted for a “fix”? I.e., does the problem still exist in the real world for regular gamers? If so, I hope that the upcoming driver with the memory management rewrite will provide a panacea.

      • Novuake
      • 7 years ago

      You mean originally the HD7950 VS GTX660ti?

        • Voldenuit
        • 7 years ago

        I’m just wondering how AMD chooses which games to patch, and if they’re simply targeting the games that TR used to highlight the frame latency issue or whether this is a more far-ranging effort that aims to alleviate the issue for gamers.

        Note that I’m not advocating any sort of conspiracy theory here, but it might be interesting to test some previously untested games with the old and new drivers and see whether work has been done on any titles other than the 7 games tested in the [url=https://techreport.com/review/23981/radeon-hd-7950-vs-geforce-gtx-660-ti-revisited]7950 vs 660Ti revisited[/url] article.

          • Airmantharp
          • 7 years ago

          He did say that the patch was based on game-specific tweaks. While that doesn’t bring much hope for an automatic blanket patch, it does at least mean that AMD understands the problem and has a selection of templates to fix it.

            • Voldenuit
            • 7 years ago

            Yeah, what I’m wondering is, was this a “let’s patch the games review sites are benchmarking so we get the press off our back” or “let’s patch the top 50 most popular games on Steam and start working on the backlog of everything else” type of driver update?

            I understand that a general fix is hopefully forthcoming with the GCN memory management rewrite, but is this current update a substantive update or just something to placate hardware sites?

            • Firestarter
            • 7 years ago

            Well even if they did this just to unrustle TR’s jimmies, they have still reproduced the problem on their own and managed to fix it. It remains to be seen if they will apply the things they learned doing this to other games or their testing in general, but it’s a step in the right direction anyway.

          • TrantaLocked
          • 7 years ago

          And it makes sense to target the games that were benchmarked. Just because AMD chose those games on purpose doesn’t mean the fixes aren’t real. Whatever they are doing is improving latency numbers, and all we can do now is wait for the driver team to develop more targeted fixes, and then the overall memory-management fix that should affect not just one game but many.

            • Airmantharp
            • 7 years ago

            In all, it should look really good in the near future.

      • TrantaLocked
      • 7 years ago

      It was stated on the last page that the memory manager rewrite, which should come in a few weeks, will help all, or at least most, DX10/DX11 games in terms of latency. That means no reliance on special, game-specific profiles.

    • mi1stormilst
    • 7 years ago

    Scott, a big CONGRATS goes out to you and your team! You guys should be really proud of getting these companies to step up to the plate. Unfortunately for AMD, I already retired my 6950 and replaced it with a more affordable and smoother-working alternative (the GTX 660). I am very happy with my switch for more than just smoother game experiences: the driver panel loads faster, and I don’t have issues driving my 2560x1440 Korean monitor 🙂 Keep up the great work, and I hope AMD and Nvidia take this to heart… again. We have seen both companies optimize for benchmarks in the past, but the result of this work is real-world benefits.

    • DPete27
    • 7 years ago

    It’s awesome to see such a quick turn-around from AMD with a fix. Let’s hope that they can make this a blanket fix to all cards and games ASAP. I wish they would give the 6000 series some attention also. Seems like they’re all in on GCN and ignoring cards that are only one generation old.

    Hopefully this can be solved completely via drivers, since AMD 8000-series cards are already shipping (obviously not at retail yet). Otherwise AMD will fall further and further behind Nvidia, just like their CPU team is behind Intel… shame.

    It really baffles my mind how intuitive and simple the idea of frame latency testing is, and how AMD has been completely oblivious to it until now (and Nvidia didn’t catch on until less than 2 years ago). Shouldn’t their engineers have a better understanding of how the card delivers the game “experience” all the way up to the point where it hits the user’s eyes? That’s what happens when you get complacent and don’t ask questions like “Hmm, how come we can crank out 100 FPS and gameplay still looks choppy?” “Eh, must be someone else’s fault.”

    Scott, you’re a genius, I thank you for bringing this to the public eye.

      • MadManOriginal
      • 7 years ago

      [quote]Seems like they're all in on GCN and ignoring cards that are only one generation old[/quote]
      No way, AMD’s driver support is the bestest! And if you’re not using a card from the very newest generation, you are a sub-human non-enthusiast! We would actually be better off if Nvidia and AMD gave up on improvements for older GPUs, because who cares, THEY’RE OLD! At least, [url=https://techreport.com/news/24219/amd-former-execs-handed-trade-secrets-to-nvidia?post=702233#702233]that’s what I’ve been told by BestJinjo[/url], whose awesome math skills (did you know 4 is greater than 8??) mean he must be right!

        • Spunjji
        • 7 years ago

        Way to air irrelevant grudges.

          • MadManOriginal
          • 7 years ago

          Thanks!

        • BestJinjo
        • 7 years ago

        Twist information much to suit your agenda? Go back to that thread and name a single person who said AMD cards are supported for longer than NV cards in regard to official WHQL drivers. I never made such a statement. Your entire insinuation that cards should be supported for 8 years is ludicrous.

        You also mentioned this idea that I think 4 > 8. First of all, HD 5000 cards are still supported, so please don’t twist facts:
        [url]http://support.amd.com/us/gpudownload/windows/Pages/radeonaiw_win8-64.aspx[/url]

        Second of all, it costs companies a lot of money to keep supporting outdated cards like the HD 2000-4000 series, especially companies such as AMD that aren’t financially thriving, to put it mildly. IMO, three complete generations of card support, or roughly 5-6 years, is more than sufficient. In the context of TR’s GPU testing, almost any modern GPU will be worthless for modern games way before that. For most of us reading the HD 7950 vs. GTX 660 Ti review, whether the 7950/660 Ti will be supported for longer than 5 years is irrelevant, since we’ll have upgraded anyway.

        You also failed to address the point I made that, starting with the SB/IVB generations, a lot of people already have a GPU for 2D work that comes free with the CPU. So why would you be using a GeForce 6 or 7 for Windows 8? If these people don’t upgrade for 8 years and want a GPU for 2D work, they can use Intel’s. If someone is using a GeForce 6 or 7, chances are their entire system is slow (Core 2 Duo, mechanical drive, etc.). You are just making a point for the sake of making a point, not looking at it from the perspective of practical user applications. Why would anyone use a hot and power-hungry GeForce 8 for Windows 10, for example, when they could be using the GPU inside their CPU?

        You are making a big deal out of nothing. People here are gaming enthusiasts, and a lot of them upgrade GPUs every 2-3 generations, if not earlier. There is no reason anymore for 99% of us to keep a card for 6-8 years. It’s better that NV and AMD spend their resources on supporting the last 3 generations of cards, because those are the ones that can actually play games well. That’s money well spent on optimizations that impact gamers. Releasing WHQL drivers for GeForce 6 and 7 is a waste of time, since there is no more practical performance to be had, and what kind of 2D errors are you fixing exactly? You keep trying to tell people that having WHQL drivers for 8-year-old cards is important for 2D OS support, but you haven’t explained once why. You can still use a card like the HD 4000 series in Windows 7 or 8, and you don’t need January 2013 WHQL drivers for it to work properly. You can use the September 25, 2012 ones:
        [url]http://support.amd.com/us/kbarticles/Pages/catalystlegacywin8.aspx[/url]

        Finally, even for basic 2D OS acceleration, HD content/H.264/Adobe Flash acceleration, superior 2D and video content IQ, etc., modern cards are still worth upgrading to for non-gamers. For instance, Windows 8 supports Partial Resident Textures, which speeds up the snappiness of the OS on DX11.1-capable GPUs:
        [url]http://www.techspot.com/news/49508-microsoft-details-windows-8s-improved-graphics-performance.html[/url]

        The 2D IQ in GeForce 6 and 7 is atrocious to begin with; even GeForce 8’s is poor. You need at least a Fermi card for proper 2D quality. It’s pretty laughable that you keep defending NV wasting money on WHQL drivers for these outdated cards while not realizing that their IQ is terrible even for 2D operation. Superior idle power consumption and the new features of modern cards mean that for most people it’s better to buy a next-gen budget card in 5-6 years for a future OS than to keep, say, a GTX 550 Ti for a future Windows environment in 2018 (or, as I said, just use the GPU in their SB/IVB/Haswell, etc.).

        Given my entire post, AMD is doing the smart thing by focusing on its latest generation of cards, since that’s where the issues need to be resolved first and foremost. It’s good to see them taking proactive steps and responding to TR’s criticism to fix the frame latency issues. If you are asking for the same dedication from AMD on HD 4000-5000 cards, you are not being realistic, or you are completely ignorant of how budgets work at large companies. If you want 8 years of driver support for games, just buy a PS4/Xbox 720. PC gaming is targeted at people who want to stay at or near the cutting edge. For gamers who want 8 years of support and who can’t afford to upgrade even once in that period, consoles are the logical purchase.

          • clone
          • 7 years ago

          ATI video cards historically did get longer driver support, because ATI historically kept tweaking its architectures for longer periods while Nvidia pursued a 9-month product cycle and an 18-month replacement.

          It’s why ATI’s Radeon 8500 was supported for so long, which validated that reputation; core elements of R200 were still being used in the HD 4870 series, for heaven’s sake.

          With DX10 things changed, and those days of prolonged support seem to have vanished, leaving both Nvidia and AMD following much the same path with far shorter support periods… but for a while, ATI/AMD was the company offering much longer support.

            • Scrotos
            • 7 years ago

            Then at the time of DX10, Nvidia used the same GPU for five generations, right? 8800GS/GT/GTS512, 9600GSO/9600GTX/9800(all), GTS150, GTS240/GTS250, GT340? G92, I’m lookin’ at you!

            Ye gods, the more I looked back, the more products I found doing that. I shudder to think how many generations it spanned in the mobile segment.

            • BestJinjo
            • 7 years ago

            I played around with a Radeon 8500 and a GeForce 3 Ti 500, and at the time ATI’s drivers were piss-poor. Having Radeon 8500 support for 5 years was meaningless, as the card became useless for games after 2 years. Actually, during that period, graphics evolved at a frantic pace and you were almost forced to upgrade every 2 years just to play games. For example, the Radeon 8500 was basically useless for HL2 above 1024x768, and a chug-fest in games like Doom 3 or Far Cry 1.

            As I said, what’s the point of spending money today validating WHQL drivers for so many years? Just because it was done in the past doesn’t mean it’s logical to continue doing so in the context of today’s GPU landscape. Are you going to be playing BioShock 3, Metro Last Light, or Crysis 3 on an 8800 GT/HD 2900 XT? If you just need to build a cheap HTPC, Core i3 / AMD APUs are good enough for those users. If you are using old cards for 2D, not only are you missing out on superior 2D IQ, HD video acceleration, and other features, but the idle power consumption of cards like the HD 4870/4890 is 60-75W:
            [url]http://www.techpowerup.com/reviews/ATI/Radeon_HD_5870/28.html[/url]

            It would cost more money in power consumption over 5-6 years to keep using these outdated cards for future OS support than to buy a cheap DX11 GPU. You can just buy a sub-$20 DX11 GPU for 2D OS work that sips power:
            [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814150600[/url]

            MadManOriginal’s comments are completely ignorant once we consider all the downsides of using 6-8 year old GPUs for modern OSes, including their inferior 2D IQ and higher idle power consumption, while he isn’t even considering that non-gamers can buy a sub-$20 GPU. Essentially, he is making a big deal out of nothing. He has provided no support for why NV and AMD should continue spending millions of dollars on making WHQL drivers for GPUs older than 5 years, given everything I said.

            • clone
            • 7 years ago

            I had 3 comps in the house, with one at one time using a Radeon 8500. Initially it kinda sucked, but over time it came down to usage: if I like a video game, I’ll replay it every so often.

            As an example, after a several-year hiatus I’m currently playing all of the MechWarrior 4 games: Vengeance, Black Knight, and Mercenaries (using an HD 6670).

            For me, that long-lived driver support has a very high value. You are correct in saying the Radeon 8500 wasn’t that great when it launched; I bought one and then sold it soon after because of its issues. But a year later I got one in trade while selling off my high-end card at the time, and not surprisingly (because I was following the news about driver updates) it wound up running very nicely, since by then the drivers were fixed.

            On a side note, there is no such thing as an issue-free video card; Nvidia, AMD/ATI, and 3dfx all had issues, some more pronounced than others. To be clear, I’m not saying the card became incredible or faster than modern hardware, but it just plain worked. When I went back to play FreeSpace 2, it was the Radeon 8500 I used; when I went back to playing Unreal Tournament GOTY, it was the Radeon I was using.

            That’s despite having a Ti 4600 at the time, and later a Radeon X1950 XTX, and later still an 8800 GT. Especially in the case of the 8800 GT, I was royally pissed off because several older games were completely unplayable on it.

            Those games went to the Radeon box, and I also had to note that DX10 hardware was potentially problematic with DX7 and DX8 games. The Radeon box was using an X800 GTO, built around the by-then-ancient Radeon 8xxx lineage, which was still getting support at that time.

            I would never say that my use is typical, but while the Radeon 8500s are long since gone, I still have a soft spot for them: unlike any other card, they eventually became greater than they were, which is far better than the flash-in-the-pan cards we are getting now that get dumped after 3 years.

            It was a different era… lol, and it was only 5 years ago.

            • l33t-g4m3r
            • 7 years ago

            The 8500 is perfect for retro boxes, especially the 128MB model. Great 2D, and free AF and AA that’s superior to 3dfx’s. ATI’s older cards have always had value in longevity, with the exception of their DX10 series. The 4870 was a good card, but it was effectively useless long-term with nobody supporting 10.1.

            On the Nvidia side, Fermi is the only card that I’ve felt has offered any long-term viability, with the partial exception of the 8800 series, since they had all kinds of non-performance-related issues.

            [quote]the Radeon 8500 was basically useless for HL2 above 1024x768, and a chug-fest in games like Doom 3 or Far Cry 1.[/quote]
            [quote]Are you going to be playing BioShock 3, Metro Last Light, or Crysis 3 on an 8800 GT/HD 2900 XT?[/quote]
            As for Jinjo’s comments on the 8500, that card actually was capable of playing Doom 3 and even Halo, albeit at lower resolutions. (Nvidia’s FX was the real stinker, not the 8500.) Still, the 8500 wasn’t meant for that, being DX8 while those games were DX9. People were using the 9800 during that period, not the 8500, unless you were hardcore about older games like Quake 3, StarCraft, Diablo, or UT. Also, Metro is playable on a 4870.

            • clone
            • 7 years ago

            I played through HL2 the second time with the 8500, and it was fine for what it was; I wasn’t expecting maxed-out details and 1920x1200 resolution with it.

            I personally had bad luck with my 8800 GT, but Fermi is benefiting from the same kind of support that ATI offered for the 8500, because the architecture is still in play and units are still available for sale today.

            I got the impression BestJinjo has the position that if it’s not maxed out, it’s worthless. I used to think that way until the cost set in; it’s the reason I’m still using my GTX 460 today. I’m not even sure I like the card all that much… I don’t hate it, or I wouldn’t have it, but I can’t say I’d recommend it: there have been some older-game incompatibilities, a few graphics glitches that appear occasionally, it can’t handle max details, and when I put my system to sleep the monitor won’t sleep and I have to shut it off manually.

            • UltimateImperative
            • 7 years ago

            [quote]The 4870 was a good card, but it was effectively useless long-term with nobody supporting 10.1.[/quote]
            How so? 10.1’s big feature was the ability to run shaders on AA samples instead of pixels. Now that the 4870 is a bit long in the tooth, I doubt many people are running AA with it. Sucks for AMD that they implemented a bunch of features nobody used, but the card is still useful for running new DX9/DX10 titles at medium settings.

            • clone
            • 7 years ago

            I thought that comment was silly as well; just because DX10.1 never took off doesn’t mean the card didn’t work.

            That’s like saying all of Nvidia’s graphics cards made over the past 6 years are worthless because PhysX has barely any game support at all.

            The cards still work fine. In the 4870’s case, they were an example of a very long-lived product that was inexpensive, performed well on release, and aged very well over time.

            I still use integrated audio instead of the audio on the video card; does that make the video card worthless? (Rhetorical.)

            • Kaleid
            • 7 years ago

            I had an 8500 too, and it worked just fine. It’s only with the HD 7xxx series that I’ve had trouble worth complaining about.

            • clone
            • 7 years ago

            A totally new architecture leads to a few gremlins; has it really been all that bad?

            I haven’t bought one yet, so I can’t say, but really, has it been that bad?

            • Kaleid
            • 7 years ago

            It works, but there has been that stuttering with some games, plus flickering.

            Other than that, it has only been the occasional game over the years with texture problems, like the water in Mass Effect 1. But that was quickly fixed.

            • clone
            • 7 years ago

            Thank TR for getting it fixed.

            While I saw the stuttering and agree that it should have been fixed, I never considered it as absolutely horrid as some made it out to be.

      • xeridea
      • 7 years ago

      There is a lot more attention for GCN because it is a new architecture. The old architecture has been around for several years and has already been optimized a lot.

      • TrantaLocked
      • 7 years ago

      I don’t want AMD to forget about VLIW, but right now GCN needs an almost incomprehensible amount of work. Not that GCN isn’t a good performer, but there are a lot of bugs and a ton of performance potential to be tapped, both by AMD and by game developers.

        • clone
        • 7 years ago

        I don’t believe the processes in place at AMD, Nvidia, or Intel work the way you are implying, where they go all in or out as they chase fires. AMD most likely has several teams in play at any given time; the team working on VLIW isn’t suddenly being rushed into another room to work on GCN and then running back to VLIW. They may pull some people off teams for better focus, but it’s highly doubtful all work stops, or else nothing would ever get finished.

        It’s not like there are no new video games ever coming out, nor a moment in time when every single glitch gets fixed and the work can stop so that everyone moves onto GCN or VLIW.

      • Heighnub
      • 7 years ago

      Having worked at a graphics semiconductor company, I doubt they’ve been [i]completely[/i] oblivious to it: they’ve just focused most of their efforts on increasing throughput, as this is what the press, and hence gamers, and hence their marketing department, have always wanted.

    • ColdMist
    • 7 years ago

    So, what ranges of cards will these tweaks help? Only the 7000 cards, or 6000 cards as well? Or even older ones?

      • HallsMint
      • 7 years ago

      It probably only affects GCN cards, considering that the focus is on the HD 7950.

      • Damage
      • 7 years ago

      Good question. Need to check into that.

      • zimpdagreene
      • 7 years ago

      Yeah, that’s a good question also. I would think it would be something they could pass down through the card series. Also, would this help in a 7950 CrossFire or tri-fire situation? Hmm, maybe some really good things to come.

      • TrantaLocked
      • 7 years ago

      The question is, does this issue even exist with 6000 cards and under? I think you are asking for there to be fixes for nonexistent issues.

        • nanoflower
        • 7 years ago

        From what has been written in the past it seems that this problem only exists on GCN (7000 series cards) so it won’t be an issue for older cards. In addition I gather the driver is following different paths for the GCN cards so the changes shouldn’t impact earlier cards. Hopefully Scott can report back shortly with what AMD has to say about the impact of these changes on older cards.

        • anotherengineer
        • 7 years ago

        One way to check: get Damage to put a 6870/6850 through the spin cycle and see.

          • zimpdagreene
          • 7 years ago

          Yeah, while everything is open, run a test on the 6000 cards. I know plenty of people use them. I still have a 4000-series card running and one 3000 still spinning in an old computer. It could be very interesting to see the results.

        • cmrcmk
        • 7 years ago

        I get a lot of stuttering on my 6970 so I’m pretty sure the 6000 series shares this problem.

      • jensend
      • 7 years ago

      [url]https://techreport.com/discussion/24218/a-driver-update-to-reduce-radeon-frame-times?post=702623[/url]

      • juampa_valve_rde
      • 7 years ago

      If I have to guess, I hope it’s just a software issue. It’s documented as resolved with a buffer tweak, so maybe a driver buffer tweak can make it to all the supported GPUs, since it is game-based. The other new feature, the new memory manager, may also be a general improvement to memory management on any card. The thing is, maybe they will update these two items only in the libraries for the 7000 series, but if they are really committed to users, they will release an update for all the supported cards (7000, 6000, 5000, and IGPs). These tweaks could also be added to a refresh of the legacy driver that covers the 4000 cards, since it is a DX9 issue.

        • jensend
        • 7 years ago

        Or you could have read AMD product manager David Baumann’s post in this thread, which I linked just above your post; it answers these questions and avoids the need for speculation.

          • juampa_valve_rde
          • 7 years ago

          Thanks, I didn’t see that post. So these tweaks are coming to all the cards on the main Catalyst driver, but the legacy drivers for older series are still a mystery. Too bad the memory management update is only for GCN.

            • mkk
            • 7 years ago

            Only the GCN architecture has any need of rewritten memory management, as not having it is the underlying problem for GCN. Can’t be bad not to fix what’s not broken.

            • Shobai
            • 7 years ago

            So many negatives! My brain hurts interpreting that last sentence…

    • MKEGameDesign
    • 7 years ago

    The more I learn about Skyrim, the more I think it’s a really bad game for graphics testing.

    To ensure uncapped framerates you have to disable vsync but this leads to VERY BAD THINGS in Bethesda games. A ton of things in the scripting and physics parts of the engine are directly tied to vsync. There’s a very good reason it’s not a user-exposed option.

    Not to mention testing Skyrim means you’re playing the equivalent of a blank canvas. Mods are the real game 🙂

      • jihadjoe
      • 7 years ago

      Adaptive vsync has your back.

    • jessterman21
    • 7 years ago

    “…cure what AILS the Radeon?”

    Amazing follow-up article, and some incredibly encouraging results from AMD! I bet it feels extremely satisfying that you’ve moved mountains with these frame-time metrics.

      • jessterman21
      • 7 years ago

      😀

        • Damage
        • 7 years ago

        😉

    • phez
    • 7 years ago

    That was quick .. !

    • CampinCarl
    • 7 years ago

    I was hoping this was going to be more of a blanket fix, but I guess we still have to wait for the memory manager change for that. Hmm…

      • DancinJack
      • 7 years ago

      Drivers do take a little time to make, yo. At least they’re addressing it.

        • derFunkenstein
        • 7 years ago

        Hope for a miracle every time and you’ll always be disappointed. At least it’s consistent.

        • CampinCarl
        • 7 years ago

        Well, my point was that I would have preferred that they didn’t waste time with some targeted fixes on certain games.

          • UberGerbil
          • 7 years ago

          Even once they’ve addressed the more systemic problems (the memory manager will be a big part of this, but it may not be the only part), they’re still going to need to do targeted optimizations for specific games. So the effort isn’t necessarily wasted, and there’s no certainty that it could have been redirected to get the memory manager rewrite finished sooner ([url=http://en.wikipedia.org/wiki/Brooks%27_law]Brooks' Law[/url] aside, developers aren’t entirely fungible resources to be redeployed on anything with zero ramp-up). And there’s no denying the PR benefit of getting a little something out sooner rather than a big something out later (after weeks of increasing nerdrage and the inevitable accusations of footdragging or worse).

    • StuG
    • 7 years ago

    Very nice to see that they were able to get the issue under control to an extent. It still doesn’t seem totally solved, but as they said they are still working on other optimizations. Good write up!

    [url]http://bupp-portal.com/pictures/fp.jpg[/url]

      • uartin
      • 7 years ago

      post edited
