AMD says frame pacing beta driver due in June/July timeframe

Earlier this week, we showed you how microstuttering taints the appeal of AMD’s latest Radeon HD 7990 graphics card. We also saw a prototype frame pacing feature smooth out that uneven frame delivery, resulting in noticeably more fluid graphics in some games. At the time, we didn’t have a good sense of when the prototype tech would make it into a publicly available beta, let alone a WHQL-certified release. In a Twitter Q&A session yesterday, however, AMD shed more light on when a beta can be expected. Here’s the exchange:

Brolivia Wilde

@AMDRadeon when will the new prototype driver will be ready for the public to download? as in, the driver that is addressing frame metering

AMD Radeon Graphics

.@ILoveHitMarkers We’re anticipating a beta release in the June/July timeframe. I should note that single-GPU pacing is already fixed. ^RH

Folks with CrossFire configs and dual-GPU Radeons like the 7990 may have to wait 2-3 months for the first public beta driver with frame pacing support. While it’s nice to see AMD taking steps to address the microstuttering problem, we should note that Nvidia has had similar frame metering tech since at least the G80 generation, as we learned in our original inside-the-second article back in 2011. The Radeon team is playing catch-up on this front.

Because techniques like frame metering and frame pacing are applied at the end of the pipeline, they may not be a silver bullet for multi-GPU stuttering. Uneven frame dispatch early in the pipeline can impact in-game animation, causing jitter in the contents of the smoothed-out frames. The effectiveness of frame smoothing implementations may vary from game to game depending on how the engine maintains its internal timing.
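
As a rough illustration, here is a toy model of frame pacing in Python. It is only a sketch with invented timestamps, not either vendor’s actual implementation: the pacer evens out when frames are flipped to the display, but the simulation time the engine sampled at dispatch is already baked into each frame’s content.

    import statistics

    # Hypothetical AFR dispatch timestamps (ms): two GPUs alternating,
    # with one finishing its frames shortly after the other.
    dispatch_ms = [0.0, 4.0, 33.0, 37.0, 66.0, 70.0]

    # The engine samples its clock at dispatch, so each frame's *content*
    # encodes these uneven animation steps.
    content_steps = [b - a for a, b in zip(dispatch_ms, dispatch_ms[1:])]

    # A simple pacer: hold "early" frames so buffer flips land roughly one
    # average interval apart (an adaptive delay, as in frame metering).
    target = statistics.mean(content_steps)
    present_ms, clock = [], 0.0
    for t in dispatch_ms:
        clock = max(clock, t)   # a frame can't be shown before it exists
        present_ms.append(clock)
        clock += target         # schedule the next flip one interval out

    present_steps = [b - a for a, b in zip(present_ms, present_ms[1:])]

    print(content_steps)   # 4/29 ms alternation: jitter baked into the animation
    print(present_steps)   # ~14/19 ms: delivery smoothed, content still uneven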

Comments closed
    • ronch
    • 6 years ago

    It’s kinda sad to think that we’d be waiting for this driver when GCN has been on the shelves since early last year.

    • brucethemoose
    • 6 years ago

    Like you said, frame metering doesn’t really fix the problem of uneven frame delivery. It just sweeps it under the rug, adds a little bit of input lag, and makes TR’s own frame-based testing less useful.

    To be fair, AMD/Nvidia can’t really smooth out in-game content with driver updates: that’s the responsibility of game devs.

      • Azkor00
      • 6 years ago

      Totally agree it started with the game devs doing sloppy work and cutting corners, but I’ve seen several things in games that should have been fixed by the devs get fixed by AMD/Nvidia instead, so it seems like the norm. AMD should have fixed this a long time ago anyways.

      • Jason181
      • 6 years ago

      It actually adds output lag; the input is still being processed by the engine while the GPU is waiting to flip the buffer.

      While devs bear some responsibility, you can hardly expect them to test every modern GPU in use with drivers that are updated almost monthly. The fact that one company is having a hard time with multi-GPU and the other company has found a partial solution tells me that it’s at least partially solvable by drivers.

      I have 6970s in crossfire, and although the problem is nowhere near as pronounced as it is on the 7xxx series (based on articles and comments by owners of such configurations), it does actually feel smoother with vsync enabled, which is the opposite of what you’d traditionally expect.

      It indicates that with vsync disabled there’s an inconsistency in frame times that vsync helps to smooth out in conjunction with triple buffering.

        • clone
        • 6 years ago

        I’m not sure how much “responsibility” there is to pass around so much as this is the business model that exists.

        on the developer side of things there is a lag time between coming up with the idea for a game and releasing it, which will always require a crystal ball with all its inconsistency. regarding GPU support it’s not just about the latest GPUs but everything from the past 5 years that needs to be considered.

        for Nvidia, Intel and AMD: they could be more involved, but this will always result in constant effort over how to best offer support as needed and where it will best yield fruit….. winners and losers will have to be picked.

        is it perfect, no but does it work… sure, and pretty darn well to be honest, given how rarely serious issues come to light.

        regarding the new feature AMD will be introducing to account for frame pacing, it’s not so much an effort to sweep the problem under a rug as it is a solution for the realities of the business.

        in a perfect world of unlimited budgets and nonexistent not for profit business models I’m sure a perfect balance could be found but it’s capitalism not communism and communism failed in the end anyway.

    • Meadows
    • 6 years ago

    “Single-GPU pacing is fixed”?

    .
    .
    .
    They had to “fix” [b][i]single-GPU[/i][/b] pacing?

      • Waco
      • 6 years ago

      That was my first thought as well…

      • Goty
      • 6 years ago

      Intentional misunderstanding FTW!

      • cynan
      • 6 years ago

      Would this not be likely in reference to the stuttering encountered in the now infamous HD 7950 vs GTX 660Ti review from last fall? That issue may have largely been fixed by now… What’s the problem?

      • Fighterpilot
      • 6 years ago

      yeah…kinda like how NVDA had to fix the pacing on their 5 series Fermi cards….
      Selective memory much??

    • cobalt
    • 6 years ago

    Have there been any attempts to quantify the change in the contents of new frames?

    Conceptually, even with simple image differencing you might be able to see every-other-frame have lower or higher differences, and this could be measured on the output frames, so it would be independent of the Present timings. There are obvious problems, e.g. image differencing is too naive, you’d need something more advanced — though I’m wondering if you couldn’t just e.g. take the motion estimation piece out of a video encoder and use that as the difference value. Also, it would only work during phases where you expect consistency, e.g. when you’re simply walking across scenery and expect a consistent frame-to-frame change, not blowing stuff up sporadically.
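
    As a rough sketch of the measurement proposed above (assuming frames captured as NumPy arrays, with mean absolute pixel difference standing in for real motion estimation):

        import numpy as np

        def frame_deltas(frames):
            # Mean absolute pixel difference between consecutive frames.
            # frames: sequence of HxWx3 uint8 arrays from a capture tool.
            deltas = []
            prev = None
            for frame in frames:
                cur = frame.astype(np.int16)  # widen so subtraction can't wrap
                if prev is not None:
                    deltas.append(float(np.abs(cur - prev).mean()))
                prev = cur
            return deltas

        def alternation_ratio(deltas):
            # Ratio far from 1 means even-indexed frame pairs moved more (or
            # less) than odd ones: the every-other-frame signature in question.
            even, odd = deltas[0::2], deltas[1::2]
            return (sum(even) / len(even)) / (sum(odd) / len(odd))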

    • l33t-g4m3r
    • 6 years ago

    Just ordered a 7950 and a 3d monitor. Couldn’t care less about CF or barely perceptible frame times, which will soon no longer be a problem.

    The 660 ti just doesn’t have any value in comparison. It’s overpriced, under-performs, and the 7950 comes with 3 games. Yeah.

      • internetsandman
      • 6 years ago

      With a 120hz monitor you essentially need to always be above 120 frames per second for it to work optimally. I’d imagine vsync would be kept on for such a scenario (I don’t own a 3D setup, so I don’t know for sure), but even if it’s not, any drop below 120fps and you’re likely to notice big time, so either your settings will be turned down to compensate for demand or you simply won’t meet it.

        • Farting Bob
        • 6 years ago

        You can use vsync to run games at 60fps just fine if they can’t keep up with 120fps. You shouldn’t notice much difference in most situations while gaming.

        • l33t-g4m3r
        • 6 years ago

        I actually went with passive 3d because of that. It’s a low latency IPS screen that uses polarization. Overpriced seizure inducing glasses not required.

        • Firestarter
        • 6 years ago

        [quote]need to always be above 120 frames per second for it to work optimally[/quote]

        no, even at 80FPS you're still getting 20 more FPS that you can actually see with a 120hz monitor versus a 60hz monitor. That's a 33% improvement, and definitely visible. Of course more is better, and a solid 120FPS would be awesome, but even with FPS fluctuating between 50 and 80, a 120hz monitor makes for a smoother experience.

        Don't forget about tearing either! It's hard to explain without graphs, diagrams and what have you, but with vsync turned off, a 120hz screen gives you less tearing and a smoother, less jerky display even at less than 60FPS!

        • Meadows
        • 6 years ago

        No.

      • Laykun
      • 6 years ago

      “barely perceptible frame times”

      You’ve obviously never had CrossFire then. I had it, and it was the in-your-face-terrible kind of perceptible.

        • l33t-g4m3r
        • 6 years ago

        Obviously. I mentioned that in my second sentence. Dual-GPU setups don’t interest me in the slightest, or at least not until they start being able to share memory and improve efficiency. I also think AFR is a joke compared to alternatives like scan line interleave. Either way, if I were going to consider dual GPU, it would be Nvidia, since AMD hasn’t taken it seriously for a long time. Even at that, SLI is twice the power consumption, heat, noise, and cost. Not interested.

          • Jason181
          • 6 years ago

          SLI no longer stands for scan line interleave because it’s not practical for modern games; I think it’s now “scalable link interface,” or something. Nvidia uses AFR too.

          Scan line interleave was better, and if it was doable (efficiently) in modern game engines you can bet both companies would be using it because it does avoid a lot of the problems that AFR introduces.

            • clone
            • 6 years ago

            I’m not sure, but didn’t scan line interleave suffer from tearing back in the day, the only difference being those were CRTs rendering it and they’d minimize its effects?

            there were also syncing issues with it if an error occurred during the rendering of a frame. it was the least driver-dependent of the dual-GPU implementations, and in so being quite a bit more elegant, but it wasn’t perfect.

            if it offered significant benefits Nvidia’d be using it, given they got the rights to it after buying 3Dfx.

            • Jason181
            • 6 years ago

            Scanline interleave only tore if you had vsync off. You’re right that they’d be using it if it were practical.

      • Krogoth
      • 6 years ago

      You might want to check prices again.

      The only card that beats the 660Ti in its price range is the Tahiti-based “7870 LE”, assuming you overclock the 7870LE (which has plenty of headroom). 7950 is trading blows with the 670.

        • l33t-g4m3r
        • 6 years ago

        I’m comparing the 3GB 660 Ti prices vs the 7950. There might be cheaper 2GB models, but I’d rather take 3GB if I’m plunking down a couple hundred. Helps with hi-res texture packs.

        • sschaem
        • 6 years ago

        The 7950 Boost 3GB is $279, the 2GB 660 Ti is $249. That’s not a big difference.

        Considering that you get Crysis 3, BioShock Infinite and Far Cry

    • Chrispy_
    • 6 years ago

    Why does it take an article from TR to provoke AMD into a response?
    Don’t get me wrong, a response is good and the appropriate action to take, but I can’t overlook one [b]VERY[/b] important point: we, the buyers of dual-GPU solutions, have been [b]SCREAMING[/b] at AMD to sort out the micro-stuttering issue since 2005.

    Forums; RMAs; trade shows; comments in their product reviews around the web; feature requests; beta driver feedback.... The list is long, the number of people involved is epic, and the acknowledgement of the issue by ATI (and now AMD) has been practically nonexistent until recently.

    Just to repeat the timescale again for its significance: [b][i]2005[/i][/b]

    Thank you, that is all.

      • someuid
      • 6 years ago

      Maybe TR was the first to act professionally with solid, concise data, engage AMD directly and discreetly without trying to embarrass them, and give them an opportunity to gracefully acknowledge and work on the issue publicly. And yes, it takes all three of those items above, not just one or two of them, to bring about change.

      SCREAMING at anyone is never going to get their attention or their cooperation. It only pisses them off, makes them ignore you, and breeds resentment.

        • Chrispy_
        • 6 years ago

        Well, ‘screaming’ was metaphoric, rather than literal.

        My main point was that the community of GPU buyers has been far from silent on this issue, using all the official/unofficial channels at their disposal over the last eight years to voice the stuttering issues to ATI/AMD.

        [b]Some will have screamed and ranted on forums, for sure;[/b] that isn’t productive, but it is a publicly visible issue that AMD can see, as well as bad for marketing.

        Many, however, will have raised the issue via warranty claims, queries over RMA validity, AMD forums, driver feedback forms, bug reports, questionnaires, developer conferences, discussions with AMD reps at trade shows, and professional dealings between software developers and the AMD driver teams.

        It is [i]these[/i] professional, courteous, properly documented lines of communication that I am irate about. What is AMD’s excuse for ignoring the issue when it’s been raised in this way?

      • Game_boy
      • 6 years ago

      Because it was finally a serious threat to sales.

      All those people “screaming” had already handed over their hundreds of dollars. And they’d probably buy Crossfire next time as well because they had no way of telling objectively if it was better or not.

      But TR had the potential to turn off potential buyers with hard data.

      • PixelArmy
      • 6 years ago

      The TR article forced this, because it shifted the way most hardware sites review. Lots of what you listed is “hidden.”

      Vote with your wallet. If you scream at them but continue to buy their product, they have little incentive to change. (Strictly talking dual-GPU cards; CrossFire is fuzzier since cards are bought individually.) Have you continued buying duals? Fool me once…

      I understand the need for competition and for AMD to hang around, but stop lowering expectations. Look at the “inside the second” articles (or really any article concerning AMD) and you’ll find a more vocal group of posters bending over backwards to come up with excuses.

        • Chrispy_
        • 6 years ago

        I’ve never bought a dual-GPU card myself, though I’ve played with a couple.

        The real issue with Crossfire being poor is that people buy a second card to boost performance and it doesn’t really boost performance.

        These sales show up as single-card sales, and I would imagine that crossfiring two cards makes up the vast majority of the dual-GPU market. Whilst not a captive audience, these buyers are buying AMD because they’re happy with their single-card performance and are therefore vendor-locked to AMD when looking to dual GPU as an upgrade option.

      • HisDivineOrder
      • 6 years ago

      So true. The problem was that AMD did not see any financial inclination to make a change to their driver when nothing could be proven. As long as the problem was unproven, they could shrug and say, “We don’t see a problem.”

      Then suddenly there was proof to the long-held and known opinion that CF was crap. Suddenly, they HAD to respond, especially when nVidia sighed and said, “Yeah, AMD. Here’s how you look for it. Let us show you why we’re so much smoother than you and have been smoother for years. We hoped you’d figure it out for yourselves eventually, but after the fifth year, it’s getting really rather pathetic…”

        • auxy
        • 6 years ago

        Eighth, you mean?

          • Chrispy_
          • 6 years ago

          No, fifth:

          Nvidia introduced frame metering in the G80, 2008; SLI was a godawful, unevenly micro-stuttery mess before then, too.

      • indeego
      • 6 years ago

      Anyone with half a brain saw CF/SLI as hackish way back then and has stayed far away since. Just read through the horror stories of getting basic games to work months after release with major bugs.

      • ronch
      • 6 years ago

      Because nobody reads your email and relatively few read your forum posts. TR, however, is read by billions. Billions, dude, billions.

      • clone
      • 6 years ago

      if I were to take 2 guesses, 1 would be because TR was able to reproduce the issue in an easy-to-understand way that left AMD no option.

      the 2nd would be that AMD’s driver team has the time to spend on more involved issues since they abandoned the monthly Catalyst launch schedule. if true, that would seem like an unintended consequence and would explain in part why they abandoned it.

      on a side note, the frame metering fix has been in the works for a while, given they had a demo of it ready for the HD 7990’s launch, and the completed version is due in less than 2 months, which shows that in this case they were being proactive and not simply responding.

    • Prestige Worldwide
    • 6 years ago

    They should have delayed the release of the 7990 until this driver was ready.

      • Silus
      • 6 years ago

      Precisely.

      • HisDivineOrder
      • 6 years ago

      If they had done that, they’d have been much closer to the end of Q2, and the dual-GPU card would have had less time to impact high-end GPU sales. The whole point of rez’ing this card from the dead was to put it up against the Titan (and to a lesser extent the 690) to keep the high end somewhat of a question mark.

      • Bensam123
      • 6 years ago

      Why delay something that can be fixed with a driver update? Most users won’t even notice anything is wrong unless they’re made aware of the situation. As evil as this sounds, it’s done all the time and keeps big companies in business. No one is going to die if it’s released early, and it still does its job reasonably well, to the point that it’s still better than a single card. It’ll just get better with newer driver releases.

    • anotherengineer
    • 6 years ago

    “we should note that Nvidia has had similar frame metering tech since our original inside-the-second article back in 2011”

    “by Scott Wasson — 6:08 PM on September 8, 2011”

    Wow it seemed like about 8 months ago, how time flies 😐

    • Silus
    • 6 years ago

    So, even more time to wait for something their customers should already have had for a while now? And even after that wait, they’re calling it a beta…
    I honestly don’t get why some people cut AMD slack on this… is it a masochist thing, or is it the simple fanboy ideology that everything their favorite company does is the best thing even if it hurts them (which in itself is masochist enough, I guess…)?

      • rxc6
      • 6 years ago

      I have a SINGLE-GPU AMD card. Why should I be giving AMD crap again? And why does me not complaining make me a fanboy who thinks “everything their favorite company does is the best thing even if it hurts them”? My unlocked 6950 was waayyyyy beyond anything Nvidia based on performance/price, and that is all that I look for. If I were buying now I’d go for a 7950 for the same reasons; the 670 just seems overpriced to me.

      • l33t-g4m3r
      • 6 years ago

      Most people don’t buy every game day 1, nor use CF/SLI. The ones who impulse buy are fans of a series, like Starcraft. Nobody is buying every single specific game that AMD has problems with on day 1. By the time people actually get around to buying these games, usually at lower prices, AMD has fixed any associated issues. Normal people don’t even notice that there was a problem, provided they regularly update drivers.

      The only guys who actually care about this stuff are the e-peen crowd who buy SLI and triple 120hz monitors. They’re the ones really affected, and they’d be better off with dual Titans if they’re that impatient and whiny with disposable cash.

      The rest of us look for value, and nvidia doesn’t offer it. The 670 should be priced at the 7950, and the 660ti should be priced at the 7870, but they’re not. My conscience has no problem buying the better card for the money here.

      SLI/CF is the only place this garbage really matters, but most of us aren’t considering such an outlandish setup either. Generally speaking, this “controversy” is all irrelevant to the average gamer.

      • Fighterpilot
      • 6 years ago

      You’re such a predictable troll Silus…why don’t you do us all a favor and STFU….

      • clone
      • 6 years ago

      the real question is why would anyone buy a slow GTX 680 that’ll never get better when AMD’s “dismally” supported HD 7970 is already faster out of the box & getting faster over time.

      only a fanboy would view the prospect of buying the fastest video card available that’s getting faster over time as a bad thing.

    • Bensam123
    • 6 years ago

    Any word on when the new memory manager is coming out that was talked about in January?

      • Firestarter
      • 6 years ago

      bump for visibility

        • Bensam123
        • 6 years ago

        Someone else brought this up a couple weeks ago and I believe it’s a very prudent question.

          • ermo
          • 6 years ago

          According to Scott, it’s already in the 13.3 beta. Apparently, AMD has found that the memory management driver changes didn’t make as much of a difference as they had hoped, so going forward, their focus is on driver tweaks on a game-by-game basis.

          [b]@Scott:[/b] Feel free to chime in -- I'm just rephrasing a recent e-mail reply of yours.

    • Phishy714
    • 6 years ago

    In four or five months’ time, I wonder what’s going to happen to these metering benches. I mean, once it’s no longer a big problem and it simply becomes integrated into AMD’s regular driver releases, will websites continue to monitor this phenomenon? Will it become part of the already somewhat bloated benchmarking suites many tech sites have, or will it become an ever-hanging cloud over AMD’s driver support?

    Or will this bench slowly fade away, only to be remembered as a golden time in which gamers themselves changed the way one of their beloved companies operates and prioritizes due to their resolve to “Never Settle”?

      • deruberhanyok
      • 6 years ago

      My guess is that frame times will continue to factor into reviews, in the same way that TR now uses it alongside FPS, and it will just become another metric used to gauge the capabilities of video cards. I don’t think we’ll see it dropped.

      • DPete27
      • 6 years ago

      [quote]...will websites continue to monitor this phenomenon....Or will this bench slowly fade away, only to be remembered as a golden time in which gamers themselves changed the way one of their beloved companies operates and prioritizes due to their resolve to "Never Settle"?[/quote]

      Both, I think. The last year or so has seen huge strides in improving gameplay fluidity. Technically speaking, continuing to use frame times to compare competing cards will/should become trivial. That said, if you stop monitoring such things, there is the potential for AMD/Nvidia to go back to their old ways. All this additional testing and all these driver tweaks do cost money....

    • Ryu Connor
    • 6 years ago

    NVIDIA’s solution for frame metering has been built into the logic of the GPU since at least the G80.

    Is a solution built into the AMD driver really going to be as robust as on die logic NVIDIA has been tweaking for five product revisions?

      • GeneS
      • 6 years ago

      It’ll be interesting to see if they adopt the same approach in their next-next-generation silicon.

      • Deanjo
      • 6 years ago

      That really depends on the quality of the code. Software solutions can actually be better than die logic if done right. It’s not easy to tweak die logic once it is in the customers’ hands.

        • Ryu Connor
        • 6 years ago

        I feel this is the answer people want, but isn’t necessarily the complete picture.

        Why dedicate die space and transistor budget to something that software could do better or even equally well? Neither endeavor is free, but as you point out, the hardware choice has some nasty pitfalls due to logic errata. Implementation, quality assurance, and regression testing are all more expensive in hardware than in software. Every piece of additional logic added in hardware represents one more piece of complexity piling onto complexity, and how it all interacts is expensive to simulate. Missing your product-launch window due to an unforeseen bug created by newly added, unnecessary logic is going to be incredibly expensive and will result in Jen-Hsun Huang asking if you’re stupid (literally, if rumors are true).

        These pitfalls would have been more easily corrected and more cheaply avoided had NVIDIA just left it a software construct, which implies to me that software isn’t as good: a risk analysis was made, and the reward for a hardware implementation must have had an upside great enough to make it worth said risk.

          • Deanjo
          • 6 years ago

          [quote]Why dedicate die space and transistor budget to something that software could do better or even equally well?[/quote]

          Because they can. A good example of this is in GPUs that are out now. They all have hardware decoding engines, but being fixed-function logic, those are set in stone as to their capabilities, WYSIWYG. A software decoder (or even a shader-based or GPGPU-based one) can however be ultimately more flexible. The same goes for their hardware encoders: while fast, they make great sacrifices in quality and have zero capability of updating for new support. A software solution, again, is flexible enough that it can be improved and offer new support easily.

          When it comes to something like frame metering, a hardware solution may be more efficient, but a software solution may suffice just as well if it has little detriment to overall performance. In this case, all the software solution has to do is produce results that are "good enough" to get the job done.

          One real pitfall to doing this via software, however, is that it would be very Windows-specific unless the developers see the issue as big enough to warrant porting over to other platforms. With hardware, it is simply there and OS-indifferent. We see this often already in items like soundcards that do much of their DSP in software and then lose all of that in other OSes, which have just base hardware support.

            • auxy
            • 6 years ago

            It would not surprise me at all if this was one of the little bits of technology that Nvidia moved to the driver for Kepler series, to save die space.

            • Deanjo
            • 6 years ago

            Maybe. If it didn’t have any detrimental overall effect, then why not put that function in the drivers, where it can easily be altered if need be?

            • Ryu Connor
            • 6 years ago

            Given where it sits in the rendering pipeline and what it does, it’s not a piece of logic one would casually remove at this point. It would involve scrapping the QA and regression testing that presently exists to validate it, creating new QA and regression tests to make sure removing it didn’t break something, and then, given AMD’s timeline, months’ worth of work to implement the feature into drivers. Let’s not forget that the drivers need to undergo QA and regression testing as well. Then there’s the many more months of fine-tuning to adjust for corner cases (something its longer hardware life should have already benefited from).

            There still isn’t a conclusive answer as to where this logic is best placed. So, as noted, moving this logic from hardware to software might have detrimental performance effects overall.

            • Deanjo
            • 6 years ago

            It might have detrimental performance effects, and it might not, like so many other areas of PC hardware where die logic has been replaced by a more flexible software solution.

      • tipoo
      • 6 years ago

      Did Nvidia ever say something like that? I don’t recall.

        • Ryu Connor
        • 6 years ago

        [url]https://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/11[/url]

        [quote="Scott Wasson"]In fact, in a bit of a shocking revelation, Petersen told us Nvidia has "lots of hardware" in its GPUs aimed at trying to fix multi-GPU stuttering. The basic technology, known as frame metering, dynamically tracks the average interval between frames. Those frames that show up "early" are delayed slightly—in other words, the GPU doesn't flip to a new buffer immediately—in order to ensure a more even pace of frames presented for display. The lengths of those delays are adapted depending on the frame rate at any particular time. Petersen told us this frame-metering capability has been present in Nvidia's GPUs since at least the G80 generation, if not earlier. (He offered to find out exactly when it was added, but we haven't heard back yet.) Poof. Mind blown.[/quote]

          • tipoo
          • 6 years ago

          ty

        • Deanjo
        • 6 years ago

        Yes,

        [quote]Petersen told us this frame-metering capability has been present in Nvidia's GPUs since at least the G80 generation, if not earlier. (He offered to find out exactly when it was added, but we haven't heard back yet.)[/quote]

        [url]https://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/11[/url]

      • Bensam123
      • 6 years ago

      Good question. I still think this is why the 8xxx series is delayed: they’re actually trying to fix their newest graphics cards before they’re released. We still haven’t been given a reasonable explanation for the delay. They don’t just decide right before a release that they need another six months to fix their hardware unless it’s something really important that can’t wait. They may be trying to scrub the rest of the latency issues out of the hardware before release, at least as much as they can without a complete rebuild, which we won’t see for like 4-5 years.

    • dpaus
    • 6 years ago

    Well, it’s a good first step for them. As long as they keep up the effort, good for them.

      • OU812
      • 6 years ago

      Too bad others had to point out the problem to AMD in the first place. This looks like something AMD should have caught and fixed a long time ago.

        • dpaus
        • 6 years ago

        Maybe, but it took Scott’s ‘inside the second’ work (and kudos to him for it) to quantify the problem, and it took a hardcore techno-geek-gamer like Scott (and I say that with admiration, Scott) to recognize the problem in the first place. Should AMD have hired a small squad of hardcore gamers to play games all day just in case they could find a problem? I’d love to see what the shareholders of a company with limited financial resources would have to say about that.

        Contrast this situation to the drivers for their FirePro cards (or Nvidia’s Quadro cards); those drivers get extensive attention to performance and stability, [i]because the customers buying them are paying for it[/i] - and it really shows.

          • jessterman21
          • 6 years ago

          [quote]it took a hardcore techno-geek-gamer[/quote] with robo-balls.

            • dpaus
            • 6 years ago

            [quote]it took a hardcore techno-geek-gamer with robo-balls[/quote] ...and [url=http://www.visitdunkeld.com/Birnam%20Games%20Photos/images/Birnam%20Highland%20Games%20Tossing%20the%20Caber%203_jpg.jpg]this kind of e-peen[/url]

            • RDFSteve
            • 6 years ago

            Biggest wood I’ve ever seen…. (possibly NSFW – unless you’re Scottish)

            • superjawes
            • 6 years ago

            I’ve seen bigger… (contextiseverything)

          • HisDivineOrder
          • 6 years ago

          HardOCP (and many others including myself) have been declaring Crossfire unfit to use due to stuttering for years. Perhaps AMD ought to have looked into it rather than dismiss it. Clearly, nVidia managed to.
