Multi-GPU micro-stuttering captured on video

When we pinpointed multi-GPU micro-stuttering in our recent article, we mentioned a couple of things that would require further investigation, including perhaps the use of high-speed cameras to capture the effects of micro-stuttering (and, hopefully, of Nvidia’s frame metering technology designed to combat those effects).

The response to the article has been overwhelming, and I want to thank everybody who took the time to send a note or post a comment. As I’ve been sifting through the feedback, I’ve already learned some things. One of the best bits of info on the subject comes courtesy of Carsten Spille of GPU-tech.org, who had already managed to capture the effects of micro-stuttering quite nicely on video. Have a look:

Very educational. I take several lessons from this video. One, it validates my argument that the high-latency frames in a jitter pattern can be the gating factor for the perceived illusion of motion. Although the multi-GPU setup produces a Fraps readout of 60 FPS, it looks no smoother than the single-GPU setup at 30 FPS, and multi-GPU at 30 FPS doesn’t look as smooth as the single GPU at the same rate. This case may be an extreme one, but the effect of those high-latency frames is clear.
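
To put some numbers on that idea, here's a quick back-of-the-envelope sketch in Python (a toy calculation with made-up frame times, not data from any of our test rigs). A sequence that alternates a long frame with a quick "runt" frame averages out to a 60 FPS readout, yet its longest frames arrive no faster than a steady ~32 FPS stream would:

```python
# Toy comparison: same average FPS, very different worst-case frame times.
import statistics

def summarize(frame_times_ms, label):
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    worst = sorted(frame_times_ms)[int(0.99 * len(frame_times_ms)) - 1]  # ~99th percentile
    print(f"{label}: avg {avg_fps:.0f} FPS, 99th-percentile frame time {worst:.1f} ms "
          f"(~{1000.0 / worst:.0f} FPS worth of fluidity)")

frames = 600  # ten seconds at a nominal 60 FPS

single_gpu = [16.7] * frames                 # even delivery, ~16.7 ms apart
multi_gpu  = [31.3, 2.0] * (frames // 2)     # long frame, then a ~2 ms "runt"

summarize(single_gpu, "single GPU")
summarize(multi_gpu,  "multi-GPU ")
```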

I’ll pause here to reiterate an important point. I said in the article that I’d never been able to detect multi-GPU micro-stuttering myself, and some folks seem to have taken that statement to mean that micro-stuttering doesn’t have any real impact on the user experience. But I was talking about a specific aspect of the problem: the visual disruption caused by uneven frame delivery. Even if you can’t “see” jitter—that is, if you can’t easily perceive the long-short-long-short pattern of uneven frame delivery with the naked eye—pretty much anyone should be able to notice the reduced fluidity caused by the longer latencies in every other frame (once frame times grow high enough). Nvidia’s frame metering has the potential to reduce or largely eliminate the perception of long frame times, at least with certain game engines. Without metering, though, their impact should be easily perceptible, just as it is in the video.

With that said, the high-speed capture does seem to show us a second, “runt” frame that comes just after a major screen update. So even if you can’t see the visual disturbance with the naked eye, the effect is there on video, slowed down for all to see.

Also useful are the in-game setting for this video and the content of the frames, which seem well suited to showing the effects of micro-stuttering. The side-to-side motion appears to help tease them out, too. I’m off to IDF this week, so I can’t do any further experimenting for a few days, but I’ll try a little strafing in high-contrast areas when I return. If you’re looking for micro-stuttering in your own setup, you might want to try the same.

Comments closed
    • d0g_p00p
    • 8 years ago

    After reading Scott’s article I busted out my old-school gaming rig that has 2 Voodoo2s in SLI. Playing a number of older games, I have not noticed any microstuttering. Granted, things are way different now, but how was 3Dfx able to pull off SLI perfectly vs. current-gen cards and the knowledge we have today?

      • lilbuddhaman
      • 8 years ago

      Fraps or it didn’t happen. Let’s see some proof.

      • UltimateImperative
      • 8 years ago

      SLI originally meant Scan Line Interleave; basically, card A rendered the Even lines while card B rendered the Odd lines, so they each contributed to each frame. With all the shaders and geometry stuff going on in modern cards, I don’t think this approach would work very well, so now SLI isn’t really an acronym.
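
      As a toy illustration of that difference (my own sketch, not anything from 3dfx or Nvidia): scan-line interleave had both cards contributing lines to every frame, whereas alternate frame rendering hands whole frames to alternating cards, which is where the pacing question comes in:

      ```python
      # Toy sketch: how work is divided under scan-line interleave vs. AFR.
      HEIGHT, FRAMES = 8, 4

      # 3dfx-style scan-line interleave: both cards contribute to every frame.
      sli_1998 = {f"line {y}": ("card A" if y % 2 == 0 else "card B") for y in range(HEIGHT)}

      # Modern alternate-frame rendering: each card owns whole frames.
      afr = {f"frame {f}": ("card A" if f % 2 == 0 else "card B") for f in range(FRAMES)}

      print("scan-line interleave:", sli_1998)
      print("alternate frames:    ", afr)
      ```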

        • lilbuddhaman
        • 8 years ago

        I’d be interested to see supertile or split-screen tasking algorithms updated on the cards… You can force them with RadeonPro on ATI cards, but it usually does not work.

        • swaaye
        • 8 years ago

        It’s “scalable link interface” now.

      • l33t-g4m3r
      • 8 years ago

      I agree. Never noticed microstutter with 3dfx, although tearing occurred, which was fixable through tweaking settings. I think one of the major reasons Voodoos didn’t stutter was that the entire architecture was designed for SLI. Even a single Voodoo2 was composed of 3 separate processors. The Voodoo2 and Voodoo5 architectures were both capable of extreme scaling. That’s why Quantum3D could create cards like the Obsidian Mercury brick. [url<]http://www.thedodgegarage.com/3dfx/q3d_mercury_brick.htm[/url<] And the Voodoo5 6000. You can't do that with nvidia or ati and get perfect scaling, let alone without microstutter, but 3dfx did it. 3dfx was way ahead of its time. Hell, the Voodoo2 could do bumpmapping.

        • swaaye
        • 8 years ago

        To be fair, games were very (relatively) primitive back then and scanline interleave was viable. Not so much anymore. There are many ways a game can ruin multi-GPU efficiency now. It’s why I avoid multi-GPU setups – they are inherently trouble.

    • kamikaziechameleon
    • 8 years ago

    Sad to see this has persisted for so long and that there hasn’t been a resolution found.

      • GrimDanfango
      • 8 years ago

      Alas, as far as I’m aware, there’s never even been a hint of an official acknowledgement that the problem exists, from nVidia or AMD. That’s why a video demonstrating it as obviously as possible is a great step forward. It’s harder to ignore now.

      • GTVic
      • 8 years ago

      Part of graphic acceleration is/was the utilization of info from previous frames and now you have two processors working on successive frames simultaneously so the problem is hardwired.

      How many people (% wise) spend money on multi-GPU setups? These products are aimed at people who care about FPS beyond $, and IMO, reason.

    • RobbyBob
    • 8 years ago

    Is it just me, or has everyone been completely ignoring Bensam123 with respect to the variance/micro-stuttering discussion?

      • GrimDanfango
      • 8 years ago

      Well, as far as I can tell, it relates to the measuring of microstutter, and what metric to use. But the bulk of discussion is more about whether it exists, how and why it manifests, and what could be done to solve it. If the GPU makers worked out a solution to make the problem go away, there wouldn’t be any need to define the best way to quantify it.

    • moriz
    • 8 years ago

    i wonder if the problem persists if i force scissors (split frame) rendering. i know scissors mode doesn’t scale nearly as well as AFR, but since it doesn’t need to wait for every second frame, maybe the issue gets mitigated.

    my testing with my 5870+5850 crossfire setup suggests that vsync makes the problem worse. in every game, enabling vsync made my games look like stuttering messes, which immediately disappears if i turn off vsync. it’s also not tied to FPS in any way, since if i limit fps to 60 without vsync, there’s also no stuttering. odd.

    • luisnhamue
    • 8 years ago

    If reviewers start using all these benchmarks, I’m afraid we might well see a GPU being released and have to wait a week to know how it performs.

    • hansmuff
    • 8 years ago

    I’ve always avoided multi-GPU setups because of cost and power usage. I’ve read about microstutter here and there but nobody has yet made a video showing it, at least not that I’ve seen.

    Thank you, Scott, good job and well done.

      • luisnhamue
      • 8 years ago

      I always avoided multi-GPU configs because of the power and heat constraints they impose. I’d rather go with a single-GPU config because they’re simpler to upgrade, and since I never use a graphics card for more than 1.5 years, it would be a waste of money anyway.

      • bcronce
      • 8 years ago

      I avoided multi-gpu setups because instead of paying $300 a piece for two cards, I could buy one $350 card now and buy a faster card a year later for $200. All the while, I don’t have all the driver issues and bugs that SLI/xFire brings.

    • GrimDanfango
    • 8 years ago

    I think this has always been a touchy subject (witness the folks still trying to weave some nonsense nVidia/AMD bias conspiracies out of this article), and I reckon it’s got a lot to do with the cost of entry and denial.

    I get the distinct feeling that the reason this issue has remained hidden and silent for so long is down to the fact that anyone who has invested the kind of money required for a dual-card setup has to convince themselves it was a worthwhile investment, and that seems to inevitably involve religious devotion to the cause, and flat-out denial of anything that threatens it.

    I’ve dabbled, I’ve tried both SLI and Xfire setups in the past (when I’ve been on some of my sillier buying sprees :-P), and in both cases, I spotted microstuttering almost instantly, proceeded to run a week’s worth of benchmarks and tests to try to convince myself it wasn’t happening, and in the end sold the second card in disgust.

    The biggest problem I found with this issue was that it’s imperceptible at high frame rates – ie, exactly when you *don’t* require the extra power, and progressively more intrusive the lower the framerate gets – exactly when you *do* need the extra power. This leads a lot of people to blame shoddy game programming, and every other excuse under the sun, but I’m afraid there’s one very glaring problem here: it affects both nVidia and AMD, and the video in this article demonstrates it beyond a shadow of doubt.

      • willmore
      • 8 years ago

      Begin huge speculation:

      Given that it doesn’t show up in every game (is this correct?), it may still be a game engine issue. Now, I’m not saying that it’s shoddy programming because it may be an inherent property of how rendering needs to occur.

      For example, if you need to make a query about the scene from the driver, it may need to sync the card to the state the game thinks it is at so that the query results make sense. I’m thinking of things like the occlusion test in HL2 (HDR). The game asks the driver “how much of this light source is visible to the viewpoint?” It uses the results of that to determine the brightness of the scene. There’s a nice youtube video showing it.

      The point of that is, the card must have done a lot of the rendering process before it can answer that query–it must have done all of the geometry processing and the rasterization. Texturing and post aren’t necessary, but I’m not sure if there’s a way to tell the driver that–IANA3DP.

      But you say, “that would happen on single GPU cards, too, right?” Yes, to an extent, it would. And I bet, with triple buffering and vsync, the game has done things to work around it in some manner. Now, how does that workaround interact with AFR? My money is it will look like what we’re seeing here.
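
      For what it's worth, here's a tiny runnable toy of that kind of sync point (the stage costs and the simplistic two-timeline model are invented for illustration, not a real graphics API): the CPU normally just queues work and runs ahead, but reading back a query result mid-frame forces it to wait for everything queued so far:

      ```python
      # Toy model of a mid-frame readback stalling the CPU (all numbers invented).
      def frame(readback_mid_frame):
          cpu, gpu = 0.0, 0.0                  # elapsed ms on each timeline
          for stage, cpu_cost, gpu_cost in [("geometry", 1.0, 6.0),
                                            ("raster",   1.0, 8.0),
                                            ("shading",  1.0, 12.0)]:
              cpu += cpu_cost                  # the CPU just queues the work...
              gpu = max(gpu, cpu) + gpu_cost   # ...the GPU grinds through it later
              if stage == "raster" and readback_mid_frame:
                  cpu = max(cpu, gpu)          # occlusion-query readback: CPU waits
          return cpu, gpu

      for mode in (False, True):
          cpu, gpu = frame(mode)
          print(f"readback={mode}: CPU free to start the next frame at {cpu:.0f} ms, "
                f"GPU finishes this one at {gpu:.0f} ms")
      ```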

        • GrimDanfango
        • 8 years ago

        I’m sure there are far deeper complexities to the problem than any of us are giving credit. As you say, a lot of modern graphics processes aren’t just fire-and-forget, and do a lot of processing to/based-on the rendered image, etc.

        That said, as I understand it, the basic microstutter issue is always potentially present. In the cases where a game seems unaffected, I think it’s just a case of fast frame rates masking the effect, and frame latencies coincidentally falling into sync. It could be due to a game being CPU bound too – if the draw calls are coming up slower than the cards can render, they’ll stay more consistent.

        The problem seems to stem from there being no way for AFR to coordinate. When a new frame draw call is made, from what I’ve heard there’s no hardware facility to report how long it’s been since the last, so if the game is heavily GPU-limited, and the draw calls can come almost instantly, you end up out of sync, as two concurrent frames can be requested with barely any time between them.

        So yeah, I imagine there’s a whole lot of extra problems on top of basic microstutter, but I don’t think they’re the cause… it seems to be a fairly universal issue, and whether or not it blights a particular game doesn’t seem to be down to the quality of coding so much as just where the bottlenecks happen to lie.
        And like I said, it seems as a result, the problem only gets worse the more graphically complex a game you try to run, which is rather unfortunate seeing as that’s the very situation you’d hope it performed best in.
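
        To make that concrete, here's a toy model of the scheduling described above (the frame times, the 2 ms CPU turnaround, and the metering rule are my own assumptions, not either vendor's actual logic). When the game is GPU-bound, the second GPU accepts its draw call almost immediately after the first, so completed frames come out in a short-long pattern even though the average still reads 60 FPS; delaying the flips ("metering") evens the gaps back out:

        ```python
        # Toy AFR pacing model: two GPUs, each frame takes render_ms to draw.
        def simulate(render_ms=33.0, cpu_ms=2.0, frames=8, meter=False):
            gpu_free = [0.0, 0.0]          # when each GPU can accept the next frame
            issue = 0.0                    # when the CPU issues the next draw call
            flips, last_flip = [], 0.0
            for i in range(frames):
                gpu = i % 2                # alternate frame rendering
                start = max(issue, gpu_free[gpu])
                done = start + render_ms
                gpu_free[gpu] = done
                issue = start + cpu_ms     # GPU-bound: the CPU is ready again right away
                # "Metering": delay the flip so frames reach the display evenly spaced.
                flip = max(done, last_flip + render_ms / 2) if meter else done
                flips.append(flip)
                last_flip = flip
            return [round(b - a, 1) for a, b in zip(flips, flips[1:])]

        print("unmetered gaps (ms):", simulate(meter=False))
        print("metered gaps (ms):  ", simulate(meter=True))
        ```

        In this toy run the unmetered gaps alternate roughly 2 ms / 31 ms while the metered ones settle at ~16.5 ms, which is the trade-off frame metering implies: smoother delivery in exchange for holding some frames back slightly.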

          • willmore
          • 8 years ago

          Generally agreed here.

          The perception issue is that visual data is presented at a time other than (or a consistent delay after) it is expected (based on the time of the game simulation). The video guys have been aware of this and have always known that it is better to drop a frame rather than display it too late.

          The game engine/driver/GPU side of it isn’t likely to have a simple solution, I imagine.

      • End User
      • 8 years ago

      As someone who has a dual GTX 570 SLI setup, this issue has my attention. This will factor into my next GPU upgrade. I don’t have any problem going back to a single GPU setup if need be and I’m more than happy to switch back to AMD if they offer a better solution.

      That being said I don’t notice this on my current setup in my primary game (TF2). I built my gaming rig (i7-920@4.2) to avoid low FPS situations (detailed maps on full servers). I rarely go below 150 FPS in TF2 (1920×1200 with everything maxed). It seems brute power and an older engine mask the problem. I’ll need to test this with a more demanding engine. I’m going to fire up the Heaven benchmarking tool and run passes in single and SLI mode.

        • GrimDanfango
        • 8 years ago

        The thing is, the length of time a frame takes to render on an SLI/Xfire rig is exactly the same as on a single-card setup (in fact probably longer, due to overheads).

        The difference is, you’re rendering two frames in parallel instead of one. The overall input-to-output latency will never be improved by the multi-card setup… the only benefit is a smoother overall output, and of course this microstutter issue messes up that only benefit.

        You won’t notice anything at 150fps regardless, as you’re bound by your monitor refresh rate… even if the game renders 1000fps, you’re still only going to see 60 of them per second, and the rest will be thrown away (unless you kept hold of an old 85hz+ CRT, or have one of the rare genuine 120hz-input modern flatscreens).

        This is what I meant – games that run fast anyway don’t exhibit microstutter issues, but they don’t benefit from multi-card anyway. Try it on a game that only manages to squeeze 30fps from your multi-card setup… it’s quite likely that playing it will feel like a 15fps game… including the feeling of input lag.

          • willmore
          • 8 years ago

          Yep, two issues:

          Total input to display latency–mouse twitch to photons hitting the eye.

          And the variance with time of that value.

          • End User
          • 8 years ago

            [quote<]Try it on a game that only manages to squeeze 30fps from your multi-card setup... it's quite likely that playing it will feel like a 15fps game... including the feeling of input lag[/quote<]
            I've not seen that. I assume you have. What was the game/CPU/GPU?

            • swaaye
            • 8 years ago

            That’s what the video here showed essentially.

            • End User
            • 8 years ago

            I’d like to know which GPUs Grim has used. I’d also like to know which GPUs were used in that video. Heck, I’d like to know more about the CPU/memory used as well.

            The issue is not the same across all GPUs. From the original TR article:

            “the mid-range Radeon HD 6870 CrossFireX config generally showed more frame-to-frame variance than the higher-end Radeon HD 6970 CrossFireX setup. The same is true of the GeForce GTX 560 Ti SLI setup versus dual GTX 580s. If this observation amounts to a trait of multi-GPU systems, it’s a negative trait. Multi-GPU rigs would have the most jitter just when low frame times are most threatened. Third, in our test data, multi-GPU configs based on Radeons appear to exhibit somewhat more jitter than those based on GeForces. We can’t yet say definitively that those observations will consistently hold true across different workloads, but that’s where our data so far point.”

            What about the underlying system? Does that have an impact?

            • GrimDanfango
            • 8 years ago

            I haven’t tried in a while… I think my last venture was with 2x Radeon 4890 cards, and I ran the original Crysis on them. I think 25-30fps is pretty much the minimum for fluid motion perception… and while FRAPS told me I was running a buttery-smooth 35-40fps, the reality was near unplayable, definitely a sub-25fps experience. I took out one of the cards, FRAPS reported no more than a lowly 25fps, and it felt noticeably smoother and more playable.

            As far as I was aware, the problem had never been addressed since, so I’ve avoided another attempt. I was just pointed to the bit of the article about nVidia having attempted to address the issue though… so maybe things are finally looking up. For the moment though, I’m sticking with single card setups.

            Anyway, as I mentioned elsewhere, the effect seemed to be reduced/eliminated in the cases where a game was essentially CPU-bound rather than GPU-bound, as presumably the draw calls to the SLI/Xfire array would be arriving fairly evenly. So the effect seems most pronounced when you have a combination of a GPU-bound, GPU-stressing game… which is unfortunately exactly the usage scenario people get multiple GPUs to cope with.

    • davidedney123
    • 8 years ago

    I briefly dabbled with SLI with a pair of 8800 GTXs and while I was impressed with the on-paper performance, I never found it that satisfying to game on – nice to get a real explanation for that. Yet another reason why a single powerful GPU is a better choice than two slower GPUs.

    I can’t help thinking this could be more of a problem for ATi, with their strategic decision to use multi GPU cards to service the high end of the market.

      • travbrad
      • 8 years ago

      Nvidia’s strategy seems to be almost identical to AMD’s regarding product segmentation/core counts, so I don’t know why it would hurt AMD more. AMD’s fastest single-GPU card (6970) is only slightly slower than Nvidia’s fastest (580), and even beats it in certain games.

      IMO the single-GPU cards ARE the high-end cards, and the dual-GPU are the sort of “ultra high-end”.

      P.S. I just looked up prices on the 580 and 6970, and wow…I can’t believe Nvidia is selling any 580s. The 580 is $100 more right now, and barely outperforms it.

    • Drewstre
    • 8 years ago

    (A) Please please tell me you just coined the term “runt frame”.
    (B) Serious proof always involves crowbars.
    (C) I am feeling pretty good about my multi-GPU skepticism.

    • can-a-tuna
    • 8 years ago

    Scott “Radeon hater” Wasson is drumming hard to get these “revolutionary” results into public minds. I have always had single GPU setups from both nvidia and Ati, and the only thing I can tell you: I NEVER had a completely smooth scrolling experience. There will always be jitter, stutter, call it whatever you want, because of loading textures and data into memory and displaying it in real time. I remember once playing some need for speed variation with a geforce FX based single GPU setup. Framerate was over 60 but still there was “microstuttering” that rendered the gameplay almost unplayable. Stutter is really not about whether you have a single, dual or quad GPU setup. It’s always there. If the dual+ GPU setup were visually much inferior to a single GPU setup, this “revolutionary” result would have been taken more seriously a long long time ago. But it’s not, so this study is crap and of course just made to show how nvidia handles things (again) “better”. I put Scott in the same category as Fuad “Faud” Abazovic.

    Edit: I guess this article popped up because AMD currently has better crossfire/dual GPU scaling than nvidia, which previously always got more praise for superior scaling. That’s now unacceptable and dual GPU setups are evil!

      • Palek
      • 8 years ago

      You must be spectacularly disconnected from reality if you somehow manage to find bias in Scott’s writing.

      Somebody go pinch this guy.

        • sweatshopking
        • 8 years ago

        somebody should pinch his bum!!

      • GrimDanfango
      • 8 years ago

      You have no idea what microstutter is, and you’ve entirely ignored the content of this article and video.

      As I see it, the point of all this is that neither nVidia nor AMD has ever solved, addressed, or even acknowledged this issue, and that until one or the other builds some dedicated hardware for the task, multi-GPU is really just a gimmick and best avoided, whichever card you choose.

        • ermo
        • 8 years ago

        [quote<]Naturally, we contacted the major graphics chip vendors to see what they had to say about the issue. Somewhat to our surprise, representatives from both AMD and Nvidia quickly and forthrightly acknowledged that multi-GPU micro-stuttering is a real problem, is what we measured in our frame-time analysis, and is difficult to address. Both companies said they've been studying this problem for some time, too.[/quote<]
        ^ From the original article, page 11. So the way I see it, you can't have read all of the original article 😉

          • GrimDanfango
          • 8 years ago

          Haha, okay, ya got me. Missed that part.
          Thanks for pointing it out too… first time I’ve heard of any acknowledgement of the issue.

          The best solution I’ve seen has been the Lucid Hydra chip, where each frame is drawn as a composite of polygons sent from each discrete card. I rather hoped that nVidia or AMD would buy up or license the Hydra tech to incorporate directly into their cards, so as to alleviate the issue of API compatibility that Hydra suffers.

          Alas, I suspect the whole issue of microstutter could be solved quite effectively with a solution like that, but they’ll probably never go down that route as it would add cost/complexity to the cards for the extra silicon.

        • End User
        • 8 years ago

        [quote<]As I see it, the point of all this is that neither nVidia nor AMD has ever solved, addressed[/quote<]
        From Scott's article: "Nvidia has "lots of hardware" in its GPUs aimed at trying to fix multi-GPU stuttering."

          • kamikaziechameleon
          • 8 years ago

          Neither vendor really resolves the issue, but Nvidia appears to be moving in that direction…

    • south side sammy
    • 8 years ago

    it almost seems as if the cards are rendering the same frame at the same time instead of alternating frames between the cards.

    • Prion
    • 8 years ago

    There was a line of Samsung CRT HDTVs that would require either 1 or 2 frames to process and upscale an image (everything⇒1080i iirc) depending on god knows what factors. Of course, the effect was unnoticeable for television and movies. However, constantly varying between 1 and 2 frames of output lag made those particular models completely unusable for games with frame-sensitive inputs like fighting games, rhythm games, and shoot’em ups. Even though under normal circumstances 1 frame or 2 frames of latency is perfectly acceptable and easily adjusted to, the variance just destroyed any usefulness for gaming.

    That’s what this reminded me of.

      • swaaye
      • 8 years ago

      You can see the same thing happen on LCDs that support say 70-75 Hz but really only output at 60 Hz and throw away the extra frames.

    • jensend
    • 8 years ago

    I already said this in the original thread but it got buried among the >130 replies, and I think the point is really important:

    The key thing about Damage’s article is the realization that what we should be looking at is the distribution of frame times instead of average FPS. FPS is the wrong measure, and the mean is not enough information about the distribution to tell us what we need to know.

    Other people have realized that average FPS is not a good enough metric and have looked at charts of FPS over time. Their charts didn’t have enough granularity to capture the microstuttering effect, but even if they had, that’s the wrong metric. Since what we really care about for game performance is whether frames are rendered quickly enough to give satisfactory reaction times etc, using frames per second is misleading. We need the inverse measure.

    Another example where the same “inverted measure” thing happens is fuel consumption: we keep talking about miles per gallon, but what we primarily care about is the fuel consumed in our driving, not the driving we can do on a given amount of fuel, so this is misleading. To use wikipedia’s example, people would be surprised to realize that the move from 15mpg to 19mpg (saving 1.4 gallons per 100 miles) has a much bigger environmental and economic impact than the move from 34mpg to 44mpg (saving 2/3 of a gallon per 100 miles).

    Similarly, moving from 24 fps to 32 fps has a bigger impact on the illusion of motion, fluidity, and response times than moving from 40 fps to 60 fps (10.4 ms difference vs 8.3 ms difference in time between frames).

    It may take some time to figure out what ways of displaying information about the distribution of frame times are most informative and most necessary: histograms, order statistics like the 99th percentile, etc. But I hope that reviewers everywhere pick up on the key advance and start looking at the ms per frame distribution instead of the average frames per second.
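
    (For anyone who wants to check the arithmetic behind those two comparisons, here it is spelled out; the figures are just the ones quoted above.)

    ```python
    # The "inverted measure" point: gallons per 100 miles and ms per frame
    # are the quantities we actually feel.
    def gal_per_100mi(mpg):
        return 100.0 / mpg

    def ms_per_frame(fps):
        return 1000.0 / fps

    print(f"15 -> 19 mpg saves {gal_per_100mi(15) - gal_per_100mi(19):.2f} gal per 100 miles")
    print(f"34 -> 44 mpg saves {gal_per_100mi(34) - gal_per_100mi(44):.2f} gal per 100 miles")
    print(f"24 -> 32 fps shaves {ms_per_frame(24) - ms_per_frame(32):.1f} ms off each frame")
    print(f"40 -> 60 fps shaves {ms_per_frame(40) - ms_per_frame(60):.1f} ms off each frame")
    ```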

      • Voldenuit
      • 8 years ago

      Good points. I’d just like to remind everyone that the metric world has been reporting “mileage” in litres per 100km for years now.

      And now back to your regularly scheduled programming…

      • Bensam123
      • 8 years ago

      Variance… it’s called variance and it’s been used in statistics forever… There is more than mean, median, and mode.

        • jensend
        • 8 years ago

        Look, I’m no idiot; of course I know about variance, and it’d be a useful statistic to report. But it’s not self-sufficient; we still need to know other things about the distribution. It would be trivial to exhibit four different frame time distributions with the same mean *and variance* which would have very different perceptual properties by messing with their skew and kurtosis:

        1. a highly negative-skewed distribution i.e. most frame times are clustered around one value but a lot of variance comes from having some frame times which are significantly lower than the mean
        2. a highly positive-skewed distribution i.e. most of the variance comes from having some frame times which are significantly higher than the mean
        3. a platykurtotic distribution i.e. the peak around the mean isn’t very high and most of the variance comes from frequent modest-sized differences from the mean; there are very few extreme values in either direction
        4. a leptokurtotic distribution i.e. “fat-tailed”- the peak is very high and so medium-sized deviations are more rare, but extreme deviations are more common than in the platykurtotic distribution and account for most of the variance

        2 and 4 will be perceptually much worse than 1 and 3 despite having the same mean and variance.
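
        A quick way to see this with made-up numbers (a toy construction, not TR's data): mirror a two-valued frame-time distribution about its mean and you keep the mean and variance but flip the skew, and only one of the pair has the occasional long frames that actually hurt:

        ```python
        # Two frame-time distributions with identical mean and variance but opposite skew.
        import statistics

        # Negative skew: mostly 21 ms frames, with a few unusually *short* 11 ms ones.
        neg_skew = [21.0] * 90 + [11.0] * 10
        # Positive skew (the mirror image): mostly 19 ms, with a few *long* 29 ms frames.
        pos_skew = [19.0] * 90 + [29.0] * 10

        for label, d in (("negative skew", neg_skew), ("positive skew", pos_skew)):
            worst = sorted(d)[98]  # ~99th percentile of 100 samples
            print(f"{label}: mean {statistics.mean(d):.1f} ms, "
                  f"variance {statistics.pvariance(d):.1f}, 99th pct {worst:.1f} ms")
        ```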

          • Bensam123
          • 8 years ago

          For them to be the same, that would depend heavily on the distribution; throwing obscure examples into the mix won’t result in a highly skewed result.

          Putting them into perspective, the chances that you would end up with such niche examples would be extraordinarily rare, and there would be something extremely wrong with the machine for them to even take place. Average FPS as it’s used right now could easily show that. For the sake of argument, even if they do happen, lower variance will provide a smoother experience than results with high variance.

          You also have to keep in mind that most readers here, as well as the people writing the articles, won’t understand things if you make them extremely complicated. That’s also why I suggested taking average FPS / variance, which would give one overall number that takes a majority of this into account.

          When you analyze something enough you can get anything you want out of it, that doesn’t mean it’s applicable to the real world.

            • jensend
            • 8 years ago

            There’s nothing “obscure” or “niche” or “extremely wrong” about skewed distributions; the distributions of frame times Damage reported all have fairly strong skew. These aren’t just academic curiosities. And they totally invalidate your silly claim that lower variance will always provide a smoother experience.

            And while averaging FPS and variance does give you a single number instead of two, it’s a totally meaningless garbage number. You’re averaging totally incommensurate quantities to arrive at something which has no units. It would be ridiculous to try to compare cards based on that. You act as though people will run screaming if we show them two different numbers so we have to dumb it down to one meaningless scale. People aren’t that stupid.

            As to your “most readers… won’t understand things,” the only one here who seems to be totally confused is you.

            • Bensam123
            • 8 years ago

            It’s obscure because, in order for those circumstances to come to bear, there would have to be something extremely wrong with the computer. The average FPS would be extremely varied in some spots. Basically, what throws means off also throws variance off, although variance is quite a bit less susceptible.

            I never said ALWAYS; generally speaking (quote that before you use absolutes). I even acknowledged that such examples exist, they’d just be really rare. Please, go digging and find a benchmarking result that shows any of the issues you described that would invalidate variance as a means of detecting micro-stuttering.

            Dividing average FPS by variance gives you a practical number that includes variance. Just because there isn’t a formal label for it does not make it meaningless. That’s like saying $/performance is wrong merely because they’re two very different things. Any weighted scale starts somewhere. What this allows is for more than one graphics card to be graphed and compared to other graphics cards based on these two different descriptors.

            You could most definitely use average FPS as well as variance too, I never said you couldn’t. You can’t put both of them on the same scatterplot though.

            • jensend
            • 8 years ago

            OK, when you said “average FPS / variance” I mistook you for someone from the original thread who advocated averaging FPS and some frame time statistic; thus I read what you were saying as “average of FPS and variance” i.e. just naively take the sum of two numbers with different units and divide by two. My mistake; what you were saying is obvious in retrospect.

            Dividing FPS by variance does of course give you a quantity with meaningful units. I don’t see why anyone would possibly claim it’s a helpful unit to look at. Price/performance is a helpful metric since there’s a natural tradeoff between the two; no such relation holds here. (You can of course put both FPS and variance on the same scatterplot; I guess you must have meant you can’t put them both on the same bar chart.)

            I can’t help suspecting that you’re being either deliberately thickheaded or disingenuous about the skew issue. I’ve already explained that all the cards’ frame time distributions have a fair bit of skew, and some of them have a lot more skew than others. There’s nothing “extremely wrong with the computer” or “obscure” or “rare” about any of this, and there’s no digging required. Though none of them appear to have negative skew, the large differences between the distributions’ skew has the very same effect on how informative variance is as does the difference between numbers 1 and 2 in my example above; it’s just more complicated to understand that effect intuitively than it is in the case where they have opposite skew.

            [quote<]The average FPS would be extremely varied in some spots. Basically what throws means off also throws variance off, although variance is quite a bit less susceptible.[/quote<]

            This is silly. Frame times or FPS can vary with time and most certainly do vary quite a bit with time (did you [i<]look[/i<] at any of those charts? we're not just rendering a static scene here, after all) but average FPS cannot (it's the average over the time period). The statement about throwing means and variance off is so imprecisely put that there's no way to tell what you were trying to say with it.

            Anyhow, this discussion has passed the point of being a meaningless and futile attempt by me to change the fact that [url=http://xkcd.com/386/<]someone is wrong on the Internet[/url<]. Goodbye.

            • Bensam123
            • 8 years ago

            It can be put on a scatterplot in lieu of average fps, which takes into account micro-stuttering.

            You can put them both in a scatterplot, but combining them would give a more meaningful result for an overall picture when comparing multiple graphics cards, not analyzing one specific graphics card. It’s a better visual representation than just giving the variance and the average FPS.

            I don’t believe I’m being thickheaded. Using the results that are available, please provide examples illustrating your point. I don’t see a skew in any direction. What exactly is being graphed and skewed, for that matter? The results show a pretty steady trend over time. For instance:

            [url<]https://techreport.com/articles.x/21516/7[/url<]

            None of them illustrate any of the effects you're talking about. Even if they did, the effect we're looking for now is a steady and large increase in variance, not one or two blips, as is further illustrated in the video above, where every third frame has a late one for the multi-gpu setup. It just sounds like you aren't applying the statistical examples you gave to the actual work, even if it is a real issue in statistics.

            Ouch, I got XKCD'd and said goodbye too... You know, I never attempted to be a pompous ass, but you sure took the cake with that one.

      • willmore
      • 8 years ago

      And what you’re missing about micro-stuttering is that, *even if all frames render with a low latency*, you are going to have an unacceptable visual reaction if the frame latencies are not *consistent*. These are two different issues. Scott started this whole thing by looking into the former issue and found the latter. Don’t conflate them.

        • jensend
        • 8 years ago

        Sorry, this is just plain wrong. If half your frame times are 10ms and the other half are 5ms, there is no earthly way you will either notice or care, period. That’s not detectable by human perception.

        Consistency definitely matters- consistent 25ms frame times would be tons better for maintaining the illusion of motion and making controls fluid and usable than having 2/3 10ms frames and 1/3 40ms frames. But it’s not the only thing that matters.

          • GrimDanfango
          • 8 years ago

          I see what you’re saying, but I don’t think it’s quite right either. Consistency is important at any speed that matters. The reason you wouldn’t perceive a difference between 5ms and 10ms frame times is because both are faster than your monitor refresh, so it’s irrelevant anyway.

          At frame times actually representable on a 60hz screen, I think anything more than ~20% variation in frame times would start to become intrusive, whether that 20% was at 20fps or 50fps.

          • cynan
          • 8 years ago

          [i<]Consistency definitely matters- consistent 25ms frame times would be tons better for maintaining the illusion of motion and making controls fluid and usable than having 2/3 10ms frames and 1/3 40ms frames. [b<]But it's not the only thing that matters.[/b<][/i<]

          Perhaps not, but this "consistency" is exactly what the article is about. The other points you've brought up about how an FPS number fails to adequately describe video playback performance are valid, but largely irrelevant to this article.

          As for average FPS, I agree that a second number should be added to describe the deviation from the average over time (call it variance). However, I think it would be most useful if this second quantity only described deviation below the average, as deviation above the average should not impact perceived performance.

          One way to combine these two metrics would be to create scores on a scale of 1-10. For example, if the deviation below the average FPS only falls to 90% of the average, then you get a score of 9. Similarly, the average FPS would get a score, likely based on a logarithmic scale, because as you say, an increase in FPS toward the bottom of the scale has a much greater impact (a difference of 25 to 30 FPS has a greater impact than a difference of 40 to 50 FPS). So an FPS of 10 might get a score of 1, an FPS of 20 might get a score of 4, an FPS of 30 might get a score of 6, an FPS of 40 might get a score of 7.5, an FPS of 50 a score of 8, etc. Then you could add these two numbers together (or combine them with a more complex algorithm if warranted) to get some overall performance score. Perhaps you could even add a third quantity representing the proportion of the benchmark spent below the average FPS...

          You would need to run a bunch of validation models relating the scoring and combining algorithm to qualitative measures of perceived performance to fine-tune such a metric, but it should be doable with a reasonably small data set, as long as the data isn't biased.

          But I agree that in a perfect world, average FPS as well as some metric of variance would be displayed along with a graph of these two quantities over time...
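
          A rough sketch of the kind of combined score described above might look like the following (the anchor points are just the illustrative ones from the comment, and the linear interpolation and simple addition are placeholder choices that would need the validation step mentioned):

          ```python
          # Sketch of the proposed score (illustrative anchors, arbitrary weighting).
          import statistics

          def fps_score(avg_fps):
              # Interpolate through the example anchors: 10->1, 20->4, 30->6, 40->7.5, 50->8.
              anchors = [(10, 1.0), (20, 4.0), (30, 6.0), (40, 7.5), (50, 8.0)]
              if avg_fps <= anchors[0][0]:
                  return anchors[0][1]
              for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
                  if avg_fps <= x1:
                      return y0 + (y1 - y0) * (avg_fps - x0) / (x1 - x0)
              return anchors[-1][1]

          def consistency_score(fps_samples):
              # Only dips *below* the average count against the score (0.9 of avg -> 9).
              avg = statistics.mean(fps_samples)
              return max(0.0, 10.0 * min(fps_samples) / avg)

          def combined(fps_samples):
              return fps_score(statistics.mean(fps_samples)) + consistency_score(fps_samples)

          steady  = [30.0] * 9 + [27.0]        # ~30 FPS with a mild dip
          jittery = [45.0, 15.0] * 5           # same average, deep dips
          print(f"steady ~30 FPS:    {combined(steady):.1f}")
          print(f"jittery 45/15 FPS: {combined(jittery):.1f}")
          ```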

      • OneArmedScissor
      • 8 years ago

      “Similarly, moving from 24 fps to 32 fps has a bigger impact on the illusion of motion, fluidity, and response times than moving from 40 fps to 60 fps”

      This is a problem with very nearly all benchmarks, not just games measured in FPS. Everyone and their dog presents them as bar graphs where one may be twice as long as another, but in a scenario that not only doesn’t reflect twice as much of anything, but quite possibly doesn’t even reflect tolerable performance.

      And then we end up with people saying things like, “Look at my link! It proves X is twice as good as Y!” Even the websites themselves pull this by averaging their collective results.

      Maybe this was ok when all CPUs were single-core and the difference was largely clock speed, but this isn’t 2001 anymore.

      • GTVic
      • 8 years ago

      You’re writing like you’ve gleaned some important point here that no one else could see or a conclusion that the author didn’t reach.

    • Bensam123
    • 8 years ago

    “One, it validates my argument that the high-latency frames in a jitter pattern can be the gating factor for the perceived illusion of motion.”

    Variance. 🙁

      • ermo
      • 8 years ago

        Scott has let slip that he’s a liberal arts major once or twice, so my take is that it figures it would take expensive toys not working as advertised to make him address his lack of familiarity with statistics… 😉

      And no, I’m not well versed in statistics either — but unlike Scott, I don’t make a living off of drawing conclusions and making recommendations based on the analysis and visualization of large data sets.

      Still a good couple of articles, though.

        • Bensam123
        • 8 years ago

        Yeah, I was going to suggest that he and Geoff take a statistics and statistical analysis course from their local U, as I think they could add a lot to their arsenal, but I didn’t because it seemed sorta offensive.

        I didn’t know if they just didn’t know about variance as a statistical tool or if they were trying to respin the article in a way that makes it seem like their method is a proprietary measuring tool, so I just didn’t say it.

    • lilbuddhaman
    • 8 years ago

    I never liked how gpu processing goes from GPU2 to GPU1 then out, always felt something was wrong there. Why can’t the bridge be used to communicate info about frames to render, and then each gpu output to an “intelligent” alternating dongle of some sort? Or would this just add more delay between frames?

    Or would a higher speed bridge allow for less delay between cards?

    Is it possible to “supersample” frames such that your cards render at, say, 100fps to produce 50fps (or, say, 75fps), and then the most fluid of those 100 frames are chosen to produce the end-goal fps?

      • Shuttleluv_83
      • 8 years ago

      I’ll say it forever. The absolute best implementation of SLI was 3dfx’s. Absolutely butter smooth performance, you felt it.

        • bcronce
        • 8 years ago

        3dfx rendered every other line, which didn’t scale well, but it didn’t micro-stutter like that crap.

    • puppetworx
    • 8 years ago

    The video really illustrates the problem beautifully.

      • Entroper
      • 8 years ago

      Absolutely. Heck, I didn’t even need the slo-mo, and I didn’t need to enlarge the video. The problem was immediately obvious at the 39-second mark.

    • Draphius
    • 8 years ago

    At last someone confirms what everyone else has told me is just crazy. I’ve played games where fraps is telling me I’m averaging around 60fps with vsync on and it feels like it’s playing at around 20’ish fps. I’m curious if vsync has any effect on the outcomes of these tests as well, cause I have a feeling it exaggerates the problem even more: when I turn it off it feels like it’s maybe running about 50ish fps even though fraps will tell me I’m around 150fps lol. Btw, microstuttering is very apparent in some games and almost non-existent in others. I have a good 60+ games installed right now; some work wonderfully and I can’t perceive it, and others are just awful and I have to switch to single gpu rendering to even play the game, hint hint codemasters.

      • Bonusbartus
      • 8 years ago

      I think the problem is perfectly syncing the GPUs, but I think there are actually two problems. The perceived-FPS problem would be worst if both GPUs finish rendering two frames at almost exactly the same time; you would get this result:

      —–|—–|—–|—–|—–|—–|
      —-|—–|—–|—–|—–|—–|

      While this could be called 12 frames, a single GPU could never finish 2 frames one after another so fast, and the biggest gap is still reasonably big. I’d say this would look like 6 frames on a single-GPU system.

      I can imagine that if the cards don’t even stay in the same sync (which would be logical, as not every frame is as complex as another), the microstuttering problem could seem worse:
      —–|—–|—–|—–|—–|—–|
      —–|–|—–|—|—-|——|—-|
      You would end up with doubled frames at some times and larger gaps at others. Obviously the perfect rendering would be this:

      —–|—–|—–|—–|—–|—–|
      –|—–|—–|—–|—–|—–|—


      just my thoughts on the microstuttering and perceived fps problem

        • BobbinThreadbare
        • 8 years ago

        I don’t think your 2nd scenario is possible with how multi-gpus work. The main mode they use is alternate frame rendering, so they have to take turns no matter the complexity.

          • Bonusbartus
          • 8 years ago

          So the 2nd scenario would look like this, which still gives a good view of the problem?

          —–|—–|—-|–|—–|—–|—-
          —-|-|—–|—-|——|—–|—-|

    • Captain Ned
    • 8 years ago

    Scott:

    You laid out the science, and now others who have noticed the effect have a banner under which to gather.

    You have performed a genuine service to the GFX community.

      • homerdog
      • 8 years ago

      Yet another reason to avoid multi-GPU setups.

        • Captain Ned
        • 8 years ago

        What do I know? What few games I play are still running on an 8800GTS 640MB.

        • Waco
        • 8 years ago

        All you need to do is add more GPUs! With 4 GPUs the average delay has to go down…right? 😛
