As the second turns: further developments

I told myself I’d try to keep pace with any developments across the web related to our frame-latency-based game benchmarking methods, but I’ve once again fallen behind. That’s good, in a way, because there’s lots going on. Let me try to catch you up on the latest with a series of hard-hitting bullet points, not necessarily in the order of importance. 

  • First, I totally missed this when it happened, but not long after we posted our high-speed video of Skyrim, the folks at NordicHardware included a frame latency analysis with slow-mo video in one of their video card reviews. They tested the GTX 680 vs. the Radeon HD 7970 GHz Edition, and they saw the same issues we did with the Radeon 7950 in Borderlands 2 and Skyrim. I’m not sure, but I think I may have talked with one of these guys at a press event a while back. Infected them successfully, it seems.
    Also, a word on words. Although I’m reading a Google translation, I can see that they used the word "microstuttering" to describe the frame latency issues on the Radeon. For what it’s worth, I prefer to reserve the term "microstuttering" for the peculiar sort of problem often encountered in multi-GPU setups where frame times oscillate in a tight, alternating pattern. That, to me, is "jitter," too. Intermittent latency spikes are problematic, of course, but aren’t necessarily microstuttering. I expect to fail in enforcing this preference anywhere beyond TR, of course.

  • Next, Mark at AlienBabelTech continues to progress with latency-based performance analysis. He asks the question: does an SSD make for smoother gaming? (Answer: only sometimes, not often.) Then he straight up pits the GeForce GTX 680 vs. the Radeon HD 7970 in a range of games. Among other things, he saw frame time spikes on the 7970 in Hitman: Absolution similar to what we saw with the 7950. Mark says more is coming, including results from the rest of his 30-game benchmark suite.
  • The guys at Rage3D have gotten a start on their version of latency-based testing in this review. The text takes some time to explain their approach. There are some interesting ideas in there, including a "smoothness index" that could become useful with further refinement (including a clear sense of how specific, knowable amounts of time matter more than percentages in real-time systems based on display refresh rates and human perception). I get the sense James and I see the world in very different ways, and I’m happy to have him join the conversation.
  • Ryan at PCPer has continued his vision quest on the matter of "frame rating," after offering part one just before CES. For the uninitiated, he’s using a video capture card and colored overlays to record and analyze each frame of animation output to the display. In part two, he shows how stepping through the captured frames allows him to identify and pinpoint frame delivery slowdowns, which he calls "stutter." (Bless him for not adding the "micro" prefix.)
    The colored overlays that track frame delivery are nifty, but I’m pleased to see Ryan looking at frame contents rather than just frame delivery, because what matters to animation isn’t just the regularity with which frames arrive at the display. The content of those frames is vital, too. As Andrew Lauritzen noted in his B3D post, disrupted timing in the game engine can interrupt animation fluidity even if buffering manages to keep frames arriving at the display at regular intervals.

  • To take that thought a step further, I recently realized—much later than I probably should have—that the possibility for timing disruptions at both ends of the rendering pipeline means there may never be a single, perfect number to characterize smooth gaming performance. At least, that number would likely have to be the result of a complex formula that accounts for the game engine simulation time, the time when the frame reaches the display, and the relationship between the two. (A toy sketch of what such a formula might look like follows this list of bullet points.)
    Those folks who are still wary of using Fraps because it writes a timestamp at a single point in the process will want to chew on the implications of that statement for a while. Another implication: we’ll perhaps always need to supplement any quantitative results with qualitative analysis in order to paint the whole picture. So… this changes nothing!

  • On a tangentially related note, Nvidia’s Timothy Lottes, the FXAA and TXAA guru, has taken to his personal blog to discuss the issue of game input latency. I mention his post in part because our talk of frame rendering latency has caused some folks to think about that other latency-oriented problem, input lag. Frame rendering times are important to the CPU and GPU reviews we do, but frame times are just one piece of the larger puzzle when you’re talking input lag. Timothy’s post explains the sources of input latency and how GPU rendering fits into the picture. I expect we’ll be hearing more about input lag as things like Oculus Rift move toward becoming products.
    Although it may be confusing to some folks, we will probably keep talking about frame rendering in terms of latency, just as we do with input lag. That’s because I continue to believe game performance is fundamentally a frame-latency-based problem. We just need to remember which type of latency is which—and that frame latency is just a subset of the overall input-response chain.

  • Finally, this may be old news to most of you, but those who are new to the subject may be interested to see that our frame latency-based game testing methods apply to CPU performance, too.
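As promised above, here’s a toy sketch of what such a combined formula might look like. Everything in it is invented purely for illustration (the per-frame simulation and display timestamps, the jitter measure, and the equal weighting); it’s not a metric anyone is actually proposing or using:

    # Purely hypothetical toy score: penalize timing irregularity at both ends
    # of the pipeline (game-engine simulation steps and frame delivery).
    # sim_times_ms / display_times_ms are per-frame timestamps at each end.
    def jitter_ms(times_ms):
        deltas = [b - a for a, b in zip(times_ms, times_ms[1:])]
        mean = sum(deltas) / len(deltas)
        return sum(abs(d - mean) for d in deltas) / len(deltas)

    def smoothness_score(sim_times_ms, display_times_ms, w_sim=0.5, w_disp=0.5):
        # Lower is smoother; the 50/50 weighting is completely arbitrary.
        return w_sim * jitter_ms(sim_times_ms) + w_disp * jitter_ms(display_times_ms)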

That’s all for now, folks. More when it happens.

Comments closed
    • DeadOfKnight
    • 7 years ago

    The plot thickens:

    [url<]http://www.pcgamer.com/2013/02/06/3dmark-wars-nvidia-and-amd-go-head-to-head-on-our-test-rig-who-wins/[/url<]

    • DPete27
    • 7 years ago

    Shout out to Scott [url=http://www.fudzilla.com/home/item/30301-amd-ships-public-beta-for-next-catalyst<]on Fudzilla[/url<] regarding the new Catalyst Beta.

    • medbor
    • 7 years ago

    Awesome work!
    I’m from Sweden and could translate the methodology from NordicHardware if you want, and probably tell them about this post regarding word choices.

    On another note, Sweclockers (the other big Swedish computer hardware site) has had a system for benchmarking frame latencies in development for about a year, but hasn’t posted much about it yet. All they have said so far is that they cannot see the big differences you have found in their measurements, so they want to be absolutely sure before posting.

    Last thing I have to say (as a mathematician) is that the “Latency by percentile” graph should have a logarithmic scale on the x-axis (more detail the closer you get to 100%). As it is now, the main information we can see is that they are all mostly the same up to 95%, and after that it is hard to see anything.
    If we started with 50% on the left, put 95% in the center, and maybe 99% at the right quarter, the differences would be much more readable.
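    Something like this rough sketch, maybe (made-up data; numpy and matplotlib just to show the axis treatment, not anyone’s actual tooling):

    import numpy as np
    import matplotlib.pyplot as plt

    # Pretend frame times in ms; replace with a real Fraps frametimes dump.
    frame_times_ms = np.random.lognormal(mean=np.log(16.0), sigma=0.25, size=5000)

    percentiles = np.linspace(50, 99.9, 500)
    values = np.percentile(frame_times_ms, percentiles)

    # Plot against (100 - percentile) on a reversed log axis so 95%, 99% and
    # 99.9% each get real visual space instead of being crammed at the edge.
    plt.semilogx(100.0 - percentiles, values)
    plt.gca().invert_xaxis()
    plt.xlabel("100 - percentile (%), log scale")
    plt.ylabel("frame time (ms)")
    plt.show()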

    Just my two cents (maybe a few pennies more).
    You can contact me at <username> at the warm postalservice in the commercial web, close to com.hot.mail

      • Firestarter
      • 7 years ago

      [quote<]Last thing I have to say (as a mathematician) is that the "Latency by percentile" graph should have a logarithmic scale[/quote<]
      This is an excellent point; the graphs clearly lend themselves to being displayed on a logarithmic scale.

    • jessterman21
    • 7 years ago

    Looks like Tom’s Hardware has converted, too.

    [url<]http://www.tomshardware.com/reviews/fx-8350-core-i7-3770k-gaming-bottleneck,3407-6.html[/url<]

      • DPete27
      • 7 years ago

      I was going to mention the same thing. However, Tom’s is definitely lacking the polish of TR’s latency-based presentation. Still, it’s good they’re trying.

      Side Note: Why they chose to pit the i7 against the FX-8350 is beyond me. I think matching price points is a more realistic approach, but since the i7 beat the FX, [url=https://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus/3<]an i5-3470 would have destroyed it.[/url<]

        • jessterman21
        • 7 years ago

        Right? It still makes me laugh sometimes that my $100 i3-2100 nearly matches AMD’s flagship 8-core in games.

        I also like how TH acts like it was their idea to test frametimes all of a sudden. Give credit where it’s due, Tom’s!

        • ronch
        • 7 years ago

        Same thoughts here. I saw it a few days ago but it seems Tom’s needs to polish their presentation a bit.

        As for pitting the FX-8350 against the i7-3770, I guess they just wanted to bring to light once again how bad the FX-8350 is for gaming compared to the i7, and threw in latency graphs for free. Yeah, we already know that, don’t we? Enough with the bashing, TH. I bought my FX-8350 and I’m happy with it. Buy what makes you happy.

    • llisandro
    • 7 years ago

    Can we open a second can of worms? I’d be really interested in seeing your analysis on what effect Lucid’s Virtu MVP would have on a game like Skyrim with one of those 7950s that show lots of high-latency frames. Techreport has really only talked in depth about MVP on Origin’s laptop, but with only qualitative descriptions of “smoothness.” I’d love to see a latency-spike graph for MVP +/- virtual Vsync and hyperformance on a higher-powered card.

    My feeling is that most people hate MVP, mostly because it is completely broken for so many games.* But since Skyrim is Techreport’s poster child for Radeon-based latency problems, and MVP works for this game, I think this might be a great way to test whether MVP has a place with high-end cards when the goal is a steady supply of evenly spaced frames at 60Hz, not just high FPS.

    Background: I have a 7850 2GB on a 2500K system running non-beta Catalyst 12 drivers, and I’ve never really noticed latency spikes, churning, skips, etc. with my card. The GTX 650 Ti review does show a pretty significant number of frames over 16.7 ms for the 7850 2GB, way more than the 7950 showed (at higher res). But then I remembered that early on I enabled Virtu MVP and it looked great, so I never looked back. I was using ultra settings at 1080p, but I don’t think I had the high-res texture pack installed. And I think I ended up only enabling virtual vsync, not hyperformance, but it was a definite improvement in my eyes over Skyrim’s vsync.

    My understanding is that Hyperformance can “cheat” by dropping bad frames. I guess I’m okay with dropping a frame here and there for a smoother overall feel, so I’d love to see at least one latency plot for a mid- or high-end card with MVP enabled, just so I know if it’s worth messing with given all the compatibility problems. Pretty please?

    * Man, I feel bad for Lucid. We complain about their game compatibility list, even though it takes AMD and Nvidia four rounds of driver tweaks before you can play the biggest game of the year without it bricking (ok, we complain about that too). Every time AMD puts out another beta fix, I can picture a room full of people at Lucid slamming their heads against their keyboards, knowing they have to go check all their games again.

    No, I don’t work for Lucid 🙂 But I’ve always wondered why there was so much hate for it, when it’s worked very well in my limited experience with it.

    Anyway, thanks to techreport for kicking off the trend. Here’s hoping in 5 years Nvidia’s shouting “kills 99.999% of high-latency frames at 60 Hz XxY resolution!!!” like a hand sanitizer commercial.

      • 0g1
      • 7 years ago

      Yeah, I’d be interested to see the impact of MVP too, especially since every new mainboard these days seems to include it. I’m also curious how it works and whether they can back up their claims of increased frame rates.

      • Damage
      • 7 years ago

      Well, there is this:

      [url<]https://techreport.com/review/23746/a-look-at-lucid-virtu-mvp-mobile[/url<]

      Explains why Fraps latency measurements are problematic with Virtu.

        • llisandro
        • 7 years ago

        You’re right, I guess I skipped the intro and headed straight to the results, sorry. Seems like PCPer’s frame-capture technique would be required to quantify this. I don’t use Fraps; I was using MSI Afterburner, which shows 60 FPS once virtual vsync is enabled. But Afterburner can only log every 100 ms, I think, so that’s not useful, and I am unclear on how it’s getting that number or if it is “real,” since apparently Fraps would give me a larger value representing the actual framerate being produced. Perhaps this means Afterburner is outputting what Lucid is telling it, but its refresh rate isn’t fast enough to tell us actual frame latencies. Bummer.

        The problem with that review above (as you note) is that virtual vsync is designed to be employed when you’re already OVER 60 FPS, and only COD4 was tested under those conditions. I guess I’d be interested to hear from the peanut gallery: what do you get when you turn on virtual vsync when you already have >60 FPS on a higher-end card? Because for me it looked super-smooth. I never see tearing, jitters, sluggishness, lack of responsiveness, anything, even when sprinting across the plains of Skyrim with virtual vsync on, hyperformance off. Oh well. I guess I’ll stop worrying and just play the game. Thanks!

    • Silus
    • 7 years ago

    Again, congrats on all of this Scott and TR. You certainly put the frame latency based benchmarks in the spotlight!

    • DarkUltra
    • 7 years ago

    Wouldn’t more FPS reduce the microstutter, in a linear fashion? How about more FPS and a 120/144Hz monitor? How about a slo-motion video of that? We could again recommend Radeon, just make sure you buy a fast one!

      • superjawes
      • 7 years ago

      Microstutter, as Scott defines it, is inherently NON-linear, so increasing FPS would not reduce the stutter, because it does not solve the non-linearity.

      The one thing you might get from a higher refresh rate is reduced tearing, since more of each frame will be shown. This wouldn’t solve stuttering issues when a frame gets “frozen in time,” but it might smooth things out so that the recovery from a freeze is less jarring.

      I would like to see something like PC Perspective did at 120 Hz myself, just to see what, if anything, is gained by increasing the rate. We may just need to revisit or improve frame synchronization to improve “smoothness.”

        • Firestarter
        • 7 years ago

        I can tell you from personal observation that there is a lot to be gained from 120Hz vs. 60Hz, even if the framerate is not > 100 FPS. You say that the only thing to be gained is reduced tearing, and that is exactly what happens. The effect is rather disconcerting when switching back to 60Hz after having played games at 120Hz for so long: suddenly, tearing is a thing again! Mind you, without trickery like vsync or triple buffering it’s always there, even at 120Hz, but at 120Hz it’s far less noticeable. Since the display is being refreshed twice as often, if one frame ‘tears’ into another, the tear is shown for at most 8.3 ms instead of 16.7 ms. That makes quite the difference, not only because the horizontal tear itself is less visible, but also because the outdated portion of the display gets corrected sooner.

        The real benefit of a 120Hz display lies in actually displaying 120 frames per second, of course, but even at framerates of 50 to 70 FPS it helps a lot due to this phenomenon. Of course, it would help even more if game engines (in conjunction with the GPU drivers) were smarter about when to flip the damn buffers.

          • Chrispy_
          • 7 years ago

          Yeah, I found that 120Hz monitors allow you to use vsync better too.

          At 60Hz, when a frame takes over 17 ms to render, you wait two 60Hz refresh intervals, which means your framerate drops to an effective 30fps until frames are rendering in under 17 ms again.

          At 120Hz, when a frame takes over 17 ms to render, you wait three 120Hz refresh intervals instead of two, which is an effective drop to 40fps until frames are rendering in under 17 ms again.

          40fps > 30fps, thanks.

            • superjawes
            • 7 years ago

            I wasn’t trying to say that there isn’t an overall benefit from increasing the refresh rate, just that it doesn’t eliminate stuttering, tearing, or microstuttering issues. They might be less noticeable and less pronounced, but they would still be there.

            But even with vsync on a 120 Hz monitor, if you have a latency spike, you’re still showing a frame for a (relatively) long time, interrupting the animation. If a frame is shown twice at 60 Hz, it’s visible for 33.4 ms. At 120 Hz, the same frame would be shown three times and would be visible for 24.9 ms. That’s only an 8.3 ms (or 25%) reduction, and that’s the best case. If a frame is shown for more than two refresh cycles at 60 Hz, the best reduction you get at 120 Hz is one 8.3 ms refresh cycle, and the freeze reduction will be less than 25%.
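            In sketch form, the rounding that vsync imposes (just the arithmetic above, with a hypothetical helper):

            import math

            def displayed_ms(frame_time_ms, refresh_hz):
                # With vsync, a frame is held until the next refresh boundary, so its
                # on-screen time rounds up to a whole number of refresh intervals.
                interval_ms = 1000.0 / refresh_hz
                return math.ceil(frame_time_ms / interval_ms) * interval_ms

            # An 18 ms frame is held ~33.3 ms at 60 Hz, but only 25.0 ms at 120 Hz.
            print(displayed_ms(18.0, 60), displayed_ms(18.0, 120))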

            This is why shifting the testing methods is so important. Improving refresh rates will smooth out some tearing and improve recovery from stuttering, but if we really want to see these things fixed, we have to focus on the frame time spikes and on getting those down.

            • Firestarter
            • 7 years ago

            I agree that it doesn’t eliminate stuttering or tearing issues, but I do think that it makes at least tearing significantly less visible. Big stutters are just as visible, though, and if we’re talking 80+ FPS on a 120Hz screen, I’d argue that a 50 ms frame is more visible than on a 60Hz screen with a 60 FPS cap. I can’t comment on microstuttering, as I’ve never personally witnessed it.

          • 0g1
          • 7 years ago

          They are smart about when to flip the buffers. It’s called v-sync, and when the game uses a triple buffer, the 3rd buffer is always the one that gets synced to the display. No tearing. It is held there until the next refresh of the display, at which time the buffer is flipped to the 2nd one (the 1st being the one used for rendering).

          The problem with v-sync is that there is input lag of up to one monitor refresh before the next rendered buffer starts getting streamed out, i.e. always waiting for the 3rd buffer to complete.

          With v-sync disabled and double buffering, the contents of the 2nd buffer are constantly being streamed out to the display, so you get tearing but pretty much instant display of the frame.

    • Maxwell_Adams
    • 7 years ago

    I’ve been wondering something – what if you take all the frame times from a benchmark run and just add them up? Do the results match up with framerate averages?

      • Damage
      • 7 years ago

      Yes, if you add up the # of frames and divide by the time, you get FPS. We do that in our frame time spreadsheets, and the numbers match the averages output by Fraps.
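      A minimal sketch of that arithmetic, with made-up frame times:

      frame_times_ms = [15.2, 16.8, 14.9, 33.1, 16.0]   # made-up per-frame times
      total_seconds = sum(frame_times_ms) / 1000.0
      avg_fps = len(frame_times_ms) / total_seconds
      print(avg_fps)   # the same average FPS Fraps would report for the run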

        • cygnus1
        • 7 years ago

        Lol, I was going to say 5 minutes (or however long the benchmark run lasted)

    • TaBoVilla
    • 7 years ago

    Great job Mr. Scott =)

    Thanks to some of your findings and interest in the subject, the overall PC gaming experience will most likely benefit from it.

    • tfp
    • 7 years ago

    Random side question: do we have a picture of how things like PhysX, or offloading non-video in-game calculations from the CPU to a GPU, impact frame and game latency?

    It seems like it could be a new, large can of worms but with a very limited audience: multi-card configs, using PhysX on a high-end single card, etc. Really, it seems pretty Nvidia-centric; I can’t really remember what happened with AMD/ATI/Hovak when it comes to items like this.

      • Voldenuit
      • 7 years ago

      [quote<]I can't really remember what happened with AMD/ATI/Hovak when it comes to items like this.[/quote<]
      Havok FX (the GPU-accelerated branch of Havok) was canned when Intel bought Havok. Pity, because they (Havok) promised vendor-agnostic GPU physics.

    • Bensam123
    • 7 years ago

    “To take that thought a step further, I recently realized—much later than I probably should have—that the possibility for timing disruptions at both ends of the rendering pipeline means there may never be a single, perfect number to characterize smooth gaming performance.”

    Yes! I definitely agree with this and further benchmarks need to be looked into that don’t rely solely on FPS or frame-time, which is derived from FPS (at higher resolution). Although there aren’t many of those available yet. The frame capture as well as motion estimation seem to be pretty good starts.

    I disagree about latency being associated with frame time (no surprise). While FPS is in a way correlative to latency, it’s not the same thing. FPS would be bandwidth, which is also correlated to latency. Jitter is also the same as variance. When I refer to latency and bandwidth I’m referring to the internet usage of the terms, such as a ping compared to a speed test. Often times internet test sites include ping, bandwidth, and jitter because they’re different tests that measure different aspects of the same connection.

    Unfortunately there really isn’t a way to measure latency for games, such as a ping, which would be exceptionally helpful for diagnosing problems when the video card is under load. Obviously throughput would influence latency.

    I would really urge TR to take a look at core parking and its implications. It may very well be influencing more than can simply be gauged by FPS or frametime (many people have spoken of this, both in the TR forums and across the web). For instance, I’ve noticed it affects simple mouse and keyboard input immensely, which isn’t represented at all in FPS. It is definitely a game changer. In all my years reading and working with tech I really haven’t seen something make such a huge difference, besides maybe an SSD. The only other one I can think of is every once in a while when I upgrade to a higher-DPI mouse.

    Perhaps that will change when I purchase one of those new 144Hz monitors with 1 ms response times…

    Speaking of which, you can purchase the Asus VG248QE from Amazon.com now. Hopefully you guys will do a review for us. ^^

      • tfp
      • 7 years ago

      Reviewing monitors kills tech websites, TR don’t do it!

        • Bensam123
        • 7 years ago

        Not talking about a bunch, just a couple of interesting variants… like the Korean IPS panels, and the Asus I mentioned, which has a 144Hz refresh rate and a 1 ms response time and is pretty much a complete outlier.

      • cygnus1
      • 7 years ago

      [quote<] I disagree about latency being associated with frame time (no surprise). While FPS is in a way correlative to latency, it's not the same thing. FPS would be bandwidth, which is also correlated to latency. Jitter is also the same as variance. When I refer to latency and bandwidth I'm referring to the internet usage of the terms, such as a ping compared to a speed test. Often times internet test sites include ping, bandwidth, and jitter because they're different tests that measure different aspects of the same connection. [/quote<]

      You're right on jitter and bandwidth but very wrong on latency. Latency is a measure of how long something takes or is delayed, plain and simple. I guess you may only be exposed to its use in the network world, where it refers to how long a round trip takes, from point A to point F and back to point A. But there's no problem describing how long it takes for a frame to make it from the game engine to the screen as latency. I very much agree with the Wikipedia definition: "Latency is a measure of time delay experienced in a system, the precise definition of which depends on the system and the time being measured."

        • Bensam123
        • 7 years ago

        That’s why I said it’s not entirely right, but not entirely wrong either. If you’re using latency to measure bandwidth and vice versa, it’s not really a different test… It’s just naming it something else.

        I could see TR’s take on frametime being latency-centric, because it is. They dialed up the resolution on FPS down to the millisecond level, and they’re analyzing the data based on inconsistencies and looking for data that affects latency. That still doesn’t change where the data comes from or how it’s measured.

        This may end up turning into an argument of semantics, but IMO, if you want to measure latency you have to do it in a way that doesn’t have a direct relationship with bandwidth or throughput… or you’re just measuring bandwidth.

          • Firestarter
          • 7 years ago

          TR has never really measured latency in the GPU reviews; it’s always been throughput. What we essentially see in the graphs is the time delay measured between one frame and the next, and the assumption is made (when discussing the latency of the system) that this is the time the system needed to create the frame based upon the input. This assumption is NOT correct, but it’s a reasonable one to make if you consider the alternative.

          The proper way to measure latency of the system is to measure what we perceive as input lag. The system here is defined as the PC + peripherals, that is, the motherboard/CPU/RAM/GPU, [i<]and[/i<] the mouse and monitor. To measure the time delay of that whole system, you provide input on one end (mouse) and measure how long it takes for the output (monitor) to display the result of that input.

          One of the problems with that approach is that you can only realistically do this in very few situations, certainly not throughout gameplay sessions of multiple games. It's also very labor intensive, and you introduce even more variables to the test (monitors being the big problem). And in the end you will find that the latency of the system can almost always be seen in terms of the [i<]number of frames[/i<] it takes for the input to be sent to the output, plus the latency that the monitor and mouse impart. The ideal case would be 1 frame of latency; throw in vsync, triple buffering, input smoothing or tricks like that, and it grows to 2 frames or more.

          Knowing that, it follows that if we have a reasonable idea of how long it takes for a single frame to be rendered, then we also have a reasonable idea of the latency of the system, as it will always be a somewhat constant factor (mouse, keyboard) plus X times the render time of one frame, where X is constant for the whole session. Now, the point where all these assumptions break down and become meaningless is when you measure frame render times like the ones in this graph: [url<]https://techreport.com/r.x/radeon-latency-catbeta/skyrim-zoom-7950.gif[/url<]

          Anyway, latency is not really what TR concerns itself with, even when we occasionally use expressions like 'high latency frames' when discussing these articles. GPU reviews have always been about throughput, measuring how many frames the system can spit out in a given timespan. We used to just take the average throughput and call it a day. What TR has tried to do is find a measure for the [i<]consistency[/i<] of that throughput. And part of the reason that this consistency is worth the extra scrutiny is that it also affects the perceived latency of the system (or input lag as it's more commonly known).
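          A toy version of that assumption, with hypothetical numbers (the fixed delay and the frame count are made up):

          def input_lag_ms(frame_time_ms, pipeline_frames=2, fixed_delay_ms=10.0):
              # Roughly constant mouse/monitor delay plus X frame times,
              # where X depends on buffering (vsync, triple buffering, etc.).
              return fixed_delay_ms + pipeline_frames * frame_time_ms

          print(input_lag_ms(16.7))   # a smooth 60 FPS frame
          print(input_lag_ms(50.0))   # a latency spike drags the whole chain with it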

            • Bensam123
            • 7 years ago

            Yes, as I said, there is a correlation between latency and throughput, but they aren’t one and the same. So it’s not entirely wrong saying frame time is a measure of latency (as, like you said, it’s a measure between the two frames).

            I think it would be possible to monitor latency on a game level and then also monitor it on a system level. You’re talking about a global approach, which I definitely think would eventually be the way to go, but there currently aren’t testing methods available for it. Yes, we’d need a device that serves as a mouse input and then also monitors the output of the device through a DVI port.

            I disagree about FPS being a tell all of system latency though. Core parking is a prime example of how FPS can mean jack diddly when it comes down to it, even with a nice smooth low variance output, like from the Nvidia cards. Graphics are just part of a game and if the entire game was made up JUST of graphics, it would be a pretty good measurement of latency. All else being equal, but all else is not equal. You can spin in a circle and throw everything off. There is no amount of predicting or usage scenarios that can accurately account for user input yet.

            A frame buffer or vsync would add even more variables to this, as then you’re starting to edge into input lag, which is also similar to latency… but still can’t be measured accurately. You can throw a large buffer in the middle of everything and it’ll look smooth as silk on the other end, but that’ll completely wreck system responsiveness. Frametime or any of the above measurements do not account for any amount of input latency. That’s why I used the internet as a really good example of setting apart throughput and latency, because it has similar usage scenarios to this.

            Being consistent is not the same as low latency. High variance in frametime or FPS does not indicate poor responsiveness, only poor throughput. It may not even be poor throughput, it could simply be someone spinning in a circle or a sudden explosion would cause a spike. A large buffer would give you nice consistent and maybe even high FPS, but it would be like walking through mud.

            Some of the posts on Lucid’s tech on here offer a similar insight, where it’s not necessarily about throughput, but about delivering a frame in time for it to be perceived properly and cutting out frames that aren’t needed. It’s unfortunate that so much of what they talk about is shrouded, as it would most definitely offer an interesting take on things. I suppose that’s where their mojo is coming from, though.

            A good question from all of this: are input lag and latency the same thing? Lag seems to be a more subjective, qualitative term, not necessarily one with an exact definition like latency.

            • cygnus1
            • 7 years ago

            [quote<] So it's not entirely wrong saying frame time is a measure of latency (as, like you said, it's a measure between the two frames). [/quote<]

            You are right, it's not entirely wrong; it's actually entirely right, and you shouldn't describe it in other ways. I really think you're still not understanding what's being measured. And you keep on about how what TR is doing is derived from FPS. But it's not. They're not measuring a rate of any kind. They're measuring the time it takes to render each individual frame. That's the definition of latency: how long a process takes to complete. You can derive FPS from that data, but in no way can you ever derive that frame time data from FPS.

            You're likely stuck on bad analogies. FPS isn't a rate like a velocity; you're not measuring the motion of an object over time. FPS is derived from an average over time of individual events. And the BS of this just being high-resolution FPS measuring is beyond stupid. Applying a rate to individual frames requires extra math steps. You can't measure how long a frame takes to render with a rate. You can convert the render time (time being what you actually measure it in) into the FPS rate, but that's just extra hoops to jump through for the, I think, mentally limited out there.

    • Aphasia
    • 7 years ago

    I read the NordicHardware article in the original language, and while they use "microstuttering" to describe it, in Swedish there is no denying that the message of the article says what they mean, despite that word choice.

    They are very clear about what they are describing, so the rest is just semantics 😉

    • Firestarter
    • 7 years ago

    I can’t help but think that part of the reason that other sites are picking up on this is that you do not claim to know it all, and that you openly invite everyone to join the discussion and suggest a better method. At first, I thought you were doing this mainly to engage us TR readers, but these follow-up articles prove that you’d rather engage and enlighten all of us PC enthusiasts, regardless of where we prefer to read our news. In the face of advertisement revenue competition, I find that position very honorable and I commend you for it, sir.

      • Aphasia
      • 7 years ago

      ^^ This

      I see good times coming to the PC-gaming world in the next year or two. First with latency testing, which should shift the focus toward a smooth experience instead of pure numbers, and also with the talk of input lag/latency, which at times can be something of a pickle for us FPS gamers, or for very tight music games for that matter.

      FPS as a pure performance number is dead, long live the FPS.

      • clone
      • 7 years ago

      I don’t really believe that at all. What I believe kickstarted this notably rapid change now, as opposed to when TR first started breaking down FPS, was the videos. Suddenly everyone else’s methods seemed a little dated, maybe a little… “lazy” in comparison.

      Prior to the videos, I assume many in the industry considered the effort somewhat obsessive, if not outright unnecessary; post-video, a response was mandated.

      I also believe the microstuttering discussion has given the web something new to chew on as opposed to the same old, same old that’s been going on for the past decade.

      You can thank Tech Report for working at it, you can give credit to SLI and Crossfire, both of which have been plagued by microstuttering since their inception, and lastly you can give a huge thanks to AMD for letting it get bad enough on a single GPU to become visible. I’m certain the latter is an honor AMD could do without and would gladly pass on to Nvidia.

        • nanoflower
        • 7 years ago

        I think the video definitely helped push other websites to join in, but I think it really started with TR pointing out the issues with the 7000 series. Before that, it seemed like the other websites weren’t really interested in looking at the latency issue, as it didn’t seem to show much more information than the FPS measure. Once the issue showed up with the 7000-series drivers, it drew a lot of comments and even a few posts on other sites like Slashdot. That enabled the topic to get a much wider discussion among people who don’t read the Tech Report.

        In any case I’m glad that Scott and the rest have dug in on this issue as it will only help all of us out going forward as I expect Nvidia/AMD/Intel will be paying attention to this from now on.

    • superjawes
    • 7 years ago

    I think PC Perspective probably captured the importance of frame times better than anyone so far…no offense, Damage 😉

    Frame times are good, but the real test is going to be at the monitor and the reconstruction of the motion to the user (because what the user ultimately sees is what determines “smoothness”). Particularly, I like how he pointed out how multiple frames are put together at the same time on the monitor, and how bad those “jumps” can be.

    Unfortunately, that analysis still hasn’t turned up a usable metric. It’s great for looking at the issues under a microscope, but doesn’t really measure it for a comparison.

    I think you probably can and should start looking at frame time distributions for these tests (as others have mentioned) so you can basically measure the “noisiness” of a card. I know we normally don’t want to penalize fast frames (since they aren’t necessarily bad), but I do think they can cause some trouble in the animation, at least when paired with high latency spikes (in which case the bursts of frames can cause more severe animation tearing).

    • anotherengineer
    • 7 years ago

    What were you doing all this time Damage?

    Gaming?
    😉

      • Damage
      • 7 years ago

      Well, I did…
      [url<]https://techreport.com/news/24202/tr-big-ces-2013-digest[/url<]
      and...
      [url<]https://techreport.com/news/24211/oculus-rift-is-freaking-amazing[/url<]
      then...
      [url<]https://techreport.com/review/24218/a-driver-update-to-reduce-radeon-frame-times[/url<]
      and also...
      [url<]https://techreport.com/review/24242/the-tr-podcast-127-the-ces-2013-extravaganza[/url<]

      Plus the holiday yesterday. Also finished Captain Scarlett DLC last night. 🙂

        • Captain Ned
        • 7 years ago

        Come August the first JBI homebrew is on me. 😉
