Nvidia’s GeForce GTX 560 Ti 448 graphics card


The video card market has been surprisingly static in the second half of 2011, so Nvidia’s recent introduction of a new product—the GeForce GTX 560 Ti 448—was a happy occasion on several counts. First, it was a chance for something different to perhaps offer a little extra value to Christmas shoppers. Second, it was an opportunity for us to revisit some fancy new GPU testing methods with the latest games. Thus, we fired up the graphics test systems in Damage Labs, installed titles like Skyrim and Batman: Arkham City, and set to work. Finishing up this review has taken a little longer than we’d have liked, but we’ve managed to include some fresh insights on several fronts. Read on for our take.

Above is a picture of the video card that prompted this little get-together, Zotac’s version of the GeForce GTX 560 Ti 448. As you may know, today’s video cards are carefully calibrated beasts, based on one of several different chips and tailored to deliver a particular mix of price and performance. This newcomer slots in between two very well established offerings, the GeForce GTX 560 Ti and the GeForce GTX 570. One artifact of this product’s late addition to the Nvidia lineup is its awkward name, which is meant to signify its place in the world using an especially long accumulation of letters and numbers. In fact, the “448” in the name refers to the number of shader ALUs enabled on the card’s GF110 GPU.

Yep, that’s right. Although the rest of the GeForce GTX 560 series is based on the smaller GF114 chip, this new card packs the big daddy, the GF110. We’ve already described the Fermi graphics architecture, on which the GF110 chip is based, in some detail. The fundamental building block of this architecture is a unit known as the SM, or shader multiprocessor, which is nearly a complete GPU unto itself. The GF110 has 16 SMs, along with six memory interfaces and six corresponding ROP partitions. Nvidia has spun out several products based on the GF110 chip, including the pricey GTX 580, with all units enabled, and the GTX 570, with 15 SMs and five memory interface/ROP partition pairs enabled.

The GeForce GTX 560 Ti 448 takes this selective trimming operation a tiny bit further. Only 14 of its SMs are enabled, but in every other way—clock speeds, the number of memory interfaces and ROP partitions, the works—the GTX 560 Ti 448 matches the GTX 570. The consequences of this change are so minor as to be nearly imperceptible. The loss of an additional SM means the GTX 560 Ti 448 will have a little less shader arithmetic throughput, texture filtering capacity, and geometry processing ability than the 570. However, the 560 Ti 448 has the exact same memory bandwidth, pixel fill rate, and triangle rasterization rate. Combine that with the fact that the Zotac card we’re reviewing is clocked higher than Nvidia’s baseline speed, at 765MHz rather than 732MHz, and the GTX 560 Ti 448 comes vanishingly close to the GTX 570 in terms of key graphics throughput rates.

                           Peak pixel   Peak bilinear    Peak bilinear    Peak shader  Peak           Peak
                           fill rate    integer texel    FP16 texel       arithmetic   rasterization  memory
                           (Gpixels/s)  filtering rate   filtering rate   (GFLOPS)     rate           bandwidth
                                        (Gtexels/s)      (Gtexels/s)                   (Mtris/s)      (GB/s)

GeForce GTX 560 Ti         26.3         52.6             52.6             1263         1644           128
Asus GTX 560 Ti DCII TOP   28.8         57.6             57.6             1382         1800           134
GeForce GTX 560 Ti 448     29.3         41.0             41.0             1312         2928           152
Zotac GTX 560 Ti 448       30.6         42.8             42.8             1371         3060           152
GeForce GTX 570            29.3         43.9             43.9             1405         2928           152
GeForce GTX 580            37.1         49.4             49.4             1581         3088           192
Radeon HD 6870             28.8         50.4             25.2             2016         900            134
Radeon HD 6950             25.6         70.4             35.2             2253         1600           160
Radeon HD 6970             28.2         84.5             42.2             2703         1760           176

Then again, the hot-clocked Asus GTX 560 Ti card that we’ve included in the table above—and in our tests on the following pages—is theoretically faster in several key categories, including texture filtering and shader arithmetic, thanks to its even higher clock speeds. You can see how the small amount of daylight between these different products tempted Nvidia to call this card a GTX 560 Ti, even though it’s based on a different chip.
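As a quick sanity check on the table above, the key peak rates for these cards fall straight out of simple arithmetic on unit counts and clocks. Here’s a minimal sketch (the counts and clocks are Nvidia’s published specs; the helper name is ours): shader ALUs times two FLOPs per fused multiply-add at the doubled “hot” shader clock, texture units times the core clock, and ROPs times the core clock.

```python
def fermi_peaks(alus, tmus, rops, core_mhz):
    """Peak theoretical rates for a Fermi-based card from unit counts and clocks."""
    core_ghz = core_mhz / 1000.0
    return {
        "gflops": alus * 2 * (core_ghz * 2),  # FMA, shaders run at 2x the core clock
        "gtexels": tmus * core_ghz,           # bilinear integer texel filtering
        "gpixels": rops * core_ghz,           # pixel fill
    }

# These reproduce the table's numbers for the GF110 trio:
print(fermi_peaks(448, 56, 40, 732))  # GTX 560 Ti 448 at reference clocks
print(fermi_peaks(448, 56, 40, 765))  # Zotac's hot-clocked 560 Ti 448
print(fermi_peaks(480, 60, 40, 732))  # GeForce GTX 570
```

Run the three calls and you get 1312, 1371, and 1405 GFLOPS, respectively, matching the table to within rounding.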

Both the GTX 570 and the GTX 560 Ti 448 have several other advantages over the regular GTX 560 Ti, however. Chief among them is more memory bandwidth, which will likely translate into better performance. Also, the GF110 has substantially higher peak polygon rasterization rates and a little bit more geometry processing potential, all told. This property doesn’t generally affect current games, unless they have strange polygon-count inflation problems, but it may impact performance in future games that put DirectX 11 tessellation to truly good use. For now, we can show you the difference between the GF110-based 560 Ti 448 and the GF114-based vanilla GTX 560 Ti using a quick, synthetic tessellation benchmark.

Note that even the GF114-based card produces much higher scores than today’s fastest Radeons. Since those same Radeons are altogether competitive with the Nvidia cards in current games, you can probably surmise that the difference between the GF110 and GF114 may not amount to much for a while. Still, it is a real, hardware-based distinction between the chips that matters for a specific sort of performance.

As you can see in the picture of the Zotac card above, the GTX 560 Ti 448 sports dual SLI connectors that will allow it to participate in three-way SLI teams, something the regular GTX 560 Ti cannot do. The 560 Ti 448 card also has a little bit more video memory, 1280MB instead of 1024MB, which may help occasionally at very high resolutions, when video memory is running low.

Left to right: Asus GTX 560 Ti TOP, Zotac GTX 560 Ti 448, and an older Zotac GTX 570

Another perk of grabbing Zotac’s GTX 560 Ti 448: like newer versions of the GTX 570, it’s been squeezed into a more reasonable 9″ card length, versus the 10.5″ length of the first wave of 570s. That change may help with fit in mid-sized cases.

Nvidia’s suggested price for the GeForce GTX 560 Ti 448 is $289.99; the Zotac card above will run you $299.99 at Newegg and comes with a copy of Battlefield 3. Such pricing not only lands squarely between the GTX 560 Ti (~$240) and the GTX 570 (~$340), but also between the Radeon HD 6950 (~$250) and 6970 (~$350). Like Toyotathon, though, the GTX 560 Ti 448 is a limited-time offer, Nvidia insists. These cards will only ship to North America and Europe, and only via select board makers. When this allocation of GPUs is exhausted, that’s all she wrote. In fact, Nvidia tells us the limited nature of the product run is one reason it didn’t pull out a more suitable name, like GeForce GTX 565, to assign to these cards.

Some refinements to our methods

A few months ago, we reconsidered the way we test video-game performance and proposed some new methods in the article Inside the second: A new look at game benchmarking. The basic argument of that article was that the traditional approach of measuring speed in frames per second has some pretty major blind spots. For instance, one second is an eternity in terms of human perception. A bunch of fast frames surrounded by a handful of painfully slow ones can average out to an “acceptable” rate in FPS, even when the fluidity of the game has been interrupted. (We opened a whole other can of worms when we applied these insights to multi-GPU systems, but that is a story for another day.)
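To make that blind spot concrete, here’s a minimal sketch (with made-up frame times) of how a run containing several obvious hitches can still report a perfectly healthy FPS average:

```python
# Hypothetical one-second stretch of gameplay: 85 quick frames plus three
# 50-ms hitches. The hitches are long enough to feel, but the FPS average
# smooths them away entirely.
frame_times_ms = [10] * 85 + [50] * 3

total_seconds = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_seconds

print(total_seconds)  # 1.0 second of gameplay
print(avg_fps)        # 88.0 FPS -- "acceptable," despite three visible stutters
```

Eighty-eight frames per second sounds great, yet three of those frames each hung around for a twentieth of a second.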

We weren’t quite sure what folks would think of our proposed new methods, but the response so far has been overwhelmingly positive. Most folks embraced the idea of a new approach, and many of you wrote in to offer your suggestions on how we might improve our methods going forward. Since then, several things have happened.

For one, while I was preoccupied with reviewing new CPUs, Cyril took the ball and ran with it, testing both Battlefield 3 and Skyrim using our proposed new methods. Folks seemed to like those articles, and in both cases, Cyril was able to pinpoint performance issues that a simple FPS measurement would have missed.

Meanwhile, behind the scenes in conversations with TR editors and others, I’ve slowly sifted through your suggestions to figure out which of them might prove worthwhile to us. We’ve rejected some interesting ideas simply because we think they’d be too complicated for mass consumption, and we’ve passed on some others because they didn’t necessarily apply to the sort of performance we’re after. The goal of a real-time graphics system is to produce frames regularly at relatively low latencies, within a window established by the limits of display technology and human perception. Measuring properties like “variance” without reference to the realities involved doesn’t appeal to us.

We have come up with one refinement to our methods that we think is helpful, though. In past articles, in order to highlight cases where a particular config ran into performance problems, we reported the number of frame times that were longer than a given time period for each card, usually 50 ms. We kind of pulled that number out of a hat, but 50 ms corresponds to about 20 FPS at a steady rate. We think that’s slow enough that the illusion of motion is being threatened. A collection of too many frame latencies beyond about 50 ms wouldn’t produce a good experience in most games. By counting the number of frame times above 50 ms for each config, we were able to offer a sense of which ones had potentially problematic performance problems (picked a peck of pickled peppers).

This approach, though, has two problems. First, in certain cases where the prevailing frame times rose above 50 ms for most GPUs, the faster GPU would of course produce more frames above 50 ms. We didn’t want to penalize the faster solution, so we had to be very careful about how we set our threshold in each test scenario.

The second problem is related: a simple count of the frame times longer than a certain threshold fails to consider the time element involved. For instance, take the two example performances below. They’re fabricated but possible.

The first card, the ReForce, produces three frames that take 51 milliseconds each during its test run. That’s not great, but three frames at 51 ms probably wouldn’t interrupt the flow of a game too badly. The second card, the Gyro, has only one long-latency frame, but it’s a doozy: 200 ms, a fifth of a second and an undeniable interruption in gameplay. Here’s how our long-latency frame count would look for these two cards:

Whoops. The Gyro comes out looking better in that chart, even though it’s obviously doing a poorer job of delivering fluid motion. The solution we’ve devised? Rather than counting the number of frames above 50 ms, we can add up, for each long-latency frame, the portion of its frame time that falls beyond our 50-ms threshold. For our example above, the outcome would look like so:

Those three 51-ms frames only contribute 3 ms to the total time spent waiting beyond our threshold, while that one 200-ms frame contributes much more. I think this result captures the relative severity of the interruptions in gameplay fluidity quite nicely. This technique also does away with any concerns about the faster card being penalized for producing more frames.
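In code, the difference between the old metric and the new one is a one-line change. A minimal sketch, using the made-up ReForce and Gyro numbers from above (long frames only):

```python
THRESHOLD_MS = 50

def frames_over(frame_times, threshold=THRESHOLD_MS):
    """Old metric: count of frames that took longer than the threshold."""
    return sum(1 for t in frame_times if t > threshold)

def time_beyond(frame_times, threshold=THRESHOLD_MS):
    """New metric: total milliseconds spent past the threshold, summed per frame."""
    return sum(t - threshold for t in frame_times if t > threshold)

reforce = [51, 51, 51]  # three mild hitches
gyro = [200]            # one whopper

print(frames_over(reforce), frames_over(gyro))  # 3 vs 1: Gyro wrongly looks better
print(time_beyond(reforce), time_beyond(gyro))  # 3 ms vs 150 ms: severity captured
```

The count metric ranks the Gyro ahead; the time-beyond-threshold metric correctly flags its 200-ms frame as the bigger interruption.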

I should note that, although we cooked up this new method months ago in a frenetic conversation at Starbucks during IDF, a TR reader named Olaf later wrote in and pointed out this exact problem with the time element of the frame rate count, in response to one of Cyril’s articles. Olaf, you nailed it. We’re going with the technique of adding up time spent beyond 50 ms from here on out.

The time element of individual frames also scuttled one of our favorite suggestions for augmenting the presentation of our results: histograms showing the distribution of frame times. At first, that seemed like a nice idea. However, when we actually created one, it looked like so:

The problem? These are real data from our tests, and as you’ll see later, what separates the performance of some GPUs from others here is an abundance of long frame times in some cases. Some of the GPUs devote quite a bit of time to processing long-latency frames, so those frames are very important to consider. Yet the severity of those long-latency frames is entirely obscured in this histogram. As a simple count, they’re overwhelmed in the chart by the many thousands of low-latency frames produced by all of the GPUs. It’s a purty picture, but I don’t think it adds much to our analysis.

That’s a shame, because I was ready to bust out the fancy stuff:

Bam! Pow!

But ultimately pointless. We’ll keep looking for ways to better analyze and present our results in the future. I think what we’ve developed so far is pretty strong, though, with our one new refinement.

For what it’s worth, I’ve also rejiggered things behind the scenes a little bit to make sure that, where possible, we’re sampling all five runs from each game separately and then picking the median result. That way, the amount of time spent beyond 50 ms is the time spent during a single, representative run—and not the result of the occasional system performance blip due to a background task. The one place where that doesn’t work is the 99th percentile frame times, where we’ve found we get more coherent results by analyzing the data from all five runs as a group.
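Here’s a rough sketch of that bookkeeping (hypothetical helper names and data; our actual scripts differ): each run is scored separately and the median run is reported for time-beyond-threshold, while the 99th-percentile frame time comes from all runs pooled together.

```python
import statistics

def median_time_beyond(runs, threshold=50):
    """Score each run separately, then take the median run, so a one-off
    background-task blip in a single pass can't skew the reported result."""
    per_run = [sum(t - threshold for t in run if t > threshold) for run in runs]
    return statistics.median(per_run)

def pooled_99th_percentile(runs):
    """99th-percentile frame time over all runs pooled together (nearest-rank)."""
    pooled = sorted(t for run in runs for t in run)
    idx = min(int(len(pooled) * 0.99), len(pooled) - 1)
    return pooled[idx]

# Five hypothetical runs; the third one caught a background blip.
runs = [
    [20, 20, 60],
    [20, 20, 55],
    [20, 20, 200],
    [20, 20, 58],
    [20, 20, 52],
]
print(median_time_beyond(runs))  # 8 -- the 200-ms blip doesn't dominate the score
print(pooled_99th_percentile(runs))
```

The per-run scores here are 10, 5, 150, 8, and 2 ms; the median pick of 8 ms quietly discards the blip-afflicted run.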

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor         Core i7-980X
Motherboard       Gigabyte EX58-UD5
North bridge      X58 IOH
South bridge      ICH10R
Memory size       12GB (6 DIMMs)
Memory type       Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz
Memory timings    9-9-9-24 2T
Chipset drivers   INF update 9.2.0.1030
                  Rapid Storage Technology 10.8.0.1003
Audio             Integrated ICH10R/ALC889A with Realtek 6.0.1.6482 drivers
Graphics          Dual Radeon HD 6950 2GB with Catalyst 11.11b drivers
                  Radeon HD 6970 2GB with Catalyst 11.11b drivers
                  Asus GeForce GTX 560 Ti DirectCU II TOP 1GB with ForceWare 290.36 beta drivers
                  Zotac GeForce GTX 560 Ti 448 1280MB with ForceWare 290.36 beta drivers
                  Zotac GeForce GTX 570 1280MB with ForceWare 290.36 beta drivers
Hard drive        Corsair F240 240GB SATA
Power supply      PC Power & Cooling Silencer 750 Watt
OS                Windows 7 Ultimate x64 Edition, Service Pack 1, DirectX 11 June 2010 Update

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing a 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Modern Warfare 3 at a 2560×1600 resolution with 4X AA and 16X anisotropic filtering.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The Elder Scrolls V: Skyrim

Our test run for Skyrim was a lap around the town of Whiterun, starting up high at the castle entrance, descending down the stairs into the main part of town, and then doing a figure-eight around the main drag.

Since these are pretty capable graphics cards, we set the game to its “Ultra” presets, which turns on 4X multisampled antialiasing. We then layered on FXAA post-process anti-aliasing, as well, for the best possible image quality without editing an .ini file.

By any measure, all of these graphics cards aced this test. Honestly, we didn’t expect this outcome. Cyril found that slightly lower quality settings at 1080p were a bit of a challenge for the GeForce GTX 560 Ti and Radeon HD 6950. We resisted jumping up to a higher (and less common) resolution in part because we thought these settings would be challenging enough. Yet in our tests, none of the cards spent much time at all above 50 ms or even 40 ms.

Why the difference? Two possibilities come to mind. One, we’re using newer video drivers all around. And two, our test system has a much beefier Core i7-980X processor, lending credence to the claims that this game is somewhat CPU-bound.

Whatever the case, you’re good with any of these cards in Skyrim, and the differences between them are minor—three milliseconds of frame latency for 99% of the frames produced, as our percentile chart shows. As one might expect given the theoretical numbers at the opening of this review, there’s virtually no difference between Zotac’s GTX 560 Ti 448 and the stock-clocked GeForce GTX 570.

Batman: Arkham City

We did a little Batman-style free running through the rooftops of Gotham for this one.

We had hoped to test Arkham City with its DirectX 11 features enabled, but the DX11 code path in the first release of the game was essentially broken, with way too many performance pitfalls on any video card. The game has since been patched, but we’re not yet confident it’s truly fixed, given the user feedback so far. So we’re sticking with testing this game—which is truly excellent, by the way—the same way we’ve played it: in DirectX 9 mode at the highest possible resolution and image quality settings otherwise.

Notice how the Radeons range into 40-ms frame times quite a bit more often than the GeForces here. The FPS average doesn’t really capture that important bit of reality. The GeForce GTX 560 Ti and the Radeon HD 6970 are virtually tied in terms of FPS, but the GTX 560 Ti returns 99% of its frames in 26 milliseconds or less. For the 6970, that figure is 35 ms. This one is a clear win for Nvidia.

Long-latency frames above 50 ms are a bit more of a problem for the Radeons, too. Any of the GeForces will give you smoother gameplay overall.

Once again, though, even our most sophisticated tools can’t detect a meaningful difference in performance between the GTX 560 Ti 448 and its bigger brother, the GTX 570. Heck, even the regular GTX 560 Ti’s performance isn’t much different, really.

Battlefield 3

Now for something a little more intensive: Battlefield 3 with all of its DX11 goodness cranked up, including the “Ultra” quality settings with both 4X MSAA and the high-quality version of the post-process FXAA. We tested in the opening portion of the “Thunder Run” level, from the first checkpoint through the initial tank battle and a few moments after that.

What’s this? Way too many long-latency frames during the tank battle on all of the GeForces. Just by looking at the frame time plot, we can spot the problem. The Radeons aren’t similarly afflicted.

Yeah, so all of those FPS numbers we’ve reported for years? Not very confident in them now. The GeForces capture the top three spots in the average FPS sweeps, yet they clearly are slower in ways that matter. The 99th percentile frame time results capture that fact.

The two GF110-based GeForces spent roughly three-quarters of a second each working on frames beyond our 50-ms threshold. That is an awful lot of time spent waiting during the course of a 90-second snippet of gameplay. In practice, we know those longer frames tend to take about 80 milliseconds to produce, which corresponds to a steady-state frame rate of 13 FPS. By the seat of our pants, the impact on the game experience is this: a little more of the “fog of war” feeling during battle than the game developers probably intended, along with some difficulty getting a fix on your next target. The tank battle itself is pretty much the opposite of a fast-twitch scenario, though, so the problem isn’t felt as intensely as it might be elsewhere in the game.

More troubling is the fact that Cyril saw similar frame latency spikes on GeForce cards in battle on the “Rock and a hard place” map. We resolved then to test other areas of the game, and now that we have, a pattern appears to be emerging. Hrm.

Call of Duty: Modern Warfare 3

Ok, so the Call of Duty game engine is older than dirt, but this game has made more money than, well, a rounding error in the federal budget, which is way more than most of us can say. I figured we could ratchet up the resolution to four megapixels, max out the image quality, and at least provoke some reaction from these cards.

Yeah, not so much. The only news here, in my view, beyond the fact that all of these cards are incredibly competent for this game, is how small the differences are between the various solutions. Console-focused games like this one simply don’t stress the pain points in lower-end solutions like the GF114 GPU, things like memory bandwidth and geometry processing throughput. The question is: why pay more than you would for a GeForce GTX 560 Ti or a Radeon HD 6950? Clearly, not for higher performance in this sort of game.

Power consumption

I feel like I should have thrown in a vastly more expensive video card and maybe also a vastly less expensive one, so maybe we’d get some drama in these results somewhere. Seriously, 11 watts of total system power consumption separate the contenders at idle and only nine watts separate the top four while running a game? Sheesh. Take your pick, folks.

Noise levels and GPU temperatures

Well, at least our noise and temperature results are somewhat interesting. The Zotac GTX 560 Ti 448 card we’re reviewing comes out looking pretty good overall—a little louder at idle than everything else, though the difference subjectively is minor, and a little quieter under load than the rest of the bunch. That fits with our overall impression that the Zotac cooler has a decent acoustic profile. It’s not tuned to maintain unnecessarily low GPU temperatures under load, as the Asus GTX 560 Ti card is, and that’s fine with us. We’ll tinker with the fan speeds if we want to overclock, but the defaults should prioritize quiet operation.

Conclusions

The Zotac GeForce GTX 560 Ti 448 card we’ve tested is functionally almost identical to the GeForce GTX 570. More than anything, it’s a bit of a temporary Christmas price cut on the 570. We’re favorably inclined toward such things, and for as long as this card is available, nobody should buy a GTX 570. The 560 Ti 448 is the same basic thing for less money.

We’re also generally pleased with Zotac’s rendition of this holiday special. We like the shorter board length, higher clock speeds, reasonably quiet cooler, and the included coupon for a copy of Battlefield 3. All in all, a very solid offering from Zotac.

The surprising outcome of our testing is the relatively minor differences in performance between the two cheapest cards, the GeForce GTX 560 Ti and Radeon HD 6950, and the more expensive offerings, which cost about $100 more. In fact, in our admittedly limited set of tests, we saw larger differences between the two major GPU brands than we did between the different rungs on their respective product lineups. For instance, Arkham City runs better on all of the GeForces than on any of the Radeons, while the reverse is true (in more dramatic fashion) for Battlefield 3. I had kind of expected our tighter focus on GPU frame times and on specific performance stumbles to lead to more separation between the various product segments, not less. Then again, we are dealing with relatively minor differences even in theory, all told, once you factor in the high clocks on our Asus GTX 560 Ti and such. There’s just not a lot of differentiation to be had between $240 and $350 or so.

Given that fact, I’m going to hit you with a very specific recommendation to wrap up this review. If you have a single monitor at a resolution of 1920×1200 or less, the cards to buy are the GeForce GTX 560 Ti and the Radeon HD 6950. There’s no compelling reason to spend more, given the level of performance these cards deliver, and good luck choosing between the two on the basis of our performance results. Either one should serve you well, although if you favor Battlefield 3, you may want to opt for the Radeon. On the other hand, if you’ve moved up to that next class of display at 2560×1440 or better, you may want to consider paying extra for the GeForce GTX 560 Ti 448. You might get by just fine with a lower-end card, but the additional memory size and bandwidth over the regular GTX 560 Ti will probably pay off here and there—not in spades, but enough to justify a little more expense. With that qualification, we’ll throw a TR Recommended award to the Zotac 560 Ti 448. It’s essentially a GTX 570 for 300 bucks, and that we like.

With that said, we’ll apologize once again for the lack of drama in this particular review. We consider this one a down-payment on a future review with similar methods but broader scope, both of games and graphics cards, with some tougher challenges for all involved.

Comments closed
    • l33t-g4m3r
    • 8 years ago

    How’s this card stack up to the 470 is what I want to know. Good review, btw.

    • sweatshopking
    • 8 years ago

    YOU KNOW WHAT THIS CONVERSATION LACKED?!?!?!?! SOME SSK!!! DON’T WORRY, IT’S HERE NOW!!!!!!!!!!!!!!!!!!!!!!!!!!!!

      • Bensam123
      • 8 years ago

      I think he’s more of a ‘red’ then ‘green’ christmas guy…

    • yogibbear
    • 8 years ago

    Absolutely first class article. I LOVE the new metrics. You nailed it. Well done TR and Olaf.

    • gorillagarrett
    • 8 years ago

    This review does smell fishy, and it doesn’t match the performance figures you see at tomshardware, legit reviews, hardocp, or the other sites.But then again, Nvidia has been sponsoring…oops, did i say too much?

    Watched a performance review of bf3 on youtube on Linustechtips’s channel and the 6970 on Ultra with deffered AA off got the best avg and min fps.Here it loses against the new 560..

    Disappointed in techreport!!!

      • Palek
      • 8 years ago

      [quote<]oops, did i say too much?[/quote<] Indeed, and you made yourself sound like a fool. TechReport being accused of pro-nVidia bias might be a first, though.

        • Dashak
        • 8 years ago

        I don’t agree with him, but it’s not a first.

          • Palek
          • 8 years ago

          Yeah, you’re right. I guess my memory has been overloaded by the “TR is populated by AMD fanbois” team who have been rather vocal lately.

            • gorillagarrett
            • 8 years ago

            Im no AMD fanboy.And I’ve never owned and AMD processor.It’s obvious that Scott has been being affected by Nvidia lately, and this benchmark is just doesn’t feel legit, not even close.

            I have a problem with Nvidia’s pricing.And I feel like pimping for Nvidia doesn’t make things better for customers.

            • Palek
            • 8 years ago

            [quote<]Im no AMD fanboy.And I've never owned and AMD processor.[/quote<] I didn't say you were. I was suggesting the opposite. You need to slow down with your reading. [quote<]It's obvious that Scott has been being affected by Nvidia lately, and this benchmark is just doesn't feel legit, not even close.[/quote<] It's obvious to [b<]you[/b<], meanwhile the thousands and thousands of long-time TR readers know that Scott and his crew are in a league of their own when it comes to fairness and integrity. And saying that something "doesn't [b<]feel[/b<] legit" just sounds silly. [quote<]I have a problem with Nvidia's pricing.And I feel like pimping for Nvidia doesn't make things better for customers.[/quote<] Buy AMD, then.

        • gorillagarrett
        • 8 years ago

        I stand by what i said.In the last few pod casts, the pimping for Nvidia is strong.And the fact that those performance figures are totally strange compared to what6 u can see from the reviews of the new 560 on tomshardware or any other site, leads to one conclusion, Nvidia’s sponsorship has affected Scott’s unbiasedness…

          • LaChupacabra
          • 8 years ago

          I took a look at both of those sites to give you the benefit of the doubt. And they all pretty much agree with each other. I think the thing you missed is that the TR article reported the average frame rate and how many frames dipped below an acceptable level. The reasons for this are outlined in the review. I think it would benefit you to go back and re-read the first couple of pages of this review to get a grasp on why people think so highly of TR’s new review system. I’m sure if you have any questions about it (after giving it some thought, that is) people here would be happy to answer them for you.

            • gorillagarrett
            • 8 years ago

            I don’t think you paid attention to the numbers when u “took a look”.
            Here:

            [url<]http://media.bestofmicro.com/G/R/316827/original/BAClow%201920.png[/url<] [url<]http://media.bestofmicro.com/G/L/316821/original/AvP%201920.png[/url<] [url<]http://media.bestofmicro.com/G/U/316830/original/BF3%201920.png[/url<] Skyrim and MW3 were not used in the 560 review on tomshardware.But, all of the above games were used in TR's review and in all of 'em the Radeon 6970 loses against the gtx560 448.You can see how the figures fro the two sites are completely "not alike".

            • LaChupacabra
            • 8 years ago

            Fine, let me be blunt.

            Arkham City was tested at different resolutions with different levels of anti-alasing

            Alien Vs. Predator wasn’t tested in the TR review

            Tom’s tested Battlefield 3 on High, TR used mostly Ultra (notice how even the nVidia card are 10-15 fps slower in the Tom’s review)

            Like I said. If you would like to think about the reviews, analyze the data and then post a good question we can have a good discussion about the merits of the review systems. Unless that happens I’ll just consider you a troll and won’t respond to more of your posts.

            • gorillagarrett
            • 8 years ago

            the difference was 8x MSAA at toms vs 4x here at TR.I’m not sure why u would not apply 8x MSAA when ur getting +100 fps in a a game.

            BF3 on on my stock 6870 and core i5 750 at 3.36GHz never drops below 35 fps on Ultra at 1920 *1080.There’s no way the 6970 gets 36fps average.Check Linus’s review on youtube. With everything maxed out the 6970 beats even the gtx580 by a frame.

            • Meadows
            • 8 years ago

            Arkham City was tested at 2560×1600, a resolution vastly greater than 1920×1200.
            Battlefield 3 here was tested with 4x deferred MSAA, while Tom’s Hardware did not use any AA.

            Learn to read before making accusations.

            Edit: also, TR used “Ultra” in BF3, not just “High”, and the AMD videocards won, because they gave a lot smoother gameplay. The NVidia cards jittered.

            • gorillagarrett
            • 8 years ago

            Here’s Arkham City tested at 2560×1600:

            [url<]http://media.bestofmicro.com/G/S/316828/original/BAClow%202560.png[/url<]

            The 6970 is still well faster than the 570, let alone the gtx560. Also, adding 4x MSAA in BF3 drops the average fps on Ultra from 70 fps to 47 on the 6970, while the gtx580 drops from 68 fps avg down to 54 fps, which is a much less dramatic hit. But it's still worth mentioning that forcing FXAA can save you more performance and let you reach that average 60 fps with equivalent IQ enhancement (if you can notice it at all), instead of using the messy MSAA.

            Anyhow, I know I sound harsh, but Scott's complaints about AMD's drivers and the "features" he mentioned you get with the Nvidia cards were pretty childish in my opinion in the latest podcast. And the way his numbers look in this review just makes him look more like an Nvidia supporter.

            • derFunkenstein
            • 8 years ago

            oh i get it now, you’re pimping this “bestofmicro” website under the guise of “calling out” TR.

          • Palek
          • 8 years ago

          [quote<]In the last few pod casts, the pimping for Nvidia is strong.[/quote<]

          I don't listen to the podcasts so I don't really know what the issue is, but if nVidia are sponsoring the podcast then I don't see a problem with TR dropping a few lines of nVidia promotion, unless it happens in a way that unfairly elevates nVidia over their competitors.

          [quote<]And the fact that those performance figures are totally strange compared to what you can see from the reviews of the new 560 on tomshardware or any other site, leads to one conclusion, Nvidia's sponsorship has affected Scott's unbiasedness...[/quote<]

          The fact that you're suggesting tomshardware as an example of an unbiased hardware site makes me think you haven't been around for very long.

          Oh, and going back to your original comment:

          [quote<]Watched a performance review of bf3 on youtube on Linustechtips's channel and the 6970 on Ultra with deferred AA off got the best avg and min fps. Here it loses against the new 560..[/quote<]

          Did you actually read the article or did you just look at the FPS figures and jump to your unfair conclusions? Scott's conclusion was that the 6970 and the 6950 beat nVidia chips in bf3!!! The [b<]whole point[/b<] of this article is that average FPS doesn't give you even half the story, and yet the average FPS figure is the only point you picked up on! Read the article, then complain!

          (edited for clarity)

            • gorillagarrett
            • 8 years ago

            While I do commend Scott and techreport on their professionalism and unbiasedness in many of those truly informative articles, I believe that Scott has abandoned all that in this article and in those latest podcasts.

            Right in those podcasts I could tell that he’s not being himself when he was talking about the GPU rivalry and the latest system guide.

            Now don’t get me wrong, I’m loving the competition and the price wars between AMD and Nvidia, but I still think that Nvidia is overpricing their products to squeeze every penny they can out of those eager pockets. The fact that the gtx580 is 50% more expensive than the 6970 while offering pretty much the same performance says it all…

      • flip-mode
      • 8 years ago
        • indeego
        • 8 years ago

        “Edited 3 time(s). Last edit by flip-mode on Dec 16 at 09:29 AM.”

        Seriously…It was that difficult for you? 😉

          • flip-mode
          • 8 years ago

          2 edits to correct myself, and 1 edit because I was struck by the futility of correcting another.

            • indeego
            • 8 years ago

            Feel free to correct me. I’ve been wrong on here several times and it’ll happen again.

            • Bensam123
            • 8 years ago

            I’ve been wrong on here too. Hold me Indeego.

      • bjm
      • 8 years ago

      [url<]http://www.dailytech.com/Pay+to+Play+Uncovering+Online+Payola/article7510.htm[/url<]

      "We have a real strong policy at The Tech Report of what we like to call separation of church and state, where essentially the editorial content is separate from the marketing and the advertising ... you are not going to be able to buy a review or an article."

      'nuff said.

    • cobalt
    • 8 years ago

    (I have to admit to getting lost in the Stargazer/Damage conversation, so while I think my comment is likely related, I can’t even tell whether I agree enough to attempt to squeeze this in there. I’ll just phrase it my own way here:)

    Personally, I think having the bar chart of average FPS next to the 99th-percentile frame time is slightly confusing, since one is just the inverse of the other: the 99th-percentile frame time is just the 1st-percentile frame rate. And the 1st-percentile frame rate is just a minimum frame rate with outliers discarded. They’re incredibly similar; you could be using a median (50th percentile), a harmonic mean (since it’s a rate), a generalized mean, or any other way of expressing “average”; you just happen to be choosing the arithmetic mean. Both are some way of expressing an average; one is just a type of “average low” value instead of an average near the “middle”.

    I guess I’d rather both were presented using the same metric so I could compare them without having to pull out a calculator and do the math myself (1000/time or 1000/rate). I suppose you could present both as a frame time if you prefer, but I think doing both as a rate (i.e. mean framerate and 1st percentile framerate) would be the most intuitive.
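    The reciprocal bookkeeping is trivial; here's a quick sketch with made-up frame times (hypothetical numbers, not TR's actual data):

```python
# Hypothetical frame times in milliseconds for one benchmark run.
frame_times_ms = [14, 15, 15, 16, 16, 17, 18, 22, 35, 50]

def to_fps(ms):
    """Convert a per-frame render time (ms) to the equivalent frame rate."""
    return 1000.0 / ms

n = len(frame_times_ms)
mean_fps = sum(to_fps(t) for t in frame_times_ms) / n

# Nearest-rank 99th-percentile frame time; with only 10 samples it's the max.
p99_time_ms = sorted(frame_times_ms)[int(0.99 * n)]

print(f"mean frame rate: {mean_fps:.1f} fps")
print(f"99th-percentile frame time: {p99_time_ms} ms "
      f"(= a {to_fps(p99_time_ms):.1f} fps 'low')")
```

    Either presentation carries the same information; it's just 1000/x in one direction or the other.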

    (I’m not suggesting you should change the temporal line graphs. I’m simply suggesting consistency would be improved in the two adjacent bar graphs where one is higher-is-better and one is lower-is-better, and I’m suggesting the two metrics aren’t different enough to warrant the current distinction.)

    In any case, great reviews and methodology. You’ve really raised the bar!

    • jensend
    • 8 years ago

    Wish you could have done a bunch of compute testing. As I [url=https://techreport.com/discussions.x/22071?post=599172<]said when this card was announced[/url<], I think the main interest with this card is its compute capability. For some CUDA users, having a relatively inexpensive card that does double precision at 1/8 the speed of single precision rather than the 560's 1/12 may be quite interesting. In other respects, the performance bump over the 560 is pretty minimal; for programs where double-precision compute performance is the limiting factor, the 560 Ti 448 should theoretically be 56% faster.
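    For what it's worth, that 56% figure follows from the paper specs. A back-of-the-envelope check (shader clocks of 1645MHz for the 560 Ti and 1464MHz for the 560 Ti 448 are assumed from the published specs; this is a theoretical peak, not a benchmark):

```python
# Peak DP throughput ~ ALU count x shader clock x 2 flops/clock (FMA) x DP ratio.
# GF114 (560 Ti) does DP at 1/12 of the SP rate; GF110 (560 Ti 448) at 1/8.

def dp_gflops(alus, shader_mhz, dp_ratio):
    return alus * (shader_mhz / 1000.0) * 2 * dp_ratio

gtx_560_ti  = dp_gflops(384, 1645, 1.0 / 12)   # ~105 GFLOPS
gtx_560_448 = dp_gflops(448, 1464, 1.0 / 8)    # ~164 GFLOPS

print(f"theoretical DP speedup: {gtx_560_448 / gtx_560_ti - 1:.0%}")
```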

      • Damage
      • 8 years ago

      Show me a consumer application that uses CUDA and requires the use of double-precision FP math, and I’ll consider testing it in the future. I wasn’t aware of… any.

        • jensend
        • 8 years ago

        Not really any “consumer” applications I’m aware of; there are distributed computing projects that regular consumers might be involved in though. Most of the stuff I’m aware of is either computational number theory or partial differential equation solvers.

        Yes, those who are interested in DP CUDA are a relatively small group. But the market for this card is pretty small anyway, especially given how slim its advantage over the 560 Ti is. And if the price is right and the application fits, CUDA users might pick up quite a number of cards.

    • dashbarron
    • 8 years ago

    [quote<] Bam! Pow! But ultimately pointless. [/quote<]

    Oh God. The 1960s Batman series has risen from the dead! Kill it, kill it now!!

      • dpaus
      • 8 years ago

      I’d like to say ‘Why so serious?’ but I hate it when people cross-pollinate memes.

    • dpaus
    • 8 years ago

    And don’t think we didn’t gratefully notice that there’s now a visible link back to the article from the Comments section!

    • kristi_johnny
    • 8 years ago

    Hmm, I wanna upgrade my PC. I have a GeForce 8600GTS, and after reading this test I am still undecided about which card I should upgrade to: the 6950, 560 Ti, 560, or 560-448.
    Hard decisions (though maybe it ain’t so bad to have many options)

      • Chrispy_
      • 8 years ago

      [quote<]If you have a single monitor at a resolution of 1920x1200 or less, the cards to buy are the GeForce GTX 560 Ti and the Radeon HD 6950.[/quote<]

      If you're making do with an 8600GTS, you could probably buy a GTX 550 Ti for half the price and you'd still be incredibly pleased with it. Most games are console ports these days, and they just don't need much power. Save your money and buy a more powerful graphics card once a game turns up that can [i<]actually use it[/i<].

        • derFunkenstein
        • 8 years ago

        Pretty much any game released in the last 2-3 years is just killing that 8600GTS. I found Bioshock on medium to be unplayable on an 8600GTS back when that was a popular, current card. kristi_johnny must be playing at super low resolutions on super old games, or else has a weird sense of what “playable” means. :p

          • kristi_johnny
          • 8 years ago

          Actually, i managed to play Skyrim a little running at 1920×1080 at medium details (no AF, no AA) getting 20-25-30fps (though sometimes the frames would drop to 14-15)

          ps: don’t know why somebody thumbed me down, i was just asking for advice 🙁

            • Meadows
            • 8 years ago

            Anything starting from a GTX 460 (and up) will multiply your framerate by [i<]at least four[/i<], so start there and just look for a good deal.

            • kristi_johnny
            • 8 years ago

            😉 Thanks

      • crazybus
      • 8 years ago

      I upgraded from an 8600GTS to a 560 Ti and find that I am often CPU-limited with a 3GHz C2D. Depending on the age of your system, you might not be getting the most out of a video card upgrade.

        • kristi_johnny
        • 8 years ago

        I am using a Q6600 @ 3.4GHz

          • Beelzebubba9
          • 8 years ago

          Anything in that class will be a huge upgrade.

    • odizzido
    • 8 years ago

    The total time spent above 50ms is really good. These reviews are getting better and better.

    • can-a-tuna
    • 8 years ago

    Scott again with his biased reviews. HD6970 slower than GTX560, right. I’m wondering why AMD even bothers to send you any samples.

      • Meadows
      • 8 years ago

      How are numbers biased?

      • Palek
      • 8 years ago

      You again with your accusations of bias. TR falsifying their numbers, right. I’m wondering why you even bother to post any comments.

    • tfp
    • 8 years ago

    Well looks like the cards are much better than the 9600GT I’m running.

    • Deanjo
    • 8 years ago

    These reviews really should include Nvidia’s current flagship (GTX 580) for comparison/reference’s sake. It seems only right if you are going to include AMD’s single-GPU flagship.

      • Airmantharp
      • 8 years ago

      I get what you’re saying, but the GTX580 has no direct AMD competitor in either price or performance. Given that the cards tested included the next faster and slower cards from both Nvidia and AMD, its relative performance is clearly determined. Further, given that the GTX570 and GTX580 have been tested together many times before, it’s more than safe to conclude that the GTX580 is faster than the GTX570.

      In other words, there’s no empirical reason to perform the extra work.

        • Deanjo
        • 8 years ago

        The biggest reason would be to have a baseline, as lower-end cards vary greatly in their implementations and settings. The flagships, however, tend to be sold more at stock specs than at vendor-tweaked settings, so using one as a baseline allows for a good representation of other products in its family. Also, the GTX 580 uses the same GPU found in this card with everything enabled, and as such again serves as a better baseline. It’s all about giving a proper baseline so that relative performance is accurately represented.

        I might also add that a card such as a GTX 280/285 should have been included as well, since people looking to upgrade are more than likely coming from a previous generation of card, and using the former top dog would again allow people to get an accurate picture of whether an upgrade is warranted.

        Also, one last thing: comparing performance by referring back to old benchmark runs is bad practice, as drivers mature and there can be great variance in benchmark results even on the same card just by using a newer driver.

          • Airmantharp
          • 8 years ago

          Sorry, don’t think so.

          Your baseline is the GTX560 and GTX570, which are one step slower, and one step faster, than the 448. Do you really need something faster than a GTX570 to know that the 448 is slightly slower than the 570? Really?

          It’s unnecessary, and asking that reviewers do more work just for the hell of it is selfish. If you’re worried about drivers then ask for a different article detailing the relative performance of older cards to newer cards with newer driver sets and newer games.

            • Deanjo
            • 8 years ago

            It is not selfish at all; it is proper benchmarking. Product reviews are about one thing and one thing only: helping people make a purchasing decision. Without this information gathered and benched on the same setup, the results are useless for doing so. Take a look at the Steam hardware survey. The largest graphics shares there are held by cards from older generations of products; heck, the old G92 8800GT/9800GT still holds ~9% of the cards installed there. Referring people to older reviews to see where a product lies in performance is just poor practice, as there are so many variables that come into play and create differences in performance, and these cannot be substantiated without a true head-to-head comparison on the same configuration.

            I guarantee you that there are more people looking to upgrade from older generations than there are people looking to upgrade from a GTX 560 to a GTX 560 Ti, and reviews like these do nothing to help those decisions.

      • flip-mode
      • 8 years ago

      The GTX 550 should be in there too to prevent fanboy butthurt in both directions.

        • CampinCarl
        • 8 years ago

        Hey, if we’re putting the 550 in there, we should include at least a 6850 for reference! And if we’re including old cards, where’s the inclusion of the 4870 1GB that I’m still running?

    • ish718
    • 8 years ago

    Not worth it!

      • AssBall
      • 8 years ago

      Agreed. Maybe if you find one on a nice sale, fine, but 60 dollars more for negligible performance increase? No thanks.

      • derFunkenstein
      • 8 years ago

      Worth it if you’ve already dropped a ton of money on a huge monitor. At 30″ 2560×1600, it’s worth it. At 22″ 1920×1080, not a chance.

    • Airmantharp
    • 8 years ago

    Thanks for the great work Scott!

    The graphs have gotten better/more intuitive, and I like that you’ve included every relevant card above and below the 448.

    But of course, there’s always something to add:

    The first thing that struck me was a lack of contemplation on cooler design. It certainly doesn’t matter in an open bench, but air coolers that only partially exhaust are certainly not as thermally efficient in a closed low-noise enclosure. Cases like the Fractal Design Define R3, Antec’s P280, and Silverstone’s Raven, Fortress, and Temjin series take serious advantage of stock blower designs and allow for positive pressure when properly set up. Cards like this one, with a central fan that dumps air back into the case, would likely require more ‘turbulent’ airflow like that of Cooler Master’s HAF series, and by design would be a louder overall solution even if they’re quieter on an open bench.

    Also: BF3 multiplayer, including the recent ‘Back to Karkand’ map pack. BF3 single-player is decidedly GPU-bound, while the multiplayer taxes both CPU and GPU in significantly different ways, and is also much more significant given how much attention is paid to the multiplayer.

      • mongoosesRawesome
      • 8 years ago

      Second the testing of BF3 multiplayer.

    • indeego
    • 8 years ago

    Your link from comment–>article is not right.

      • Stargazer
      • 8 years ago

      And here I thought I’d somehow been trapped inside a time warp…

    • Stargazer
    • 8 years ago

    [quote<]The basic argument of that article was that the traditional approach of measuring speed in frames per second has some pretty major blind spots. For instance, one second is an eternity in terms of human perception.[/quote<]

    Objection! How long a second is "in terms of human perception" is irrelevant, and the blind spots are not with measuring speed in frames per second! Sure, *average* frame rates have these problems, but *frame rates* are no worse than frame times in this regard.

    I fully understand if you prefer using frame times rather than frame rates, but can you please clarify that your objections are about *average* frame rates, and refrain from mentioning "one second" as if frames per second are automatically measured over one second?

      • Damage
      • 8 years ago

      You keep selling, but I’m just not buying. Human perception, of course, is very relevant to the question of game performance. Not sure how you can argue otherwise.

      Beyond that, your imagined future is… strange. “The GeForce GTX 680 is the clear leader in Half-Life 3 performance, posting an average of 0.0625 frames per millisecond, while the Radeon HD 7960 trails at 0.0621 FPM.”

      Yeah, yeah, you want us to use FPS but not think in terms of frames… per second. I get that we do that with MPH, but really, time per frame is the right thing if we’re measuring latencies, which we want to do.

      I’ve heard your argument, but I think time/frame in ms is more intuitive to grok than your proposed alternatives. Thanks for offering your input, though.

        • Stargazer
        • 8 years ago

        Human perception is very relevant to the question of game performance, but how long *a second* is *perceived to be* is irrelevant to the subject of something *measured in* the *unit* *frames per second* being problematic or not.

        Also, as I said, preferring to look at frame times instead of frame rates is perfectly fine, but this does not change that it’s wrong to claim that the problem is with *frame rates* just because the frame rates happen to be measured in a unit that contains “second”.

        That unit does not mean that the frame rate is necessarily measured over a time span of one second. Similarly, you don’t have to drive your car for one hour in order to measure your speed in miles per hour.

        Yes, looking at *averaged* frame rates hides small time-scale behavior.
        Looking at *frame rates* does not inherently have this problem, even though the unit contains “seconds”.

        This is not an opinion.

        edit: Again, I’m not complaining about you choosing to look at frame times instead of frame rates. It’s not necessarily the choice I would have made, but it’s a perfectly valid choice.

        If you look at my first post, you’ll see that I object to two things. One is that you say that the problem is with looking at frame rates. It is not. The other is that you’re saying that a second is long, as some kind of justification for this. This is meaningless, since the time span being measured does not have to be on the scale of seconds just because the unit contains “second”. In no way did I claim that human perception in itself was irrelevant.

        Also, I have never suggested that frame rates should be measured in “frames per millisecond”, so I’m not sure where you were getting that – unless you again feel that the unit needs to represent the time scale a measurement is made over – which is *wrong*.

        In this thread I have also not suggested that you use anything other than frame times; I’ve just been saying that the way you’re representing frame rates is wrong.

          • Damage
          • 8 years ago

          Well, look at the longer version of my text, which you clipped off:
          [quote<]The basic argument of that article was that the traditional approach of measuring speed in frames per second has some pretty major blind spots. For instance, one second is an eternity in terms of human perception. A bunch of fast frames surrounded by a handful of painfully slow ones can average out to an "acceptable" rate in FPS, even when the fluidity of the game has been interrupted.[/quote<]

          The "traditional approach" of measuring FPS is to average, with the resolution being limited to one second. That's problematic, for the reasons explained.

          No offense intended, but what you seem to be doing is being needlessly pedantic. You'd like me to have used the word "average" one or two more times, perhaps. I dunno. I understand that rates can be useful, although I think they're less intuitive for much of what we're trying to do. What I don't understand is why you're countering a proposition I (clearly, I think) wasn't trying to advance.

            • Stargazer
            • 8 years ago

            [quote<]Well, look at the longer version of my text, which you clipped off[/quote<]

            The longer version does not change the problem: you're still implying that a second is somehow the time period all frame rates have to be compared against.

            [quote<]The "traditional approach" of measuring FPS is to average, with the resolution being limited to one second.[/quote<]

            No. Yes, numbers given in FPS tend to be averages (almost exclusively, unless there are special circumstances, such as looking at small time-scale behavior, but that doesn't matter right now), but not with a "one second resolution", and in the vast majority of cases they're averaged over time scales *much* larger than a second.

            [quote<]No offense intended, but what you seem to be doing is being needlessly pedantic. You'd like me to have used the word "average" one or two more times, perhaps. I dunno.[/quote<]

            Using "average" more often would be more formally correct (which, I suppose, would be great for us in the pedantic crowd, a group I'm a card-carrying member of), but what I'd really like you to do is stop putting in suggestions that A Second is something that has to be addressed when using the unit frames per second in your small time-scale articles ("one second is an eternity", "if these frame times are maintained for one second" (previous article), ...). The unit containing "second" does not mean that we have to look at a second.

            • Damage
            • 8 years ago

            The words “traditional approach” are not a sufficient indicator that I am talking about how things have been done in the past and, in fact, are still done at virtually every other review site? My further clarification of that fact… also insufficient?

            Hm. Perhaps the fault is not my own.

            • Stargazer
            • 8 years ago

            “The way things have been done in the past” and “are still done at virtually every other review site” when presenting frame rates (using FPS) is, like I said, to take averages *over a longer period*, not one second.

            Look, the thing I’ve been complaining about is that I feel that you’re consistently saying things that make it seem like you feel that because the unit contains “seconds”, you have to keep looking at a corresponding time period in order to use that unit.

            Apparently you do not agree with this, but then why did you choose to characterize me as wanting to use “frames per millisecond” as a unit (presumably based on my comment in an earlier article that I’d prefer you to use the unit frames per second for looking at small time-scale (per-frame) behavior)?

            Is it really that hard to understand that someone might get the impression that you believe that the unit used limits what time periods you can look at?

            • Damage
            • 8 years ago

            No, it’s very clear you have the wrong impression.

            • Airmantharp
            • 8 years ago

            I just think he’s confused.

            The graphs show succinctly and clearly how fast AND how smooth these cards are. That’s what I want to know.

            Hell, you’re the first ones to answer some of the questions that have been running around in our minds for over a decade concerning smoothness. I liken it to [H]’s ongoing attempt to focus on ‘playable settings’ over relative measured performance. That your articles are also clearly written and feature largely correct grammar also helps :).

            • Stargazer
            • 8 years ago

            Confused about what?

            I’m saying that just because “frames per second” contains “second”, you’re not limited to looking at time scales on the magnitude of a second.

            Anyone that disagrees with that is Wrong.

            [quote<]Hell, you're the first ones to answer some of the questions that have been running around in our minds for over a decade, concerning smoothness.[/quote<]

            I've had nothing but praise for looking at small time-scale behavior ever since they started doing that.

            • Arag0n
            • 8 years ago

            For me, this graph somewhat defends my usual position that being close to 60FPS isn’t important in itself; we tend to want it because we want to have at least 30FPS at all times. If we play at an average of 30FPS, we tend to get dips to 10-20FPS, so we have a non-smooth playing experience, while at an average of 60FPS the dips bottom out at 20-40FPS. Smooth playability is better than high frame rates.

            • [TR]
            • 8 years ago

            He seems to miss the point that when we mention human perception in these matters we’re not talking of being consciously aware of how long a second takes to pass, but rather that within a second several things can occur in a real-time rendered environment that can look really out of place. Uneven frame rendering times can, for the most basic example, make a perfectly stable 60fps game look a little off, with what looks like jittery animations. It can be as bad as giving you a bit of a headache if you’re sensitive to those kinds of things.

            BTW, Stargazer, the frame times are recorded not just for one second, but for a longer period, just like the averaged FPS counts. If you look, for instance, at the graphs for Batman in the article, you have about 90 seconds of recorded frame times (unless you think the GTX570 was rendering nearly 5500 frames in one second?!). From those, the interest lies in looking at the worst cases and seeing how much worse they are, hence the “over 50ms” graph.

            • Stargazer
            • 8 years ago

            Not only did I not miss that point, I didn’t even address it.
            I complained that how long a second is “in terms of human perception” has nothing to do with the limitations of the *unit* frames per second. Just because the word “second” is there doesn’t mean that it can’t be used for much smaller time scales.

            • derFunkenstein
            • 8 years ago

            Would you prefer to measure in frames per hour? Look, this game runs at 7600 fph! that’s fast, right?! RIGHT?! No. That’s not a useful metric.

            • Stargazer
            • 8 years ago

            I think “frames per second” is just fine. I also think that “milliseconds per frame” is just fine.

            The point is that how long a second is “in terms of human perception” is irrelevant to the question of whether “frames per second” is a suitable unit for looking at small time-scale behavior.

            • Stargazer
            • 8 years ago

            Coming from someone that’s been claiming that I’m arguing that human perception has nothing to do with game performance, and that I’m advocating measuring frame rates in “frames per millisecond”…

            You’re apparently not even paying attention to what I’m writing, and yet I’m the one having the wrong impression?

            Fine. Take any disagreement as an insult if you prefer, and by all means feel free to continue insulting me (I’d expect nothing less than at least one jab in the next article though). I stand by my opinion that you’re incorrectly giving off the impression that the usage of “second” in the unit implies some special relationship to the time period of a second. If anything, this discussion and the mention of “frames per millisecond” has reinforced that impression.

            • Airmantharp
            • 8 years ago

            I get the impression that you wish to exclude a temporal reference in the benchmarking process; but just what reference are we supposed to use instead? And why exactly is using seconds wrong?

            • Stargazer
            • 8 years ago

            No.
            I’m saying that just because the *unit* contains “second”, we’re not limited to looking at time periods stretching over a second, any more than we’re limited to measuring speeds over time periods of an hour when using the unit “miles per hour”.

            • [TR]
            • 8 years ago

            You can look at speeds inside that hour and you’d be looking at instant velocity, right? Different from average velocity (in that overall hour) and still expressible in M/KPH, if I understand you correctly. No need to change the scale of the metric there, but is there an equivalent measure for frame rates that would be as meaningful? Because you could take average FPS and turn it into average frame times and forget about all that “per second” business if you wanted.

            • Stargazer
            • 8 years ago

            My point is that:

            1) There’s nothing about measuring frame rates that makes you lose small time-scale detail.
            2) There’s nothing about using the unit “frames per second” that makes you lose small time-scale detail (or requires you to measure over a second, or multiples of seconds).
            3) It’s averaging that causes the loss of small time-scale detail.
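            Point 3 is easy to demonstrate with toy numbers (hypothetical frame times, not data from the review):

```python
# One 100 ms hitch buried among 59 fast 10 ms frames.
frame_times_ms = [10.0] * 59 + [100.0]

# The averaged frame rate looks perfectly healthy...
total_seconds = sum(frame_times_ms) / 1000.0
avg_fps = len(frame_times_ms) / total_seconds   # ~87 fps average

# ...but the worst frame, expressed in the very same FPS unit, is awful.
worst_fps = 1000.0 / max(frame_times_ms)        # a 10 fps moment

print(f"average: {avg_fps:.0f} fps, worst single frame: {worst_fps:.0f} fps")
```

            The unit is “frames per second” in both cases; only the averaging hides the hitch.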

            • Damage
            • 8 years ago

            No need to take offense. You just seem to be misunderstanding me, despite my attempts to clarify. Not sure what can be done to help you.

            • Stargazer
            • 8 years ago

            Ditto.

            • cygnus1
            • 8 years ago

            I think you’re both arguing past each other.

            Scott, I think I can sum up what Stargazer is saying. He simply is pointing out that you could convert the frame render times into FPS, and still be correct. He’s saying that the time to render an individual frame can ALSO be expressed in FPS. And I agree with that, it’s perfectly reasonable to convert x milliseconds/frame into FPS. One is the inverse of the other.

            Stargazer, here’s what I *think* you’re not understanding. Scott doesn’t want to talk about frame *rates* when it comes to how smooth a game runs because of the ‘traditional’ baggage with the unit being used as an average. FPS has most often been used in the past to express an average frame rate for an arbitrary number of frames, and NOT individual frames. Scott wants people to understand frames take a finite amount of time to render, and if that time varies much between frames you get jitter. That one or two frames with long render times can ruin a game experience even with very high frame *rates*. I believe the easiest way to make that point is to use a unit of measure that is not associated with averages of multiple frames, by using a unit that is a finite time per frame.

            When the unit of measure reflects the language used in the discussion it is much simpler to understand.
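[The inversion cygnus1 describes is just arithmetic; a minimal sketch, with hypothetical frame times, for anyone following along:]

```python
# Frame times (ms per frame) and instantaneous frame rates (FPS) are
# simple inverses of each other: fps = 1000 / ms_per_frame.
frame_times_ms = [16.7, 16.7, 80.0, 16.7]  # hypothetical per-frame render times

instantaneous_fps = [1000.0 / t for t in frame_times_ms]
# The single 80 ms frame corresponds to an instantaneous rate of 12.5 FPS.

# Converting back recovers the original frame times, so no information
# is gained or lost by picking one unit over the other.
recovered_ms = [1000.0 / f for f in instantaneous_fps]
```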

            • cygnus1
            • 8 years ago

            Actually, now that I’ve read your sub-thread with flip-mode below, I’ve changed my mind and I also believe your argument is 100% pointless. Even though it is the inverse and it shows the ‘same’ data, it is flat out logically wrong to measure the render time of a single frame in *frames* per second. And it actually has nothing to do with the word second, it has to do with frames. FPS by its nature implies multiple frames. That’s why it’s plural.

            In a previous discussion about the way Scott is presenting this data somebody linked to graphs where they tried to present similar data but labeled individual frames with FPS and it was confusing as all hell.

            Scott is right, and your argument is wrong at worst and pedantic at best.

            • Stargazer
            • 8 years ago

            I didn’t make the argument that you’re claiming (that they should use FPS).

            Also, what you’re claiming is “logically wrong” is not actually the case. If you have a <something> per <something> unit, the first one is commonly pluralized. Meters per second, miles per hour, …
            These units are certainly used even when the distance being measured over is one meter, one mile, …

            • cygnus1
            • 8 years ago

            And I didn’t say you said they should use FPS. As I understood it, your argument is that the two units are interchangeable. That’s what I disagree with. Your comparison choice is wrong. Seriously, quit using speed as a comparison. It’s just wrong; you’re not measuring a velocity. Nothing is moving. You’re counting an occurrence. If you need to compare, compare to RPM or Hz. If you talk about a single rotation or cycle, you switch to using finite times. Take the random seek times of a hard drive, for instance. It’s most common for them to be measured in time/seek because you are talking about a single seek. We could very easily measure that in seeks/time, but we don’t, because it’s not logical to use a unit that refers to multiple seeks when we are measuring a single seek.

            • Stargazer
            • 8 years ago

            I’ve been comparing to velocities because they are the rates that people are most used to, but rates in general are not limited to large “sample sets”, and you can even look at instantaneous rates of change.

            • cygnus1
            • 8 years ago

            There are different kinds of rates, and you’re comparing to one that is not similar. I’d say people are equally familiar with RPM or Hz, thus why I referenced them. Your comparison is not valid just because people are familiar with it. I’m clearly not getting through to you, because you steer the discussion to tangent after tangent. Just know that you are wrong and go in peace.

            Good trolling by the way.

            • rechicero
            • 8 years ago

            In fact, he’s right. They are interchangeable. Which one to use is just a matter of convention. A good example: miles per gallon or litres per km. Depending on your continent you’ll use one or the other… and the info is always the same. Something similar to frequency and wavelength.

            As with HDDs, there are conventions; we’ve always used certain units for certain things. And with graphics cards it’s the same. In this case we use FPS. We know that 60 FPS is good enough. We know that 12 FPS sucks. We don’t need to think about it. But if somebody tells us the time used for a frame, like 80 milliseconds, we usually don’t know if that’s fast, slow, or what. We need to look for the equivalent: 1/0.08 = 12.5 FPS = OK, 80 milliseconds sucks!

            So, it’s perfectly logical to think: “Wait, if I need to calculate how many FPS are equivalent to 80 milliseconds, why didn’t they use FPS directly?” And that’s not what Stargazer wanted to say, anyway.

        • rechicero
        • 8 years ago

        I’m going to answer this message, although I read most of them. First, I’d say that both of you (Stargazer and you) have somewhat lost the focus of the “positive” part of the conversation.

        I must say Stargazer’s points look sounder, too. You’ve developed a great system to assess GPU performance. You went into fine detail, and that’s great; kudos to you. But at the end of the day this is about which card is faster. Period. You can use different units for that. You can use how much time the card takes to render one frame, or ten frames, or whatever. Or you can use how many frames per second the card renders in a given time (one millisecond, one second, one minute; that doesn’t matter).

        Both approaches are valid, and it’s just a matter of tradition or convention. And in GPU performance, the unit we’ve used all these years to express how fast cards are is “speed,” in frames per second. So, IMHO, it would be more consistent and intuitive to use the same measurement unit: average FPS for the average result, and then talk about the points where the FPS dropped. It just makes more sense to use the same criteria.

        You wrote the sentence “In practice, we know those longer frames tend to take about 80 milliseconds to produce, which corresponds to a steady-state frame rate of 13 FPS” for a reason: because you know 13 FPS tells us a lot more than 80 milliseconds. It’s essentially the same info (as switching time and refresh rate are for LCD panels, or as MPG is used in the States and litres per km in Europe… just a matter of convention), but we all know what 13 FPS means (it sucks!) and we’re not so sure about what 80 ms means (does it suck?).

        Anyway, I want to stress that I think you did something great by going into the fine detail of performance measurement. Kudos to you.

      • flip-mode
      • 8 years ago

      Let me see if I understand…. hmm…. nope, I’m not getting it. You’re taking issue with one sentence in the article that isn’t quite phrased the way you want it to be and are suggesting that single sentence is throwing readers of the article into confusion or false understanding?

      Well, I think all Damage was saying is that humans perceive things at intervals shorter than a second, and so articles that rely simply on frames per second can omit some real problematic scenarios where frames are delivered slowly enough to interrupt the perception of fluid motion.

      I’m trying to actually understand what you think the problem is, but I simply can’t. You’re not making any sense.

      There’s a saying around my office: If it makes sense to you but it doesn’t make sense to anyone else, then it probably doesn’t really make any sense.

      Practically speaking, even if you had a valid point but you can’t communicate it clearly then it’s a moot point.

      So either you have a valid point that you’re not communicating clearly, or you actually don’t even have a valid point. I cannot figure out which it is.

        • Stargazer
        • 8 years ago

        [quote<]Let me see if I understand.... hmm.... nope, I'm not getting it. You're taking issue with one sentence in the article that isn't quite phrased the way you want it to be and are suggesting that single sentence is throwing readers of the article into confusion or false understanding?[/quote<] I'm taking issue with the implications that keep popping up in these articles that the problem with frame rates (when looking at small time scale behavior) is that they're measured in frames *per second*, and/or that you need to look at a time period of one second to measure a frame rate in FPS, and that there somehow is a problem with using frame rates at all for these small time frames. There's not. Frame rates and frame times are each others' inverse, and you could use either of them to provide the same information. It's perfectly fine to choose to use frame times instead (and an argument could definitely be made that having a different metric makes it easier to separate it from "ordinary" frame rates), but there's no inherent problem with frame rates or with using the unit "frames per second" when looking at small time-scale behavior. [quote<]Well, I think all Damage was saying is that humans perceive things at intervals shorter than a second, and so articles that rely simple on frame per second can omit some real problematic scenarios where frames are delivered slowly enough to interrupt the perception of fluid motion.[/quote<] There is nothing about using the unit "frames per second" that would prevent you from looking at fast changes. You could just as easily describe those slow frames with a frame rate as with a frame time. You certainly don't have to, but there's nothing preventing you from doing it. Having "second" being a part of the unit does not automagically rob you of precision.

          • flip-mode
          • 8 years ago

          [quote<]I'm taking issue with the implications that keep popping up in these articles that the problem with frame rates (when looking at small time scale behavior) is that they're measured in frames *per second*, and/or that you need to look at a time period of one second to measure a frame rate in FPS, and that there somehow is a problem with using frame rates at all for these small time frames. There's not. Frame rates and frame times are each others' inverse, and you could use either of them to provide the same information. It's perfectly fine to choose to use frame times instead (and an argument could definitely be made that having a different metric makes it easier to separate it from "ordinary" frame rates), but there's no inherent problem with frame rates or with using the unit "frames per second" when looking at small time-scale behavior.[/quote<] Hmm... I think I now understand what you're saying but I totally disagree with it. It's not that you're technically wrong, it's that your way of expressing the data is a lot more complicated than Scott's way of expressing the data. Scott's way of expressing the data is clear, clean, and simple. You seem to be suggesting that things still be expressed in frames per second even when dealing with single frame times or when dealing with periods of less than a second. That's more complicated and less intuitive and so it really just doesn't make any sense to do it that way. If you're willing to try an experiment, it would be interesting for you to represent the data in this article according to the way you suggest. I'd like to actually see it presented because right now I'm thinking that what you're suggesting would be very confusing, but it would be a fun surprise if that were not the case.

            • Stargazer
            • 8 years ago

            I’m actually not suggesting that the data should be expressed in frames per second, or that there’s anything “wrong” with the current way the data is expressed. I’m saying that it’s wrong to say that you *can’t* express it using frames per second.

            For instance, take this quote:
            [quote<]The basic argument of that article was that the traditional approach of measuring speed in frames per second has some pretty major blind spots. For instance, one second is an eternity in terms of human perception.[/quote<] I find it irrefutable that even if a human perceives a second to be "long", that does not mean that a unit containing "second" can not be used to measure "short" events. "For instance, <statement>" suggests that <statement> is an example of what is being discussed in the preceding sentence. Thus, those two sentences taken together suggest that how long a human perceives a second to be is a major blind spot of measuring speed in frames per second. It's not. Units containing "second" are used when measuring extremely short events all the time (similar to how we often measure speeds in "miles per hour" even over time frames *much* shorter than an hour). I don't mind that TR has chosen to use frame times instead of frame rates in these articles (it's actually seeming like a better and better choice the more time passes), but I don't like that they're repeatedly suggesting something that is *wrong*. With regard to the second part of your post: While I'm not actually suggesting that there's anything "wrong" with the way they're currently presenting their data, there has been something I've been wanting to see added - but it's already been included in this article. I already mentioned this in my first comment of this article (separate from this huge sub-discussion), but I really like the first "fancy stuff" graph used in this article (second to last graph on page 2 I believe). I would like to see some refinements (mainly a smaller bin size), but I believe that a refined version (something closer to this: [url<]http://forum.avsim.net/topic/329356-i5-2500k-vs-ram-frequency-and-latency/page__p__1942577#entry1942577)[/url<] would be very interesting. 
I don't know if the results will always be as clear as in that example (it's very possible that sometimes it won't contribute much - but then again that's the case for other views too), but I'd very much like to see more of it and find out. [i<]edit: The "Frames@FPS" axis in the linked article corresponds to "number of frames" in TR's version, and "Frames Per Second (FPS)" corresponds to "frame time in milliseconds".[/i<]

            • flip-mode
            • 8 years ago

            Scott never said you can’t, though. You’re being too pedantic about this. Admit it and be free! Your point is made but it is such a pedantic point to make that it’s wasteful to even talk about it. This goes far beyond the pedantry of Meadows’ obsession with correcting each and every grammatical error. Just swallow hard and mea culpa already!

            • Stargazer
            • 8 years ago

            He did say the part that I quoted. Do you disagree that it’s wrong?

            I’ve already said that I’m pedantic, so that’s not exactly news. :p

            I also can’t help but think that if a number of people responded to grammar/spelling corrections with posts like “your wrong its perfelcty fine too right that way”, grammar discussions would also end up being longer… 🙂

            • flip-mode
            • 8 years ago

            He did say what you quoted, but nowhere in that quote did it say that the per-second metric is totally impossible to use – rather, he said that the traditional way of presenting GPU performance in terms of average frames per second “has some pretty major blind spots”, and when he said that, he’s absolutely correct – he’s talking about the traditional method of doing things, he’s not talking about any particular limitations of the second itself as a base unit metric. He’s talking about a method, not a unit of measurement.

            So, getting to the method, the traditional way of presenting GPU performance in frames per second terms has always been to use either a time demo or a fraps run for some period of time – could be 30 seconds or 5 minutes – and then show the average frame rate per second for that period. This is the method that Scott was talking about and this method absolutely and undeniably has blind spots. That is what Scott was saying.

            What Scott did not say is that it is altogether impossible to accurately express these things in per second terms. He didn’t say that. That’s not in your quote. It’s not anything he’s said in any article before so far as I know.

            So once you sort that out, you can begin to address what is the clearest and simplest unit of measurement to use when presenting the data, and whether you’re going to present it in terms of frames-per-time or time-per-frame. At that point you enter the realm of subjectivity and there is no right or wrong – second, millisecond, minute, hour, femto-second.

            Edit: by the way, I’m not thumbing you down on any of this, just so you know that’s not me doing that.

            • Stargazer
            • 8 years ago

            Fair enough. He did say “the traditional approach” (I initially parsed the first sentence as if measuring speed in frames per second being the traditional approach, but there is also a traditional way for measuring frame rates), and the traditional approach for reporting frame rates does in fact involve an average (though generally over longer time periods than a second). I will also *definitely* agree that this traditional approach has some pretty major blind spots (which is why I like this type of analysis in the first place). However, how long a human perceives a second to be has no impact on this either, so I maintain that the composite statement is incorrect.

            It was also wrong of me to imply that they’re (TR) claiming that you *can’t* use frame rates over these time scales. There are other commenters (for previous articles) that have made that claim, but I do not recall anyone from the TR staff making that particular claim. However, I do feel that throughout this article series (which, again, I’m a huge fan of), there has been a trend of misrepresenting the downsides of frame rates, and reinforcing an incorrect belief that frames per second are measured over a second (or with a “resolution” of a second). Examples include the “for instance” quote above, Damage’s first comment here, as well as “We’ve already established the benefits of looking at frame times rather than frame rates” and “33-35 ms per frame works out to 29-30 frames per second, if the system maintains those frame times for a whole second.” from the previous article.

            Enough people have issues with “frames per second” being used on split-second time scales without that idea being reinforced. The unit is not limited to being used for time scales on the magnitude of seconds just because it contains “seconds”. It’s perfectly viable to use it on per-frame time scales (and it most certainly makes more sense than using “frames per millisecond” on these time scales).

            TL;DR:
            Remove the “For instance, one second is an eternity in terms of human perception” part and I’m fine with it all.

            • flip-mode
            • 8 years ago

            Well, I think that constitutes a respectable amount of progress for one day! 😉 We can deal with eternities in a second during our next session!

            • Stargazer
            • 8 years ago

            Will there be cookies?

            • flip-mode
            • 8 years ago

            I promise!

      • capricorn
      • 8 years ago

      Did I understand your statement correctly?
      [url=http://i1222.photobucket.com/albums/dd494/capricorn781227/fps-1.png<]link[/url<] The lower graph (fps for each frame vs frame number) is apparently much more straight forward than the upper one (frame time ms vs frame number).

        • Stargazer
        • 8 years ago

        Not once in this thread have I argued that frame rates should be used instead of frame times.

        However, frame rates and frame times are each others’ inverses, and either of them can be used to present the same information (as can be seen in your graph). You can look at per-frame frame times and per-frame frame rates, or averaged frame times and averaged frame rates. Either works just as well.

        Using a frame rate does not rob you of any temporal resolution, or in any way make it harder to see fast (even single-frame) events. However, looking at *average* frame rates (or average frame times) can have this effect. That’s not a problem with frame rates, it’s a problem with *averaging*.

        In a previous article I also said that you could use either, and argued that there were reasons for why using frame rates rather than frame times might be a good idea (mainly because people tend to be more familiar with frame rates, which would give them a good point of reference). Cyril counter-argued that using a separate unit/(metric) would make it easier for people to differentiate from “standard” frame rate analysis, and that this benefit would outweigh the benefit given by familiarity. I didn’t necessarily agree, but certainly accepted that as a perfectly valid choice (I’m actually leaning more and more towards Cyril’s side here, since it’s becoming very clear that people have a harder time than I expected keeping these things separate).
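[Stargazer’s averaging point is easy to demonstrate numerically; a minimal sketch with hypothetical frame times, not data from the article:]

```python
# Hypothetical run: 57 fast frames plus one 100 ms hitch, roughly a
# second of gameplay in total.
frame_times_ms = [15.0] * 57 + [100.0]

# The average frame rate over the whole run looks perfectly healthy...
avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # ~60.7 FPS

# ...but the per-frame view exposes the hitch, and it does so equally
# well in either unit: the averaging hid it, not the choice of unit.
worst_ms = max(frame_times_ms)       # 100 ms
worst_fps = 1000.0 / worst_ms        # the same frame as a 10 FPS instantaneous rate
```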

    • Stargazer
    • 8 years ago

    I really like the first “fancy stuff” graph. Sure, it doesn’t tell you about the very high latency frames, but it does provide other information, and there are other ways to show the effect of (very) high latency frames.

    Have you tried using a smaller bin size (say 2.5 ms instead of the current 10 ms) and using the relative number of frames on the y-axis (so that configurations that produce different amounts of frames are easier to compare)? The current version is rather coarse, and presumably hides a lot of the information.
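[For what it’s worth, the binning being suggested is cheap to compute; a minimal sketch with randomly generated stand-in frame times (real data would come from a Fraps log), using the 2.5 ms bins and relative frequencies proposed above:]

```python
import random

# Stand-in per-frame times (ms); a real analysis would load Fraps output.
random.seed(0)
frame_times_ms = [random.gauss(20.0, 3.0) for _ in range(1000)]

BIN_MS = 2.5  # finer than the article's 10 ms bins

# Count frames falling into each 2.5 ms bin, keyed by the bin's lower edge.
counts = {}
for t in frame_times_ms:
    edge = int(t // BIN_MS) * BIN_MS
    counts[edge] = counts.get(edge, 0) + 1

# Relative frequencies let runs with different total frame counts be compared.
total = len(frame_times_ms)
for edge in sorted(counts):
    print(f"{edge:5.1f}-{edge + BIN_MS:5.1f} ms: {counts[edge] / total:.3f}")
```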

    • Ryu Connor
    • 8 years ago

    I question my sanity after classifying some of the graphs in this article as pretty.
