Today’s mid-range GPUs in Skyrim

In a landscape littered with cookie-cutter war shooters and tightly scripted action-adventure games, titles like The Elder Scrolls V: Skyrim are a rarity. I suspect the very idea of letting players loose in a massive game world filled with hundreds of hours of content would give most game designers an aneurysm. Bethesda Softworks has been cranking out Elder Scrolls games in the same vein since 1994, though, and it’s gotten no shortage of accolades for them. Already, Skyrim has earned its place in Metacritic’s all-time top 10, with more favorable reviews than Portal 2 and the original Quake.

It’s not hard to see why. Skyrim’s open world is a masterpiece of beautifully rendered forests, towns, moorlands, and snow-capped mountains. Yes, the huge open-world design does make for relatively formulaic quests, but the ability to explore freely and to develop your character however you please adds a dimension other titles simply lack. You can be anything from a thief to a mercenary to a master wizard, and the game lets you take sides in a huge civil war and turn the tide of battle. Oh, and did we mention you get to fight dragons?

To our knowledge, Bethesda hasn’t talked about giving the PC preferential treatment like EA DICE did during Battlefield 3’s development. So, while Skyrim certainly looks beautiful, it may not quite harness all of the horsepower top-of-the-line GPUs have on offer. Nevertheless, we expect PC gamers will want a reasonably quick graphics card if they wish to bask in the game’s vistas and immerse themselves completely. They’d be doing themselves a disservice otherwise.

Since folks took a liking to our Battlefield 3 performance article earlier this month, we thought we’d put our assortment of mid-range graphics cards through their paces in Skyrim, as well. We’ve worked our benchmarking magic on the same six cards, with the GeForce GTX 460 and Radeon HD 6850 facing off below $150; the GeForce GTX 560 and Radeon HD 6870 duking it out at $180-190; and the GeForce GTX 560 Ti and Radeon HD 6950 1GB dueling just under $250.

Our questions are the same: which cards are needed to play the game at which graphical settings, and does either AMD or Nvidia have an advantage over the competition? Again, we weren’t content to jot down average frame rates. We’ve also used elaborate frame-time measurements to assess how smoothly the game runs on each card. Read on to see what we learned.

Our testing methods

Our testing setup, detailed below, should look familiar to folks who’ve read our Battlefield 3 article. It’s the same one; we’ve just updated drivers and loaded up Skyrim on it.

We’re still testing at a resolution of 1920×1080 throughout. Like BF3, Skyrim lacks a built-in, scripted benchmark, which forced us to test manually by playing certain portions of the game multiple times with each card. Covering multiple resolutions using that method would have been far too time-consuming. We chose 1080p because it seems to be the most popular resolution by far: a quick look at Newegg’s listings shows considerably more 1080p monitors than panels with other resolutions.

As ever, we did our best to deliver clean benchmark numbers, with tests run five times per card. Our test system was configured as follows:

Processor          Intel Core i5-750
Motherboard        Asus P7P55D
North bridge       Intel P55 Express
South bridge
Memory size        4GB (2 DIMMs)
Memory type        Kingston HyperX KHX2133C9AD3X2K2/4GX DDR3 SDRAM at 1333MHz
Memory timings     9-9-9-24 1T
Chipset drivers    INF update 9.2.0.1025, Rapid Storage Technology 10.1.0.1008
Audio              Integrated Via VT1828S with 6.0.1.8700 drivers
Graphics           XFX Radeon HD 6850 1GB (HD-685X-ZNFC) with Catalyst 11.11 drivers
                   Asus Radeon HD 6870 1GB 915MHz (EAH6870 DC/2DI2S/1GD5) with Catalyst 11.11 drivers
                   Gigabyte Radeon HD 6950 1GB 870MHz (GV-R695OC-1GD) with Catalyst 11.11 drivers
                   Zotac GeForce GTX 460 1GB (ZT-40402-10P) with GeForce 285.79 beta drivers
                   MSI GeForce GTX 560 1GB 870MHz (N560GTX Twin Frozr II/OC) with GeForce 285.79 beta drivers
                   Asus GeForce GTX 560 Ti 1GB 830MHz (ENGTX560 TI DCII/2DI/1GD5) with GeForce 285.79 beta drivers
Hard drives        Samsung SpinPoint F1 HD103UJ 1TB SATA
                   Western Digital Caviar Green 1TB
Power supply       Corsair HX750W
OS                 Windows 7 Ultimate x64 Edition with Service Pack 1

Thanks to Asus, Intel, Corsair, Kingston, Samsung, and Western Digital for helping to outfit our test rigs with some of the finest hardware available. Thanks also to XFX, Gigabyte, and MSI for supplying the graphics cards we tested.

We conducted testing using the Catalyst 11.11 driver from AMD and the GeForce 285.79 beta driver from Nvidia. We left optional AMD optimizations for tessellation and texture filtering disabled. Otherwise, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

Our main test application was the Fraps utility, which we used to record frame times while playing 90-second sequences through each area we tested. Although capturing data while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence at least five times per video card in order to counteract any variability.
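
If you want to run this sort of analysis yourself, the raw material is just a list of per-frame render times. Here’s a minimal Python sketch of how one might pull that list out of a Fraps frametimes log. It assumes a CSV file with a header row and cumulative millisecond timestamps, so check your own logs before relying on it; the file name in the usage note is hypothetical.

    # A minimal sketch for turning a Fraps frametimes log into per-frame
    # render times. Assumes a CSV with a header row and cumulative
    # millisecond timestamps; verify against your own logs.
    import csv

    def load_frame_times(path):
        """Return per-frame render times in milliseconds."""
        with open(path, newline="") as f:
            reader = csv.reader(f)
            next(reader)  # skip the header row
            stamps = [float(row[1]) for row in reader]
        # Differences between successive timestamps are the frame times.
        return [b - a for a, b in zip(stamps, stamps[1:])]

    # Hypothetical usage:
    # frame_times = load_frame_times("skyrim_whiterun_run1.csv")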

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Introducing the graphics detail settings

We’ve included screenshots of Skyrim’s video options screens on the next few pages, but let’s first take a little time to compare and contrast the different detail presets. What do they mean, and how do they impact image quality?

First, here’s an at-a-glance rundown of the three graphical presets we’ll be looking at today—”medium,” “high,” and “ultra high”—and the parameters they affect:

                       Medium      High            Ultra high
Antialiasing           4X FXAA     8X MSAA         8X MSAA
Anisotropic filtering  Off         8X              16X
Texture quality        High        High            High
Radial blur quality    Low         Medium          High
Shadow detail          Medium      High            Ultra
Decal quality          Medium      High            Ultra
Water reflects         Land only   Land, objects,  Land, objects,
                                   and trees       trees, and sky
Distant object detail  Medium      High            Ultra

Just like with Battlefield 3, antialiasing is a standard feature, even at the “medium” setting. High texture detail is also on the menu for all three presets. Other settings degrade as you climb down the graphical preset ladder. In addition to what’s in the chart above, the presets impact the draw distance for things like characters, objects, lights, and vegetation—so fewer distant objects are displayed at the lower detail presets. The implications are obvious when we look at in-game screenshots:

Medium High Ultra

You can click the thumbnails for full-sized versions of the shots in lossless PNG format. However, zooming in on certain parts of the scene gives us an even better sense of how the detail presets impact image quality and draw distance:

Medium

High

Ultra

The water reflections don’t look too different even when we zoom in, but the changes in texture filtering quality, object detail, and object draw distance are immediately obvious.

Interestingly, because the “medium” preset uses FXAA antialiasing by default, it makes vegetation blend more smoothly into the scene than the “high” and “ultra” presets. Those higher-detail presets rely on multisampled antialiasing, which smooths over polygon edges more accurately than FXAA but doesn’t affect sprites and transparent textures. (We’ll be sticking to the presets for our tests, but users are free to mix and match individual settings as they please in Skyrim’s graphical options control panel.)

Another scene, this time in the town of Whiterun, gives us a better look at how the detail presets impact lighting and shadowing quality:

Medium High Ultra

Here’s a part of the scene magnified 200%:

Medium

High

Ultra

In this corner of the scene, some of the shadows are completely gone with the “medium” preset. You’ll have to switch to “ultra” to get all of the shadows, including those on that house at the right. Other points of note: the torch doesn’t render with the “medium” preset, but the flames inside do; foliage pixelation is again worse with the higher-detail presets; and distant textures look awfully blurry without anisotropic filtering in the “medium” mode.

From those two series of screenshots, I think we can take away that the “high” preset provides a good amount of visual fidelity, especially when you’re not magnifying part of the scene and looking for missing objects in the distance. Meanwhile, the “medium” preset really degrades visual quality, reducing detail and making textures look awfully blurry. Hopefully, our lower-end cards can handle Skyrim at the “high” preset.

Now that we have a firmer grasp on things, let’s get into our performance data.

Whiterun

Let’s start with something you’ll find yourself doing quite a few times in Skyrim: walking up and down Whiterun, one of the large towns that serve as quest hubs and play a part in the main story arc. Our benchmark run involved walking up through the market area to Dragonsreach, the castle overlooking Whiterun, and then panning the camera to take in the scenery. We then walked back down through a different neighborhood to the main gate. We repeated this trajectory five times for each card, taking a measurement each time.

We started testing with the “high” detail preset, which seems to be what all the cards defaulted to. We’ll do some further tinkering with other detail presets on the next page.

We’ve already established the benefits of looking at frame times rather than frame rates, both in our article, Inside the second: A new look at game benchmarking, and in our more recent Battlefield 3 performance comparison. So, we’re going to start with frame times and then study average FPS numbers in context afterward.

If you’re not familiar with our testing methodology, here’s the Cliff’s Notes version. Frame time data gives us a much better sense of overall smoothness and playability. In a perfect world, we’d want cards to spit out frames in about 16.7 milliseconds each (which would mean 60 frames rendered per second). Just as importantly, we’d want to ensure consistently low frame rendering times. Even momentary spikes in frame rendering times can translate into perceived choppiness—and overall FPS numbers can’t capture that.

For example, imagine one hypothetical second of gameplay. Almost all frames in that second are rendered in 16.7 ms, but the game briefly hangs, taking a disproportionate 100 ms to produce one frame and then catching up by cranking out the next frame in 5 ms—not an uncommon scenario. You’re going to feel the game skip, but the FPS counter will only report a dip from 60 to 56 FPS, which would suggest a negligible, imperceptible change. Looking inside the second helps us detect such skips, as well as other issues conventional frame rate data measured in FPS tend to obscure.
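
To make that concrete, here’s a small Python sketch using the hypothetical numbers from the paragraph above (illustrative values, not measured data):

    # One hypothetical second: mostly 16.7-ms frames, one 100-ms hang,
    # and a 5-ms catch-up frame.
    frame_times = [16.7] * 54 + [100.0, 5.0]  # milliseconds, ~1 s in total

    avg_fps = 1000.0 * len(frame_times) / sum(frame_times)
    print(round(avg_fps))    # ~56 FPS: looks like a trivial dip from 60
    print(max(frame_times))  # 100.0 ms: the skip the average glosses over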

In the three graphs below, we’ve plotted individual frame rendering times for our cards across the duration of the Fraps run. We’ll be comparing competing pairs of cards in each graph. Note that the faster cards produce more frames and thus slightly longer lines in the graphs.

Surprisingly, it doesn’t look like there’s much of a difference between, well, any of the cards we tested. The GTX 460 and 6850 seem to be about as fast as each other, for instance, and the GTX 560 Ti and 6950 don’t appear to produce substantially lower frame times than their less powerful cousins. Across the board, we see frame times go up around two thirds of the way in. That’s when we looked out over the town of Whiterun from the castle above.

We can use our frame time data to calculate the total number of frames that took longer than 40 ms to render across our five benchmark runs for each card. (40 ms per frame, in case you’re wondering, would yield a sluggish 25 FPS average). Will this metric highlight greater differences between the products we’re looking at?
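
The tally itself is trivial to compute from per-frame data. Here’s a minimal sketch; the frames_over name and the list-of-runs layout are our own, for illustration:

    # Count frames above a latency threshold across all runs for one card.
    # 40 ms corresponds to a 25 FPS pace. The threshold is a parameter, so
    # the 50-ms cutoff we use later is the same call with threshold_ms=50.0.
    def frames_over(runs, threshold_ms=40.0):
        """runs: a list of per-run frame-time lists, in milliseconds."""
        return sum(1 for run in runs for t in run if t > threshold_ms)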

Sort of. The GeForces have slightly fewer frame time spikes than the Radeons, but the spikes aren’t numerous to begin with, and the differences between cards are relatively minimal.

As we’ve said, we want our graphics subsystem to deliver consistently low frame latencies. We can consider the big picture by looking at the 99th percentile frame times: the threshold below which 99% of frames are rendered. We think this may be our best overall performance metric. Simply put, the percentile calculation doesn’t let unusually short frame times cancel out unusually high ones, as they can in an FPS average. At the same time, this calculation excludes the rarest and most extraordinary latency spikes—the 1%, if you will—so it better reflects overall playability.
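
For the curious, here’s one way to compute that threshold in Python. This sketch uses a simple nearest-rank convention; percentile definitions vary slightly from tool to tool, so treat it as illustrative:

    # 99th-percentile frame time: the value below which 99% of frames fall.
    def percentile_99(frame_times):
        ordered = sorted(frame_times)
        rank = max(int(0.99 * len(ordered)) - 1, 0)  # nearest rank, zero-based
        return ordered[rank]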

This outcome confirms the surprising consistency we inferred from looking at the line graphs. On all of these cards, most frames take no longer than about 33-35 ms to render. (33-35 ms per frame works out to 29-30 frames per second, if the system maintains those frame times for a whole second.)

The 40-ms chart counts exceptional frame times, and the 99th percentile chart gives us typical worst-case frame times. The chart below shows simple average frame rates, a metric that will be familiar to most folks.

Most frames should take no longer than 33-35 ms to render, but this latest chart tells us that, on average, you can expect frames to render in about 18-19 ms (corresponding to the 52-56 FPS frame rates above). That’s quite smooth.

In fact, having played the game during testing, I’d go so far as to say even the 33-35 ms frame time peaks don’t ruin fluidity too much, subjectively speaking. Yes, the game is smoother with sub-20-ms frame times, but Skyrim isn’t one of those titles that requires instant twitch reactions and rapid, sweeping mouse movements. (During combat, you’ll usually either slash at an enemy with a melee weapon or run backward while flinging fireballs at him.) Mouse movements seem to feel responsive even as frame times go up, too, which helps preserve a sense of fluidity even when slight choppiness creeps in. We’re going to test the “medium” graphical preset on the next page anyway, but keep these results in mind as we go forward.

Whiterun—further tinkering

Can we buff out those high frame times we saw earlier by turning down the detail? And is the “ultra high” detail preset a good match for higher-end cards? Let’s find out, starting with “medium” detail results.

The “medium” preset enables lower frame times as we overlook the town of Whiterun from the castle, but there’s still quite a jump from about 10 ms to nearly 30 ms when we start taking in the view. (That would correspond to a momentary frame-rate drop from 100 to about 33 FPS.) Considering what we saw in our image quality comparisons earlier, I’d say dropping down to the “medium” preset isn’t worth it.

What about “ultra” detail on the higher-end cards?

Unsurprisingly, our higher-end GPUs experience even greater increases in frame times at the highest detail preset. Not only that, but we’re looking at several hundred frames taking longer than 40 ms to render and at 99th-percentile frame times well over 40 ms for both cards. (Again, a 40-ms frame time corresponds to a 25 FPS frame rate.) The Whiterun walkabout is mostly playable at these settings, but it’s hard to get over the choppiness as we look around the castle.

We should note the 40-ms graph above potentially muddles things somewhat. Because both cards produce many frames that take longer than 40 ms to render, the card that produces the most total frames—in this case, the GeForce GTX 560 Ti—might be unfairly penalized. We can counteract this effect by raising the threshold and looking at the number of frames that take longer than 50 ms to render:

The Radeon is at a much greater disadvantage here, with 25 times as many frames over 50 ms as the GeForce. Again, though, both cards exhibit a lot of choppiness at these settings.

That’s enough exploration. How about some combat?

Ancient’s Ascent

Skyrim will have you fending off wild beasts and clearing dungeons of undead creatures on a regular basis. By far the most spectacular bits of combat in the game are fights with dragons, though. We just couldn’t pass up the opportunity, so we benchmarked our cards during a night-time dragon fight at Ancient’s Ascent, a location in the snow-capped mountains near the town of Helgen. Since the dragon’s attack behavior was hard to keep entirely consistent, we did seven runs per card instead of the usual five.

Again, we started testing at the “high” preset.

Whoa!

These line graphs tell us three things. First, there are a lot of rapid variations between long and short frame times in this test. These variations resemble the multi-GPU-induced micro-stuttering we’ve measured in the past. Second, the AMD cards seem to fare slightly better than their Nvidia rivals, at least when it comes to maintaining relatively consistent frame latencies. Third, faster cards clearly yield more consistently low frame times than slower ones; the uniformity we measured in our Whiterun test isn’t present here.

Dissecting the data further, we see the GeForce GTX 460 and 560 produce substantially greater numbers of exceptionally long frame times than the corresponding Radeons. The AMD cards stay ahead even when we remove outliers with our 99th-percentile calculation, though the spread between the 6850, GTX 460, 6870, and GTX 560 Ti doesn’t amount to much.

Looking at average frame rates gives no indication of the strange frame time inconsistencies we detected, and it actually puts one of the Nvidia cards, the GeForce GTX 560, ahead of its AMD rival. There’s clearly something strange at play here, and the FPS numbers don’t capture the problem.

Ancient’s Ascent—what’s with the jitter?

Before we look at results for other detail presets, let’s look into the cause of the strange frame-time jitter we measured.

Obviously, the jitter didn’t occur in Whiterun, so something about the dragon fight triggered it. The jitter was consistent throughout the test, so it wasn’t tied to the dragon’s attacks or the fireballs we shot at it. After a little brainstorming, we determined the jitter must have been caused by one of two factors—or a combination of them—in the Ancient’s Ascent test: the ongoing snow storm or the idle fire animation in our character’s hands, which indicates a fire spell is ready to cast.

We tried measuring frame times with our fire spell selected in Whiterun, but no jitter was apparent.

After that, we traveled to Winterhold, another location with an ongoing snow storm. First we measured frame times as we stared at the empty scene. Next, we unholstered our fire spell. Then, we pressed the R key to make the fire spell go away again:

Bingo. The snow storm doesn’t cause jitter on its own, and neither does the fire animation. Combine the two, though, and things go a little crazy. Here’s a close-up of what the jitter looks like:

As you can see, the jitter manifests itself as one long frame time, two medium frame times, and a short frame time, repeated in the same sequence ad nauseam. Subjectively, the variance wasn’t sufficient to disrupt the illusion of motion too much on the faster cards. That makes sense since, in our example above, the 6950’s frame times are consistently below 16 ms. However, we could definitely notice skips in what should have been continuous movement when sliding from side to side. The jitter was especially bad on the GeForce GTX 460, where frame times went from about 12 ms to 20 ms to 30 ms and back again repeatedly. (To put things in perspective, those frame times would correspond to frame rates of 83, 50, and 33 FPS.)
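
If you’d like to spot this sort of oscillation in your own frame-time logs, one crude check is to compare the average frame-to-frame jump against the average frame time. This is our own quick heuristic, not the method behind the charts above:

    # Smooth output keeps the ratio near zero; a repeating
    # long/medium/medium/short cycle makes it balloon.
    def jitter_ratio(frame_times):
        jumps = [abs(b - a) for a, b in zip(frame_times, frame_times[1:])]
        return (sum(jumps) / len(jumps)) / (sum(frame_times) / len(frame_times))

    print(jitter_ratio([16.7] * 8))                    # 0.0: perfectly smooth
    print(jitter_ratio([30.0, 20.0, 20.0, 12.0] * 2))  # ~0.38: the GTX 460's cycle, roughly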

Ancient’s Ascent—further tinkering

Now that we’ve demystified that jitter issue, we can continue our performance testing, starting with a look at how the Ancient’s Ascent test plays out on the GeForce GTX 460 and Radeon HD 6850 with the “medium” graphical preset selected.

Turning down the detail level helps minimize the jitter, but the problem doesn’t go away entirely. The AMD and Nvidia cards are almost neck and neck, although the GeForce produces a slightly greater number of exceptionally high-latency frames.

What about the GTX 560 Ti and 6950 1GB in “ultra” mode?

Yes, the jitter is still there, but these cards produce surprisingly low frame times overall. They’re definitely playable at this setting—especially the Radeon, which has better average frame rates/times, a lower 99th-percentile threshold, and fewer high-latency frames. If it weren’t for the poor showing in Whiterun, we’d say 6950 and GTX 560 Ti owners should definitely be playing Skyrim at the “ultra high” setting.

Conclusions

We launched into this endeavor thinking Skyrim wouldn’t put cards like the Radeon HD 6950 and GeForce GTX 560 Ti under too much strain. We weren’t quite right. Yes, for the most part, those cards have no trouble cranking out smooth, low frame times at 1080p with the “ultra high” graphical preset selected. However, particularly demanding scenes like the castle vista in Whiterun cause frame latencies to vault into noticeably choppy territory.

Some folks have dug up additional, undocumented eye-candy options in Skyrim’s configuration files. We didn’t have time to play with those, but according to PC World’s coverage, they have a subtle but noticeable impact on the game’s visual fidelity. Bethesda may not have dazzled PC gamers with fancy DirectX 11 effects like EA DICE did with Battlefield 3, but Skyrim nevertheless has substantial amounts of visual ‘zazz—enough, perhaps, to warrant a graphics card upgrade for some players.

The good news is Skyrim’s “high” detail preset is already plenty detailed, and it doesn’t stress cheaper cards like the GeForce GTX 460 and Radeon HD 6850 all that much. In other words, you’ll benefit from a fast GPU in this game, but a not-so-fast one will still deliver a great experience. As for which GPU vendor to choose, we saw consistently lower frame times during our dragon fight with the Radeons, but in cases where the strange snow-and-fire jitter issue didn’t come into play, the competitors were quite closely matched.

Before we go, we should mention that AMD released a beta Catalyst driver while we were putting the finishing touches on this article. The new driver promises performance improvements of 2-7% in Skyrim, which could tip the odds further in AMD’s favor. In our experience, though, such quoted speed increases for driver updates apply only to best-case scenarios. We didn’t have a chance to re-test performance with the new drivers, but we did run a quick check to see whether the snowstorm-and-fire-spell jitter effect was still present, and it was.

Comments closed
    • kamikaziechameleon
    • 8 years ago

    Looking over the different preliminary GPU reviews the 6950 looks more and more like a better deal. It is marginally more expensive than the rest of the 200 dollar pack and notably leads the pack in most tests/games. Then there is the price desert until you hit the 6970 and the performance gains are negligible.

    • kamikaziechameleon
    • 8 years ago

    Would be nice to see a GPU round-up at the end of the year that uses this holiday’s greatest as testing material.

    • ronch
    • 8 years ago

    How about a “Today’s High-End and Mid-Range PROCESSORS in Skyrim” article?

    Also, game devs should keep improving their multi-threaded programming skills, and as more and more cores get crammed into more and more processors, hopefully it’s enough hint for them. AMD folks sure have their fingers crossed (those who weren’t fired, anyway).

      • Kaleid
      • 8 years ago

      CPUs:
      http://www.techspot.com/review/467-skyrim-performance/page7.html

        • ronch
        • 8 years ago

        Thanks, Kaleid. Man, that really makes AMD look bad. Even AMD employees looking to play Skyrim will probably get themselves a Sandy Bridge.

    • Stargazer
    • 8 years ago

    Cyril,

    I couldn’t help but notice that the article still says “33-35 ms per frame works out to 29-30 frames per second, if the system maintains those frame times for a whole second.”.

    Does this mean that your opinion is that frame rates can only be used over a time period of a second?

      • Cyril
      • 8 years ago

      [i]Frame rates[/i] certainly can be used over periods of less than a second. We could talk about frames per millisecond, for instance. [i]Frames per second[/i], on the other hand, typically refers to the number of frames rendered... for each second.

      I fully realize that we could talk about frames per second over periods of less than a second, just like one can talk about, say, watt-hours for periods less than an hour. Just because that's mathematically possible doesn't mean it's a good idea, however. The whole point of our study of frame times is that normally reported FPS data is misleading, precisely [i]because[/i] it doesn't go inside the second. I don't think it's advisable to use the same term to refer to misleading and non-misleading metrics when we're comparing and contrasting them. These things are challenging enough to explain clearly and unambiguously as it is.

        • Stargazer
        • 8 years ago

        I don't necessarily agree that FPS [i]typically[/i] refers to frames rendered [i]for each second[/i], but I do agree that it is common.

        As I've said before, I also understand if you want to stick to frame times because you want to differentiate from "regular FPS analysis" (though I think that this gives you a new set of problems, in that people don't have the same intuitive grasp about frame times as they do about frame rates). However, you're still giving misinformation about the possible usages of frame rates.

        If I understand you correctly, we're agreed that "33-35 ms per frame works out to 29-30 frames per second" is correct without any further qualifications, right? If so, then why is "if the system maintains those frame times for a whole second." even there? It doesn't provide any more information than any "if <random tautology>" statement does, and only serves to confuse. Why use it?

        The general impression I'm getting when reading your articles using this new small time-scale testing methodology (which, again, I *love*) is that you're generally cracking down on the usage of [i]frame rates[/i], and their limitations. Why? You could do exactly the same analyses with frame rates instead of frame times. By all means, go nuts going after *averages*, but I don't think you're making things any clearer by misrepresenting what you can do using frame rates.

        I still believe that the benefit of using a unit that people have an intuitive grasp for (FPS) is significant though. Would you consider using *both* units in your graphs (while focusing on ms/frame in the discussion and making that your "primary" scale)? I think that'd make it easier for people to read your graphs/charts, and would reduce the need to write "this corresponds to a frame rate of ..." several times in the text.

          • Stargazer
          • 8 years ago

          Oh. And if you're not interested in trying out any graphs like this:
          http://forum.avsim.net/topic/329356-i5-2500k-vs-ram-frequency-and-latency/page__view__findpost__p__1942577
          would you consider releasing your data sets so that people can perform their own analysis on them? (I fully understand if you don't want to do this)

          • Cyril
          • 8 years ago

          [quote]If I understand you correctly, we're agreed that "33-35 ms per frame works out to 29-30 frames per second" is correct without any further qualifications, right? If so, then why is "if the system maintains those frame times for a whole second." even there?[/quote]

          Because, in my opinion, it helps clarify the relationship between frame times and frames per second—in the commonly understood sense—as well as the pitfalls of the commonly understood FPS metric. I disagree that there's anything particularly confusing or misleading about it.

          I also don't think we can automatically assume that using FPS across the board would make things easier to grasp intuitively. We'd still be introducing a new metric into the picture. For things to make sense, we'd have to talk about frames per second [i]per frame[/i]. The numbers would appear familiar to a lot of folks, but the unit would leave many scratching their heads. Perhaps presenting readers with both FPS/frame and ms/frame would be helpful, but at the same time, I think it would make graphs harder to read. Either way, we could just be trading one kind of confusion for another.

          With all that said, I sympathize with the desire to make our inside-the-second data more intuitive and bite-sized. Obviously, the last thing we want to do is alienate readers by blinding them with science. I don't think an ideal solution has presented itself to us quite yet, but we definitely welcome suggestions like yours.

            • Stargazer
            • 8 years ago

            [quote]Because, in my opinion, it helps clarify the relationship between frame times and frames per second—in the commonly understood sense—as well as the pitfalls of the commonly understood FPS metric. I disagree that there's anything particularly confusing or misleading about it.[/quote]

            It's misleading because it's an irrelevant if-clause. "33-35 ms per frame works out to 29-30 frames per second" is true if you're measuring over one frame, two frames, ten frames, a tenth of a second, half a second, a second, two seconds, ten seconds, ten hours, 30 days, 10 years, or 7 centuries. Measuring over one second is not a required statement (so it's not an iff (if-and-only-if)). Moreover, this means that the second statement can be *anything*, and the total statement would still be true. You could also say "33-35 ms per frame works out to 29-30 frames per second if the moon is a blue cheese", and you would be right *because the second statement doesn't matter* (the overall statement is also true if the moon is *not* a blue cheese, just as it is true if you *don't* measure over one second). Basically, the second statement implies a relationship that is not true (a requirement that does not exist). *At best*, it's reinforcing an incorrect belief.

            I'd also say that the "common usage" of frame rate is *not* averaged over one second. Usually when we talk about frame rate (or "FPS"), it's averaged [i]over the testing period in question[/i], not over one second. When you've given frame rates in reviews in the past, the measurements have not typically been over one second. An exception to this is that apparently the "Minimum FPS" reported by some tools actually shows the minimum *1-second average FPS*, but I doubt that many people were actually aware of this (this is actually the very reason your new methodology is interesting - giving the *true* minimum FPS!)

            [quote]I also don't think we can automatically assume that using FPS across the board would make things easier to grasp intuitively. We'd still be introducing a new metric into the picture. For things to make sense, we'd have to talk about frames per second per frame. The numbers would appear familiar to a lot of folks, but the unit would leave many scratching their heads.[/quote]

            It would be easier to grasp intuitively because people have a fairly good idea of what frame rate is required to give a "fluid" experience, but not so much what the corresponding frame times are. People know if they want their frame rate to stay above 20FPS, 30FPS, 60FPS or whatever. Typically people aren't as familiar with what 50ms, 33.3ms or 16.7ms implies. Also, the *unit* would still be frames per second, not frames per second per frame (similarly, the unit is not frames per second per second when you look at the frame rate for a 1-second interval).

            • Cyril
            • 8 years ago

            [quote]I'd also say that the "common usage" of frame rate is *not* averaged over one second. Usually when we talk about frame rate (or "FPS"), it's averaged over the testing period in question, not over one second.[/quote]

            Nobody's disputing that. Again, however, I believe it helps clarify the relationship between the two metrics to say, "Okay, we know that this frame took x milliseconds to render. If all the frames in one second take the exact same amount of time to render, then you have a frame rate of y frames per second." I don't think there's an implication that it's the only possible conversion; it's just an illustration.

            [quote]It would be easier to grasp intuitively because people have a fairly good idea of what frame rate is required to give a "fluid" experience, but not so much what the corresponding frame times are. . . . Also, the *unit* would still be frames per second, not frames per second per frame[/quote]

            Therein lies the problem. You want to make things more intuitive by using the exact same unit, with no qualifier, to refer to two different metrics. What you're proposing would simply replace confusion over unfamiliar units with confusion over the nature of the metrics used. Perhaps you view that as a worthwhile tradeoff, but I'm not so sure it is. I think we'll just have to agree to disagree on all this.

            • Stargazer
            • 8 years ago

            [quote]Nobody's disputing that. Again, however, I believe it helps clarify the relationship between the two metrics to say, "Okay, we know that this frame took x milliseconds to render. If all the frames in one second take the exact same amount of time to render, then you have a frame rate of y frames per second." I don't think there's an implication that it's the only possible conversion; it's just an illustration.[/quote]

            And I would have been perfectly fine with "If all the frames in one second take the exact same amount of time to render, then you have a frame rate of y frames per second.". What I think is misleading is the ", if" part. You don't have to go further than the comments here to see that there are people who actually believe that frame rates have to be measured over one second. Why add unnecessary ambiguity (at best) to a topic where confusion already exists if you could have just as easily used a non-ambiguous phrase?

            It's not just the above quote either, I simply took that as one example of things that suggest that frame rates are somehow "limited" in what you can do. As another example, you also say this:

            [quote]We've already established the benefits of looking at frame times rather than frame rates[/quote]

            You haven't. You have established the benefits of looking at frame times instead of [i]average[/i] frame rates (which has the same limitations as looking at average frame times). I have to say that you've overall been quite good at mentioning "average" in the appropriate places in this article, but there *is* a confusion out there, and people should not in any way be encouraged to believe that frame rates are in any way inferior to frame times in these small time-scale investigations. You could use either equally well.

            [quote]Therein lies the problem. You want to make things more intuitive by using the exact same unit, with no qualifier, to refer to two different metrics. What you're proposing would simply replace confusion over unfamiliar units with confusion over the nature of the metrics used. Perhaps you view that as a worthwhile tradeoff, but I'm not so sure it is. I think we'll just have to agree to disagree on all this.[/quote]

            I wouldn't say "with no qualifier". They're two different metrics. It's rather common for multiple metrics to have a shared unit, and we tell them apart by having different names for the different metrics. That said, as you say there are benefits to keeping the units separate, and it definitely is a tradeoff (and thus subject to personal opinions). So, while I don't share your opinion in this matter, I acknowledge that your viewpoint is just as valid as mine, and on *this* issue I'm perfectly willing to agree to disagree. 🙂

            • crazybus
            • 8 years ago

            You’re arguing that FPS is more familiar than ms/frame, which I’m sure it is, but don’t confuse familiarity for intuition. The only reason people perceive certain FPS thresholds to be necessary for fluidity is because that’s the way we’ve been trained to think. Ms/frame is inherently superior to FPS in the same way that litres/km is a better fuel economy metric than km/litre. In the end you’re not concerned with how many frames can be rendered in a second, but rather the lag in between them, which is what you actually perceive.

            • Stargazer
            • 8 years ago

            You’re right.
            When I’m saying that people have a better “intuitive grasp” for FPS, I’m pretty much actually referring to a familiarity with the concept and the values you get when using it.

            What I mean is that people can look at a value in FPS and pretty much immediately tell if it is “good” or “bad”. For the majority of people, the same does not hold true for a value given in ms.

    • AGerbilWithAFootInTheGrav
    • 8 years ago

    interesting, like the new game based reviews…

    and now… what about testing Serious Sam III? 🙂

    • webs0r
    • 8 years ago

    Guys I wonder if this “large address aware” loader would assist with the micro-stuttering.

    http://www.skyrimnexus.com/downloads/file.php?id=1013

    From the readme: Skyrim4GB additionally hooks and replaces the Windows GetTickCount() function that may improve stuttering. In sections the game was smoother for me using this loader.

    • clone
    • 8 years ago

    I’ll probably be killed for this but whatever.

    Tech Report you’ve got to find a cleaner and simpler way to portray your information…… I made it to “further tinkering” and stopped, went directly here, did not pass go, did not collect $200 and am simply going to say it. ( I was using the shoe btw 🙂

    there was a time, a simpler time where FPS was a good thing, where Frames per second told me how well a video card ran, an improvement on that came a little later albeit a little more data intensive when graphs showed max frames, min frames and average frames….. ahh yes ignorance was bliss.

    I loved those graphs, they made sense and I was never disappointed when I bought a video card that was faster than the minimum frames….. ahh the happy times.

    but now the graphs at Tech Report no longer track FPS the way they used to, they have the squiggle graph, that same damned graph that made me leave Hard OCP behind 8 years ago never to look back, the graphs where lower is better, the graphs where 0 is the best and then the graphs that track micro stuttering and then the (apology in advance) value graph where I’m told what I’m supposed to consider as the correct value.

    as mentioned I stopped at “further tinkering” and it’s funny it was the further tinkering that killed me.

    there seems to be an issue with a couple of cards, the game is buggy as hell apparently, probably not ready for primetime benching to be honest but beyond that I admit it, I give up, it’s nothing personal but while I understand your purpose regarding micro stuttering and frames & MS latency, I really don’t want my computer purchases to be that involved….. if things really are this bad, if PC gaming has truly gotten as sensitive as the masses who will likely flame me to a cinder after reading this are then I’m throwing in the towel, I don’t care enough about PC gaming and after 14 years I’m being told apparently it’s gotten much worse.

    p.s. most of my post is meant to be humor / sarcasm in truth I humbly suggest you try doing a traditional benchmark suite laid out in a traditional way then label the rest as the “advanced section” so that those who do care can obsess over it while those who don’t won’t have to.

      • Cyril
      • 8 years ago

      I totally see where you’re coming from. These are new metrics for us, and laying out the information in a clear, concise, and unintimidating way is a real challenge. At the same time, some of the things we learn are so important that we’d be misleading the reader if we relegated them to an “advanced” section. We’ve opened Pandora’s box, and going back to the old methods for simplicity’s sake feels completely wrong now. Our findings on multi-GPU micro-stuttering are a good example; we’ve actually changed our system guide recommendations because of them.

      I don’t think our dilemma is symptomatic of an underlying flakiness with PC gaming, though. I think consoles do a fine job of neatly packaging an experience that’s good enough for most people, but the PC sets the bar so much higher because it lets you have the best of the best—image quality, fluidity, physics, 3D, and so on. A higher bar and a higher degree of control leads to a greater potential for nitpicking. When a PC game gets choppy, you can find articles like this and spend a Sunday afternoon dealing with the problem. When a console game gets choppy, you’ve just gotta learn to live with it.

        • yogibbear
        • 8 years ago

        The issue for me Cyril is that in the BF3 one you included the FPS chart of ages gone by along with the new 1/FPS charts. Whereas for Skyrim there simply isn’t an FPS chart anywhere to be seen. Let people make up their mind when they see the two charts side by side and then they will glean the value that the second chart gives vs. the previous. Left alone it is difficult (without first doing a very simple calculation) to tell if the gpu runs the game well or not without reading the text to find out (which is the no. 1 thing people want to know, once they’ve established that their card produces playable frames, then they might be interested to know if they have micro-stutter issues with said card or not).

          • Cyril
          • 8 years ago

          [quote]The issue for me Cyril is that in the BF3 one you included the FPS chart of ages gone by along with the new 1/FPS charts. Whereas for Skyrim there simply isn't an FPS chart anywhere to be seen.[/quote]

          Yes there is. It's right there on each results page, just below the 99th percentile chart.

            • yogibbear
            • 8 years ago

            Yeah but there’s no min/max/nice graph like usual….

            I know I know… all want and no give…. 🙁

        • clone
        • 8 years ago

        thx for the response, agree on the whole.

        • XaiaX
        • 8 years ago

        I like the frame-render-time metrics. Average FPS was annoyingly over generalized ages ago. I don’t care if I get 30fps average if that means I get 60fps for a few frames and then drop some later frames.

        So, I, for one, welcome our frame render time overlords. If anything, I’d prefer a little more math than even you’re doing.

        One, standard deviation. This would tell us how consistent the framerate is overall. Skew would tell us if the inconsistencies were spikes or troughs. The main thing I want however:

        Median.

        Average is dumb. Median is where it’s at. You could even overlay the average/median numbers on a single line. If you tell me median frame render time is 12ms, with a 5ms standard deviation and a negative skew, I’ll know that I can hit a solid 60fps.

        One other thing, everyone, everywhere seems to use “lower is better” on their charts, even though it’s super non-intuitive. Can you try simply right-justifying the charts, so that “close to the right edge” is always “better”? That way you maintain an intuitive aesthetic consistency, and the reader doesn’t have to double check the chart label when the numbers seem upside down.

        Thanks for experimenting with a new format.

      • obarthelemy
      • 8 years ago

      Actually, I’m with you.

      I think part of it is growing up. Games are no longer that important to me; neither is having the biggest… FPS, and even game tech has started eliciting a yawn.

      TR’s graphs are a study in illegible nerdiness though. They seem stuck at the “complicated stuff makes us look intelligent” phase, hopefully they’ll graduate to the “making complicated clear and easy is where talent’s at” sometime.

        • lilbuddhaman
        • 8 years ago

        Uh…no ? The articles themselves are extremely easy to understand, they just have a not-so-great graphical representation of their data… ie their current chart format sucks. They’ll find a better method and it’ll be intuitive to both the geek and the casual reader.

        • EtherealN
        • 8 years ago

        Illegible nerdiness? Quite far from it – it’s easy to understand and I find it very interesting to get something more than just the bog-standard “FPS” that doesn’t really say much of anything.

        Of course, getting the “lows” in the FPS graph back might be nice, but to my mind the new method is superior and not at all hard to understand.

    • gmskking
    • 8 years ago

    Who has the money to upgrade video cards every couple years just to play that new game? Too expensive to game on PC’s in my opinion.

      • dcasez
      • 8 years ago

      People who enjoy playing video games and spending money on their hobbies. $200 every 2 years to upgrade to a decent video card isn’t that much. It’s only a little over $8 a month to save for it.

      However, when you factor in the need to usually buy a whole new system every 4 to 6 years, you do have a bit more expenses.

      All in all though, you get access to more on the PC than on consoles. I love modding and using other peoples mods in games to truly enhance the game. It is the mod scene that really makes PC Gaming head and shoulder above console gaming for me.

      The enhanced graphics that the PC will always have is just pure bonus.

      • indeego
      • 8 years ago

      [url=http://i.imgur.com/MYZsS.jpg]It is worth it![/url]

      • paulWTAMU
      • 8 years ago

      150 every 2 years is too much? Damn I thought I was broke/cheap.

      • Bensam123
      • 8 years ago

      God forbid you play a game on any other setting than ULTRA.

      • lilbuddhaman
      • 8 years ago

      You’re on the wrong site to be having that opinion, bud.

      • Kaleid
      • 8 years ago

      Since most games come to consoles first there is no need to upgrade all the time. Right now I’m going from dualcore i3 to quad i5 simply because I got the CPU dirt cheap.

    • glacius555
    • 8 years ago

    How to play Skyrim with ultra high settings, a guide:

    – Avoid enjoying the scenery from top of tall structures.

    – Sleep during the day, explore at night (in Skyrim). Your GPU will thank you.

    – Be quick about assignments, mostly kill and plunder, fight scenes seem to be more smoothly rendered.

    – Avoid using fireballs when it snows or rains.

    – Buy a Sandy Bridge and ditch Core i5-750..

    – Finally, avoid stupid combinations like mine: HD6970 and Core i5-750..

      • Hattig
      • 8 years ago

      “- Avoid using fireballs when it snows or rains.”

      Makes sense to me. How about drizzle? What if you could craft fireball hats?

        • UberGerbil
        • 8 years ago

        And fireball ponchos.

      • Firestarter
      • 8 years ago

      Stupid combination? I’d say that for a vast majority of games that combination is just about perfect.

        • glacius555
        • 8 years ago

        I should’ve ended my post with ” /sarcasm”..

    • yogibbear
    • 8 years ago

    Fantastic article, but I’ll wait for the GOTY Skyrim edition with most of the bugs removed.

    Considering Arkham City has DX11 + PhysX, can you do the same article again but for Arkham City? I want to see what the joys of my PC’s insides can do rather than just look at Oblivion 2.0

    • glynor
    • 8 years ago

    This article is utterly fantastic. Thank you.

    This proves beyond any doubt that the new frame-time calculations and tracking that you are now doing are painting a much more nuanced, and much more accurate, picture of performance than the raw FPS numbers everyone else has been using for so long.

    Really, really good job on all of this.

      • kamikaziechameleon
      • 8 years ago

      ^^^+1THIS!

    • WaltC
    • 8 years ago

    [quote]Obviously, the jitter didn't occur in Whiterun, so something about the dragon fight triggered it.[/quote]

    Here's what I think you are seeing.... What is it exactly that gpus do? They render pixels to the screen on a per-frame basis, and generally the amount of work a gpu is being asked to do amounts to the number of pixel changes between frames the gpu is asked to render. So, when you are actively engaged in a dragon fight you are going to have many, many more pixel changes per frame than when you are standing still in Whiterun taking in the castle vista. Obviously, then, the "jitter" you see is actually reflective of the work the gpu is doing between frames, and as the changes in pixels per frame are anything *but* uniform in the "fighting the dragon" scene, so goes the frame rate frequency. Generally speaking, the more pixels that change between frames the slower the frame rate. With more powerful GPUs this won't be that obvious when playing the game, because these gpus have the horsepower to process more pixel changes per frame while still maintaining a relatively robust frame rate per second. What you seem to be charting, apart from "jitter", is merely the pauses in the frame rate that occur while the gpu is processing more data between frames. Generally speaking, again, people do not perceive in tenths of seconds, or 1/20th of a second, etc., therefore usually a simple frame-rate chart will suffice to inform of card performance as human beings will perceive it.

    [quote]We tried measuring frame times with our fire spell selected in Whiterun, but no jitter was apparent.[/quote]

    ...which is exactly as expected because the pixel changes per frame were minimal to none. Hence, the frame rate is far more uniform because the gpu is essentially sitting there spitting out copies of the same frame as opposed to having to process the data involved in changing pixels between frames.

    [quote]After that, we traveled to Winterhold, another location with an ongoing snow storm. First we measured frame times as we stared at the empty scene. Next, we unholstered our fire spell. Then, we pressed the R key to make the fire spell go away again:[/quote]

    What happens visually onscreen when you "unholster" your fire spell? There's a slight variance in the constant, cycling brightness onscreen that indicates that the spell is ready to go, and that's *exactly* what your charting shows. Indeed when you magnify the result as you do, look at the repetitive pattern--it is nearly perfectly repetitive and perfectly uniform--because it describes the nature of the pixel changes per frame with the spell "ready." They are changing, only slightly, per frame, and repeating, in accordance with the visual effect. So, what you have mapped isn't jitter, it's merely a very slight slowdown in the frame rate due to the data being processed between frames. When you holster the spell the slight variance in cycling between frames disappears and you are sitting still, once again, and the line goes relatively flat.

    [quote]However, we could definitely notice skips in what should have been continuous movement when sliding from side to side.[/quote]

    It's not "jitter"...;) When you move the mouse from side to side while staring at a relatively static vista, what is happening? You are commanding your gpu to start rendering many more pixel changes between frames than it has to do when the mouse is sitting still and there are essentially *no* (or very few) pixel changes between frames.

    Obviously, you aren't working with multiple gpus here, so what you are charting simply cannot be "microstutter" of the same kind--even though it can be mapped to a similar-looking chart. In most frame rate comparisons, the smallest division of time measured for a gpu is one second (frames per second.) What you have done here, though, is to greatly shrink the smallest unit of time measured for each gpu to a single millisecond, which is 1,000 times more brief than a frames per second measurement, and which can accordingly measure the microchanges in gpu frame-rate performance that occur relative to the amount of work each gpu is being asked to do per frame. This is what you are charting as "jitter," when in fact it simply describes the normal operation of any gpu. If you did not have the "jitter" that you see, then it would be because the gpu was not processing any pixel changes between frames--ie, not functioning as a 3d gpu. Simply put, it takes *time* to process data, and even though very fast gpus and cpus can make it seem as though data is being processed without taking any time, that surely isn't true. If we break down our time slices sufficiently, as in we go from one second (frames per second) to milliseconds (1,000th of a second), then we can see evidence of the work being done by the time it takes. In other words, gpus do not function like film projectors in which so long as the sprockets are aligned with the film the frame-rate remains constant at 24fps regardless of the changes between film frames. The frame rate for gpus is determined by the amount of data processing a given gpu can do in relation to how many pixels change from frame to frame (assuming we aren't insisting on vsync, etc.) The faster the gpu can process such data the higher the frame rate, etc.

      • Firestarter
      • 8 years ago

      Changing pixels? GPUs generally render the whole scene for every frame, and don’t care one bit whether the last frame was completely different or not. Some filters might be affected by previous frames and some textures might have been swapped out of the GPU RAM and have to be transferred back over the PCI-Express bus, but all in all those ‘pixel changes’ you talk about don’t matter.

      It sounds like you are comparing rendering with video encoding/decoding.

      • Hattig
      • 8 years ago

      GPUs re-render the entire scene for each displayed frame. There’s no working out which parts of the scene are static and which are moving – that’s 80s tech for framebuffers and slow CPUs.

    • Hattig
    • 8 years ago

    Detailed article, thanks.

    Next – how about for those of us gaming on the go? How do laptop graphics fare?

    E.g., HD6750M @ 1680×1050

    or a HD6670M @ 1366×768

    Or whatever you have available in-house – it can be a more subjective review as the system configurations will be different.

      • FuturePastNow
      • 8 years ago

      And throw in some integrated graphics. Can the A8-3850’s GPU run the game acceptably at 1366×768? What about the HD3000?

    • NeVeRLiFt
    • 8 years ago

    I installed the game on an old PC I leave sitting out in the corner for the kids; it has an eVGA 8800 GTS 512 video card in it and runs Skyrim really well on the ultra setting with shadows turned down to high or medium.
    Not running a high resolution and no AA…. but the game is smooth and the kids love it.

    • jounin
    • 8 years ago

    how about trying this with the same cpu overclocked or a faster cpu and see if that helps to stabilize the jittering… i wonder if this would affect the results any…

    • flip-mode
    • 8 years ago

    The new testing methodology is doggone impressive.

    • can-a-tuna
    • 8 years ago

    How about running the tests again with the Catalyst 11.11a drivers? It seems that whenever AMD gets their performance drivers ready for a certain game, you do these kinds of reviews just before they arrive.

    ” Improves performance 2-7% on single GPU configurations” One can add that to the results.

    Edit: I read the conclusion and you mention this driver, but they are NOT beta drivers. Forget that twisted Nvidia beta-logic, which doesn’t apply to AMD, and re-run ALL your tests with these drivers, which were released yesterday morning.

      • cygnus1
      • 8 years ago

      demanding much

      • flip-mode
      • 8 years ago

      On the other hand, maybe AMD should get drivers out more quickly. I don’t know what the driver development process is, but how is it that Nvidia can get drivers out so much more quickly, even if they have to call them “beta”?

      • spuppy
      • 8 years ago

      When you go to ATI to download drivers, the latest ones available are 11.11.

      http://sites.amd.com/us/game/downloads/Pages/radeon_win7-64.aspx

      11.11a are only found when you google it, through AMD's knowledge base website. So they ARE beta drivers, or at least not officially released drivers.

    • Meadows
    • 8 years ago

    “Develop your character however you please” is a rather strong claim compared to even Morrowind.

    You have fewer specialisation choices, you have fewer skills, there is no athletic difference between the races, there can be no penalty for wearing armour for any of the races, there are no crossbows or thrown weapons, there is no climbing or acrobatics, and there is no custom spellcrafting.

    The world is beautiful, expansive, but painfully generic.

    See the game for what it is. It’s fast food for today’s remaining RPG players. (I don’t mind, I could never get into the genre deeply in any way, but I know and see a number of people who criticise it.)

      • dashbarron
      • 8 years ago

      I’m not sure if we entirely agree on the definition, but I think of the series as rather “generic.” The games have a lot of “crap”: areas to explore, races, NPCs, quests, skills, etc., but so many that they do none of them “very well.” There are fun aspects to the game, but it seems like almost every area has some inevitable negative that takes away from the immersion or the overall quality of that feature.

      Don’t get me wrong, I think they’re good games, but under all the acclaimed features and hours of “content” and things to do in the games, I always found a substantial core of mediocrity under the gold foil.

        • indeego
        • 8 years ago

        BUT DRAGONS.

          • lilbuddhaman
          • 8 years ago

          Dragons are, by far, the worst aspect of the game.

            • Meadows
            • 8 years ago

            But the furries love them.

      • Meadows
      • 8 years ago

      Thumb down all you want, fanboys, you know it’s true.

        • Goty
        • 8 years ago

        Clearly everyone who disagrees with you must be a fanboy!

          • Meadows
          • 8 years ago

          Yes, unless they provide arguments. So far, there have been none.

            • SPOOFE
            • 8 years ago

            What arguments are there to be made against statements that are simply wrong?

            Oh noes, they haven’t had “climbing” since Daggerfall, it must be teh suxxorzzz!!!

            • Meadows
            • 8 years ago

            How are my statements wrong?

            • SPOOFE
            • 8 years ago

            By not being right, of course.

            • Meadows
            • 8 years ago

            Still waiting for the argument part.

            • Beelzebubba9
            • 8 years ago

            My counter argument would be that complexity for complexity’s sake does not make for a better game. Athletics was one of the worst parts about Morrowind, and I can’t say I missed Morrowind’s convoluted custom spell aspects either. What makes Bethesda’s games fun for me is the massive detailed world that has a lot more depth to it than any other RPG I’ve played. Omitting certain types of ranged weapons and onerous skill trees does nothing to detract from this.

            If I want to play a game that’s a glorified spreadsheet, I’ll play WoW. And if I want a game that’s really open ended and uniquely creative, I’ll play Minecraft. Skyrim serves its own needs just fine.

      • Bensam123
      • 8 years ago

      The game is balanced like complete and utter crap, but that seemingly hasn’t changed across the Morrowind successors, or Fallout 3 for that matter. I’m not entirely sure I can agree with your overall premise that this is a devolved form designed for today’s RPG players and that RPGs in general have reached the point of ‘fast food’-style consistency.

      The Witcher 2 and Dragon Age, and Dragon Age 2 (although a bit consolized), also fit in there. Oblivion, in contrast, most definitely needed to tone down the ridiculousness of having to level every single aspect of the game; things like having to put one point into stamina every time you level, or you’re screwed late-game, were just one example of it. Skyrim is more consolized, but in general Bethesda’s games have always had a feeling of ‘genericness’ that leads you to believe your actions are going relatively unnoticed and change relatively little. Other things in-game change quite a bit, such as the main storyline in this version after you’re done. They most definitely changed up the dungeons a bit in this game, whereas in Oblivion they were endless, unnamed corridor grinds that made you wish for the hour or two back that you spent in them.

      I can only handle so much of Morrowind in general. I eventually start suffering from the onset of lethargy, as I’m sure a lot of people do.

      • webs0r
      • 8 years ago

      Agreed, the franchise has unfortunately been on a slippery slope since Morrowind.

      Why the thumbs down, people? It is true.

      And yet I am still having fun playing. I just think about what could have been.

      The races, stats, and spellcasting are completely dumbed down. In fact, in combination with the world auto-balancing, you are better off doing only one thing really well rather than trying to dabble in a few things; you get punished for the latter.

        • SPOOFE
        • 8 years ago

        “Why the thumbs down people, it is true.”

        No, it isn’t.

      • SPOOFE
      • 8 years ago

      “The world is beautiful, expansive, but painfully generic.”

      Heh, VERY incorrect.

        • Meadows
        • 8 years ago

        I still don’t see an argument, but you’re well on your way to making a fool of yourself.

        Here, I’ll give you an argument: the world is generic because it has generic dragons, generic mountains, generic grasslands, and generic medievalness. Gone are the unique design points that were last seen in Morrowind and separated the TES universe from other fantasy games; what we have with Oblivion and Skyrim is basically “instant fantasy” of the “just add water!” type, and boy, is it ever generic.

        Take a hint from Dragon Age, for example. The world’s unique and pervasive politics and the whole “Darkspawn” issue made for lots of special content that saved the game from being a generic shell. Or World of Warcraft, where the massive steampunk elements and varied history/lore make for a distinct universe that you will always recognise and never forget.

        What could TES show us these past several years? Swords, dragons, generic medievalness, and a sh!tty story that I can’t even recall anymore.

          • EtherealN
          • 8 years ago

          Interestingly, the points you lament in Oblivion/Skyrim are exactly what I felt was done terribly in WoW when I tried it (and, for that matter, in Warcraft in general; StarCraft suffers greatly from it too, and let’s not even start on Diablo 😀 ). Tossing steampunk into a fantasy setting is nothing new or revolutionary, and it doesn’t really add anything on its own. It can work if done right, but IMO it wasn’t done right in WoW (indeed, in the Warcraft universe it feels a bit like a fetish where everything has to be special in some way, which is a bit lazy, IMO; you don’t need that to create an interesting world, you just need to do a good job).

          To each their own, I guess. 😉

          • yokem55
          • 8 years ago

          Evidently you haven’t been into Blackreach yet. There is some really wild stuff down there. And I am impressed, as always, with the environs of the Dwemer ruins. Outside of that, I think it is really unfair to call it generic medieval styling; they put a nice Nordic spin on it all that feels fairly authentic, and the world just oozes with the feeling that people really have been living there for thousands of years.

          • SPOOFE
          • 8 years ago

          “Gone are the unique design points that were last seen in Morrowind”

          All that separated Morrowind from any other TES game, in any significant measure, was giant mushrooms.

          “Oh noes, ‘axe’ and ‘sword’ aren’t different skills anymore!” Insignificant.

      • indeego
      • 8 years ago

      I agree with you Meadows. I also agree that your detractors haven’t really put up a decent “con” argument.

        • Meadows
        • 8 years ago

        I gave you +1 because I love myself.

    • Bensam123
    • 8 years ago

    Adding a bit more to this, it would be interesting if these tests were done with NICs and processors as well, not just GPUs…

    • Bensam123
    • 8 years ago

    I’m not sure what all the fuss is about. Skyrim most definitely is new, but the graphics don’t look any better than Oblivion’s. If anything, Skyrim is even more targeted towards consoles. You would probably see more variation by retesting Oblivion with some of the high-res texture mods out there.

      • Airmantharp
      • 8 years ago

      Just like when you said BF3 is a slight update on BF:BC2? Please.

      It looks like Oblivion, I agree there, but it is also much improved; and at no point would it be acceptable to benchmark a game’s performance with mods before benching the released game itself.

      • ish718
      • 8 years ago

      The graphics in Skyrim are certainly more detailed, and the lighting and effects are better; it is using a new engine, after all.

    • irvinenomore
    • 8 years ago

    Fantastic! OK, now can you do the same for The Witcher 2? Pleeeese…

    • jamsbong
    • 8 years ago

    When I first installed the game, I maxed out with ultra-everything graphics, and it still ran at a smooth 60 fps (vsync on). The only areas where it slows down are where there is fog or a river (not sure why), and I have a 560 Ti.
    I didn’t use any MSAA, as FXAA is simply more efficient (free performance) and looks better than MSAA alone.
    Afterwards, I decided to crank up the graphics via custom settings in the .ini file. Also, you can enable SSAO using Nvidia Inspector:
    http://forums.gametrailers.com/thread/skyrim-graphics-tweaks/1261457

    The SSAO is really costly but adds a lot more depth to the world. The next thing I plan to do is add more CPU threads and free up memory using LAA.

    • crsh1976
    • 8 years ago

    Honestly, the PC version isn’t very demanding; we can thank its being a “cheap” console port (that works for me, however). I’m getting very smooth performance on high quality with a typically low-end rig (by today’s standards): Core 2 Duo 3.0GHz, 4GB of RAM, Radeon 5770 1GB, Win 7, 1920×1080 monitor.

      • sweatshopking
      • 8 years ago

      Yeah, my 4890 plays it like a dream, with no slowdowns at all on ultra. It seems to be relatively well optimized.

      • JohnC
      • 8 years ago

      Yea, it is not a very demanding game (when it comes to GPUs), even though it has very beautiful outdoor environments… Not really surprising, considering it is using a DX9-based engine.

    • indeego
    • 8 years ago

    Link to article from comments page is broken.

    I enjoy the geekery of the article, but I fear that you scare half your audience away with so many graphs that ultimately show a conclusion of “all cards run this game fine except for the occasional complex scene.”

    • GasBandit
    • 8 years ago

    I have to say, I find it very gratifying that Skyrim also runs on a wide price/age variety of graphics cards. My laptop has an aging Mobility Radeon HD 5145, and while it chugs hard on the “high” presets, on medium or custom presets (which for me basically amount to “high” without AA), the game is quite playable and enjoyable.

    My desktop rig has an Nvidia 8800 GTX, but it’s still on DX9 (Windows XP; too lazy to upgrade, I know)… this is the first game where I could actually see the difference for myself. It looks prettier on 7, for sure.

    • Stargazer
    • 8 years ago

    “33-35 ms per frame works out to 29-30 frames per second, if the system maintains those frame times for a whole second.”

    I think it’s *awesome* that you’ve started paying such close attention to small time-scale behavior, but this sentence highlights something that is consistently misrepresented in your articles.

    There is *absolutely* no need for the frame times to be maintained for a full second for a given frame time to correspond to a given frame rate. Just because the *unit* used for frame rates contains “second”, that does not mean that frame rates have to be an average over one second (do you have to drive your car for an hour before your speed can be measured in mph?).

    Yes, in most cases reported frame rates are given as an average. Yes, unfortunately, some tools (apparently FRAPS) even report “minimum FPS” as a minimum *average* FPS (this is stupid). None of that changes the fact that frame rates do not need to be averaged over any particular length of time. It is perfectly fine to present frame rates (in FPS) for half a second, a tenth of a second, or even a single frame. Frame rate is simply the inverse of frame time.

    So, you could just as easily present the same data using per-frame frame rates (in FPS) instead of frame times (in ms). In fact, since most of your readers presumably have a much better intuitive grasp of how “good” a particular frame rate is, it would probably be easier for most people to interpret your data if you used per-frame frame rates instead. Just see how many times you felt the need (rightly so) to say things like “this frame time corresponds to a frame rate of X” during this article. “20 FPS” means more to most people than “50 ms”.

    Using frame times (in ms) is certainly a valid choice, but it’s essentially a stylistic choice, and I’m certain that using per-frame frame rates (in FPS) would be the more *reader-friendly* choice. At any rate, I think you should stop implying (or outright saying) that using frame rates implies a one-second average. It’s wrong.

    Regardless of whether you continue using frame times or start using per-frame frame rates, I think something like the following would be a very interesting way to present the data:

    http://forum.avsim.net/topic/329356-i5-2500k-vs-ram-frequency-and-latency/page__view__findpost__p__1942577

    This distribution of per-frame frame rates makes it very easy to see how common various per-frame frame rates (or frame times, if you prefer) are, and also makes it very clear what the spread is like. If an article about small time-scale behavior could only have one graph, I’d like this to be it. It gives a ton of information at a glance (and it’s even easier to interpret if you use per-frame frame rates instead of frame times! 😉 )
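    To make the inverse relationship concrete, here is a minimal Python sketch with made-up frame times; the numbers are illustrative, not measured data:

    frame_times_ms = [16.7, 18.2, 33.4, 50.0, 16.9]  # hypothetical per-frame samples

    for t in frame_times_ms:
        fps = 1000.0 / t  # frame rate is just the inverse of frame time
        print(f"{t:5.1f} ms per frame -> {fps:5.1f} FPS")
    # e.g. 50.0 ms maps to 20.0 FPS, with no one-second average involved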

      • cygnus1
      • 8 years ago

      I think trying to express a sub-one-second frame rate in FPS is counter-intuitive for what they’re trying to explain. Trying to explain that you had 74 different FPS values inside of one second seems overly complex when all you really want to point out is the few frames that caused stutter.

      Absolute frame render time is definitely easier to grasp for me. I think the frame-time plots TR is using are better than the distribution graph you linked. The frame-time plots make it very easy to spot when you have a small number of frames out of thousands that ruin a game experience with stutter; the distribution graph you linked would completely miss that data.

        • Stargazer
        • 8 years ago

        “I think trying to express a sub-one-second frame rate in FPS is counter-intuitive for what they’re trying to explain. Trying to explain that you had 74 different FPS values inside of one second seems overly complex when all you really want to point out is the few frames that caused stutter.”

        It shouldn’t really be more counter-intuitive than looking at your car *speed* while passing a school crossing. A rate (speed, frame rate, ...) can be looked at as either an average ({speed, frame rate} over time) or an instantaneous (current speed, per-frame frame rate) value.

        I don’t really think it’d be all that hard to explain, either. Something like “this corresponds to a frame rate of...” should suffice (these explanations are pretty much there anyway, since the frame times are converted into frame rates several times).

        Either way, this is the stylistic choice I was talking about. You can do it either way. Can we at least agree that it is just that, a stylistic choice, and that it is wrong to say that there is a fundamental difference between the two (in that the usage of frame rates requires an average over a second)?

        Heck, it doesn’t even need to be an either-or proposition. For graphs you could use dual scales, and in the charts the secondary scale could be shown in brackets after the main scale.

        “Absolute frame render time is definitely easier to grasp for me. I think the frame-time plots TR is using are better than the distribution graph you linked. The frame-time plots make it very easy to spot when you have a small number of frames out of thousands that ruin a game experience with stutter; the distribution graph you linked would completely miss that data.”

        They’re not mutually exclusive, though; you could use *both*. As you say, the frame-time plots (which could just as easily be instantaneous/per-frame frame rates! 🙂 ) are better at showing isolated spikes, but the other graph has other advantages. You can, for instance, easily see how often the frame rate (or frame time, if you prefer) drops below (goes above) certain values. In the current articles there is one chart showing the number of frames above 40 ms (below 25 FPS), but that is only one value, and showing the same thing for many different values in that way would require a ton of graphs. A distribution graph like the one mentioned would show all this and more. As a bonus, they seem to give an approximately Gaussian distribution (at least in the linked example), and who doesn’t like a nice Gaussian distribution? 🙂

        I don’t think it should *replace* the current graphs, but I think it presents valuable information at a glance, and I would like to see it *added to* the current graphs. I do think that the bin size in the linked example might be a bit large, though (5 fps). I think I’d prefer something smaller (2? 2.5?), but that might require some experimentation.
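        As a sketch of the proposed distribution graph, assuming hypothetical frame times and the smaller 2.5-FPS bins suggested above, something like this Python snippet would do:

        from collections import Counter

        frame_times_ms = [16.7, 18.2, 33.4, 50.0, 16.9, 17.1, 21.0, 41.5]  # made up
        bin_width = 2.5  # FPS per bucket; the linked example used 5

        rates = [1000.0 / t for t in frame_times_ms]        # per-frame FPS
        bins = Counter(int(r // bin_width) for r in rates)  # bucket index per frame

        for b in sorted(bins):  # print a crude text histogram, lowest bucket first
            lo, hi = b * bin_width, (b + 1) * bin_width
            print(f"{lo:5.1f}-{hi:5.1f} FPS: {'#' * bins[b]}")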

          • cygnus1
          • 8 years ago

          “It shouldn’t really be more counter-intuitive than looking at your car *speed* while passing a school crossing. A rate (speed, frame rate, ...) can be looked at as either an average ({speed, frame rate} over time) or an instantaneous (current speed, per-frame frame rate) value.”

          I don’t think I agree with what you’re trying to say. A rate is always a rate. A rate is always defined by a duration of time, an interval. An ‘instantaneous rate’ doesn’t really exist. If you freeze time, you’re no longer moving, or you’re no longer producing frames. The instantaneous rate you mention is simply a very short measurement interval.

          I think the only reason people can intuitively understand applying a large-interval rate, like miles per hour, to a smaller, near-instantaneous interval is because a vehicle’s velocity is something they’re used to feeling and measuring. I think most people can intuitively understand fluidity of motion on the screen, though. Frame rate, on the other hand, is not something you measure on a regular basis, as most people don’t play games with a frame-rate counter on their screen. And I think that’s the point TR is making by presenting the data the way they are. It honestly took me a minute to understand that graph you linked, because it really is counter-intuitive to apply a rate to individual frames.

          I personally don’t care how many frames per second a video card can render for a game. I’ve honestly never liked using FPS as a benchmark. The way it’s been used in the past has not communicated how fluidly a game is rendered, and if it is used the way you suggest, you’re just renaming a more intuitive metric. I would personally like to see the term’s usage diminished or even go away completely.

          “Either way, this is the stylistic choice I was talking about. You can do it either way. Can we at least agree that it is just that, a stylistic choice, and that it is wrong to say that there is a fundamental difference between the two (in that the usage of frame rates requires an average over a second)?”

          Nope, I really don’t agree. Frame render time and frame rate, while related, are not the same thing, no matter how long the interval is. What you’re suggesting is making the frame render time be the interval for the frame rate of each frame. Do you see that, to me, it’s not a stylistic choice and that they really are two different things?

            • Pantsu
            • 8 years ago

            Most people do understand and can relate to FPS numbers better. The frame-time graphs in the article could easily enough be converted to FPS-per-frame graphs, and that would provide an easier graph even without a detailed explanation of how it works, other than noting that the X scale is frames, not seconds, so faster cards produce longer graphs.

            • Stargazer
            • 8 years ago

            “I don’t think I agree with what you’re trying to say. A rate is always a rate. A rate is always defined by a duration of time, an interval. An ‘instantaneous rate’ doesn’t really exist. If you freeze time, you’re no longer moving, or you’re no longer producing frames.”

            An “instantaneous rate” is the rate of change at some particular point in time. We tend to call this a “derivative”. It most certainly exists. Things get a tad different when we move to the discrete realm, but the rate can still be used, since we’re simply measuring over n=1 values instead of n=<however many values are needed to span one second>.

            “I think the only reason people can intuitively understand applying a large-interval rate, like miles per hour, to a smaller, near-instantaneous interval is because a vehicle’s velocity is something they’re used to feeling and measuring.”

            People are also used to talking about frame rates, and tend to have a feeling for how “fluid” some particular frame rate is (at least when it’s devoid of extreme disturbances). People in general tend to have less of an intuitive grasp of how “fluid” some given frame time is.

            I also don’t agree with the characterization of “miles per hour” as a “large-interval rate”. “Miles per hour” is a *unit*, used to describe a movement rate. It can most certainly be used to measure speeds over “small” (compared to an hour) time scales, and even instantaneous speeds (people do this all the time!).

            “It honestly took me a minute to understand that graph you linked, because it really is counter-intuitive to apply a rate to individual frames.”

            I obviously disagree.

            “I personally don’t care how many frames per second a video card can render for a game. I’ve honestly never liked using FPS as a benchmark. The way it’s been used in the past has not communicated how fluidly a game is rendered, and if it is used the way you suggest, you’re just renaming a more intuitive metric. I would personally like to see the term’s usage diminished or even go away completely.”

            It actually seems like you *do* care, but you prefer to *call it* something else. That’s fine. As I’ve said before, both ways can be used. What’s wrong is to claim that just because the unit happens to include “second”, a frame rate has to be expressed as an average over a second.

            “Nope, I really don’t agree. Frame render time and frame rate, while related, are not the same thing, no matter how long the interval is. What you’re suggesting is making the frame render time be the interval for the frame rate of each frame. Do you see that, to me, it’s not a stylistic choice and that they really are two different things?”

            They’re not the same; they’re each other’s inverse. You can have an instantaneous (per-frame) frame rate and an instantaneous (one-frame) frame time. You can also have a one-second-average (or n-frame-average) frame rate and a one-second-average (or n-frame-average) frame time. It *is* a stylistic choice which one to use, and it is most certainly, without any possible doubt whatsoever, incorrect to claim that a frame rate *has to* be measured as an average over one second just because its *unit* contains “second”.

        • aggies11
        • 8 years ago

        I initially thought the same. But then again, how many people actually think about the “second” unit of measure in FPS? We all know 30 fps (or 60 fps) is what you want. I think FPS has been abstracted enough to become its own unit of measure, not so much tied to the underlying second. So the idea of “the framerate dropped to 20 fps for this frame, then 35 fps for the next one” might be rather straightforward.

        It’s only when you stop to think about it that you notice it’s a sub-second framerate, or really just an extrapolated framerate based on a single frame.

        It’s simply a math calc on the numbers: 1000/x, where x is the frame time in milliseconds. 16.7 ms becomes 60 fps, 50 ms becomes 20 fps, etc. So the graphs become easier to understand (the good ole higher is better, lower is worse, etc.).

        I can understand the idea of wanting to avoid confusion, as it can be quite the heady topic, but I’m not sure that strict adherence is really necessary to understand the main talking points of the issue.

      • Bensam123
      • 8 years ago

      I used the same argument when Scott or Geoff first introduced this method of benchmarking, and said all they had to do was list variance or average variation over time (which is what they’re attempting to show with percentiles).

      I believe frame time is used as an entry into percentiles, though, and as a way to differentiate the site, even if it’s not statistically different from average frame rate.

        • Stargazer
        • 8 years ago

        They could have just as easily looked into percentiles with the corresponding (non-averaged) frame rates though.

        Using frame times to differentiate from other review sites is fine (as I’ve said before), but that shouldn’t justify giving out misinformation about frame rates.

      • XTF
      • 8 years ago

      “So, you could just as easily present the same data using per-frame frame rates (in FPS) instead of frame times (in ms).”

      I think frame time is easier to work with. Time is a direct measure; the inverse (rate) is a derived value. Consider averaging two frames, one at 10 ms (100 fps) and one at 20 ms (50 fps):

      Frame times: (10 ms + 20 ms) / 2 = 15 ms per frame
      Frame rates: 2 / (1/100 + 1/50) ≈ 67 fps (a harmonic mean, not a simple sum)
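      A quick Python sketch of that point, using the same two hypothetical frames: averaging the FPS values arithmetically overstates smoothness, while averaging the frame times stays consistent.

      times_ms = [10.0, 20.0]                      # the two example frames
      rates_fps = [1000.0 / t for t in times_ms]   # [100.0, 50.0]

      mean_time = sum(times_ms) / len(times_ms)    # 15.0 ms per frame
      naive_fps = sum(rates_fps) / len(rates_fps)  # 75.0 fps, which is misleading
      true_fps = 1000.0 / mean_time                # ~66.7 fps, the harmonic mean

      print(mean_time, naive_fps, round(true_fps, 1))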

    • lilbuddhaman
    • 8 years ago

    It’s a veritable can o’ worms with this game.
    -Massively configurable ini: there are 50+ settings that greatly impact the game’s visual fidelity (and performance)
    -“Easy”-to-tinker-with post-process filters change the overall look greatly
    -Game is CPU-limited to 2 cores (your OS will spread that across however many cores you want, BUT you’ll still get hindered performance)
    -Game is natively only able to address 2GB of RAM; an LAA tweak exists, and was briefly broken by a ninja Steam update, but was quickly re-tweaked to work again
    -Texture packs from the mod scene are already hitting, fixing some of the extremely low-resolution textures in the game, but also making the LAA tweak a must

    A deep analysis of the ini’s various configuration options alone could take up 20+ pages.
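    As one small example of that ini tinkering, here is a hedged Python sketch of the often-cited uGrids tweak; the file path and values are illustrative only, and community practice pairs uGridsToLoad with uExterior Cell Buffer = (uGridsToLoad + 1)^2:

    import configparser

    ini = configparser.ConfigParser()
    ini.optionxform = str        # keep the ini's mixed-case key names intact
    ini.read("Skyrim.ini")       # illustrative path; back the file up first

    if "General" not in ini:     # be forgiving if run outside a real install
        ini["General"] = {}

    grids = 7  # default is 5; higher values load more exterior cells (and RAM)
    ini["General"]["uGridsToLoad"] = str(grids)
    ini["General"]["uExterior Cell Buffer"] = str((grids + 1) ** 2)

    with open("Skyrim.ini", "w", encoding="utf-8") as f:
        ini.write(f)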

      • Stargazer
      • 8 years ago

      Is it actually *limited* to 2 cores, or does it just see very limited improvement after that?

      Either way, the “recommended specs” (https://techreport.com/discussions.x/21893) of a quad-core CPU seem a tad… well… unjustified?

        • lilbuddhaman
        • 8 years ago

        Well, it seems I was wrong about the game scaling to only 2 cores; rather, it seems that speed > cores.

        http://www.techspot.com/review/467-skyrim-performance/page7.html

        An i3 @ 3.3GHz is eye to eye with an i7 @ 2.66GHz.

          • OneArmedScissor
          • 8 years ago

          This will likely always be the case, and it cannot be overcome with multi-threading. Higher clock speed = lower memory latency.

          Lower-end CPUs in the Pentium or Celeron class are artificially hampered more in this department, but the i3 is not at all.

          Any modern CPU (i.e., not Atom or Bobcat), no matter how low-end, is pretty much “too powerful” for games in almost any case. The lag “caused by the CPU” tends to be something momentarily piling up in the memory system.

      • squeeb
      • 8 years ago

      A bit disappointed with how this runs on my system. The fps I get have a pretty ridiculous range. I tested in one of the towns, Whiterun I believe: min 22, max 92, avg 52, at high settings with FXAA and 8x AF @ 1920×1080.

      PII 965 @ 3.4
      GTX 570 @ 845 core
      8GB DDR3

      Something just didn’t seem right… I recall similarly inconsistent framerates with Oblivion on my older system :\

        • ermo
        • 8 years ago

        I don’t suppose that you are running the game with the ‘balanced’ power profile?

        Assuming that you are on a PhII 965 BE rev. C3, bumping the clock to 3.6 or 3.8GHz with no voltage tweak should be perfectly possible (just up the multiplier in the BIOS). Then, when you play, you can switch to the ‘high performance’ power preset, which will peg your CPU at its fastest P-state.

        FWIW, if you have a half decent BIOS, it will use your OC settings for its fastest P-State and then be able to throttle down to the other standard P-State settings at lower loads in the ‘Balanced’ power profile (typically 2100/1600/800 MHz).

        But maybe you knew all this already? 🙂

          • Pez
          • 8 years ago

          Running at 3.8GHz with an X2 550 makes a world of difference; you should get a nice boost.

          • squeeb
          • 8 years ago

          According to CPU-Z, it’s revision RB-C3, and no, I have not tried to overclock it at all, but I know my board (870A-UD3) is quite capable.

          I’ll look into it, thanks.

      • entropy13
      • 8 years ago

      Indeed, you can do a lot with the game. Here’s what I did:
      LAA
      uGrids 11
      Skyrim Total Enhancement Project (includes ENB + FXAA Injector)
      various mods (recommended by STEP)
      higher water reflection resolution
      higher shadow resolutions
      higher grass fade distance
      8x AA
      forced 4x Supersampling
      forced high quality AO
      16x AF
      etc.

      20-30fps for me.

        • lilbuddhaman
        • 8 years ago

        All of the shadow tweaks I’ve done at various settings (defaults included) give me bad banding (it looks like black striping across the shadow) and occasional flickering. Nothing seems to fix it. I haven’t done exhaustive troubleshooting, but I’m just gonna go ahead and wait on the super-magical-fix-to-all-problems 11.11b drivers that ATI should have already put out for my 6870×2…

    • dashbarron
    • 8 years ago

    And again I would love to see Skyrim and BF3 tested with high-end GPUs, for the sake of benchmarking and comparison!

    • ShadowTiger
    • 8 years ago

    I have been waiting for this!!!

    I am glad that mid-range cards can run Skyrim just fine; I was worried that I would need to upgrade to enjoy it fully.

    And I’m not a slow reader; I was busy with work.

      • danny e.
      • 8 years ago

      slow reader?

        • Meadows
        • 8 years ago

        I did laugh.

    • anotherengineer
    • 8 years ago

    So, judging from https://techreport.com/articles.x/22048/4 (the different cards all having roughly the same fps), does that mean the game is CPU-bound?

      • travbrad
      • 8 years ago

      According to this Tom’s article, it seems very CPU-limited, which is actually pretty common for console ports: http://www.tomshardware.com/reviews/skyrim-performance-benchmark,3074-9.html

      It seems to be more about per-clock performance and clock speed than core count, though. I wouldn’t want to run Skyrim on Bulldozer.

        • ark778
        • 8 years ago

        Skyrim is also incredibly badly optimized, due to its being a shoddy console port. Up until the most recent patch, the vanilla game could not even utilize 4GB of RAM. As much as I love the game, I am continually disappointed by all the bugs, issues, and optimization flaws it has.

      • JohnC
      • 8 years ago

      I dunno about other reviews, but in my (very limited) experience of looking at a CPU utilization monitor, it doesn’t really appear to be CPU-bound, at least not during the times I’ve looked (for example, running around Whiterun with one of my companions and doing random stuff like talking to NPCs and casting spells). I’ve never seen any core going up to 100% utilization and staying there for more than a few seconds. That is playing at max settings at my monitor’s native res (1680×1050) using an ATI/AMD 6970 card and Intel’s i5-750 CPU.

        • Firestarter
        • 8 years ago

        Looking at the CPU utilization monitor is very misleading. The game apparently only uses two threads, which on your four-core CPU would show up as 50% utilization across all four cores. That’s because the threads get swapped from core to core: at any one point in time, the game will only be using two of the four cores, while the other two sit idle or do some housekeeping. That’s why the game might not appear to be very demanding of the CPU while still being limited by it.

        You can see this for yourself: even though the CPU utilization is not 100% (more likely ~55%), performance will probably still improve when you overclock your CPU. Or, alternatively, you can underclock your CPU and performance will drop, even though the utilization stays the same.
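        A minimal sketch of that experiment in Python, assuming a four-core machine and the third-party psutil package: two fully busy worker processes tend to show up as roughly 50% load on every core, because the scheduler keeps migrating them.

        import multiprocessing as mp
        import psutil

        def spin():
            while True:  # busy-wait, pegging whichever core we're scheduled on
                pass

        if __name__ == "__main__":
            # Two busy workers, mimicking a game that keeps only two threads loaded.
            workers = [mp.Process(target=spin, daemon=True) for _ in range(2)]
            for w in workers:
                w.start()
            # Sample per-core utilization; on four cores the load tends to smear
            # out to ~50% everywhere rather than pinning two cores at 100%.
            for _ in range(5):
                print(psutil.cpu_percent(interval=1.0, percpu=True))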

          • Pantsu
          • 8 years ago

          Indeed, the game is very much CPU-limited in places like Whiterun. Even an OC’d 2500K will struggle at times to produce proper framerates at higher-quality settings, where there are longer draw distances and more for the CPU to calculate. It’s disappointing that Bethesda hasn’t optimized this better for quad cores. Personally, the highest CPU usage I’ve seen TESV.exe hit is around 45%. At the same time the GPU is at maybe 50% usage, and adding stuff like AA does nothing to the framerate, since it’s the CPU and bad threading bottlenecking the game.

          Now, this is not the case in all scenes. Skyrim has quite a varied landscape, and some parts of it can be heavily GPU-bound while others (mostly towns) are CPU-limited, which is usually worse, since there’s not much you can do about it: most settings affect only GPU load.
