Radeon HD 7950 vs. GeForce GTX 660 Ti revisited


Ah, it’s the eternal battle, the unending duopoly duel: GeForce versus Radeon, Radeon versus GeForce. The skirmishes are ongoing, but the victor is never decided for long. Today, another chapter in the story unfolds.

Doesn’t seem that long ago, back in August, when the GeForce GTX 660 Ti first hit the scene and squared off against the Radeon HD 7950. That match-up ushered in a new generation of competition among ridiculously powerful video cards at around 300 bucks. Nvidia had the advantage going in, since it was facing off against an already established competitor; it knew what the GTX 660 Ti had to do in order to win. AMD, however, has been unusually feisty lately, and it had other ideas. At the last minute, the Radeon team rolled out a new BIOS that added dynamic clock speeds to the 7950. The result was an incredibly slight win for the GTX 660 Ti on points, but in the end, we threw our hands up and said the differences mattered little.

We’re vaguely astonished by how much things have changed since then.

Of course, we have a new crop of games for the holiday season, headlined by titles like Borderlands 2, Hitman: Absolution, Sleeping Dogs, and Assassin’s Creed III. AMD’s newfound aggressiveness means many of these games are part of its Gaming Evolved program, so they should run very well on Radeon graphics cards—and maybe, you know, not so well on those pesky GeForces.

In fact, accentuating its stronger ties to game developers, AMD has taken to bundling a trio of these games with its Radeon HD 7950 cards. Cramming that sort of gaming goodness into the box with a graphics card certainly changes the value equation.

As if that weren’t enough, AMD has also released Catalyst 12.11 beta drivers that promise a roughly 15% across-the-board performance increase for its 7000-series Radeons. New drivers often bring performance gains for individual games, but general improvements of that magnitude are uncommon. AMD tells us it has employed new insights in tuning its relatively young GCN architecture.

What’s more, Windows 8 is out, and we’ve transitioned our test rigs to the new operating system.

Add up all of these changes, and you have a recipe for realignment in the ongoing battle for GPU supremacy. Are we still at rough parity, or have AMD’s bold moves allowed it to push into the lead? We’ve deployed our infamous “inside the second” testing methods with a host of the latest games in order to find out.

Our lovely contestants

Pictured above is the Sapphire HD 7950 Vapor-X, our representative from the Radeon camp for this little hoedown. The 7950 Vapor-X is our first look at a retail product with the new Boost BIOS, and it ups the ante by sporting a peak Boost clock of 950MHz, 25MHz above stock. Sapphire’s Vapor-X cooler sprouts quad heatpipes that snake into a large array of cooling fins situated beneath dual fans. The shroud that covers the whole assembly may be the finest expression of the F-117 Stealth fighter look that has rampaged through the enthusiast PC hardware scene in recent years. Although it sticks out maybe a quarter-inch beyond the 10.5″ length of the card itself, there’s no way that thing shows up on radar.

The HD 7950 Vapor-X sells for $329.99 at Newegg and comes with a bunch of inducements to buy, including copies of Sleeping Dogs, Hitman: Absolution, and Far Cry 3, along with a 20%-off coupon for Medal of Honor: Warfighter. There’s also a $20 mail-in rebate attached right now. If you buy two, AMD CEO Rory Read will come to your house and personally serenade you from outside of your window. I hear he has quite the voice.

Looking over the listings at Newegg, 7950 cards are going for as little as $299.99. However, only a few other cards can match the Vapor-X’s 950MHz boost clock, and they all cost more than the Sapphire.

We’ve pitted the HD 7950 Vapor-X against our returning champ from the GeForce side, the Zotac GTX 660 Ti AMP. This baby sports Zotac’s charming “angry bumblebee” look but is scaled down massively from its GTX 670 and 680 brethren. The card is only 6.75″ long, giving it a distinctive miniature vibe we like to call “low BOM cost chic.”

There’s something to be said for keeping costs down, though. The GTX 660 Ti AMP! is currently going for $299.99, even though it’s a hot-clocked card. In fact, the Zotac’s boost frequency is one of the highest among the GTX 660 Ti cards available. Also, its 6.6 GT/s memory is 10% faster than most of its competitors, even though they cost as much as 350 bucks.

In a bid not to be totally left behind by AMD’s cornucopia of bundled games, most GTX 660 Ti cards right now (including the AMP!) come with a free copy of Assassin’s Creed III. Also, Zotac currently matches Sapphire’s $20 rebate offer with its own, for those who enjoy filling out microscopic forms.

| Card | Base clock (MHz) | Boost clock (MHz) | Peak ROP rate (Gpix/s) | Texture filtering int8/fp16 (Gtex/s) | Peak shader tflops | Memory transfer rate | Memory bandwidth (GB/s) |
| Sapphire HD 7950 Vapor-X | 850 | 950 | 30 | 106/53 | 3.4 | 5.0 GT/s | 240 |
| Zotac GTX 660 Ti AMP! | 1033 | 1111 | 27 | 124/124 | 3.0 | 6.6 GT/s | 159 |

Although they’re positioned against each other in the market, these two cards really are somewhat different classes of hardware, as both the picture and table above illustrate. The 7950 is based on a slightly cut-down Tahiti GPU with a 384-bit memory interface. The GTX 660 Ti’s interface is half that width at 192 bits. The Radeon has the theoretical advantage in ROP rate, shader flops, and memory bandwidth—and the gap is quite large, in the last case. The 7950 has 3GB of memory, too, while the 660 Ti has 2GB. The GeForce can eclipse it only in texture filtering prowess.
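
To put the bandwidth gap in concrete terms, peak memory bandwidth is simply the per-pin transfer rate multiplied by the bus width in bytes. The quick sketch below reproduces the table’s figures; the helper function is our own, purely for illustration.

```python
# Peak memory bandwidth = per-pin transfer rate (GT/s) x bus width (bits) / 8 bits per byte
def peak_bandwidth_gb_s(transfer_rate_gt_s, bus_width_bits):
    """Theoretical peak memory bandwidth in GB/s."""
    return transfer_rate_gt_s * bus_width_bits / 8

print(peak_bandwidth_gb_s(5.0, 384))  # Radeon HD 7950: 240.0 GB/s
print(peak_bandwidth_gb_s(6.6, 192))  # GeForce GTX 660 Ti: 158.4 GB/s (the 159 in the table)
```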

The 7950 even has more appetite for power, requiring six- and eight-pin auxiliary inputs, while the GTX 660 Ti gets away with dual six-pin plugs.

Still, Nvidia’s Kepler architecture has proven to be shockingly efficient in many cases, so the GTX 660 Ti’s lower specs won’t necessarily translate into lower performance. That’s why we test these things. Speaking of which…

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-3820
Motherboard: Gigabyte X79-UD3
Chipset: Intel X79 Express
Memory size: 16GB (4 DIMMs)
Memory type: Corsair Vengeance CMZ16GX3M4X1600C9 DDR3 SDRAM at 1600MHz
Memory timings: 9-9-11-24 1T
Chipset drivers: INF update 9.3.0.1021, Rapid Storage Technology Enterprise 3.5.0.1101
Audio: Integrated X79/ALC898 with Realtek 6.0.1.6662 drivers
Hard drive: Corsair F240 240GB SATA
Power supply: Corsair AX850
OS: Windows 8
| Card | Driver revision | GPU base core clock (MHz) | GPU boost clock (MHz) | Memory clock (MHz) | Memory size (MB) |
| Zotac GTX 660 Ti AMP! | GeForce 310.54 beta | 1033 | 1111 | 1652 | 2048 |
| Sapphire Radeon HD 7950 Vapor-X | Catalyst 12.11 beta 8 | 850 | 950 | 1250 | 3072 |

Thanks to Intel, Corsair, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

In addition to the games, we used Fraps and GPU-Z as test applications.

Some further notes on our methods:

  • We used the Fraps utility to record frame rates while playing either a 60- or 90-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We tested each Fraps sequence five times per video card in order to counteract any variability. We’ve included frame-by-frame results from Fraps for each game, and in those plots, you’re seeing the results from a single, representative pass through the test sequence.
  • We measured total system power consumption at the wall socket using a Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

    The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Skyrim at 2560×1440 with the Ultra quality presets, 4X MSAA, and FXAA enabled.

  • We measured noise levels on our test system, sitting on an open test bench, using an Extech 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

    You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

  • We used GPU-Z to log GPU temperatures during our load testing.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Borderlands 2

First up is my favorite game of the year so far, Borderlands 2. The shoot-n-loot formula of this FPS-RPG mash-up is ridiculously addictive, and the second installment in the series has some of the best writing and voice acting around.

As you may know, our game benchmarking methods are different from what you’ll find elsewhere, in part because they’re based on chunks of gameplay, not just scripted sequences. We’re trying something different this time around: embedding videos of typical gameplay sessions in the article. Below is a look at our 90-second path through the “Opportunity” level in Borderlands 2.

As you’ll note, this session involves lots of fighting, so it’s not exactly repeatable from one test run to the next. However, we took the same path and fought the same basic contingent of foes each time through. The results were pretty consistent from one run to the next, and the final numbers we’ve reported are the medians from five test runs.

We used the game’s highest image quality settings at the 27″ Korean monitor resolution of 2560×1440.

Our first result is a simple plot of the time needed to render each frame during one of our test runs. Because the frame render times are reported in milliseconds, lower times are preferable. Note that, although you may see FPS-over-time plots elsewhere, those usually are based on averaging FPS over successive one-second intervals; as a result, they tend to mask momentary slowdowns almost entirely. Our plots are sourced from the raw frame time data instead.
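
To illustrate why those per-second FPS averages hide momentary slowdowns, here’s a minimal sketch with made-up numbers rather than our actual test data:

```python
# One hypothetical second of gameplay: sixty 15 ms frames plus a single 100 ms hitch
frame_times_ms = [15] * 60 + [100]

fps_for_the_second = 1000 * len(frame_times_ms) / sum(frame_times_ms)
print(round(fps_for_the_second))  # 61 FPS: the one-second average looks perfectly healthy
print(max(frame_times_ms))        # 100 ms: yet the raw frame times expose a visible hitch
```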

Right away, this approach gives us some insights. The GTX 660 Ti’s frame times tend to be very low, generally under 20 ms and rarely ranging above that mark. By contrast, the Radeon HD 7950’s plot is riddled with spikes up to twice that long or longer.

A traditional FPS average doesn’t really capture the difference in how these two cards perform in Borderlands 2. Yes, the Radeon’s average is lower, but it’s still over the supposedly golden 60-FPS mark. Usually, producing an average that high would be considered quite good, but we felt the difference between the 7950 and the GTX 660 Ti clearly while testing.

We think gamers would be better served by skipping the FPS average and instead taking a latency-focused approach to frame delivery, if they really want to understand gaming performance. One alternative method is to consider the 99th percentile frame time, which is simply the threshold below which 99% of all frames have been generated. In the chart above, the Radeon HD 7950 has delivered 99% of its frames in 31.7 milliseconds or less. That means all but the last one percent of frames were produced at a rate of 30 FPS or better—not too shabby.
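
For the curious, the calculation behind that number is straightforward: sort the per-frame render times and read off the value that 99% of frames fall at or below. The snippet below is a simplified sketch of the idea, not our exact analysis tooling.

```python
def percentile_99(frame_times_ms):
    """Frame time (ms) at or below which 99% of frames were delivered."""
    ordered = sorted(frame_times_ms)
    cutoff = max(int(0.99 * len(ordered)) - 1, 0)  # index of the last frame inside the 99% cutoff
    return ordered[cutoff]

# A result of 31.7 means 99% of frames took 31.7 ms or less, i.e. all but the slowest
# one percent of frames were delivered at roughly 30 FPS or better.
```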

Compared to the GeForce, though, the Radeon isn’t doing so well. The GeForce delivers 99% of frames in under 20 milliseconds, which is the equivalent of about 50 FPS. That’s why playing the game on the GeForce feels perceptibly smoother. I think these 99th percentile numbers more accurately convey the sense of things one gets from studying those initial frame time plots—and from playing the game on both cards.

Our 99th percentile cutoff has proven to be a pretty good choice for capturing a sense of comparative performance. However, we have to be careful, because it’s just one point along a curve. We can plot the entire frame latency curve, using data taken from all five runs, in order to get a better sense of overall performance. Over time, I’ve grown accustomed to reading these curves, and they’re now my favorite way to illustrate gaming performance.

As you can see above, the Radeon’s performance is very close to the GeForce’s much of the time. The two cards’ latency curves are similar up to about 90% of the frames rendered. Once we reach the last 10% or so, though, they begin to diverge, with the Radeon’s curve shooting upward sooner, to higher reaches. For some reason, the 7950 struggles to render a portion of the frames as quickly as its GeForce counterpart. We know from the initial frame time plots that those difficult frames are distributed throughout the test run as intermittent and fairly frequent spikes.

Our 99th percentile metric rules out the last one percent of frames, instead focusing on the general latency picture. That’s helpful, as we’ve seen, but we also want to pay attention to the worst delays, the ones that are likely to impact the fluidity of gameplay. After all, a fast computer is supposed to curtail those big slowdowns.

We can measure the “badness” of long frame times by adding up all of the time spent working on frames beyond a given threshold. In this case, we’ve picked 50 milliseconds as our cutoff. That’s equivalent to 20 FPS, and we figure any animation moving slower than 20 FPS will probably be noticeably halting and choppy. Also, 50 ms is equivalent to three vertical refresh intervals on a 60Hz display.
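
In code, one plausible reading of that “badness” measure looks like the sketch below: for every frame that takes longer than the threshold, accumulate the portion of its render time beyond the cutoff. This is an illustrative approximation, not our exact analysis script.

```python
def time_beyond(frame_times_ms, threshold_ms=50):
    """Total milliseconds spent past the threshold, summed over frames that exceed it."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# A single 80 ms frame contributes 30 ms of "badness"; frames at or under 50 ms contribute nothing.
```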

These results are somewhat heartening. Although the Radeon does spend twice as long above our threshold as the GeForce, neither card wastes much time at all working on especially long-latency frames. In other words, both cards offer pretty good playability in this test scenario. Subjectively, I prefer the smoother gameplay produced by the GeForce, but the Radeon doesn’t struggle too mightily. Still, the gap between them is much larger than the 64-to-72 difference in FPS averages would seem to suggest.

Guild Wars 2

Yes, yes I am attempting to benchmark an MMORPG. Guild Wars 2 has a snazzy new game engine that will stress even the latest graphics cards, and I think we can get reasonably reliable results if we’re careful. My test run consisted of a simple stroll through the countryside, which is reasonably repeatable. I didn’t join any parties, fight any bandits, or try anything elaborate like that, as you can see in the video below.

Uh oh. Once again, we’re seeing lots of spikes from the 7950, while the GeForce’s frame times are smoother and more consistent. Interestingly, though, the Radeon still produces more total frames across our test run, as you can tell from its longer line in the plot above. You know what that means?

Yep, despite all of the spikes, the Radeon’s FPS average is actually higher than the GTX 660 Ti’s. The difference is minor, but the FPS average really doesn’t tell the whole story.

Our latency-focused metrics capture the difference, though. The Radeon and GeForce perform almost identically through the majority of the latency curve, but the 7950 struggles with the last five percent of frames. Again, those problem frames are interspersed throughout the test run. The result is another big gap between the two cards’ 99th percentile frame times, again in favor of the GTX 660 Ti.

Only the 7950 registers on our “badness” meter in this test scenario. The GTX 660 Ti doesn’t take over 50 milliseconds to produce any of the frames, while the 7950 spends almost a fifth of a second crunching frames above our threshold.

Sleeping Dogs

Our Sleeping Dogs test scenario consisted of me driving around the game’s amazingly detailed replica of Hong Kong at night, exhibiting my terrifying thumbstick driving skills.


This game’s graphics are intensive enough that we were easily able to stress these GPUs at 1080p.

This is a closer contest than we saw in the first two games. The frame time plots for both cards show some spikes, as one might expect when the game must constantly stream in new areas of the city. The spikes are more frequent on the GeForce, but the magnitude of the spikes on the Radeon is greater. The result is a clear advantage for the 7950 in the FPS average, but only a slight edge in our latency-sensitive 99th percentile frame time metric—and more time spent by the Radeon above 50 ms.

Assassin’s Creed III

This game appears to be a thought experiment centered around what would happen if the Quaker Oats guy had invented parkour in 18th-century Boston. As you’ll see in the video below, the one thing that would not have happened is his hat falling off. That thing must be glued on there.

Since the AC3 menu doesn’t lend itself to screenshots, I’ll just tell you that we tested at 1920×1080 with environment and shadow quality set to “very high” and texture and anti-aliasing quality set to “high.” I understand that the “high” AA setting uses FXAA HQ with no multisampling. This game also supports Nvidia’s TXAA, but Nvidia has gated off access to that mode from owners of Radeons and pre-Kepler GeForces, so we couldn’t use it for comparative testing.

Well, this one is more straightforward than the others, at least. The GTX 660 Ti is much, much faster than the Radeon HD 7950 in this scenario, regardless of how you slice it.

Hitman: Absolution

In this game, Max Payne has sobered up and gotten a job with a shadowy government agency, yet somehow things still went totally sideways. He’s decided to stop talking about it so much, which is a relief.

I wasn’t sure how to test this game, since the object appears to be avoiding detection rather than fighting people. Do I test by standing around, observing guards’ patrolling patterns? Also, it seems that some areas of this game are much more performance-challenged than others, for reasons that aren’t entirely clear. Ultimately, I decided to test by taking a walk through Chinatown, which is teeming with people and seems to be reasonably intensive. I can’t say that good performance in this scenario would ensure solid performance in other areas of this game, though.

And we’ve finally found a good use for DX11 tessellation: bald guys’ noggins.

Yikes. This game is part of AMD’s Gaming Evolved program and is bundled with the Radeon HD 7950 right now, as is Sleeping Dogs. I had really expected better things from the Radeon as a result. You can see that the 7950’s average frame time is much lower than the GTX 660 Ti’s, but the spikes—followed by short frame times, likely due to buffering—are present throughout the test run.

The curve tells us that the high-latency frames comprise only about five percent of the total frames produced by the 7950. Still, because those long render times are present, the 7950 actually trails the GeForce GTX 660 Ti in both of our latency-sensitive metrics.

Medal of Honor: Warfighter

Warfighter uses the same Frostbite 2 engine as Battlefield 3, with advanced lighting and DX11 support. That’s fortunate, because I struggled to find any other redeeming quality in this stinker of a game. Even playtesting it is infuriating. I actually liked the last Medal of Honor game, but this abomination doesn’t belong in the same series. If you enjoy on-rails shooters where it constantly feels like you’re in a tunnel, bad guys pop up randomly, and your gun doesn’t work well, then this is the game for you.


Well. Even though we have the image quality settings cranked at a resolution of 2560×1440, neither of these cards struggles in the least with the rendering workload. The result is an almost identical finish in every metric, with the slightest of advantages to the 7950 in the latency-focused numbers.

The Elder Scrolls V: Skyrim

No, Skyrim isn’t the newest game at this point, but it’s still one of the better looking PC games and remains very popular. It’s also been a particular point of focus in driver optimizations, so we figured it would be a good fit to include here.

We did, however, decide to mix things up by moving to a new test area. Instead of running around in a town, we took to the open field, taking a walk across the countryside. This change of venue provides a more taxing workload than our older tests in Whiterun.

Note: do not aggro the giants.

Not again! The Radeon HD 7950’s plot looks more like a cloud than a line, since it’s populated by a series of long and short frame times back to back, along with some nasty spikes into the 50-to-60-millisecond range.

The 7950 really doesn’t perform too poorly here—it delivers 99% of the frames in about 25 milliseconds or less, the equivalent of 40 FPS. Even though there is a lot of variance in its plot, much of it falls below that 25 ms threshold, and you’d be hard-pressed to notice it.

The more vexing problem is the series of larger spikes that happen occasionally as we tread through the countryside. Those are easily noticeable and interrupt the flow of the animation. Once again, the spikes on the GTX 660 Ti are smaller and less frequent.

Power consumption

Our idle power measurements demonstrate the impact of AMD’s ZeroCore Power feature, where the Radeon HD 7950 GPU powers down most of itself when the screen goes into power-save mode. When it kicks in, ZeroCore Power drops total system power consumption by 19W. Without it, when the system is idle at the desktop, the GTX 660 Ti draws a little less power than the 7950.

When running a game—Skyrim, in this case—the 7950-equipped system draws 20W more at the wall socket than the GTX 660 Ti-based test rig.

Noise levels and GPU temperatures

Both of these cards are reasonably quiet at idle, but the 7950 becomes virtually silent in ZeroCore Power mode, when its fans stop spinning. Then, only the faint whine of our slow-spinning CPU cooler generates any sound above the noise floor of our test environment.

Sapphire has tuned its Vapor-X cooler to maintain very low temperatures under load, likely in order to ensure lots of overclocking headroom for those who wish to tinker. The card pays a price in additional fan noise, but that cooler is beefy enough to keep the noise levels fairly modest, regardless.

Personally, I prefer the fan profile Zotac has chosen for its GTX 660 Ti AMP!, which still keeps temperatures in check (67° C is practically cool, for a GPU) but holds down noise levels, as well. Even with its tiny cooler, the Zotac card makes less noise than the Sapphire.

Conclusions

This certainly isn’t the outcome we expected going into this little exercise. Given AMD’s expanded involvement with game developers and a claimed across-the-board increase in driver performance, we expected the Radeon HD 7950 to assert itself as the best choice in its class. Instead, the Radeon’s performance was hampered by delays in frame delivery across a number of games.

Our first instinct upon seeing these results was to wonder if we hadn’t somehow misconfigured our test systems or had some sort of failing hardware. We test Nvidia and AMD GPUs on separate but identical systems, so to confirm our numbers, we switched the cards between the systems and re-tested. The Radeons still exhibited the same patterns of frame latency, with no meaningful change in the results. We wondered about the possibility of a problem with our Sapphire HD 7950 Vapor-X card or its Boost BIOS causing the slowdowns, but swapping in an older, non-Boost Radeon HD 7950 card from MSI produced very similar results.

We’re also quite confident the problem isn’t confined to a single set of drivers. You see, this article has had a long and difficult history; it was initially conceived as an update comparing Catalyst 12.8 and 12.11 beta drivers. However, driver updates from AMD and Nvidia, along with some additional game releases, caused us to start testing over again last week. I can tell you that we’ve seen the same spiky frame time plots in most of these games from three separate revisions of AMD’s drivers—and, yes, Catalyst 12.11 is an improvement over 12.8, all told, even if it doesn’t resolve the latency issues.

In the end, we’re left to confront the fact that the biggest change from our prior graphics reviews was the influx of new games and new test scenarios that stress the GPUs differently than before. (The transition to Windows 8 could play some role here, but we doubt it.) For whatever reason, AMD’s combination of GPU hardware and driver software doesn’t perform as well as Nvidia’s does in this latest round of games, at least as we tested them. That’s particularly true when you focus on gameplay smoothness, as our latency-focused metrics tend to do.

Speaking of which, we can show you the overall performance picture using our famous value scatter plots. The performance results come from all seven of the games we tested, averaged via a geometric mean to reduce the impact of outliers. The prices come from current listings at Newegg for the exact cards we tested. As always, the most desirable combinations of price and performance will be located closer to the top left corner of the plot.
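
As a refresher, a geometric mean averages in log space, which is equivalent to multiplying the N per-game results and taking the Nth root. The toy example below, with invented numbers, shows why a single outlying result sways it less than a plain arithmetic average would.

```python
import math

def geometric_mean(values):
    """Average in log space: the Nth root of the product of N values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

scores = [60, 62, 58, 65, 61, 59, 200]      # six typical results plus one brand-skewed outlier
print(round(sum(scores) / len(scores), 1))  # arithmetic mean: about 80.7, dragged up by the outlier
print(round(geometric_mean(scores), 1))     # geometric mean: about 72.1, far less swayed by it
```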


Pop back and forth between the 99th percentile and average FPS plots, and you’ll see two different stories being told. The FPS average suggests near-parity performance between the 7950 and the GTX 660 Ti, with a tiny edge to the GeForce. The 99th percentile frame time, though, captures the impact of the Radeon’s frame latency issues and suggests the GTX 660 Ti is easily the superior performer. That fact won’t be a surprise to anyone who’s read this far.

Armed with that info, we can dispense with the talk about game bundles, rebates, and pricing shenanigans that might shift the value math in favor of one camp or another. Instead, we have a crystal clear recommendation of the GeForce GTX 660 Ti over the Radeon HD 7950 for this winter’s crop of blockbuster games. Perhaps AMD will smooth out some of the rough patches in later driver releases, but the games we’ve tested are already on the market—and Nvidia undeniably delivers the better experience in them, overall.


Comments closed
    • Geonerd
    • 7 years ago

    I would REALLY like to see the TR guys adjust image settings in an attempt to isolate (if possible) the source of these hiccups.
    Maybe the latency spikes only occur when running a given setting above a certain level? Maybe they only occur above a given resolution, etc.
    This would mirror a user’s first response – to turn down the details until the game runs smoothly. This sort of testing, if successful, might also provide needed insight into the nature of the issue….

    • indeego
    • 7 years ago

    [i<]Faulting application name: Kdbsync.exe, version: 0.0.0.0, time stamp: 0x4f67a718
    Faulting module name: amdocl.dll, version: 10.0.1084.2, time stamp: 0x50bb0f7e
    Exception code: 0xc0000005
    Fault offset: 0x00057c8e
    Faulting process id: 0x1754
    Faulting application start time: 0x01cdd241b85a2f68
    Faulting application path: C:\Program Files (x86)\AMD AVT\bin\Kdbsync.exe
    Faulting module path: C:\WINDOWS\SYSTEM32\amdocl.dll[/i<]
    Goes without saying this is a regular occurrence for the AMD software I have installed over the years. Such crap...

    • UltraPin
    • 7 years ago

    nVidia developed a very balanced product. The only bad part is that its performance is keeping really good 670 and 680 cards from mainstream buyers. :)

    • Bensam123
    • 7 years ago

    So aggregating good ideas. A comparison of W7 to W8 should be made, ‘stock’ cards should be tested (or the cards should have their clocks/mem reduced to stock levels), and a comparison should be done with powertune at 0% and +20%.

    I’ve personally seen powertune +20% make my games run a lot more fluid, even when the chip isn’t under 100% load.

    • PopcornMachine
    • 7 years ago

    As a comparison, I checked another review including 660ti and 7950.

    The review below compares the cards at stock speed and overclocked.

    The 7950 comes out ahead in almost all cases. But if you compare the stock 7950 to overclocked 660ti, you see what the results were here.

    I really do think the significant overclock on the 660ti was the main difference.

    [url<]http://www.overclockersclub.com/reviews/nvidia_gtx_660ti_roundup_asus_msi_galaxy/[/url<]

    • HisDivineOrder
    • 7 years ago

    Clearly, Techreport has an axe to grind against AMD. Remember a few months back when one of your guys nearly got killed TWICE by AMD? Once by a falling object from the ceiling? And then from a deathly ailment that left you miserable and wishing for Intel chips?

    Did nVidia suddenly start to seem very awesome to you about then? Did you wake up in the middle of the night in a cold sweat calling out Jen-Hsun Huang’s name? Did you find yourself enabling PhysX at its highest levels even with AMD cards and then cursing them for it? Perhaps did you start having blackout spells? Perhaps after one such time, you found your butt really hurt and you went to a mirror, turned and–to your shock and amazement–you saw a giant TWIMTBP’ed tattooed on your bottom with an arrow pointing down at your anus? Did your significant other start complaining because you suddenly took an interest in having two of everything in order to enjoy it, including her?

    Perhaps did you start joining AMD forums to complain about drivers? After you got over the plague AMD gave you, did you suddenly develop an intense desire to remind everyone how ATI used to make horrible drivers and that ATI thought TruForm was the best thing since the Rage MAXX?

    Do you look at Newegg and whisper to yourself, “You get what you pay for,” as you ignore deals for Radeon 7970’s for $300 with tons of free games that you actually want to play?

    After the Green Agent from nVidia set up an AMD-hating plague to debilitate you and the Blue Agent from Intel hiding in the ceiling shoved that beam over to hit you, did you notice how your Radeon’s were suddenly never quite smooth enough for you?

    When you left your significant other because he/she bought an AMD Radeon card for Sleeping Dogs and was thinking about a Bulldozer CPU while they were on sale, did you realize how lost you were? Or did the bias just overwhelm all reason, resulting in this article?

    Else, how could you EVER speaketh against thine lord, AMD the All-Loving, AMD the All-Giving, AMD the Sacred, AMD the Divine, AMD the Bringer of Free Games, AMD the Releaser of Magic Drivers? Else, how could you?

    You could not. No, sir. You could not. You must be biased. MUST. BE. You and all around you must be biased and infected with nV! For shame, sir. For shame!

    /sarcasm

      • yogibbear
      • 7 years ago

      TLDR: Snark!

      • vvas
      • 7 years ago

      Congratulations, you win this comment thread. :^)

    • d0g_p00p
    • 7 years ago

    scott, you should have bumped physx up to med at least in BL2. not only is it a feature in the game (cannot disable) but it also makes the game pretty nice visually and really adds to the weapon effects. you did state you were using max details.

      • nanoflower
      • 7 years ago

      That wouldn’t be fair to the AMD card since Physx is an Nvidia only feature.

        • d0g_p00p
        • 7 years ago

        I figured that was the case. Maybe would be nice to see how performance is impacted using higher levels of phys. I usually turn it off but its really nice in BL2. I can kind of understand why its not able to be turned off in the game

          • I.S.T.
          • 7 years ago

          Physx Low in BL2 is basically just the same as the console settings for particles and stuff. It’s medium and high that are actually different.

    • gmskking
    • 7 years ago

    Wow, the difference in size is crazy.

    • PopcornMachine
    • 7 years ago

    The results here don’t make any sense based on other reviews I’ve seen.

    Really puzzled. Perhaps the significant overclock on the 660ti has something to do wtih it?

    • carlotto
    • 7 years ago

    Hi Scott, thanks for the article, very well written and clear.
    I’m wondering if it’s possible that Power Boost is the origin of, or at least could interfere with, the (inconsistent) framerate.
    What is the reaction time for Power Boost to kick in (and out)?

    • PixelArmy
    • 7 years ago

    FrostBite 2 (MOH:W) not exactly unprecedented:
    [url<]http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review/13[/url<]
    Unlike a lot of links being posted, this uses the same AMP card. Remember the AMP base clocks (1033) start about 5% higher than the stock 660 ti's avg. boost clock speed (980)...

      • BestJinjo
      • 7 years ago

      That review is outdated. The AMD Frostbite 2.0 game engine fix didn’t come until Catalyst 12.11 drivers and it was a major 20-30% increase. Even Anandtech noted this:
      [url<]http://www.anandtech.com/show/6393/amds-holiday-plans-cat1211-new-bundle[/url<]
      Any review of BF3 or MOH:W without Catalyst 12.11 drivers is misleading. The review you posted is from August. You can even check reviews of MOH:W with the latest drivers. HD7950 boost is competing with GTX670 and GTX660Ti is way behind in this game, especially at the high resolution as tested in TR's review:
      [url<]http://www.techspot.com/review/603-best-graphics-cards/page8.html[/url<]
      or
      [url<]http://www.guru3d.com/articles_pages/medal_of_honor_warfighter_graphics_vga_performance_review,6.html[/url<]

        • PixelArmy
        • 7 years ago

        Fair enough…

        Still…
        – The guru3d link uses nvidia drivers (310.33) that according to the release notes, don’t contain BF3 and MOH:W optimizations that techreport’s version (310.54) does…
        – None compare an OC’d version of the 660ti.
        – The guru3d link uses an older cpu (bloomfield)
        – The techspot link is a lot closer spec wise.

        Does any of this matter? At least the first two issues…

    • ptsant
    • 7 years ago

    I use a vanilla 7950 which I got for 279 CHF (ultra cheap) to play competitive PvP in GW2. I do notice slight stuttering right after a map change and maybe when entering a new zone. However, this is temporary and doesn’t take more than 1-2 sec max say every 5-10 min. I imagine that maybe the game is caching stuff or swapping textures to GPU RAM or decompressing or something similarly “one-off”. I can’t compare with a geforce, but honestly I never was bothered enough to care. It is barely but clearly noticeable.

    My main issue with this review is that it goes contrary to what ArenaNet posted before game launch (https://www.guildwars2.com/en/news/bill-freist-talks-optimization-and-performance/). According to the developers, 7950 systems get as much FPS as 680 systems. Yes, I know, this is not a direct comparison between identical systems and yes, it isn’t on win8 and doesn’t test the newest drivers. Nevertheless, we’re talking about probably hundreds of users and the information comes from a reliable source. My decision to get a 7950 was heavily influenced by these data.

    Anyway, I do appreciate the kind of effort you put into your reviews and I really don’t question your good will but the GW2 results appear strange to me.

      • indeego
      • 7 years ago

      Read below comments. Many people suspect Windows 8 ATI drivers prematurely ejac- oh, OK, I’ll be good.

        • cynan
        • 7 years ago

        Hmm. If this [url=http://www.tomshardware.com/reviews/windows-8-gaming-performance,3331-11.html<]Tomshardware article[/url<] is of any validity, then these driver problems must be limited to Tahiti, as Pitcairn's performance delta in Win 7 vs Win 8 is pretty much nil: performance for the HD 7850 (and 660) remained almost identical across Win 7 and Win 8 (yes, they only show avg FPS, but frame time and avg FPS should at least be correlated).

        Also, I see that the test system was the X79 platform. I've been following reports of ongoing issues with HD 7000 cards and X79 (just look at the most recent beta Catalyst driver update - it addresses lock-up issues specific to X79 platform). I wonder if this has anything to do with it. Probably not, as the 660Ti review was done on the X79 platform as well...

    • rrr
    • 7 years ago

    A bogus test. In Tomshardware’s tests, 660Ti had drops to 78xx like performance on high resolutions with tons of eye candy and we’re supposed to believe that a few driver updates suddenly make it stomp 7950?

    This site is trolling people.

      • steelcity_ballin
      • 7 years ago

      Then go read Tom’s Hardware, idiot.

        • rrr
        • 7 years ago

        Oooooh, someone angry I don’t agree with results? You lack arguments? Then maybe stop making a fool out of yourself with such barely literate responses.

        Seriously, if someone pointing out sth like that offends you so much, maybe it’s time to get out more and deal with real people in person? For your sake, I hope you can talk with them not using word “idiot” in every sentence.

          • albundy
          • 7 years ago

          Agreed! NEVER trust only one source. I learned that the hard way. Be vigilant and question everything.

      • kniaugaudiskis
      • 7 years ago

      Keep in mind that this is not a reference GTX660Ti. It has a memory overclock to 6,6GHz (effective), which adds quite a bit of performance, since this particular card tends to starve on memory bandwidth at higher settings. I’ve tried myself overclocking a GTX660Ti’s memory to 7GHz (effective) and the performance gains were about ~15% across pretty much all the latest titles with the exception of Sniper Elite benchmark, where performance increased by a whopping 25%.

      For those who wanted to see Far Cry 3 benchmark, here you go:
      [url<]http://www.techspot.com/review/615-far-cry-3-performance/page2.html[/url<]
      Seems like a stock GTX660Ti falls right between a HD7950 and a HD7950 boost on average and ties a HD7950 at 2560x1600 4x MSAA @ Ultra, however at those settings neither provides very playable frame rate.

    • Krogoth
    • 7 years ago

    The problem isn’t with the hardware.

    I suspect that problem is from a stupid bug with AMD drivers and WDM 2.0. WDM has always been quirky throughout its history with odd bugs that happen under certain games (mostly legacy stuff that depends on Directdraw).

    I’m curious to see what happens if you throw in Windows 7 drivers under Window 8. Windows 7/Vista drivers will work under 8 the only caveat it that Windows 8 will be forced to run at WDM 1.1 mode. I’m wondering if the same performance problems will happen again.

    • ashley01x93
    • 7 years ago
      • Deanjo
      • 7 years ago

      ….. and if you ever post links of ssk in a tutu again you will have to deal with me….

    • Deanjo
    • 7 years ago

    What I would like to see is a retest on an AMD based system to see if there is still a huge discrepancy. It wouldn’t be the first graphics card that becomes a victim due to chipset errata.

    • sschaem
    • 7 years ago

    So Dirt Showdown is not included in the previous charts because nvidia does poorly because of its limited compute shader capabilities (GI deferred lighting).
    But AC3 is included even though it looks like the game is using some nvidia-specific optimizations?

    ” (We’ve omitted DiRT Showdown because the vast gulf in brand-based performance there skews the results pretty wildly, even though we’re using a geometric mean. Clearly, that game is an outlier of sorts.)”

    But it’s OK to include a brand new, potentially buggy game on AMD HW in this chart?

    7950 about 50% faster than 660 ti in DS
    7950 about 70% slower than 660 ti in AC3

    I smell a rat

      • superjawes
      • 7 years ago

      Last page…

      [quote<]The performance results come from all seven of the games we tested, averaged via a geometric mean to reduce the impact of outliers.[/quote<] The previous results with DiRT did not use geometric mean, therefore it would have skewed the results more than AC3 would, or is, here.

        • Bensam123
        • 7 years ago

        Wait, so TR has started weighting benchmarks? (geometric mean)

        While I disagree with how the initial post is made, I am starting to disagree with TRs methodology. I don’t think it’s TRs job to either automatically interpret statistics and show me only what they think I should see or play god and decide who should and shouldn’t be exempt from testing.

        IF a graphics card performs bad it shouldn’t be given an exception, just as Radeons aren’t. Too much emphasis is now being put on the end results. It’s entirely possible to over analyze statistics and start to shape them in a way that only you want to see (or want others to see), which is starting to look like what is happening here. TR is becoming increasingly fixated on normalization. While I agree it’s a good idea to have a steady frame rate, it’s not a good idea to start analyzing your data in a way that only shows central tendency (even though TR doesn’t report variance).

        For instance, time spent beyond X ms actually has more meaning to me than the 99th percentile. I actually skip the 99th percentile now that I know what it does, as it basically weights the benchmarks.

        That’s why you give people raw data or you do some light processing. I don’t think there are a whole lot of people on here that even know what a geometric mean does; I didn’t, and had to look it up. I’ve taken college level statistics courses and it was never mentioned once. It’s important to point out that this IS NOT A NORMAL AVERAGE or MEAN. It isn’t easy to figure out, either. This most definitely will throw a lot of people off and they’ll just take the end chart for a normal one, which is what I did.

        If you want to show a geometric mean, fine, but make sure you show a normal mean too so people can compare.

          • superjawes
          • 7 years ago

          When you say “normal mean,” do you just mean averaging everything without weights?

          That might be fair, but I think the geometric mean is also important because it helps to solve the issue of games performing abnormally well or worse. That way, you can still include games like AC3, DiRT Showdown, and Hawx without skewing the results completely. That’s really the happiest medium I’ve seen because you can focus your choice on one or two games, but won’t get thrown off because a game you will never play skewed the average toward a specific card’s favor.

          Personally, I think it’s a crusade for what is correct more than what pleases people, which is why so much focus has been put on frame times. The end result is meant to convey the real experience as much as possible, which can’t always be captured by average FPS and histograms.

            • Bensam123
            • 7 years ago

            Yes… I agree that a geometric mean has a place, but it most definitely has to be pointed out (instead of a small note) and it shouldn’t be used as a straight out replacement for a normal mean.

            Abnormally well and worse results are part of the actual data. One of the first things you learn in statistics is you don’t throw out results, even if they don’t agree with your conclusion. You can attempt to rationalize or address them, but you don’t simply remove or obscure them because they don’t fit your hypothesis or conclusion. Benchmarking doesn’t have a hypothesis and it’s not set out to prove one thing, so I don’t know why they’re removed in the first place. Benchmarking is all about simply presenting the information in the most unbiased and unfettered way. Doing what companies can’t or won’t.

            Unless these benchmarks have a ulterior motive…

            • Waco
            • 7 years ago

            I agree with everything above here. I didn’t even notice that when reading through the first time…

            • Bensam123
            • 7 years ago

            I didn’t either! Apparently they’ve been doing it since the inside-the-second benchmark for CPUs, which I didn’t know at the time. I didn’t notice it until one of the comments pointed it out and then Scott pointed out they’ve been doing it since then.

          • vvas
          • 7 years ago

          Weighting benchmarks? What on earth are you talking about?

          The geometric mean isn’t anything super complicated: instead of adding N numbers and dividing by N (arithmetic mean), you multiply N numbers and take the Nth root (geometric mean). Easy. Compared to the arithmetic mean, it’s more robust to getting swayed by outliers, which is the intended outcome here.

            • Bensam123
            • 7 years ago

            But that is weighting in this case, as they want to emphasize stable results. Results with less variance have a better chance of changing the mean. Over normalizing results also ends up making the results less meaningful for describing the overall set of data as well.

    • Cyco-Dude
    • 7 years ago

    i would like another test done using windows 7 x64 instead of windows 8.

      • BestJinjo
      • 7 years ago

      If they redo the test, they should increase PowerTune to +20% in CCC. That’s the only way to get Boost technology to work correctly.

        • Bensam123
        • 7 years ago

        I actually agree with this, not with getting boost to ‘work properly’, but that increasing powertune by 20% increases fluidity tremendously. I’ve witnessed this myself with my own 7870.

        This should be tested with +20% as well.

    • moose17145
    • 7 years ago

    I very much appreciate the hard work that you guys put into this review, so please don’t take this post as my making it out as if I do not appreciate the hard work, time, and effort that went into this review, along with every other review you guys have made. But too many variables have changed from the previous Graphics reviews to this review. Especially from the original 660Ti review to this review. And, there are very contradictory results between the two reviews, leading me to not be quite sure of which review I should listen to. As a few people have stated,

    Variables that have changed from the original 660Ti review to this review are
    – Driver versions (But we have been asking for the latest drivers to get tested, so this is a given)
    – All new / different games (with Skyrim being the sole exception)
    – The in-game test location within Skyrim has changed to an area which should hopefully be more intensive
    – And the Operating System has changed from Windows 7 to windows 8.

    Obviously a few people have been commenting that this review seems counter to every other review out there, and most importantly, this review is counter to TR’s own review that was done on the 660Ti. In the initial 660 Ti review the 7950 and 660 Ti were neck and neck for overall 99th percentile performance in your conclusion. In fact you had to exclude the Dirt: Showdown results to keep them from making the overall value plot overly in favor of the Radeons, and even then the 7950 still tied the 660 Ti in terms of performance. But now with this review the exact opposite seems to be happening. What changed? Well… unfortunately everything except Skyrim has changed, which makes pinning down what happened more difficult.

    When you initially tested the 660Ti in Skyrim, even the slower 7870 tied with the 660Ti, and none of the Radeons produced the jaggy mess of frame times that we saw in this review. And yet AMDs latest drivers should have improved performance over that initial review. Which you confirmed. You stated that you went back to older driver revisions and still saw the jaggy frame times in every driver version, but also that the latest drivers did show improvement upon the older drivers. So that being said, we should expect that the frame times from the initial 660Ti review in the Skyrim test should have gone down… not up and less consistent.

    What could have caused this? Well I can think of a few things. One is that you moved the test scene from inside the town of Whiterun to out into a more open wilderness type of environment. This could mean that the Radeons do well while inside town, but perform less admirably while out in the open wilderness that composes most of the game environment. One possible way of determining this would simply be to head back to your original test location inside Whiterun and perform the original Skyrim test again and see what happens. If the card produces smooth frame times like we initially saw, then it could very possibly be the issue that the Radeons just don’t handle the open environment as well as they handle scenes in town.

    However, if you do go back to the original test scene, and still see the inconsistent high frame times that we saw in this review, then likely that is not the case and something else is at play here, such as the transition from Windows 7 to Windows 8. In this case redoing the testing in windows 7 would seem to be in order to see if that fixes many of the poor showings for the Radeon. It could very well be that the AMD and NVidia drivers just aren’t ready for prime time yet, or perhaps there is an underlying issue with Windows 8 itself. I included NVidia in there as well for their drivers potentially not being 100% compared to the windows 7 drivers, because as a few other people stated, frame times were up across the board, even for the GeForces. Is this because you succeeded in making the benchmark more demanding, or is it because you were running it in windows 8? We can’t really be 100% sure without going back and retesting in both the new more demanding test location, as well as the original location in both Win 7 and Win 8. Doing this would give us two different test locations, one in town and one outside in the wild, as well as giving us a comparison of the general performance between both Windows 7 and Windows 8. This would obviously impose more work on your part, but it would highlight areas where the cards either excel or struggle in the game (wilderness areas vs in town areas), but could also expose any performance issues / differences between the two operating systems.

    Either way, something happened between the initial 660 Ti review and now to make the 660 Ti appear to be more appealing, and not by any small margin either. I guess in my mind these results simply shouldn’t have happened, especially when the two cards were tied in performance in the first review, and that was on drivers that were much older than what was used in this review, so if anything the 7950 should have gotten faster. But it didn’t… why? I do not think that the results are invalid just because every other review site has pretty much shown the 7950 to be the performance winner, in fact I think that makes these results MORE valid. Knowing if the Radeons struggle inside windows 8 or compared to windows 7 would be very important for people who are in the market for a new system because it might help them determine either

    a.) Which video card to go with (AMD vs NVidia)
    b.) Which operating system they want to go with (win8 or win7)
    c.) some combination of the above two.

    I definitely feel like a more in depth look into what happened between the two reviews is in order and why the 660 Ti is now so handily beating the 7950, when previously you had to exclude a test just to bring the 7950 down to par with the 660 Ti. As I said, too many variables have changed to be able to isolate why this is the case. I suspect it has to do with the change from windows 7 to windows 8 (as that is easily the biggest variable change), but as stated… we can’t be 100% sure of that.

    Perhaps a review of Windows 8 performance compared to Windows 7 performance is in order?

      • Meadows
      • 7 years ago

      WaltC, is that you?

        • chuckula
        • 7 years ago

        That post was so long that if someone were to write a concise summary of that post, then I would still post a tl;dr to the summary.

          • Bensam123
          • 7 years ago

          Basically he just said retesting and comparisons should be done as these results, including using Win7 vs Win8, aren’t normal. That was actually one of his last sentences.

          Definitely needs to be more concise though.

            • moose17145
            • 7 years ago

            Sorry the post got so long. I didn’t realize it had gotten so long until I already posted it. I just wanted to express my thoughts on this without it coming across as “I disagree with the results therefore they are invalid!!!”

            • Darkmage
            • 7 years ago

            Because I have an attention span longer than a gnat (being old has its advantages) I actually appreciated the amount of detail you included. Most importantly, I like the fact that you didn’t accuse TR staff of 1) Being biased, 2) Being incompetent, or 3) Being lazy. Hell, you were downright civil.

            I wish more Gerbils were like you.

            • sweatshopking
            • 7 years ago

            yeah. +1 for that.

            • moose17145
            • 7 years ago

            [quote<] I like the fact that you didn't accuse TR staff of 1) Being biased, 2) Being incompetent, or 3) Being lazy. Hell, you were downright civil.[/quote<]

            Thank you! That is what I was trying to go for. Also...
            1.) Because I don't think TR is trying to be biased...
            2.) I don't believe TR is incompetent, in fact their reviews are so detailed and in depth that they are too much for your average layman...
            3.) I understand how much work goes into a review like this, so I know they aren't being lazy...

            Either way, the test results are fascinating to me. I legitimately want to know what's going on / what's causing the 7950 to perform so badly. I don't accuse sites of being biased because they come to a totally wacky conclusion compared to everyone else on the internet. I have been tinkering with computers long enough to know that completely silly and minute differences between two systems can cause that system to behave totally differently to another system with identical / near identical hardware.

            Example... Back in the day when I built my first computer, I bought a FX 5200 (vanilla... not even the ultra version...), and long story short ended up not being very impressed with its performance. So I saved some money and bought a Radeon 9800 Pro (seriously epic videocard). I was expecting this amazing leap in performance... but when I installed it I was very underwhelmed. No matter what I couldn't get 3DMark to post scores anywhere close to what TR was showing it could do. Oh sure I was only running a 2.6GHz P4 instead of the 3.2GHz P4 (or whatever TR's test rig had at the time in it...), but that shouldn't have affected the score by having it be 1/3 to 1/2 the score as what TR was showing. So I tried all kinds of junk to get the system performing better... but to no avail... I was a sad panda feeling like I was lied to about the performance of the card.

            Then one day I was going to be adding some more stuff to my system, and along with that stuff I figured a PSU upgrade should be in order. The instant that PSU had brought the system online I could tell right away that things were running smoother. I reran the 3DMark bench and it beat TR's score! The issue this whole time was that the PSU I had was a POS! It was some 300 watt Antec PSU... I never thought about that being the culprit because Antec had such a good reputation at the time, and from everything I was told / read, a 300 watt PSU should have been plenty to run that system just fine. On top of that, other than performing slower than what I was expecting, it was otherwise rock solid, no crashes, instabilities, nothing... and I had been told and read everywhere that usually if the machine's PSU isn't up to snuff then the system should be experiencing some crashes, BSODs, general instability, etc.

            So I understand how you can have a totally different experience than everyone else because of some minor difference between the two machines. Same thing with software. I have seen one installed program running in the background totally mess around with how a system performs. But I hardly would say that makes a test invalid...

            • sparkman
            • 7 years ago

            tl;dr

      • Bensam123
      • 7 years ago

      Yup some retesting and comparisons were in order with these results, but sadly they weren’t done. Win 7 should’ve been compared as well as other areas in games where these cards have been known to perform differently (so you can rule out other sources).

    • JuniperLE
    • 7 years ago

    I wonder if both cards (from AMD and NVIDIA) have the exact same input latency? It's common to see added smoothness achieved with additional input latency.

    • yogibbear
    • 7 years ago

    I think the AC3 results should be thrown out. There is obviously a HUGE bug from either Ubisoft or AMD in how that game works on their cards, and it's not fair. As an Nvidia owner, I think it looks stupid to include these results but toss out the Dirt 3 results in previous tests.

    See this thread of some 70+ pages of AMD owners whinging:
    [url<]http://forums.ubi.com/showthread.php/727804-Low-fps-in-some-locations-probably-found-a-reason/[/url<]

      • superjawes
      • 7 years ago

      ::ahem::

      [quote=”Damage”<]Guys, just wanted to point out that we used a geometric mean to average our overall results for those final scatter plots: [url<]http://en.wikipedia.org/wiki/Geometric_mean[/url<] We switched to the geo mean in response to the controversies over the exclusion of games like Hawx 2 and DiRT Showdown in past reviews, when they seemed like brand-skewed outliers. With the geomean, particularly low scores should have less influence on the overall result, without us having to fret over what's fair to both sides. The geo mean may not be a perfect solution, either, but I doubt there is one. We're hoping this change is a worthwhile improvement, at least. Thought you should know![/quote<]

      Agreed that there are issues, but it's not a strict 1:1 or a wholesale tossing out of outlying data. By taking a geometric mean, AC3's impact on the final results is less significant. And actually, Scott provided the value graphs sans AC3 in another response.

        • yogibbear
        • 7 years ago

        Changes the average FPS / $ plot in the complete opposite direction and then some….

        Then assuming the latency issue is Windows 8…. the entire conclusion is different for 95% of gamers.

          • superjawes
          • 7 years ago

          I also think Scott mentioned that he’s imaging some disks to investigate that as well. I am curious to see if 7 -> 8 makes a substantial difference.

          As for FPS perspective, average FPS also hides the frame latency issues the Radeons had this time around. The 99th percentile results were virtually unchanged when excluding AC3.

          My takeaway from this review is that I’m glad I’m not buying hardware right now. The 7950 (I think) still has the sweet game deal, but the 660 Ti performed better. Hopefully the latency issue disappears before the 8xxx comes out next year.

            • yogibbear
            • 7 years ago

            Yeah, I’m not buying either… and personally, the only game I don’t own from either bundle IS AC3… Just saying that there are some significant issues with leaving this article without a follow-up explaining some of the quirks.

      • Bensam123
      • 7 years ago

      Yeah, this is starting to turn into a contention point… Make an exception for one game and all of a sudden you have to start making exceptions for others too. Kinda funny, they just simply removed Dirt from the benchmarks.

    • Chrispy_
    • 7 years ago

    I'm not surprised the 660Ti has come out on top; the 7950 has been on the market almost nine months longer than the GeForce, which is [i<]A Big Deal™[/i<] in today's market - with 12-month product cycles, the 7950 has spent most of its life competing against previous-gen Fermi solutions and will soon be replaced by the 8000-series.

    Despite the awesome inside-the-second testing here, [b<]I am actually dubious about the results[/b<]. I bought a 7950 to handle my Korean screen's 2560x1440, and I have sunk almost 200 hours into Borderlands 2 at almost exactly the same settings you guys tested at, the only difference being that I run with VSync [i<]on[/i<].

    According to [url=https://techreport.com/r.x/7950-vs-660ti/bl2.gif<]this plot[/url<] the 7950 crosses the 33.3ms frametime threshold [b<]a lot[/b<], and with VSync on, I would notice the drop to 20fps without question. Even if it was just one frame of high latency, it would be evident as a skip, and I'm not seeing that: I am getting consistently smooth, skip-free gaming at 60fps (under 16.7ms), barely ever seeing a missed frame (which would mean a render time of between 16.7 and 33.3ms). Three refresh intervals before an update, caused by render times of over 33.3ms, is totally unacceptable to me, even if it only occurs rarely during heavy firefights. I would [b<][i<]DEFINITELY[/i<][/b<] have lowered the settings if I was getting that sort of performance.

    [i<]Edit: Actually, I see you have PhysX set to "low", whilst on my machine PhysX is greyed-out, probably because I uninstalled the PhysX system software. Could this be a factor, or just a red herring?[/i<]

      • alienstorexxx
      • 7 years ago

      Use RadeonPro!!! Force triple buffering. It's double buffering on AMD, and that's why the game drops to 30fps; the Gearbox developers broke it in the second update. You can also play in windowed fullscreen mode if you use the same screen res on the desktop, so you don't need vsync.

        • Chrispy_
        • 7 years ago

        What exactly has that got to do with my post? :confused:

          • alienstorexxx
          • 7 years ago

          Before you edited, you were complaining about fps drops from 60 to 30; that's because of vsync double buffering.

            • Chrispy_
            • 7 years ago

            The edit was from 30 to 20fps, because I mistakenly divided one second by 33.3ms to get 30. Actually, when it's [i<]over 33.3ms[/i<], vsync will default to the next interval, which is 50ms - giving an effective 20fps, not 30fps.

            Your comments show that you don't appear to quite understand how vsync works. Rather than patronize or attempt to condense a complex issue into one paragraph, I'd heartily recommend you read [url=http://www.anandtech.com/show/2794/2<]this article[/url<] instead. With a full understanding of the whole vsync and frame-delivery process, you will likely get more out of TR's inside-the-second articles, too.
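For readers following the arithmetic Chrispy_ is describing, here is a minimal sketch of how double-buffered vsync quantizes frame delivery on a 60Hz display (the render times are hypothetical, and this assumes no triple buffering):

import math

REFRESH_MS = 1000 / 60  # one refresh interval on a 60 Hz display, ~16.7 ms

def effective_interval(render_ms):
    # With plain double-buffered vsync, a finished frame waits for the next
    # refresh, so its on-screen time is the render time rounded up to a
    # whole number of refresh intervals.
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

# Hypothetical render times, not measured data
for render_ms in (10.0, 20.0, 34.0, 55.0):
    shown = effective_interval(render_ms)
    print(f"{render_ms:5.1f} ms render -> shown for {shown:5.1f} ms "
          f"(~{1000 / shown:.0f} fps effective)")

A 34ms render just misses two refreshes and is held until the third, which is the 50ms / 20fps case described above.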

      • l33t-g4m3r
      • 7 years ago

      Dunno, I have trouble trusting numbers from any games that use PhysX. It’s about time developers use something else if they really want fancy particle effects, but I guess it’s not about what’s best for the consumer, but what kickbacks they can get from nvidia.

      • chuckula
      • 7 years ago

      Yes, the 7950 has been out for 9 more months… meaning that AMD has had more time to come up with driver optimizations, which should have given the 7950 the advantage. The silicon doesn’t change between these cards over time, and the fact that the 660 Ti launched later should actually give the advantage to AMD, not Nvidia. I’m also a little surprised that the 7950 is being compared only to the 660 Ti… I was always under the impression that the 7950 was only half a step below the flagship 7970, which would make the real competitor the older and bigger GTX 670 instead of the 660 Ti…

        • BestJinjo
        • 7 years ago

        The company that launches 2nd generally has the advantage because they can see exactly where they can price the product to fit into the marketplace. Also, unlike GCN, Kepler is not a new architecture, but just a rebalanced Fermi. Therefore, it is NV who has the advantage with more mature drivers out of the gate since they had 2 years to work on them under Fermi. Did you forget how immature Fermi drivers were at launch given that it was NV’s brand new architecture?

        For your 2nd point, to make it fair, GPUs are generally compared on price. If you can buy a GTX670 for $280 USD on Newegg, then a comparison to the GTX670 for a prospective buyer would be perfectly valid. Right now GTX660Ti is priced very similarly to HD7950 boost cards, while you can find HD7970 1Ghz versions for $360-370, which is similar to GTX670s.

    • phez
    • 7 years ago

    Time to throw in the towel, AMD.

    • alienstorexxx
    • 7 years ago

    Damn, even on these high-end graphics cards AC3 has that stuttering? What a shame. On the Ubisoft forums, official Ubisoft moderators said it happens to users who downloaded the pirated version. LoL!!

    • Damage
    • 7 years ago

    Guys, just wanted to point out that we used a geometric mean to average our overall results for those final scatter plots:

    [url<]http://en.wikipedia.org/wiki/Geometric_mean[/url<]

    We switched to the geo mean in response to the controversies over the exclusion of games like Hawx 2 and DiRT Showdown in past reviews, when they seemed like brand-skewed outliers. With the geomean, particularly low scores should have less influence on the overall result, without us having to fret over what's fair to both sides. The geo mean may not be a perfect solution, either, but I doubt there is one. We're hoping this change is a worthwhile improvement, at least. Thought you should know!
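For anyone curious how the two averages behave around an outlier, here is a minimal sketch using made-up per-game 99th-percentile frame times (not numbers from this review):

import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # exp of the mean of the logs, i.e. the nth root of the product
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical 99th-percentile frame times (ms) across five games,
# with one brand-skewed outlier -- made-up numbers, not review data
frame_times = [18, 20, 22, 19, 80]

print(f"arithmetic mean: {arithmetic_mean(frame_times):.1f} ms")  # ~31.8 ms
print(f"geometric mean:  {geometric_mean(frame_times):.1f} ms")   # ~26.1 ms

The outlier drags the arithmetic mean up by roughly 12ms but the geometric mean by only about 6ms, which is the dampening effect the switch was meant to provide.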

      • jimbo75
      • 7 years ago
        • derFunkenstein
        • 7 years ago

        why don’t you get bent?

        • Damage
        • 7 years ago

        Oh, sure. Interesting question.

        For the record, we switched to the geomean back in August and first deployed it in this review:

        [url<]https://techreport.com/review/23246/inside-the-second-gaming-performance-with-today-cpus[/url<]

        So that's the timing.

        Anyhow, I pulled the AC3 results and made new value plots. Here's the 99th percentile frame times:

        [url<]https://techreport.com/r.x/7950-vs-660ti/value-99th.gif[/url<]
        [url<]https://techreport.com/r.x/7950-vs-660ti/value-99th-noac3.gif[/url<]

        And here's average FPS:

        [url<]https://techreport.com/r.x/7950-vs-660ti/value-fps.gif[/url<]
        [url<]https://techreport.com/r.x/7950-vs-660ti/value-fps-noac3.gif[/url<]

        Looks like the removal of AC3 results has virtually zero impact on the 99th percentile results, but it does move the needle enough on the FPS average to give the 7950 a slight edge in overall perf. You guys can decide what to make of that. I may consider just leaving out AC3 in the future, if the gap remains constant over time. Hrm.

        However, I would caution against making too much of an FPS average. The frame latency problems of the Radeons in our tests are real, and they affect the smoothness of gameplay in ways you can feel. That is the core of our findings, and it is not changed one bit by a slightly higher FPS average for one card or the other.

          • jimbo75
          • 7 years ago
            • superjawes
            • 7 years ago

            [quote<]However, I would caution against making too much of an FPS average. The frame latency problems of the Radeons in our tests are real, and they affect the smoothness of gameplay in ways you can feel. That is the core of our findings, and it is not changed one bit by a slightly higher FPS average for one card or the other.[/quote<]

            ::sigh::

        • alienstorexxx
        • 7 years ago

        I think AC3 may have an important influence on the final result, but I don't think they did it on purpose. They are just testing the latest games, plus probably the most-played game of the year, which is Skyrim. There is no point in testing BF3 because MOH uses the same engine and will probably give the same results. DiRT Showdown is by far the worst DiRT ever. And if you think about it, there are no more games that could put AMD in the place it should be, which is no more nor less than 20 bucks ahead on performance.

      • superjawes
      • 7 years ago

      I definitely prefer that method to one of throwing out data.

      It might take a lot more work, but I think the only way to improve on the effect of outliers is to weight each result with a statistical bias. Again, not perfect (and I’m not sure exactly how you might calculate this), but when you have a game that clearly favors a manufacturer (Hawx or DiRT), you could weight those data sets less than games that correlate better with card power (clock speed, etc.).

      You might also be able to include older data sets this way by gradually giving older tests less and less weight until including them has no appreciable impact on final results.

      (I may or may not have enjoyed Nate Silver’s poll analysis this year.)

      • Bensam123
      • 7 years ago

      So you add them, then simply normalize them? That’s not the same as adding them, and most people don’t know what a geometric mean is (I didn’t, so I looked it up). I didn’t even notice this till I read about it in a different post.

      If you guys are going to weight an average, I’d suggest adding a normal average for people to compare the two.

    • rechicero
    • 7 years ago

    After reading this great article, I must say we need just a “control” game test with Win7. Just one test, to check whether this is Win8-related (in which case, who cares?) or something we should take seriously when choosing a GPU.

    As the Steam survey shows, this review is only useful to the 4.7% of gamers who use Win8.

    I insist, you don’t need the whole battery of tests, just one, to check the consistency of the results.

      • spuppy
      • 7 years ago

      This article showed that the Radeon definitely has issues with Windows 8 (and it uses the same testing methods as here; I believe it's the only other site to do so), and that was with older drivers/games/video cards:

      [url<]http://www.hardcoreware.net/windows-7-vs-windows-8-performance/[/url<]

      AMD is still at fault for not coming up with decent drivers 4 months after the stable release of the OS, while Nvidia sees performance improvements almost across the board in Windows 8.

        • rechicero
        • 7 years ago

        Well, if that’s true, this article is pointless. Windows 8 is 4.7% of the gamer world (right now). You can’t say “this GPU is better” when testing it with an OS that virtually nobody uses and when one of the tested items has problems with that OS.

        I don’t care whether AMD is at fault or not; I only care whether the 660 Ti is much better than AMD’s GPU, as it appears to be here, or not.

        What I can’t understand is why Scott took the time to swap the cards but didn’t think of a control test with Win7. Scott, maybe you should say something about this. The validity of the data seems more important than the visualization.

          • Damage
          • 7 years ago

          On the contrary, if the Radeon’s latency spikes stem from general problems with Win8 performance, that’s big news, in my view. I’m imaging some disks now and will see about testing that theory. Will take some time to do properly, though, so bear with us!

            • MrJP
            • 7 years ago

            Great news. My 7950 and Windows 8 installation files (unused as yet) wait with bated breath…

            • rechicero
            • 7 years ago

            Of course. Thanks a lot for taking the time to see into the matter!

            • Chrispy_
            • 7 years ago

            This is why we love you Scott.

    • MrJP
    • 7 years ago

    In the Borderlands 2 “Time spent beyond 50ms” metric, how is it possible for the two cards to score 15ms and 33ms respectively? Surely the minimum (non-zero) aggregate time for frames above 50ms would be 50ms, i.e. a single frame just over the 50ms cut-off. Isn’t it impossible to get a number between 0 and 50 in this metric?

    As with other commentators, I think the 7950 results do look strange in many of these games, and in some cases very much like what you’d expect to see from a multi-GPU setup. Could this be some strange Windows 8/driver/FRAPS interaction?

      • Damage
      • 7 years ago

      Good observation, but you’ve misinterpreted what we’re reporting–which is, I’ll admit, perhaps not explained as well as it should be. What we’re doing is adding up all of the time spent *beyond* the 50-ms threshold, not all of the time spent in *total* working on frames that take longer than the threshold.

      For instance, if a frame took 80 ms to render and the threshold was 50 ms, this frame would add 30 ms to the “time spent beyond x.”

      We want to penalize the really long waits, not a series of 51-ms frames.

      We used to just count the number of frames that took longer than 50 ms, but we decided to get smarter about it after some discussion with friends and readers. We explained our move to this new method here:

      [url<]https://techreport.com/review/22151/nvidia-geforce-gtx-560-ti-448-graphics-card/2[/url<]

      Hope that helps. Thanks for taking the time to clarify!
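In code terms, the metric described above amounts to something like this minimal sketch (the frame times are made up, not the review's data):

def time_spent_beyond(frame_times_ms, threshold_ms=50.0):
    # Sum only the portion of each frame time that exceeds the threshold
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical frame times in milliseconds
frames = [16.7, 16.7, 80.0, 51.0, 16.7]

print(time_spent_beyond(frames))           # 31.0 -> 30 ms from the 80 ms frame plus 1 ms from the 51 ms frame
print(sum(t for t in frames if t > 50.0))  # 131.0 -> the "total time on slow frames" reading, which is not what the graphs report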

        • MrJP
        • 7 years ago

        Thanks for taking the time to explain!

    • Tonim89
    • 7 years ago

    “We will test the latest game releases but won’t test MOH:W, instead we will create a new methodology that doesn’t make any sense.”

      • internetsandman
      • 7 years ago

      What new methodology? Inside the second? They’ve been doing this for months, and have explained it very well in every previous GPU review

      • Dissonance
      • 7 years ago

      Looks like you missed the page with MOH:W results:

      [url<]https://techreport.com/review/23981/radeon-hd-7950-vs-geforce-gtx-660-ti-revisited/8[/url<]

    • deruberhanyok
    • 7 years ago

    Scott,

    I really dig TR’s “new” review format. The focus on latency really makes a big difference in how the results are compared. And this review, in particular, has me leaning towards a GeForce card for the first time in 4 years.

    Anyways, as I was reading this over it reminded me of something from way, way back in the day I wanted to share. It’s not directly relevant, but I’m feeling nostalgic today. 🙂

    I don’t know how many of you remember “bleem!”, the PlayStation emulator software. This was the one legally sold in stores because it didn’t use any sort of BIOS image or other proprietary code. I can’t remember the name of the main programmer, but I remember a conversation with him about graphics drivers at the time (this was the TNT2, GeForce 256 and GeForce 2, Voodoo 2 SLI and Voodoo 3 era).

    We were discussing the “stuttering” effect that happened on some games for those of us using nvidia hardware – this was on the bleem! forums, and a lot of people were trying to figure out the cause – when the programmer joined us, saying he’d spoken to nvidia about the issue in their drivers. Apparently it had something to do with what was called a “texture swizzler” (funny name, so of course I remember that). If I remember correctly, the textures were being loaded in a way that caused this stutter effect, and it only appeared in certain games / programs. Bleem! was one of them.

    There was some back and forth with nvidia and the end result was that, well, they weren’t going to fix anything and that was that. Years later I remember still having the problem, until the GeForce 3 cards came out. Maybe it was a change in DX8 that fixed it, or maybe it was the newer architecture. Idunno. But when I got one I went back and tried a few older games – the problem was gone.

    Eventually Bleem! disappeared – sales weren’t enough to continue development, I guess, and it was outpaced by the more “grey area” emulator programs and interest dried up.

    I don’t know why it didn’t occur to me before, but while I’m sure “texture swizzlers” no longer exist, the end result was the same: you could have a much higher reported average frame rate and still have a gameplay experience that wasn’t really “smooth”. That stuttering was far more annoying than a slower average FPS if the slower average was constant.

    So: thank you, TR, for changing the way we can compare this stuff. I like to think we see driver improvements like this as a direct result of the work you do, and in that case, no matter which card is faster / better / more bang for buck, everybody wins.

    • sweatshopking
    • 7 years ago

    CONSPIRACY, GUIZE!

    • lethal
    • 7 years ago

    Woohoo, new Max Payne: Hitman Absolution!

    [quote<] Hitman: Absolution In this game, Max Payne has sobered up and gotten a job with a shadowy government agency, yet somehow things still went totally sideways. He's decided to stop talking about it so much, which is a relief. [/quote<]

    • krutou
    • 7 years ago

    Instead of using the ‘Frame Time by Milliseconds’ graph, reporting 99th percentile times and reporting time spent beyond X ms, it’s probably more intuitive to use a histogram of the frame times. A histogram would easily be able to show users ‘oh shit, this or that card has, on average, higher frame latencies’. Then, to be scientific, you would give a mean frame latency and a standard deviation.

    The ‘Frame Latencies by percentile’ graph can stay though.
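For what it's worth, the mean-and-deviation summary suggested above is easy to compute from the same frame-time data; here is a minimal sketch with made-up numbers (not data from this review):

import statistics

# Hypothetical frame times (ms), not data from this review
frame_times = [16.5, 17.1, 16.8, 42.0, 16.9, 17.3, 55.2, 16.7]

print(f"mean frame time: {statistics.mean(frame_times):.1f} ms")
print(f"std deviation:   {statistics.stdev(frame_times):.1f} ms")

# A crude text histogram with 10 ms bins, in the spirit of the suggestion
for lo in range(10, 60, 10):
    count = sum(lo <= t < lo + 10 for t in frame_times)
    print(f"{lo:2d}-{lo + 10:2d} ms | {'#' * count}")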

      • JustAnEngineer
      • 7 years ago

      +1 for bringing up the histogram suggestion again.

      • spuppy
      • 7 years ago

      Can you share an example of this?

        • superjawes
        • 7 years ago

        [url=http://en.wikipedia.org/wiki/Histogram<]WikiWiki[/url<]

        This could be problematic if there are too many cards, but then I would just make buttons to switch between cards and card families. Basically, a histogram would show a quantitative measure of where the frame times are.

        EDIT: That's right, Scott tried that in the article he linked below this one. By putting frames in "buckets," you lose the idea that a random spike in frame time interrupts the flow, resulting in a choppier experience.

      • cegras
      • 7 years ago

      I suggested mean and std dev values a while back; TR did not seem receptive. Frame time graphs are raw data and do not really convey much. Besides, the hardest part is gathering the data. I can work up the data 10 different ways in MATLAB with scripting.

      • Damage
      • 7 years ago

      Thanks for the feedback. Not every data set lends itself to the same set of tools for visual presentation. We have tried the histogram approach, and we discussed it here, with examples:

      [url<]https://techreport.com/review/22151/nvidia-geforce-gtx-560-ti-448-graphics-card/2[/url<]

      It really just didn't show us what we'd hoped. For a real-time system that is extremely latency-sensitive, I believe the way we're presenting our data is more fitting.

      I've not been persuaded that mean-plus-deviation results are entirely helpful, either, for several reasons. For one, we are addressing a broad audience and want to make the info we present easy to interpret for all. Also, back to the real-time system issue, the problem we wish to highlight is long frame times, not short ones--and we are interacting with a display subsystem that has certain limits. Placing a negative value on "variance" itself without reference to our goal of consistent frame delivery within certain time limits doesn't necessarily accomplish what we need. Variance on the low side of the mean isn't a bad thing, for one.

      Anyhow, I am open to other suggestions and am way overdue in discussing the suggestions one reader, jensend, has given us regarding mean squares as a possible avenue for quantifying overall performance. We may yet find better ways to present things. However, I think we have to keep the goals of easy readability and appropriateness to the task in mind.

        • krutou
        • 7 years ago

        I meant frequency plot, which are usually represented as histograms. Your graph for the 560 Ti was a ‘bar’ time plot.

        I’ve created an example here:

        [url<]http://filebin.ca/OyTZYC1OwQr/FrameTimeHistogram.xlsx[/url<]

        The most important graph to emphasize is the 'frame latencies by percentile' graph. Especially important is to highlight the slope of the graph and the skew at the end.

          • Damage
          • 7 years ago

          Did you scroll down the page I linked? I included several styles of visualizations, including one like your example. Unless you somehow process the data differently, though, the presence and impact of much longer frame times isn’t well represented in a histogram plot of any type.

            • krutou
            • 7 years ago

            You’re using 5 bins. Rule of thumb is 10 bins. I’d suggest 15 bins or so. It might help to use more than 2000 frames although that might be a pain if you’re calculating frame times by hand. I’d also delete any leading/trailing bins that say ‘0’.

            • Damage
            • 7 years ago

            I count 10 bins, and those trailing ones are non-zero, although it’s not visually obvious. That’s the problem.

            • Firestarter
            • 7 years ago

            All three diagrams are equally horrible though. Have you considered just putting the raw data of an old article (say, the first 7970 review?) out there so that us gerbils can have a go at visualizing it? I mean, I know getting the data is a huge part of what you do for a living, and giving it away is not unlike publicly broadcasting company secrets, but I think there are TR readers out there who could manhandle any data that you gathered in those tests and spit out very readable histogram graphs.

            • derFunkenstein
            • 7 years ago

            We can do it ourselves with FRAPS if we’re terminally bored.

            • Bensam123
            • 7 years ago

            And have the exact same hardware used in the tests…

            • MrJP
            • 7 years ago

            It’s really quite simple to do this yourself with FRAPS and then dump the data into Excel for processing. Once it’s in, you can then play around with lots of different styles of analysis. Use the PERCENTILE function to slice up the data for the latency plots. You can even change the presentation to equivalent FPS rather than frametime if that’s your preference 😉

            The tedious part is importing the raw data and playing around with formula ranges when you have big differences in the number of frames across different benchmarks. Must be very dull when you’re doing as many different benchmarks as Scott, though more intelligent scripting/macros could help significantly.
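If the spreadsheet gets tedious, the same percentile slicing is only a few lines of Python; this is a minimal sketch with made-up frame times (parsing them out of a FRAPS dump, and converting cumulative timestamps to per-frame times if needed, is left to the reader):

import statistics

# Hypothetical per-frame render times in ms, already parsed from a log
frame_times = sorted([16.5, 17.0, 16.8, 33.4, 17.1, 52.0, 16.9, 17.2])

def percentile(sorted_times, pct):
    # Simple nearest-rank percentile; Excel's PERCENTILE interpolates,
    # but the idea is the same
    idx = round(pct / 100 * (len(sorted_times) - 1))
    return sorted_times[idx]

for pct in (50, 90, 95, 99):
    print(f"{pct}th percentile: {percentile(frame_times, pct):.1f} ms")
print(f"mean: {statistics.mean(frame_times):.1f} ms")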

          • superjawes
          • 7 years ago

          The page he linked is a histogram (titled: Battlefield 3 – distribution of frame times).

          The issue is that you still end up with a lot of frequency on lower frame times, but the frequency of a particular frame time doesn’t directly translate to a better or worse experience, because one or two extremely high times (time spent beyond x seconds) generally result in a poorer experience.

          Imagine you’re driving between two cities on the interstate. One trip takes you about three hours, your average speed is 60 mph, but you spend an extra thirty minutes stopped because of an accident. On another day, there has been some wintry weather and your average slows down to 50 mph, but you don’t spend any time in stopped traffic. Even if your second trip takes a little longer, you avoid the stress of being stopped on the interstate.

          That long stop is what the results should penalize, because you will remember it most vividly, but that data is marginalized when you start any sort of averaging. When your average frame time is 25 ms, one 200 ms frame is more noticeable than five 66 ms frames, despite there being a higher frequency at 66 ms.
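Putting rough numbers on that analogy, here is a minimal sketch with two made-up frame streams: the averages barely differ, and a raw frequency count even points at the other stream, while "time beyond 50 ms" singles out the big hitch.

def avg_frame_time(frames):
    return sum(frames) / len(frames)

def time_beyond(frames, threshold=50.0):
    return sum(t - threshold for t in frames if t > threshold)

# Two hypothetical frame streams over the same scene: a steady 25 ms baseline
# plus either one 200 ms hitch or five 66 ms slow frames (made-up numbers)
hitch = [25.0] * 100 + [200.0]
sludge = [25.0] * 100 + [66.0] * 5

for name, frames in (("one 200 ms hitch", hitch), ("five 66 ms frames", sludge)):
    print(f"{name}: avg {avg_frame_time(frames):.1f} ms, "
          f"{sum(t > 50 for t in frames)} frames over 50 ms, "
          f"{time_beyond(frames):.0f} ms spent beyond 50 ms")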

      • Bensam123
      • 7 years ago

      Yeah it would be nice to show variance and standard deviation, but TR has been resistant to this for some reason. I’ve mentioned this numerous times before. I personally skip 99th percentile frame times and go to the time spent beyond Xms.

    • Arclight
    • 7 years ago

    That’s a pretty clear win, though i’m still shocked at the difference in size between those two cards.

    • internetsandman
    • 7 years ago

    I still think Satan's Sinkhole would have been a better test area for Borderlands 2, but it's great to see a refreshed game lineup; not so great to see the Radeon struggle like that.

    • albundy
    • 7 years ago

    Although many games show the cards being on par, the large fluctuations in other games tell another story, like which gaming company is in bed with a certain GPU maker. To each his own.

    • raghu78
    • 7 years ago

    There is some problem here. Were the latest AMD CAPs installed? The CAPs are advised even for single-GPU installations. Also, were the games tested patched to the latest available patches? Something does not agree here.

    Games like Medal of Honor: Warfighter are clearly faster on the HD 7950 boost. In fact, the HD 7950 boost is equal to the GTX 670, and 30% faster than the GTX 660 Ti. HardOCP shows the HD 7950 boost faster than the GTX 670 at 1440p with 4x MSAA, and the HD 7950 boost with 4x MSAA matches the GTX 660 Ti with 2x MSAA. It's not even funny. No amount of factory overclock can make up a 30% performance deficit.

    [url<]http://www.techspot.com/review/603-best-graphics-cards/page8.html[/url<]
    [url<]http://www.hardocp.com/article/2012/11/07/medal_honor_warfighter_gameplay_performance_review/4[/url<]
    [url<]http://www.guru3d.com/articles_pages/medal_of_honor_warfighter_graphics_vga_performance_review,6.html[/url<]

    The worst part is that here the GTX 660 Ti is ahead of the HD 7950 boost at 1440p with 4x MSAA, a setting where the ROP- and bandwidth-crippled GTX 660 Ti normally gets hammered. Maybe techreport has some axe to grind against AMD.

    [url<]https://techreport.com/blog/23638/amd-attempts-to-shape-review-content-with-staged-release-of-info[/url<]

      • BestJinjo
      • 7 years ago

      Exactly! Thank you for confirming the exact same things I pointed out. Skyrim benchmark is even worse as GTX680 loses to HD7950 boost at high resolution in that title and here GTX660Ti beats it. AMD has a very large advantage in Skyrim. It’s not even close:

      [url<]http://media.bestofmicro.com/9/M/348970/original/Skyrim%202560.png[/url<]

      Or with the latest Cat 12.11s:

      [url<]http://images.hardwarecanucks.com/image//skymtl/GPU/12-11-DRIVERS/12-11-DRIVERS-72.jpg[/url<]

    • Jigar
    • 7 years ago

    In short, AMD's Windows 8 driver currently sucks. Run these games on a Windows 7 machine and the HD7950 would be all over the GTX 660Ti.

    • Novuake
    • 7 years ago

    OK so a VERY biased selection of games… BUUUUT it was said that they selected the latest releases and Nvidia has much better release date drivers on most games as they spend A LOT more cash and work more closely with game developers prior to launch. Give AMD a few weeks and they will catch up…

    • mockingbird
    • 7 years ago

    Exactly as I’ve always said, ATI/AMD’s drivers are not quite up to par with nVidia’s, and their cards should be avoided. With both of these cards averaging $300, it’s a no-brainer: the 660 Ti is far, far superior.

    Why didn’t you test Civilization 3?

      • MadManOriginal
      • 7 years ago

      Probably because it’s a 10 year old game that literally any current graphics card can play. 🙂

      • BestJinjo
      • 7 years ago

      I’ve used many ATI/AMD and NV cards over the years and never had any major issues with either that weren’t resolved over time. You make it sound like ATI/AMD drivers are broken. My guess is you’ve never used HD4000-6000 series at all. Back to the review. Why is nearly every other professional review that tested these cards under Windows 7 reporting the opposite findings? You suddenly think everyone else is wrong but there isn’t some possible issue with the OS/drivers in this review?

      How do you explain that in other reviews that just recently tested the HD7950 with boost, those cards are neck and neck with the GTX670 in framerates, after testing 18 games!

      [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/28.html[/url<]

      In this November 2012 review, the HD7950 with boost beat the GTX660Ti without any problems:

      [url<]http://www.hardocp.com/article/2012/11/12/fall_2012_gpu_driver_comparison_roundup/8[/url<]

      Same story for the HD7970Ghz vs. GTX680 in the latest reviews:

      [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/57413-amd-12-11-never-settle-driver-performance-17.html[/url<]

      Or MSI GTX680 Lightning vs. HD7970 Ghz Edition Matrix:

      [url<]http://www.youtube.com/watch?v=sJaoY0-kfk8[/url<]

      You are saying all these reviews are wrong, but when you look at the framerates in the final conclusion chart at TR, the GTX660Ti is beating the HD7950 950MHz on average. Please find any other recent professional review that shows this under Windows 7. Good luck!

        • MadManOriginal
        • 7 years ago

        I didn't look at every single game in every single review, but I checked a few different ones. The results are similar [b<]for the games tested by TR[/b<]. Some sites do test cards with many more games; if you want to take that average, great, but that doesn't make TR's results 'wrong' when in fact they align with those on other sites.

          • BestJinjo
          • 7 years ago

          The results are not similar at all. I just posted a detailed comparison of Medal of Honor Warfighter and Skyrim. The results are completely off the mark, especially in Skyrim. Go see my response to Silus above. In Skyrim, HD7950 boost beats GTX680 at 2560×1440/1600 and if you go ask people who play the game, this will be confirmed. It’s not only looking at the average of 18 games, but diving deeper into the FPS breakdown of specific games tested at the resolutions in this review. If you had read this review and others in detail, it would have been clear to you that when you see GTX660Ti winning in Skyrim or MOH:W at 2560×1440, those results jump out as red flags. It would be no different than if somehow AMD won in Assassin’s Creed 3 in 1 review when 10 others show the opposite. Skyrim is literally 30-40% faster on HD7950 boost cards at 2560×1440/1600 with AA. The results here just don’t make any sense for that game.

      • mockingbird
      • 7 years ago

      Oops, I meant Civilization 4.

        • Bensam123
        • 7 years ago

        You mean Civ 5?

      • Fighterpilot
      • 7 years ago

      Dude, an overclocked 7950 shreds a 660Ti like confetti… If you are going to make such uninformed claims, at least choose a card like the 670, which is somewhat in the same performance league.
      The only “no brainer” is, unfortunately, you.

        • raghu78
        • 7 years ago

        Yeah, an HD 7950 boost shreds a GTX 660 Ti when both are compared at stock. An overclocked HD 7950 (1.2GHz) beats a GTX 670 (1.3GHz). At the same clocks, the HD 7950 is 3-7% (avg 5%) slower than the HD 7970.

        [url<]http://hexus.net/tech/reviews/graphics/34761-amd-hd-7950-vs-hd-7970-clocks/?page=2[/url<]

        MOH Warfighter, Skyrim, Hitman: Absolution, and Sleeping Dogs are significantly faster on the HD 7950 boost.

        Skyrim, TPU - at 1600p the HD 7950 boost is close to 50% faster than the GTX 660 Ti:
        [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/22.html[/url<]

        TechSpot - at 1600p the HD 7950 boost is close to 50% faster than the GTX 660 Ti:
        [url<]http://www.techspot.com/review/603-best-graphics-cards/page9.html[/url<]

        Every other site puts the HD 7950 boost on par with the GTX 670, so TR should run these tests on Windows 7 SP1 to address the majority of users who still use Windows 7. Windows 8, in fact, is having a torrid time with terrible reviews, and many people are going to prefer Windows 7 for its traditional desktop design focus.

        [url<]http://www.nngroup.com/articles/windows-8-disappointing-usability-both-novice-and-power-users/[/url<]
        [url<]http://www.sfgate.com/technology/article/Windows-8-best-to-pass-it-up-review-4025070.php[/url<]

    • Fighterpilot
    • 7 years ago

    So…you are recommending a 660Ti over a 7950?
    You must be tripping.
    No BF3 results or tests?…oh wait that’s right…AMD cards own BF3 these days…got it.

      • BestJinjo
      • 7 years ago

      This new TR testing methodology seems to be raising more questions than answers when the results are a complete 180° from every other professional review site that tested the HD7950 boost vs. GTX660Ti recently. I am not even sure it reports the smoothness of gameplay properly. When I look at a frames-per-second spread over the testing period, I can clearly compare two GPUs and compare the frame rate dips of one to the other over the testing period.

      [url<]http://www.hardocp.com/images/articles/1354536281d5FMDGIb5r_4_6.gif[/url<]

      In that graph, you can tell right away that the HD7970Ghz provides a smoother gaming experience and maintains higher minimum frames for the majority of the test run. I still think there are some driver/W8 OS issues here.

        • Firestarter
        • 7 years ago

        The graph that has me really scratching my head is the Skyrim graph:

        [url<]https://techreport.com/r.x/7950-vs-660ti/skyrim.gif[/url<]

        I mean, here we see the 7950 supposedly taking 5ms for one frame (that's 200fps) but over 25ms for the next (40fps). We know that the 7950 would not be able to actually produce a frame of Skyrim's landscape in just 5ms, and we also know that it shouldn't take 25ms. The only way I can explain this is that a frame has been generated, but the display is being delayed even while the next frame is already being drawn.

        Anyway, if 5ms and 25ms frames were alternating like they do in that graph, the game would feel [i<]extremely[/i<] jittery, in the 'what the hell is wrong with my computer' kind of way, and in a way that you could actually record and put on youtube for us to look at. I simply cannot agree with this statement:

        [quote<]Even if there is a lot of variance in its plot, much of it comes below that 25 ms threshold, and you'd be hard-pressed to notice it.[/quote<]

        However, in the article Scott doesn't mention any subjective complaints about the game feeling off or jittery, only a statement about the bigger spikes that are noticeable. This leads me to think that there is something wrong with the measurement, not the game, the drivers or the GPU.

          • swaaye
          • 7 years ago

          Interesting point.

            • Firestarter
            • 7 years ago

            [url<]https://techreport.com/r.x/geforce-gtx-660ti/skyrim-7870.gif[/url<]
            [url<]https://techreport.com/r.x/7950-vs-660ti/skyrim.gif[/url<]

            Just look at the difference between those graphs. In the first review, the 7870 is doing pretty fine, while in the second review, the 7950 is flipping its metaphorical shit. Now, the 7870 and 7950 are 2 different cards, but they run the same drivers and are based on the same architecture, and in the first review they have similar scores (the 7950 is just a bit faster across the board). This kind of regression in performance is not something that I'd take lightly, especially after AMD released drivers that (although in beta) deliver impressive performance increases in major titles.

            • swaaye
            • 7 years ago

            I think a comparison between driver versions could be insightful. They could be breaking stuff while fixing other problems.

            • Firestarter
            • 7 years ago

            Except that Scott commented that he saw this behaviour in the older drivers as well.

            • swaaye
            • 7 years ago

            If it is measurement error then the question is why does the GeForce seem to be measuring ok…

            • Firestarter
            • 7 years ago

            Your guess is as good as mine!

      • Silus
      • 7 years ago

      Yes… when the results show as much in the games tested, that’s what one usually does. Of course, for a better picture you should always check other reviews that use other games, if those other games are more important to you. No single review is the de facto guide to buying new hardware, but of course none of that matters to you, because all you care about is defending your brand, regardless of results. If the results “agree” with you, you praise your brand! If the results “disagree” with you, then you criticize the results and the site that shows them. Typical fanboy drivel…

      Also, the fact that MOH Warfighter uses the exact same engine as BF3 doesn’t matter? Got it.

        • BestJinjo
        • 7 years ago

        You seem to also have missed some of the points the members here are providing. Disregarding the frame time graphs, look at the Frames per Second metric. While it is true that TR’s specific testing is unique, the Frames per Second measurement should be in line in terms of actual videocard standing compared to other reviews. You shouldn’t end up with something like GTX660Ti beating HD7970 on average or you would know something has gone terribly wrong in the review. Let’s take a look at actual frame rates in at least 2 games where things are way off.

        When I look at Medal of Honor Warfighter testing, the actual frames per second are faster for GTX660Ti at *gasp* 2560×1440 max. Then I go and check how the actual FPS compares in other reviews:

        2560x1600, 7950 boost beats GTX670:
        [url<]http://static.techspot.com/articles-info/603/bench/MOHWF_03.png[/url<]

        2560x1600, 7950 boost ties GTX670:
        [url<]http://www.guru3d.com/articles_pages/medal_of_honor_warfighter_graphics_vga_performance_review,6.html[/url<]

        Doesn't it strike you as odd that the GTX660Ti is beating a 950MHz HD7950 in frames per second at high resolution in MoH:W when no one else reports this?

        What about Skyrim at high resolutions? The HD7950 boost is 44% faster in this review with AA:
        [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/22.html[/url<]

        In this review the HD7950 boost beats the GTX680 and is much faster than the 660Ti:
        [url<]http://www.techspot.com/review/603-best-graphics-cards/page9.html[/url<]

        Same in this review:
        [url<]http://www.computerbase.de/artikel/grafikkarten/2012/test-nvidia-geforce-gtx-660/23/[/url<]

        In TR's review, the GTX660Ti beats the HD7950 in Skyrim, and everyone else shows it beating the GTX680 in Skyrim and thrashing the GTX660Ti. You can ask Skyrim players and they will tell you that at 2560x1440 or 2560x1600, the HD7950 is faster than the GTX680 in this game. I think you are starting to get the picture that even the framerates make no sense in some of those games compared to what everyone else reports. You have to ask yourself if the testing is valid or something else is the problem. The results are too different from those of users who own the cards and countless other professional reviewers.

        Also, this should have struck you as a red flag right away - the HD7950/7970/7970Ghz have been faster in games at high resolutions than the GTX600 series. The HD7950 boost is actually 22% faster than the GTX660Ti at 2560x1600 on average across more than twice as many tested games in other reviews. Look how recent this review is too:
        [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/28.html[/url<]

        Don't you find it odd that the HD7950 boost goes from 22% faster at high rez vs. the GTX660Ti to way slower in TR's review? That doesn't strike you as extremely unusual, especially when the GTX660Ti gets a win in Skyrim, a game that is completely dominated by AMD at high resolutions?

    • brucethemoose
    • 7 years ago

    Well, that’s surprising… history and specs tell us the 7950 is the faster card by a decent margin. Raw FPS isn’t too far off, but just look at those stutters from the AMD card.

    Borderlands 2 and AC3 favor Nvidia, yet Hitman Absolution and Sleeping dogs favor AMD and still stutter quite a bit.

    Yet both cards perform identically in Medal of Honor… yikes, what’s going on here?

    It should be noted that the 7950 Vapor X has a ton of OC headroom. 20%-25% OCs on the core are common, but that GTX 660 Ti has virtually no OC headroom. But even an OC as big as that wouldn’t make up for all those long frames in some of the games.

    • flip-mode
    • 7 years ago

    I love how small the Geforce card is. Lots of performance per cubic inch.

      • Bensam123
      • 7 years ago

      Is that what your wife tells you? 😮

      (I had to)

        • superjawes
        • 7 years ago

        You are a terrible person…but so am I, which is why I +1’d you.

    • l33t-g4m3r
    • 7 years ago

    Meh. The 7870 is the sweet spot, and nvidia isn’t competing there. That said, this is a driver issue, much like BF3, which obviously should be fixed over time, and isn’t a major deal breaker.

    Also, I’m sorry if this offends some people, but I’m going to agree with jimbo in that there are constant minor discrepancies with the benchmarks that tilt towards favoring nvidia, whether or not this is done on purpose. Because of this I compare other sites for numbers, while reading TR for the big picture. The games picked for this rematch don’t hold any particular value for me, especially when AC3 is a new release, and AMD often takes a while to catch up with their drivers. Therefore, the only conclusion I got from this article is that AMD has work to do with the drivers, and nothing more.

    Metro 2033 not using DX11 shaders with tessellation, giving extreme bias towards the 460:
    [url<]https://techreport.com/review/19844/amd-radeon-hd-6850-and-6870-graphics-processors/12[/url<]

    Batman: AC not using DX11:
    [url<]https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/8[/url<]

    Thinking for yourself is the new black. ]:-P

      • yogibbear
      • 7 years ago

      Batman: AC's DX11 path on release was buggy as hell.

        • l33t-g4m3r
        • 7 years ago

        All the other reviews had it on, and it worked fine on my 470. If you don’t test dx11, or are testing it sporadically, that skews the numbers and quite easily the perception of the card if you were expecting dx11 to be on, like with every other review.

        Also, you may notice a link to a second game: Metro 2033, where dx11 shaders were turned off, but tessellation was on.

        When I read a review, I look at EVERY setting used so that I know how a site gets its perspective. Using questionable settings is grounds for throwing out the numbers and any perspective derived from said numbers. It's quite easy to game a review with wonky settings, even accidentally, and most of us know that. If you don't take things into proper perspective, then you get a skewed conclusion.

        The only thing I can honestly pull from this rematch is that AMD's drivers are immature on Win8 and need work, and that AMD is behind the curve supporting new games. It also seems fishy that certain previously used games like BF3 weren't used, especially after AMD fixed its previous performance deficiency. It's not like TR's forgotten about Trinity, and frankly this review appears to be just a well-hidden slam and a call to fix the drivers. If that's what it is, I'd rather see AMD directly called out on drivers instead of beating around the bush, because otherwise it's a waste of time.

        I think we’ve all seen enough reviews to know the 7950 is faster than the 660 Ti, aside from drivers and TWIMTBP titles, so that’s the only conclusion I’m seeing here.

        Also, shouldn't PhysX be OFF and not LOW in the Borderlands 2 benchmark? Of course, that's only if you want to see honest numbers, and if low is the minimum setting allowed by the game, that should outright invalidate BL2 as a benchmark.

    • tviceman
    • 7 years ago

    I don’t know if you want to take the time to look into this, but I don’t think AMD’s frame latencies were this out of whack before its “never settle” performance drivers came out. Any chance those drivers increased throughput at the expense of consistency?

      • tviceman
      • 7 years ago

      Why is this down voted? It’s a very plausible explanation.

        • BestJinjo
        • 7 years ago

        For starters, other reviewers who tested the “never settle” performance drivers who focus on custom sequences and game experience have not noticed this. They actually acknowledged the opposite, that these drivers increase performance without any tricks or reduction in image quality.

        HardOCP Fall GPU roundup which I linked above or Hardware Canucks Catalyst
        AMD 12.11 “Never Settle” Driver Performance investigation:
        [url<]http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/57413-amd-12-11-never-settle-driver-performance-2.html[/url<]

        Your conspiracy theory does not align with any other major review. You think other reviewers wouldn't say that AMD's single cards feel choppy in games despite higher frame rates, and that with older drivers such issues were not present? No one else is reporting this. We would have heard about this from the many European websites that investigate cheating with a magnifying glass. There is no doubt that some of these games do run faster on NV hardware, but your explanation points to the entire driver family somehow tanking the smoothness of the HD7000 series, yet this review says they experienced this with Catalyst 12.8 drivers. That alone discounts your theory that the "never settle" drivers are the cause. If the never settle drivers are the problem, why haven't thousands of AMD users on forums and online been reporting this, and why didn't going back to Catalyst 12.8 fix these issues in this review?

    • ZGradt
    • 7 years ago

    Dang. I thought the Zotac would do way worse in the high res tests. I wonder how a 670 would score, which is basically a 660Ti with a higher fillrate…

    • tviceman
    • 7 years ago

    [H] came to similar conclusions (although in a less scientific manner) with their 680 SLI vs. 7970GE CFX review. It would be nice if one of you had access to a high speed camera, to see if it would be possible to capture these frame latencies.

      • BestJinjo
      • 7 years ago

      Not sure what CFX vs. SLI has to do with stuttering on single GPUs. From that [H] review, the conclusion compared the GTX680 to the HD7970Ghz:

      “We even found, that on the whole, single-card HD 7970 GHz Edition video card provided better performance in triple-display gaming.”

      The review you refer to proves the opposite of this review, and finds the Radeon to have a more consistent gaming experience.

      It’s been known for a long while that SLI feels smoother than CFX, but that is nothing new and mostly irrelevant for comparing single cards. What I find very unusual is that the majority of other reviewers who just recently tested HD7950 OC cards found that they are within a hair of the GTX670, and the HD7970Ghz was also faster than the 680 in the same reviews. These results are consistent across just about any recent review, except this one. This review is actually the recent outlier. When 9 reviews show 1 thing and 1 shows the opposite, one has to wonder if Windows 8 is the issue here. No other review website is reporting that the GTX660Ti provides a better gaming experience, but then they are testing in Windows 7, not 8. I also wonder if PowerTune was increased properly in the control panel.

      [H] – Fall 2012 GPU and Driver Comparison Roundup November 12th review had this to say:

      “This pricing band was far less competitive as the Radeon HD 7950 with Boost simply demolished the GTX 660 Ti across the board with regards to raw frame rates and overall game play experience across our suite of testing. Not to be left out, the Radeon HD 7870 held up well on its own, matching the game play experience and raw frame rates to the GTX 660 Ti from the middle pricing band (except for Sleeping Dogs).”
      [url<]http://www.hardocp.com/article/2012/11/12/fall_2012_gpu_driver_comparison_roundup/8[/url<]

      How can one review have the HD7950 with Boost demolishing a GTX660Ti in overall gameplay experience while this review says it's not even close? If you look at the discrepancy in the frametime data in the TR review, based on these large deviations in favor of the GTX660Ti, I wouldn't even be surprised if the GTX660Ti would outperform an HD7970Ghz in this review. Now you have to ask yourself, does that make sense based on the data you see and what everyone else is reporting?

      You can't discount the possibility that there are some OS/driver compatibility or test-system issues here, as the performance delta is too much in favor of the GTX660Ti, yet nearly everyone else shows the HD7950 with boost being only a hair behind the GTX670 and faster than the GTX660Ti, especially at higher resolutions. I think the testing should be redone with PowerTune set to full in CCC, and Windows 7 should be used to isolate potential issues. I wouldn't be surprised if AMD's cards perform worse in Windows 8 than under Windows 7.

      • BestJinjo
      • 7 years ago

      I just read that entire review you mentioned. I am not sure what cards you own but in their CFX vs. SLI testing they didn’t use Radeon Pro, which skews the conclusion on smoothness.

      Their response:

      “We don’t use any third party utilities to change the out-of-box experience between NVIDIA and AMD GPU performance/gameplay experience. We feel it is up to AMD to implement these features to improve its platform, if it so chooses, if it doesn’t, well then we are left with what we are left with as the experiences between both. If we use a third party utility for AMD, then we’d have to use one for NVIDIA, and then its just not fair.”

      Now, if you actually read that review, as I just finished doing, the HD7970Ghz CFX provided higher image quality and performance in 4 out of 4 testing scenarios, especially dominating in Sleeping Dogs and Battlefield 3. The point is that with Radeon Pro you could cap the frames to a certain level using a function called Dynamic Framerate Control. This essentially enables dynamic vsync, a similar tech to NV's Adaptive Vsync that is already present in NV's drivers. To get this technology for AMD, you download this 3rd-party tool. Without using Radeon Pro in a CFX review, testing smoothness becomes irrelevant and, honestly, misleading for people running nearly $1,000 of GPUs browsing enthusiast forums such as this one.

      If you are not familiar with how Radeon Pro works, here is a review that tested it and eliminated micro-stutter to levels below GTX690 due to manual control Radeon Pro offers:

      [url<]http://www.tomshardware.com/reviews/radeon-hd-7990-devil13-7970-x2,3329-11.html[/url<]

    • clone
    • 7 years ago

    a friend who works for MS is giving me Win8 for Xmas…. guess I won’t be using it.

      • brute
      • 7 years ago

      your friend is a cheapass and hates u

        • clone
        • 7 years ago

        he's a software engineer and sees nothing but roses/opportunity with the app store potential built into Win8.

        I’m not sure the choice was to do me a favor so much as a zealous belief that everyone has to move to Win8 so that it’ll be successful.

          • Firestarter
          • 7 years ago

          [quote<]I'm not sure the choice was to do me a favor[/quote<] He got that license for free and is giving it to you because he doesn't want to buy you a present.

            • clone
            • 7 years ago

            we’ve never bought one another a gift before, and I’m not going to buy him one now; there is an outstanding debt between us that’s far higher than $40.00.

            that said at least he was thinking of me 🙂

            • Firestarter
            • 7 years ago

            Ah good to know! Sorry if I came across as a douchebag..

            • clone
            • 7 years ago

            np.

            • Deo Domuique
            • 7 years ago

            Are you his drug dealer? 😛

            Seriously though, if you add in the problematic DX9 support on all 7000-series cards, I’m really disappointed. The black artifacts in many DX9 games are annoying. It’s like the cards have been defective for a year now… They supposedly fixed this issue in the 12.11 beta 11 drivers, but the problem is still there - nothing has been fixed. Monday morning I’m sending my 7950 in for RMA a second time, and this time I’ll get the 670. I’ve had enough.

            • clone
            • 7 years ago

            Updated: DX9 support isn’t nearly as bad as mentioned. I installed and began playing several titles the other day just to see, and most ran fine.

            I have had a harder time with Nvidia over the years regarding legacy titles… not absolutely horrible, of course; I can’t expect lifetime support for games released in 1998-2002, but I have had better luck with AMD since Nvidia released the 8800 series half a decade ago.

            • clone
            • 7 years ago

            so I grabbed my lower end HD 7xxx and tried it with a slew older games to see what happened given your comments, several DX8 titles, a DX 7 and a I don’t know DX version maybe 5 or 6 and a cppl DX9’s.

            MechCommander, MechWarrior 4 all expansions (3 games), Dungeon Siege, Dungeon Siege 2, Alien Vs Predator, HomeWorld, and a few others.

            aside from MechWarrior Mercenaries, they all ran fine; Mercenaries said the driver revision wasn’t compatible despite the other two having no issues.

            I don’t know what games you are playing, but I’ve had a far worse experience trying to use modern Nvidia cards: some games were unplayable (Total Annihilation), some crashed, some didn’t render correctly (FreeSpace 2), and some had graphics glitches annoying enough to matter (Dark Crusade).

            I’m using a GTX 460 in my main and an AMD HD 7750 in my low end box and given the experience it’s still a tossup which I’ll go to.

      • glacius555
      • 7 years ago

      Why don’t you give/sell it to me then? 😉

        • yogibbear
        • 7 years ago

        I’ll buy it off you for $15. 🙂

        • clone
        • 7 years ago

        doesn’t the Windows 8 upgrade sell for $40.00?

          • Deanjo
          • 7 years ago

          $100 overpriced.

            • clone
            • 7 years ago

            $39.99 right at the Windows Store.

            [url<]http://windows.microsoft.com/en-CA/windows/buy[/url<]

      • HisDivineOrder
      • 7 years ago

      The great thing about the Windows 8 license is that it is also a license for Windows 7.

    • Bensam123
    • 7 years ago

    This is really weird… Something definitely isn’t right here. GW2 isn’t DX11 or anything either, so it’s most definitely using similar technology to what’s already present in everyday games.

    That’s really what doesn’t add up here. Just because games are ‘new’ doesn’t mean they present a completely different workload for these graphics cards. All the DX candy has been spread out on the table, so these results should be largely comparable to other games that have been tested in the past… But they aren’t. The hardware hasn’t changed, DX standards haven’t changed; the only things that have changed are Windows 8 (which I don’t think should be used for testing, the same way Vista shouldn’t have been used over XP) and the games.

    This leads me to believe that this isn’t the fault of the hardware, but rather some tomfoolery that is happening in the games or in software. Something is wrong if the framerate operates like an oscilloscope.

    When something this drastic pops up, I’m surprised you guys didn’t take things further… Like testing the OS to see if you get the same results on W7.

    Although not mentioned, you can get a 7950 for $275 with MIR…

      • eofpi
      • 7 years ago

      [quote<]testing the OS to see if you get the same results on W7[/quote<]

      I would like to see this too. The frame histograms for the 7950 look remarkably similar to SLI/Crossfire consistency issues, but weren't present in earlier reviews featuring a 7950. So either the drivers are broken, the OS is broken, or the 7950 TR tested in May is a completely different design than the one they tested here.

        • Bensam123
        • 7 years ago

        Exactly what I was thinking! It definitely reminds me of SLI/Crossfire oscillations, only these are single GPUs. Something is definitely mucking about and causing stuttering; something is making the graphics card wait…

        These results aren’t even comparable to the results TR has been getting on the 7xxx series up till now, Crossfire aside. They’re too far off the mark, and a new round of games shouldn’t cause this to happen. More thorough analysis needs to be done here. TR just can’t drop a bombshell like this and not follow up on it.
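
        For what it’s worth, here is a rough Python heuristic for spotting the alternating short/long frame pacing that the SLI/Crossfire comparison brings to mind. This is not anything TR uses, just an illustration with made-up numbers:

        ```python
        import numpy as np

        def stutter_ratio(frame_times_ms):
            """Crude micro-stutter indicator: mean absolute difference between
            consecutive frame times, relative to the mean frame time.
            Near 0 = smooth pacing; values approaching 1 suggest the alternating
            short/long pattern typical of badly paced AFR setups."""
            t = np.asarray(frame_times_ms, dtype=float)
            return np.abs(np.diff(t)).mean() / t.mean()

        # Hypothetical examples: a steady 20 ms trace vs. one alternating 10/30 ms
        steady = [20.0] * 100
        alternating = [10.0, 30.0] * 50   # same 50 FPS average, half the frames late
        print(stutter_ratio(steady))       # ~0.0
        print(stutter_ratio(alternating))  # ~1.0
        ```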

          • yogibbear
          • 7 years ago

          Agree completely. Bumping to top in the hopes that Scott replies.

    • tbone8ty
    • 7 years ago

    Benchmark Far Cry 3!!!!!

      • brucethemoose
      • 7 years ago

      ^^ why isn’t it already there?

        • tbone8ty
        • 7 years ago

        nope

      • yogibbear
      • 7 years ago

      [url<]http://www.guru3d.com/articles_pages/far_cry_3_graphics_performance_review_benchmark,6.html[/url<]

      Basically... 660ti == 7950 Boost at 1920x1200. At 2560x 7950 Boost starts to pull away.

        • spuppy
        • 7 years ago

        Frames per second.. have you learned nothing?

          • yogibbear
          • 7 years ago

          Just because TR’s inside-the-second data tells me more doesn’t mean I can’t draw conclusions from average FPS plots too. It just means that it would be worthwhile seeing a TR inside-the-second plot for FC3 to conclude whether there’s any driver fiasco going on with that game too.

            • spuppy
            • 7 years ago

            If FPS scores are within 5 or so, no, you can’t come to a conclusion. And in some cases FPS can be identical but frame times can still be completely off. Witcher 2 is an example, as is comparing AMD CPUs between Windows 8 and Windows 7.
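
            To make that point concrete, here is a tiny Python sketch with made-up frame-time traces: both runs average exactly the same FPS, yet their 99th-percentile frame times are worlds apart.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # Trace A: consistent ~20 ms frames.
            smooth = np.full(1000, 20.0)

            # Trace B: mostly 15 ms frames, but 5% of them spike to 115 ms.
            # Same total time (and thus the same average FPS), very different feel.
            spiky = np.full(1000, 15.0)
            spikes = rng.choice(1000, size=50, replace=False)
            spiky[spikes] = 115.0

            for name, t in [("smooth", smooth), ("spiky", spiky)]:
                avg_fps = 1000.0 / t.mean()        # frames per second from ms per frame
                p99 = np.percentile(t, 99)
                print(f"{name}: avg {avg_fps:.1f} FPS, 99th percentile {p99:.0f} ms")
            ```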

            • tbone8ty
            • 7 years ago

            TechSpot has a good article with CPU scaling as well.

            No frame times, but it’s another perspective.

            [url<]http://m.techspot.com/review/615-far-cry-3-performance/[/url<]

            • tbone8ty
            • 7 years ago

            Seems as though AMD has some work to do on their driver for Far Cry 3… Nvidia has a slight edge when MSAA and AA are enabled… Hopefully they fix the frame times in other games as well.

            Otherwise price drop? Lol

            • BestJinjo
            • 7 years ago

            “Using driver version Catalyst 12.11 beta 11 together with CAP2 (which increases framerates in FC3 on Radeons) and GeForce 310.64 beta (which increases the performance of the GeForce), Far Cry 3 shows a rather unusual performance picture for current DX11 graphics cards, with a strong AMD advantage: the Radeon HD 7970 GHz Edition significantly overtakes the GTX 680, followed by the GeForce GTX 670.”
            [url<]http://www.pcgameshardware.de/Far-Cry-3-PC-217540/Tests/Far-Cry-3-Test-Grafikkarten-CPU-Benchmarks-1036726/[/url<]

            Another review highlights a huge gap between the HD7970 GHz and the 7950/7870 (which are only 4% apart):

            [url<]http://www.computerbase.de/news/2012-12/eigene-benchmarks-zu-far-cry-3/[/url<]

            This clearly shows the HD7950 is GPU-bottlenecked. Luckily it overclocks to 1100-1200MHz. I don't think price drops are necessary because of 1 game. You seem to have missed reviews with 15-18 games where the HD7950 at 950MHz is clearly beating the GTX660Ti:

            [url<]http://www.techpowerup.com/reviews/HIS/HD_7950_X2_Boost/28.html[/url<]

            You should check other reviews for 2nd, 3rd, and 4th opinions and you'll see TR's review is the only one that shows the 660 Ti winning (TR's review is also the only one that used Windows 8). This was done just this month: "Radeon HD 7950 with Boost simply demolished the GTX 660 Ti across the board with regards to raw frame rates and overall game play experience across our suite of testing."

            [url<]http://www.hardocp.com/article/2012/11/12/fall_2012_gpu_driver_comparison_roundup/8[/url<]

            • Bensam123
            • 7 years ago

            You know there is more to a conclusion than raw performance, right? Like price, bundled games, whether you support one company or another, how much you actually value a small performance difference, power draw, noise, and maybe even aesthetics… W8 isn’t anywhere close to the norm, so a W7 benchmark would be more representative for most gamers.

            I know other tech websites that report standard deviation and variance, yet TR doesn’t. Does that mean you can’t come to a meaningful conclusion reading articles here because they don’t? Variance and standard deviation are very important for understanding distributions.

    • gamoniac
    • 7 years ago

    In this somewhat boring week, it is great to find an article with new insights into hardware that has already been reviewed. Nicely done, Scott. This is why checking TR’s site is my daily routine.

    • jimbo75
    • 7 years ago
      • chuckula
      • 7 years ago

      Uh… front page giveaways where TR loudly and repeatedly hawks AMD GPUs: Check

      Jimbo75 is a fanboy idiot who has a very interesting selective-fact filter: Check

        • Bensam123
        • 7 years ago

        And you’re an Intel fanboi?

        I don’t think a website would turn down hardware they got for free to give away…

          • chuckula
          • 7 years ago

          Hey moron: If you ever bothered to read *all* of my posts you’ll see that I have no problem saying that AMD is stronger in areas where AMD is actually stronger. If you ever read the forums you’d see that I’ve recently been inquiring about a Trinity HTPC system for a task where using Trinity makes sense. If you ever read my posts about AMD’s GPUs you’d know that 1. I’ve defended AMD against Nvidia fanboys, and 2. I’ve repeatedly said that I’d really like to use an AMD GPU on my desktop but the main reason I’m using Nvidia right now is that the quality of Catalyst drivers on Linux is not up to par. If AMD put more resources into Catalyst on Linux, then I’d switch in a heartbeat.

          The difference is that unlike you, I’m not a shill: I use the best tool for the job. I also don’t blindly copy & paste the same idiotic drivel like “we should all lobotomize ourselves and just buy AMD products without thinking because Intel is t3h 3vilz!” which is what you post on this site 10 times a day whenever the word “Intel” is dropped.

            • Bensam123
            • 7 years ago

            Hey Idiot: Welp, that hasn’t been my experience with you. I’ve made very concise arguments for and against and all you do is hop on the Intel bandwagon and start spouting nonsensical BS…

            Just saying you’re impartial isn’t the same as being impartial. So you’re either a hypocrite or you were simply lying when discussing some issues with me for the sake of making me look wrong, despite how you really feel.

            I mean, all someone has to do to see that you’re lying is look at another one of the front-page posts:

            [url<]https://techreport.com/news/24009/amd-were-not-abandoning-socketed-cpus#metal[/url<]

            It's starting to seem like you just say whatever makes it seem like you're 'right' regardless of how you feel about something. You're great at hyperbole and taking things extremely out of context (such as your quote, which isn't even a quote). I suppose this falls under the 'two-faced' category.

            I can't even begin to understand why you'd suggest that I need to read 'all' your posts when you don't even have the decency to read the sentences in the ones you're responding to. So we can add extremely egotistical and self-centered to hypocrite and liar; you have all the good qualities.

            Did you see what I did with the starting insult? Pretty ingenious, isn't it?

            • chuckula
            • 7 years ago

            1. Looks like I was right all along because I base my opinions on facts instead of wishful thinking: [url<]https://techreport.com/news/24020/intel-will-offer-socketed-cpus-for-the-foreseeable-future[/url<]

            2. You probably think that I'm an Intel fanboy because I attack you when you go off onto one of your Intel-ate-my-puppy and AMD-is-magical-unicorn-candyland-funtime rants. If you were an Nvidia fanboy like OU812, then you'd swear that I was an AMD fanboy... try some recent threads out for a look:

            [url<]https://techreport.com/news/24012/rumor-next-gen-radeons-due-in-the-second-quarter[/url<]
            [url<]https://techreport.com/news/23959/jpr-nvidia-gained-in-pc-graphics-last-quarter?post=689580[/url<]
            [url<]https://techreport.com/news/23884/intel-joins-the-data-parallel-computing-fraternity-with-xeon-phi?post=686005[/url<]

            P.S. --> For the record, OU812 is 10x more annoying than you are. Even though I think you're wrong most of the time, at least you try to make arguments that are more than rehashes of AMD powerpoint slides. He can't even make it that far.

        • xeridea
        • 7 years ago

        Check the facts. They only included games where the 660 did well, or where it was close. They excluded any game where the 7950 would win by a big margin, or any games where compute is used. They are also testing on Win8, which no one cares about and likely still has immature drivers (for both cards). This makes it look like the 660 Ti owns and the 7950 totally sucks, when in reality they are fairly evenly matched in games, with the 7950 dominating compute (which many don’t care about, but is still a factor).

          • MadManOriginal
          • 7 years ago

          Yeah, I’m sure TR chose games that they knew would run better overall on the GTX 660. Or not:

          [quote<]Of course, we have a new crop of games for the holiday season, headlined by titles like Borderlands 2, Hitman: Absolution, Sleeping Dogs, and Assassin's Creed III. AMD's newfound aggressiveness means [u<]many of these games are part of its Gaming Evolved program[/u<], so they should run very well on Radeon graphics cards—and maybe, you know, not so well on those pesky GeForces.[/quote<]

          They are new big name games, and some of them *come with* Radeons. If anything I would have expected a pro-AMD bias to show up in the results.

            • Silus
            • 7 years ago

            For xeridea to know that, he would have to read the review… and we can’t have that. Actually reading the review is not part of any fanboy’s agenda. That would cloud their goal of asking others to “check facts” to support their unfounded claims.

          • tviceman
          • 7 years ago

          Sleeping Dogs and Medal of Honor. Both Gaming Evolved games. Both run better on AMD hardware. Both in this test.

        • jimbo75
        • 7 years ago
          • superjawes
          • 7 years ago

          The DiRT Showdown results have been discussed several times. The takeaway is that AMD worked directly with the developers to utilize AMD cards to give them an edge. In fact, from the same article you linked…

          [quote<]We've added the latest entry in the DiRT series to our test suite at the suggestion of AMD, [b<]who has been working with Codemasters for years on optimizations for Eyefinity and DirectX 11.[/b<][/quote<]

          This isn't just a matter of being an outlier. This DiRT game was basically engineered to give AMD cards an edge. If the same happened with AC3 (or if TR knew that something along those lines was happening), that game would be excluded for Nvidia favorability. However, that does not seem to be the case.

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            You know, it could be that AMD is just having issues with the AC3 code, which seems plausible considering the conclusions page (if you cared enough to read it). Scott makes it pretty clear that he tried to clean up the AMD results by changing drivers, test systems, and even swapping in a different 7950. If you have some evidence showing Nvidia engineering the results, I am sure that Scott would be open to including it.

            And I’m not saying that Nvidia was “hobbled” in Showdown, but as was pointed out in several of these articles, AMD cards are unusually better because of direct communication between Codemasters and AMD, which resulted in Showdown utilizing AMD cards better. That’s not a natural performance gap. It’s useful if you really like DiRT games and want the best performance in that game, but if you’re playing several different games, the inclusion of Showdown results artificially skews the conclusion. I believe the same happened with HAWX some time ago, where the results were excluded because they skewed things in Nvidia’s favor [i<]artificially[/i<].

            I understand that AC3 skews the results, but I see no evidence that the gap is artificial. A gap alone could just be a driver issue on AMD's side, but as I said before, if you have some hard evidence to the contrary, I would be happy to see it, and I am sure that TR would make a correction.

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            Note the bold text.

            [quote<]We've added the latest entry in the DiRT series to our test suite at the suggestion of AMD, [b<]who has been working with Codemasters for years on optimizations for Eyefinity and DirectX 11.[/b<][/quote<]

            Or from [url=https://techreport.com/review/23150/amd-radeon-hd-7970-ghz-edition/6<]this article...[/url<]

            [quote<]Well, I suppose this is what happens sometimes when a GPU maker works closely with a game developer to implement some new features. Showdown simply runs better on Radeons than on GeForces, and it's not even close. We've seen lots of similar scenarios in the past where Nvidia took the initiative and reaped the benefits. Perhaps this is karmic payback. ... The GeForces are just overmatched here.[/quote<]

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            Where is the PROOF that Nvidia deliberately hobbled AMD cards on AC3?

            I presented evidence. You simply ignored it.

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            Attacking the opposition without presenting anything to back up your own position. You MUST be a politician.

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            Why do you see everything as nefarious? There’s nothing even wrong with developers working with AMD or Nvidia. The issue is that the improved performance skews the results, and not necessarily because one chip is “better” than another. I’m not saying that AMD is hobbling anyone or anything, but working with Codemasters pads the results in their favor by streamlining some things.

            But since you just dismiss what was said with the DiRT results (in every TR article on the matter), how about something from Codemasters? [url=http://blogs.amd.com/play/2011/07/27/dirt-3-qa-%E2%80%93-david-doel-with-codemasters/<]Here,[/url<] David Doel talks about specifically working with AMD using AMD technologies to improve the experience. Again, not nefarious, but no one should be surprised that AMD cards perform significantly better in that game than Nvidia ones.

            Now it's your turn (and you still have not provided [b<]anything[/b<] to support your claim). Can you prove that Nvidia is deliberately improving their results in AC3?

            Also, Scott tried to appease you here by using a geometric mean, lessening the impact of AC3 results because it is an outlier.

            • jimbo75
            • 7 years ago
            • superjawes
            • 7 years ago

            Nope.

            You refuse to support your position with [i<]anything.[/i<] Therefore, I have no obligation to do anything. Come back when you really want to have a discussion and not a one-sided tantrum against Nvidia.

      • HisDivineOrder
      • 7 years ago

      There, there. AMD isn’t perfect. You’ll get used to this new world you’ve discovered eventually.

      Or you could go read another review that suits your worldview more. I’m sure there’s a fair and balanced news network that’ll suit you just fine if you only want to hear more of what you already “know” to be true.

      Otherwise, you read reviews for different perspectives and you’ll appreciate the perspective of someone doing something a little different than all those other reviews you keep referring to in other posts.

    • tbone8ty
    • 7 years ago

    If you pick both cards in the same price range…

    Right now on Newegg you can get both the Zotac 660 Ti Amp and the “vanilla” Sapphire 7950 with Boost for $279.99. With the 660 Ti I get 1 game, and with the 7950 I get 3 games, all with average IGN scores of around 9.

    I’d still pick the 7950 over the 660 Ti any day of the week.

    • Pantsu
    • 7 years ago

    Would be nice to see some FC3 frametime results too with the newest drivers. FC2 was always one of the biggest stutterfests in terms of erratic frametimes, and what I’ve heard so far from FC3 is that it also has similar issues. Dunia engine seems to have some problems with its frame delivery.

    Also you should do some testing with a game or two that has frame delivery issues and check how the different settings affect it. Things like ambient occlusion or AA settings can change the results completely in terms of frame delivery.

    It’s surprising that the 7950 fares so badly in these tests. Granted, some of them are Nvidia sponsored titles, but even the Gaming Evolved titles seem to have issues. Both companies still need to work a lot harder to provide a more fluid experience in games rather than just concentrating on theoretical FPS numbers.

    • JustAnEngineer
    • 7 years ago

    Can you see any performance difference if you change the Guild Wars 2 LOD distance to Ultra instead of High?

      • MKEGameDesign
      • 7 years ago

      In my own testing, LOD is one of the lower budget options in the engine. Ultra–>High only nets a 1-2% change.

    • nanoflower
    • 7 years ago

    Scott, I appreciate the work that goes into making a review like this, but it still leaves me wanting more. When you have a card like this 7950 that performs poorly in the 99th-percentile test but well in the FPS test when pushed to the edge, how about backing off on some of the options to see where it does well in both tests? In other words, take any of the games and back off on some of the settings (perhaps the AA, or drop the resolution) and see where the card starts to perform about the same in both the 99th-percentile and FPS measures.

    I know this is more work, but at this point, after reading this review, the only thing I know is that with a high-resolution monitor and every bell and whistle turned on, the Zotac is better than the Sapphire card. Most people don’t have an ultra-high-resolution monitor, though, and will end up playing at 1920x1080. With only the data provided, it’s hard to say whether the 7950 would work well for most people or not. That would make the reviews more useful for myself and probably others.

    Thanks for all your hard work.

      • continuum
      • 7 years ago

      The impression I get is that at 1920×1080 either of these is overkill… heck, reading the GTX 660 and 7870 articles tells me that stepping down a notch still leaves plenty of performance at 1920×1080.

        • nanoflower
        • 7 years ago

        You may be right. I would like to have some hard data that confirms that. I’m sure that if we turn everything down, both cards would be great in both the FPS and 99th-percentile metrics, but I would like to have some idea where that border is (how far you can push each card before it starts to fall short in either the FPS or 99th-percentile tests).

    • jazper
    • 7 years ago

    This review/commentary explains exactly why the “software decode” JIT on the nV platform is superior – it can be tweaked at a more fundamental level, and while it does use more CPU cycles, this is not as big an issue these days when we are wealthy in that respect.

    • ApockofFork
    • 7 years ago

    Those results were surprisingly one sided…

    nice article.

    • extreme
    • 7 years ago

    I think the figures could be better if you included a secondary Y axis for FPS. Then the readers wouldn’t wonder what framerate each frame time corresponded to. You can usually place it on either side of the chart too.

    Either way, these “inside the second” articles are excellent.
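
    Since FPS is just 1000 divided by the frame time in milliseconds, the conversion is its own inverse, which makes a dual-scale chart straightforward. Here is one way such a figure could be drawn with matplotlib, using synthetic frame-time data; this is illustrative only and not how TR actually produces its plots.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical per-frame times in milliseconds (Fraps-style dump)
    frame_times_ms = np.random.gamma(shape=9.0, scale=2.0, size=2000)

    fig, ax = plt.subplots()
    ax.plot(frame_times_ms)
    ax.set_xlabel("frame number")
    ax.set_ylabel("frame time (ms)")

    def ms_to_fps(ms):
        ms = np.where(ms <= 0, 1e-6, ms)   # avoid division by zero at the axis origin
        return 1000.0 / ms

    # 1000/x is its own inverse, so the same function serves as forward and inverse
    secax = ax.secondary_yaxis('right', functions=(ms_to_fps, ms_to_fps))
    secax.set_ylabel("equivalent FPS (1000 / frame time)")

    plt.show()
    ```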

    • jonjonjon
    • 7 years ago

    Nice work. I would love to see a follow-up with the two cards overclocked to the max and/or with the 7970, 670, and 680. I’m wondering whether the issues AMD has are strictly a driver problem or something else, because you always hear people say Nvidia has better drivers than AMD. Also, in Hitman and Borderlands, if you did a blind test, would you be able to tell the difference between the two cards? I read a review of the 7870 on Tom’s Hardware, and they said the 7870 has blurrier textures than previous cards. I always wonder if either company reduces the IQ on certain cards so they can get better FPS and be more competitive.

      • jessterman21
      • 7 years ago

      MOAR DATA, Scott. MOAR DATA!

      Lol, poor guy.
      Great article!

    • henfactor
    • 7 years ago

    Scott has the most classy Twitter plugs. Followed!

    • UltimateImperative
    • 7 years ago

    [quote<][i<]Warfighter[/i<] uses the same Frostbite 2 engine as [i<]Battlefield 3[/i<], with advanced lighting and DX11 support. That's fortunate, because I struggled to find any other redeeming quality in this stinker of a game.[/quote<]

    I can see the "MOH serves as an Engine beta for the next BF" motivation, but in this case the game is so bad that I would rather have seen BF3, since very few people are going to spend time playing the mess that is MOH: Doorfighter.

    Edit: thanks for subjecting yourself to it so we don't have to, though.

      • Damage
      • 7 years ago

      Heh. Thanks, man. It was rough. I didn’t know playing a game could feel so much like punishment.

      • I.S.T.
      • 7 years ago

      Well, it sold millions of copies, and I imagine most of those buyers will be playing MP rather than SP. So, it’s a valid test from the perspective of how popular the game is.

        • UltimateImperative
        • 7 years ago

        Most of those copies were sold on consoles, though. According to VGChartz, it’s only up to 150,000 copies on PC, worldwide. [url<]http://www.vgchartz.com/game/70424/medal-of-honor-warfighter/Global/[/url<]

          • NAG3LT
          • 7 years ago

          I look at VGChartz PC data with a lot of suspicion. For example, look at the Witcher 2 PC numbers on their site: [url<]http://www.vgchartz.com/game/39247/the-witcher-2-assassins-of-kings/[/url<] - they claim 0.79M sales now, but the devs had already reported over 1.1M sales back in February: [url<]http://www.vgchartz.com/game/39247/the-witcher-2-assassins-of-kings/[/url<]. So VGChartz data is incomplete; you can treat it as a lower estimate. Of course, the full number is still likely lower than on consoles, but maybe not dramatically so.

    • I.S.T.
    • 7 years ago

    Can’t say I expected these results.

      • wierdo
      • 7 years ago

      Yeah, I’m curious to see Win8 vs. Win7 comparisons using the old & new games TR uses for testing; it might be a bit early to switch to only Win8 for benchmarking when most users aren’t on it yet.

      • My Johnson
      • 7 years ago

      No, I didn’t expect them either. Looking at the data I’d say AMD seriously needs to get its driver team crackin’.

      • chuckula
      • 7 years ago

      NOBODY EXPECTS THE TR INQUISITION!

      • brute
      • 7 years ago

      yes very interestly

    • StuG
    • 7 years ago

    Strange, seeing a lot of “beyond 50ms” frames with the Radeon that were not present before. I read that AMD is having issues pushing out a full-fledged Windows 8 driver; I wonder if this is linked to its relatively poor showing here.

      • StuG
      • 7 years ago

      Actually, when looking back it appears we lost considerable performance on both cards between the two tests:

      [url<]https://techreport.com/review/23419/nvidia-geforce-gtx-660-ti-graphics-card-reviewed/7[/url<]

      7950: from 91ish to 69ish
      660 Ti: from 87ish to 74ish

      On both cards, frames beyond 50ms are now showing up where before they were not. While the drivers have changed, they are moving in the wrong direction if they are the culprits. Something seems strange with the results here; too big of a drop on both cards to flat-out ignore.
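
      For reference, TR's "time spent beyond 50 ms" figure is, roughly, the sum of each frame's overrun past the threshold. A quick Python sketch of that calculation, using hypothetical numbers:

      ```python
      import numpy as np

      def time_beyond(frame_times_ms, threshold_ms=50.0):
          """Sum of the portion of each frame that ran past the threshold,
          i.e. how many milliseconds of the run were spent 'beyond 50 ms'."""
          t = np.asarray(frame_times_ms, dtype=float)
          return np.maximum(t - threshold_ms, 0.0).sum()

      # Hypothetical trace: three slow frames in an otherwise steady run
      trace = [16.0] * 200 + [70.0, 55.0, 90.0]
      print(time_beyond(trace))   # (70-50) + (55-50) + (90-50) = 65 ms beyond 50 ms
      ```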

        • Damage
        • 7 years ago

        As noted in the review, this is a different test in a different area of the game that tends to be a tougher workload.

          • StuG
          • 7 years ago

          Ah, missed that. Noted!

    • desertfox
    • 7 years ago

    I don’t see the article. Am I missing something?

      • DancinJack
      • 7 years ago

      Yes.

      • colinstu12
      • 7 years ago

      660Ti = more performance for less $. #done

        • rpsgc
        • 7 years ago

        Funny how every other review site disagrees with you.
