Today’s graphics cards: the value angle

The last time we attempted to quantify the value propositions of a large cross-section of competing products, we concentrated on microprocessors. Ever since then, we’ve wanted to explore the same concept with graphics cards. Thanks to our latest round of graphics card reviews, which culminated with the massive GeForce 9 series multi-GPU extravaganza last month, we’ve ended up with enough benchmark data to paint a fairly complete picture of today’s mid- to high-end GPU market.

Armed with this information, we’ve taken another crack at quantifying value, this time by looking at what sort of GPU power you get for your dollar. The results are interesting, if nothing else. Read on to see what we found.

Quantifying GPU value

This is an iffy exercise, this attempt to quantify a value proposition that often involves considerations one can’t easily boil down to numbers. We have no illusions that this little thought experiment will yield the only or best information you need in order to decide which graphics card to buy. Our graphics card reviews include a mix of subjective impressions and objective test data, and they’re probably your best starting point in these matters.

Heck, even if you want to go purely for a “raw value” approach, you may be better served by looking at one of our recent reviews, like the aforementioned GeForce 9 multi-GPU extravaganza. You could conceivably just hunt for the cheapest card that achieves playable frame rates at (or near) your monitor’s native resolution and call it a day. There’s nothing inherently wrong with that, and it’s certainly a very value-oriented approach. But it won’t get you anything much in the way of future-proofing, and it won’t tell you how much graphics processing capability you’re getting for your money.

For this article, we wanted to focus more closely on the question of GPU power per dollar. To make that work, we’ve relied on test data from (as much as possible) clearly GPU-limited scenarios—performance results obtained at very high resolutions (most often 2560 x 1600) with some level of antialiasing and anisotropic filtering enabled. That way, we can highlight questions of performance scaling. We can see how the cut-down shader power and memory capacities of the less expensive cards impact performance and observe how much multi-GPU solutions can distance themselves from their single-GPU brethren.

Obviously, concentrating on GPU-limited scenarios like these sidesteps the question of “playable frame rates” in today’s games, as we’ve noted. Vast differences in GPU power may not be readily apparent at 1680×1050 in currently popular titles, but they may become very important in the future as game developers ratchet up the shader effects and scene complexity, packing more richness into each pixel. Our hope is that we can help you make a more informed evaluation of the value proposition, one that considers how your video card might serve you over its entire lifespan.

Our test subjects

Anyway, let’s have a look at our subjects. We’ve included results for 20 GPU configurations in total, nine of which are single cards…

Single card           GPUs  Price
Radeon HD 3850        1     $154.99
GeForce 9600 GT       1     $159.99
Radeon HD 3870        1     $164.99
GeForce 8800 GT       1     $189.99
GeForce 8800 GTS 512  1     $254.99
GeForce 9800 GTX      1     $309.99
Radeon HD 3870 X2     2     $389.99
GeForce 8800 Ultra    1     $519.99
GeForce 9800 GX2      2     $549.99

…and 11 are multi-card setups:

Multi-card setup             GPUs  Price
Radeon HD 3850 CrossFire     2     $309.98
GeForce 9600 GT SLI          2     $319.98
Radeon HD 3870 CrossFire     2     $329.98
GeForce 8800 GT SLI          2     $379.98
Radeon HD 3870 X2 + 3870     3     $554.98
GeForce 9800 GTX SLI         2     $619.98
Radeon HD 3870 X2 CrossFire  4     $779.98
GeForce 9800 GTX SLI         3     $929.97
GeForce 8800 Ultra SLI       2     $1,039.98
GeForce 9800 GX2 SLI         4     $1,099.98
GeForce 8800 Ultra SLI       3     $1,559.97

You might wonder where we sourced the prices listed above. In our CPU value article last year, we simply took bulk prices directly from AMD’s and Intel’s respective websites. Real-world CPU prices fluctuate and don’t necessarily reflect bulk pricing, but the two are often pretty close, and bulk prices have the benefit of coming from “official” sources.

Graphics card pricing is a very different matter. AMD’s and Nvidia’s official launch prices for new GPUs are only loose guidelines, and they often cover a range of possible prices—for instance, “$199-249” for the GeForce 8800 GT. Cards usually drift from their launch price ranges after a while, too, so we were left with no option but to nab prices from online retailers. In the end, we relied on a mix of prices from Newegg and our price search engine. Overall, we believe the figures we picked should do a good job of representing typical real-world pricing.

We ran into a little snag while collecting prices for the Radeon HD 3850 and GeForce 9600 GT, however, since the models we tested are both significantly marked-up “overclocked in the box” models. To avoid skewing our data too much, we looked for cheaper cards with similar clock speeds. We selected a GeForce 9600 GT with 700MHz core and 950MHz memory speeds to be our price reference for the 700MHz/1000MHz card we tested, and we picked a 720MHz/910MHz Radeon HD 3850 to do the same for our 725MHz/900MHz test model. Those slight clock speed differences shouldn’t impact performance too substantially, and the lower prices are better indicators of what you can expect to pay for a Radeon HD 3850 or GeForce 9600 GT these days.

Presenting the data

We have our reasoning, our performance results, and our prices. Here’s how we sliced and diced the data to give us some insight.

We carried over two tools from our CPU value comparo. One is the simple “performance per dollar” chart, which, in this case, ranks cards based on how many frames per second $10 will buy. The exact formula is: ((frames/second) / price) * 10. We relied on our good friend Alexander Hamilton rather than his buddy George Washington for this task, because ranking cards based on FPS per dollar yields hard-to-read numbers with too many leading zeros.
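If you're curious, the math is trivial to express in code. Here's a minimal Python sketch; the prices come from our tables on the previous page, but the frame rates are purely illustrative placeholders, not our measured results:

    # Our value metric: frames per second per $10 spent.
    def fps_per_ten_dollars(fps: float, price: float) -> float:
        """((frames/second) / price) * 10"""
        return fps / price * 10

    cards = {
        # name: (illustrative fps, street price in dollars)
        "GeForce 9600 GT": (45.0, 159.99),
        "GeForce 8800 GT": (50.0, 189.99),
        "Radeon HD 3870":  (40.0, 164.99),
    }

    # Rank the cards from best to worst value.
    for name, (fps, price) in sorted(cards.items(),
                                     key=lambda kv: fps_per_ten_dollars(*kv[1]),
                                     reverse=True):
        print(f"{name}: {fps_per_ten_dollars(fps, price):.2f} FPS per $10")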

Ranking performance per dollar doesn’t give us the whole picture, though—far from it. As you’ll see in the next few pages, the one card that ranks highest in performance-per-dollar charts is usually the cheapest (and thus the slowest), and it’s often followed by slightly slower but equally expensive competitors. To get a look at which cards lie in the proverbial “sweet spot” of price and performance, we used scatter plots like this one:

These scatter plots may look a little intimidating, but they’re really quite simple. The best possible offering would be in the top left corner, offering the highest frame rate for $0, while the worst possible one would be at the bottom right, yielding 0 FPS for the most money. We obviously never get such extreme cases, but the best offerings still typically lie closer to the top left corner of the plot. In the example above, the GeForce 9600 GT looks to be a better deal than either the Radeon HD 3870 or the Radeon HD 3850. The GeForce 9600 GT SLI config puts in a very good showing, as well. Conversely, the GeForce 9800 GTX three-way SLI setup is a poor value.
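As an aside, plots like these are easy to reproduce at home. Here's a rough matplotlib sketch of how one might build such a price/performance scatter plot; the prices match our tables, but the frame rates are placeholders rather than our benchmark results:

    import matplotlib.pyplot as plt

    # (price in dollars, illustrative average fps, label)
    points = [
        (159.99, 45.0, "GeForce 9600 GT"),
        (164.99, 40.0, "Radeon HD 3870"),
        (319.98, 80.0, "GeForce 9600 GT SLI"),
        (929.97, 95.0, "GeForce 9800 GTX 3-way SLI"),
    ]

    fig, ax = plt.subplots()
    for price, fps, label in points:
        ax.scatter(price, fps)
        ax.annotate(label, (price, fps),
                    textcoords="offset points", xytext=(5, 5))

    # Better values sit toward the top left of the plot.
    ax.set_xlabel("Price (USD)")
    ax.set_ylabel("Average frames per second")
    plt.show()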

Now that that’s (hopefully) clear, let’s move on to the benchmarks.

Test notes

As we said on the previous page, our GeForce 9600 GT and Radeon HD 3850 were both “factory overclocked,” so their performance is a little higher than that of other cards bearing the same names. All of our other cards run at their stock speeds, except for the GeForce 8800 GTS 512MB, whose “factory overclock” is typical for models in its price range.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Extreme X6800 2.93GHz (all three systems)
System bus: 1066MHz (266MHz quad-pumped)
Motherboard: Gigabyte GA-X38-DQ6 (BIOS F7) / XFX nForce 680i SLI (BIOS P31) / EVGA nForce 780i SLI (BIOS P03)
North bridge: X38 MCH / nForce 680i SLI SPP / nForce 780i SLI SPP
South bridge: ICH9R / nForce 680i SLI MCP / nForce 780i SLI MCP
Chipset drivers: INF update 8.3.1.1009 with Matrix Storage Manager 7.8 / ForceWare 15.08 / ForceWare 9.64
Memory size: 4GB (4 DIMMs) in each system
Memory type: 2 x Corsair TWIN2X20488500C5D DDR2 SDRAM at 800MHz
Memory timings: CAS latency (CL) 4, RAS to CAS delay (tRCD) 4, RAS precharge (tRP) 4, cycle time (tRAS) 18, command rate 2T
Audio: integrated ICH9R/ALC889A / nForce 680i SLI/ALC850 / nForce 780i SLI/ALC885, all with RealTek 6.0.1.5497 drivers

Graphics:
Diamond Radeon HD 3850 512MB PCIe (single and dual) with Catalyst 8.2 drivers
Radeon HD 3870 512MB PCIe (single and dual) with Catalyst 8.2 drivers
Radeon HD 3870 X2 1GB PCIe with Catalyst 8.2 drivers
Dual Radeon HD 3870 X2 1GB PCIe with Catalyst 8.3 drivers
Radeon HD 3870 X2 1GB PCIe + Radeon HD 3870 512MB PCIe with Catalyst 8.3 drivers
Palit GeForce 9600 GT 512MB PCIe (single and dual) with ForceWare 174.12 drivers
GeForce 8800 GT 512MB PCIe (single and dual) with ForceWare 169.28 drivers
EVGA GeForce 8800 GTS 512MB PCIe with ForceWare 169.28 drivers
GeForce 8800 Ultra 768MB PCIe (single, dual, and triple) with ForceWare 169.28 drivers
GeForce 9800 GTX 512MB PCIe (single, dual, and triple) with ForceWare 174.53 drivers
GeForce 9800 GX2 1GB PCIe (single and dual) with ForceWare 174.53 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x86 Edition
OS updates: KB936710, KB938194, KB938979, KB940105, KB945149, DirectX November 2007 Update

Please note that we tested the single and dual-GPU Radeon configs with the Catalyst 8.2 drivers, simply because we didn’t have enough time to re-test everything with Cat 8.3. The one exception is Crysis, where we tested single- and dual-GPU Radeons with AMD’s 8.451-2-080123a drivers, which include many of the same application-specific tweaks that the final Catalyst 8.3 drivers do.

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you’d get with no-name DIMMs.

Most of our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. The three- and four-way SLI systems required a larger PSU, so we used a PC Power & Cooling Turbo-Cool 1200 for those systems. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

A starting point: Fill rate

Since we’re interested in GPU horsepower, a quick peek at our results for 3DMark06’s multitextured fill rate test seems in order. The number of texels (a.k.a. texture elements, or multi-textured pixels) a given GPU can crank out per second is a long-standing key performance metric. Shader power is becoming more important, but it’s more difficult to test well; we’re not convinced any one of 3DMark06’s shader tests is a good overall measure of shader power. This quick multitextured fill rate benchmark should give us a decent indication of each configuration’s texture throughput.

These numbers aren’t always good predictors of actual in-game performance, but they do give us a sense of this one important variable. The GeForce 8800 GT lands at the top of our texel throughput value rankings in both its single and dual-card configurations.

The distribution in the scatter plot highlights some realities about current GPU architectures. AMD’s RV670 GPU powers all current Radeons, and its texture filtering capacity is relatively modest. As a result, the red points on the plot don’t extend too far up the Y axis, even in three- and four-GPU configurations, where they’re reaching beyond $500. On the Nvidia side of things, the newer G92 GPU packs quite a bit more texture filtering punch than the older G80 chip in the GeForce 8800 Ultra. The three- and four-way G92 configs are just sick, although you’ll pay dearly for them. The GeForce 9600 GT, meanwhile, is based on the G94 GPU, which has roughly half the texture filtering and shader power of the G92. That blunts some of this card’s otherwise-potent value appeal.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. We enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

Nvidia’s GeForce 9600 GT and GeForce 8800 GT reign supreme in our performance-per-dollar rankings for Call of Duty 4, and our scatter plot shows they’re far and away better deals than anything in the same price range.

Interestingly, the plot also shows an almost-linear progression of prices and frame rates, and we don’t really see a cut-off point after which prices spike and performance stagnates like on our CPU charts.

Looking past the GeForce 9600 GT and 8800 GT, AMD’s Radeon HD 3850 and 3870 CrossFire configurations seem to offer the best performance in the $300-400 range, followed by Nvidia’s GeForce 9800 GX2 in the $500-600 range. We should note that the SLI configurations with the GeForce 9600 GT and 8800 GT perform abnormally poorly at Call of Duty 4 at this particular resolution, a problem Nvidia could solve in future drivers. If that happens, we’d wager that dual 9600 GTs or 8800 GTs would be a better value proposition than the 3850 and 3870 CrossFire offerings—and possibly than the GeForce 9800 GX2.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadows, soft particles, and smooth foliage were enabled. Again, we used a custom timedemo.

We’ve excluded the three- and four-way CrossFire X configs here since they don’t support OpenGL-based games like this one.

The GeForce 9600 GT and 8800 GT SLI configurations don’t run into any roadblocks in Quake Wars. The 9600 GT SLI setup is second in our performance-per-dollar chart, and our scatter plot confirms its attractive positioning. For about the same price as the GeForce 9800 GTX, the 9600 GT SLI config yields a 50% higher frame rate, crushing rival AMD CrossFire setups in the process.

Unlike in Call of Duty 4, we actually have something of a value cut-off point here. Beyond the 9600 GT SLI setup, you have to pay almost twice as much for a relatively minor frame rate gain (with the GeForce 9800 GX2 and two-way GeForce 9800 GTX), and almost three times as much for a significant one (with three-way 9800 GTXs).

Of course, running an SLI setup has noteworthy downsides, primarily in terms of space taken, power consumed, and noise produced. Those looking for the best single-GPU value can’t go wrong with either a GeForce 9600 GT or a GeForce 8800 GT. The latter’s greater shader power might make it more worthwhile in future games, but we’re not seeing a large gap between the two even at these very stressful settings.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, plus 4X antialiasing and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

Our results in Half-Life 2: Episode Two mirror those from Quake Wars. The GeForce 9600 GT, 9600 GT SLI, and 8800 GT take the top three spots in the performance-per-dollar chart, and our scatter plot shows dual GeForce 9600 GTs in a very favorable spot.

In the single-GPU lane, the GeForce 9600 GT and 8800 GT take the cake again. The pricier single-GPU offerings don’t compare well when you can get a much faster SLI config for roughly the same price.

Crysis

We tested Crysis in the “Recovery” level, early in the game, using our standard FRAPS testing procedure (five sessions of 60 seconds each). The area where we tested included some forest, a village, a roadside, and some water—a good mix of the game’s usual environments.

Because FRAPS testing is time-consuming and Crysis has much higher system requirements than other titles, we tested lower-end cards at 1680 x 1050 and higher-end models at 1920 x 1200. You’ll find G92 SLI and CrossFire X configs in both sets of results.

At 1680 x 1050, the GeForce 9600 GT and Radeon HD 3870 are almost tied on the value scale in Crysis. That said, our scatter plot makes the GeForce 8800 GT look better than either, considering its higher performance and only marginally higher price. Moving up to the $300-400 range, the 9600 GT SLI setup looks like the best overall deal yet again—a perhaps surprising result, considering Crysis’ apparently heavy use of shaders.

At 1920 x 1200, we tested only two single-GPU offerings: the GeForce 9800 GTX and the GeForce 8800 Ultra. The 9800 GTX is hands-down the best overall deal according to our performance-per-dollar chart, with the next step up—Nvidia’s own GeForce 9800 GX2—costing almost twice as much for a 27% increase in performance. We didn’t have the opportunity to test 9600 GT and 8800 GT SLI setups at this resolution, though, so it’s difficult to draw a definite conclusion.

Unreal Tournament 3

We tested UT3 by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.

Because UT3 doesn’t natively support multisampled antialiasing, we tested without AA. Instead, we turned up the game’s quality sliders to the max and disabled the game’s frame rate cap.

Epic’s latest multiplayer shooter paints a picture similar to Quake Wars and Half-Life 2: Episode Two, in that the GeForce 9600 GT and GeForce 9600 GT SLI setups are the two best-positioned offerings, both according to our value chart and our scatter plot. We should nonetheless point out that the Radeon HD 3870 CrossFire config sits close to the dual 9600 GTs.

Above the $300-400 range, the next step up looks to be the three-way CrossFire X configuration with the Radeon HD 3870 X2 and Radeon HD 3870. This dual-card, three-GPU setup produces the highest frame rate we’ve seen in UT3 so far, and it costs almost the same as a much slower, single-GPU GeForce 8800 Ultra.

Average performance

To make our data a little easier to digest, we’ve rolled numbers from Call of Duty 4, Enemy Territory: Quake Wars, Half-Life 2: Episode Two, and Unreal Tournament 3 into a single, “average” set of results. Since CrossFire X doesn’t yet support OpenGL games like Quake Wars, we’ve left the three- and four-way Radeon configs out of this comparison. We’ve also omitted Crysis since we didn’t test all cards at a single resolution in it.
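Computing such a composite is straightforward. Here's a minimal sketch, assuming a straight arithmetic mean of per-game average frame rates; the numbers below are placeholders rather than our measured results:

    from statistics import mean

    # card: per-game average fps (illustrative numbers only)
    results = {
        "GeForce 9600 GT": {"CoD4": 42.0, "ETQW": 38.0, "HL2 Ep2": 55.0, "UT3": 60.0},
        "GeForce 8800 GT": {"CoD4": 48.0, "ETQW": 44.0, "HL2 Ep2": 62.0, "UT3": 68.0},
    }

    for card, per_game in results.items():
        print(f"{card}: {mean(per_game.values()):.1f} fps average")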

Boiling down our previous results into one graph and one plot clarifies things for us nicely. As we’ve been saying all along, the GeForce 9600 GT and the GeForce 8800 GT are the best-positioned single-GPU offerings, and the GeForce 9600 GT SLI setup is the next best step up—even with its lackluster numbers from our Quake Wars test lowering its average slightly. Among the Radeons, the HD 3850 CrossFire solution deserves particular distinction; its performance nearly matches that of the Radeon HD 3870 X2, at a much lower price.

Overall, the almost linear relationship we initially noted between price and performance remains largely intact, with some notable exceptions at the high end of the market. Any config involving a GeForce 8800 Ultra is a relatively poor value with the advent of newer G92-based cards. Things are a little rosier for the GeForce 9800 GTX-based three-way SLI, whose outstanding performance almost (well, kinda) tracks with its outrageous price. Meanwhile, the 9800 GX2-based quad SLI setup has the unique one-two combo of being slower than the 9800 GTX three-way rig while costing considerably more.

Conclusions

Over the past few pages, we’ve seen three things. The first is that scatter plots are really cool. Seriously, just look at ’em. Organizing this type of data could hardly get any better.

Our second observation is that two GeForces, the 9600 GT and 8800 GT, easily offer the best overall value in the single-GPU world. The 9600 GT is more attractive now, since it offers performance almost equal to the 8800 GT for a few bucks less. However, we suspect the GeForce 8800 GT could become more interesting in the months to come, should games start harnessing more of its additional shader processors and texture filtering capacity.

Finally, perhaps our most unexpected observation is that multi-GPU setups have the potential to deliver solid value. Mid-range cards like the GeForce 9600 GT and Radeon HD 3850 offer strong value propositions, and that effect is multiplied by pairing two of them together. Two 9600 GTs can be faster than a single GeForce 8800 Ultra, despite the fact that they cost substantially less. Similarly, two Radeon HD 3850s are a better deal than a single Radeon HD 3870 X2, if your motherboard can accommodate them. Of course, SLI and CrossFire bring with them a whole stable of caveats involving chipset compatibility, multi-monitor support, and the need for driver profiles. High-end multi-GPU configs can add additional expense in the form of higher PSU and cooling requirements, as well. But with both AMD and Nvidia now offering high-end cards with dual GPUs onboard, multi-GPU looks like it’s here to stay.

Comments closed
    • mattthemuppet
    • 11 years ago

    cool article, though I was a little surprised after the value angle title to find no “value” orientated cards like the 3650 series (no point including the 8600GT though, everyone knows how crap that was). I’d find it pretty interesting to see how playable, and with what detail, lower end cards are at the resolutions the majority of people play at.

    I understand the arguments behind testing at crazy resolutions, but all that left me with was the vague understanding that any of these cards will be fine at up to 1650×1050, rather than which one is the best value for me. I also don’t see much merit to the future proofing argument, especially for graphics cards. Surely it’d make more sense to buy a mid-range card every year for 2-3 yrs than to blow a similar amount on a high end card and hope that you’ve somehow guessed right about future game requirements, so that the high-end card will last longer than a year and a half?

    Still that’s just me 🙂

      • Jambe
      • 11 years ago

      All cards ran well at 1680×1050, so the lowest-priced would be the best value for anybody using a resolution that size or smaller. For higher resolutions, any of the multi-GPU systems in the top-left quadrant of the graph on page 1 would represent a good value; and again, the cheapest one that fits a budget represents the best value.

    • sigher
    • 11 years ago

    I wish techsites developed some standard to more clearly differentiate multi SLI/CF card set-ups in listings, it’s so annoying how they just intersperse them and don’t even give them another color in graphs and you painfully have to stare and try to mentally get a picture of it all.
    Oh well, maybe in another 100000 years eh.

      • indeego
      • 11 years ago

      Many of us just skip to the conclusions, reducing the hits on the site.

      Oh well. Eventually they’ll notice. <.<

    • sdack
    • 11 years ago

    Where are the 9600GT X2 cards? Will they ever become available?

    Such a card will change the entire market and I want one!

      • Forge
      • 11 years ago

      Never. The 9800GX2 is as low end as Nvidia is likely to take dual GPU single slot solutions. A 9600GX2 would overlap the 9800GTX on price and devalue it. Nvidia does not want to cannibalize the 9800GTX that way.

      I’m guessing you’re one of those few people that bought a non-SLI mobo and now regret it, since you’re not just grabbing two 9600GTs and dropping the SLI bridge on. That’s a bummer for you.

      I think it’s Asus that keeps demoing weird dual/triple GPU cards but then never putting them into full production. Go tell them.

    • dragmor
    • 11 years ago

    My attempt at the cost of power for the cards
    https://techreport.com/forums/viewtopic.php?f=3&t=59023

      • flip-mode
      • 11 years ago

      Good stuff, thanks.

    • mczak
    • 11 years ago

    No 256MB 3850? Would have loved to see this. These cards can be had dirt cheap (like $110), way cheaper than the cheapest 256MB 8800GT or 384MB 8800GS, and the ATI HD series doesn’t suffer nearly as badly as the current GeForce cards under memory pressure (which usually rules out the 256MB 8800GT card as a serious contender). Should offer excellent value.

      • BoBzeBuilder
      • 11 years ago

      Given you can find 512MB 3850 for less than $100 if you look hard enough, there’s little point in reviewing a 256 version.

        • gerryg
        • 11 years ago

        Actually, at the resolutions they’re testing with, a 256MB card would limit the GPU severely, if it could even run the test at all.

          • mczak
          • 11 years ago

          You’ve got some point. Even though the HD2/3x series has way less of a problem than the GF8/9 series when it runs out of video RAM, there’s a limit to this. Since all tests except Crysis (which uses a lot of vid mem anyway) use the super-high-end resolution of 2560×1600 with 4xAA (not everyone has a 30″ monitor…), it probably would suffer quite a bit.

          • SPOOFE
          • 11 years ago

          But the resolutions they tested at were to ensure that they were GPU bound, not necessarily to show off capabilities.

        • mczak
        • 11 years ago

          Maybe with mail-in rebates, but going by “normal” offers, the cheapest 256MB HD3850s are quite a bit cheaper than the cheapest 512MB offers (or do you have a counterexample?)

          • BoBzeBuilder
          • 11 years ago

          Um, no I don’t. But still, I’d say 512MB is worth the extra few bucks since 256 can become a serious bottleneck in certain situations.

    • Firestarter
    • 11 years ago

    I’d love to see the final averaged graphs to be updated with the real time prices of the video cards, by taking the average of the cheapest stock clocked card available at several retailers. I know this would be non-trivial to implement and would probably require some regular maintenance, but it would add tremendously to the value of this article for the coming few months.

    As it stands now, it’s a snapshot. A damn informative one, but a snapshot still.

      • gerryg
      • 11 years ago

      Ditto, that would be interesting, even if it’s just something that’s run as a batch job every week, so “near real-time” is achieved. Rather than doing it for all the charts, I would take 3 games and average the average FPS to make a composite score. I’d recommend Crysis, HL: Ep2, and UT3 since the first is a monster and 2nd two are roughly representative of lots of other games using their engine. One chart w/bang-for-the-buck, updated every week for price, would still be better than what anyone else does, and real-timey enough to work. Could post it with the deal of the week post.

    • kvndoom
    • 11 years ago

    Stuff like this makes me love my 8800GT even more than before. I’m gonna marry that card someday.

      • sp1nz
      • 11 years ago

      Agreed. The 8800GT continues to impress me as well.

    • sp1nz
    • 11 years ago

    Quick question… Since readers replied to a hardware survey back in Feb. 2008 and the majority reported a monitor size between 17 and 22 inches… Why are we still seeing test resolutions of 2560×1600 when a resolution closer to your average readership would be 1680×1050? Just asking…

      • Cyril
      • 11 years ago

      From the article:

      For this article, we wanted to focus more closely on the question of GPU power per dollar. To make that work, we’ve relied on test data from (as much as possible) clearly GPU-limited scenarios—performance results obtained at very high resolutions (most often 2560 x 1600) with some level of antialiasing and anisotropic filtering enabled. That way, we can highlight questions of performance scaling. We can see how the cut-down shader power and memory capacities of the less expensive cards impact performance and observe how much multi-GPU solutions can distance themselves from their single-GPU brethren.

      Obviously, concentrating on GPU-limited scenarios like these sidesteps the question of “playable frame rates” in today’s games, as we’ve noted. Vast differences in GPU power may not be readily apparent at 1680×1050 in currently popular titles, but they may become very important in the future as game developers ratchet up the shader effects and scene complexity, packing more richness into each pixel. Our hope is that we can help you make a more informed evaluation of the value proposition, one that considers how your video card might serve you over its entire lifespan.

        • sp1nz
        • 11 years ago

        I understand the reasoning you’ve presented (and RTFA), and I mostly agree with it, but I would still propose the inclusion of a baseline performance calculation to give people a solid understanding of the case you are trying to present… Namely that of value. I just think you need to qualify everything with an appreciable baseline, that’s all. I feel that you’ve left some important information out, information that other comments have addressed more elegantly than I. (#13,#24)

        Don’t get me wrong, I really appreciate the work you guys do. These types of articles are REALLY difficult to write. I’m just trying to offer a suggestion to make your review more valuable 🙂

    • bdwilcox
    • 11 years ago

    Where’s the Voodoo 3000? I think it presents the best value on the market now. Could you please include it next time?

    • Silus
    • 11 years ago

    Great article guys! IMO, these are the types of articles that interest people the most, along with system recommendation guides.

    • willyolio
    • 11 years ago

    very nice. i’d just like to suggest one more thing to add to the graphs- a trendline. having a line of best fit for all these models would certainly help people see the “average” amount of additional performance they should expect per extra dollar, as well as how far above the trend the best value cards are.

    • matnath1
    • 11 years ago

    Is it just me or are these sentences a little difficult to understand?

    “Last time we attempted to quantify the value propositions of a large cross-section of competing products, we concentrated on microprocessors. ”

    Shouldn’t it read THE last time we attempted…

    And this needs a re-write. I had to read it three times!

    “This is an iffy exercise, this attempt to quantify a value proposition that often involves considerations one can’t easily boil down to numbers. We have no illusions that this little thought experiment will yield the only or best information you need in order to decide which graphics card to buy”.

    I just read it again and still don’t get what you guys are trying to say???

      • sluggo
      • 11 years ago

      Leaving out the “The” reflects a style of writing that is closer to the spoken word. There’s nothing grammatically incorrect about it, and the style is in keeping with the rest of the article.

      The other sentences make perfect sense to me. The authors are simply saying that there’s no one perfect way to do a comparative evaluation of this many cards, many of which are aimed at different markets.

      Writing about technology is not easy, but Cyril and Scott have this stuff wired. Easy to read, enjoyable and informative. Hat trick!

        • matnath1
        • 11 years ago

        Agreed….Maybe I shouldn’t try to read this stuff at 1 a.m.

    • marvelous
    • 11 years ago

    I wish Techreport had stuck an 8800GS into the mix. This card is by far the cheapest of the bunch and performs faster than the 3850 and on equal terms with the 9600GT, minus AA performance. It would easily be neck and neck with the 9600GT in price/performance per $$$.

    • Vanakov
    • 11 years ago

    Any reason why the 8800GTS 512MB wasn’t included in the SLI results? It looks to perform the same as the 9800GTX but costs less, so surely it would be better value than the 8800GT in this case?

    • reactorfuel
    • 11 years ago

    Interesting stuff, but I wonder about how the statistics play out a bit. Frames per second per ten dollars is a decent metric, but it doesn’t take some critical stuff into account.

    For instance, while I understand that testing at high resolutions is important to discern the difference between graphics cards, it’s still not as useful from a value standpoint. Somebody who’s shelled out for a 30″ monitor to game on probably doesn’t care as much about value as someone with a more pedestrian 20″ or 22″ model. It’d be interesting to see how the performance-per-dollar stuff changes when lower resolutions are put in play.

    Also, pure framerate per ten dollars can be somewhat misleading. Imagine a $20 video card that offers an average of 7 fps across the games tested (not that it exists, of course – this is a thought experiment). It would absolutely dominate the performance-per-dollar ranking, but for people who wanted to actually play games, a $150 card capable of sustaining playable framerates would be superior. Alternately, a $2,000 setup capable of 500 fps in everything might look pretty good, but for the majority of gamers, a less expensive card that still offered playable framerates would probably be a better buy. It’d be interesting to see some weighting, so 0-30 fps is vital, 30-60 fps contributes a significant amount, and over 60 falls off rapidly. Defining the weighting function would of course be another source of argument, and coming up with a good one would be a lot of work – but it’s interesting to think about.
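    For illustration, here’s one possible shape for such a weighting function (the thresholds and slopes below are entirely hypothetical):

        # A 0-1 "usefulness" score for an average frame rate; the
        # breakpoints here are invented purely for illustration.
        def playability_weight(fps: float) -> float:
            if fps <= 30:
                return 0.6 * fps / 30                 # 0-30 fps: vital territory
            if fps <= 60:
                return 0.6 + 0.35 * (fps - 30) / 30   # 30-60 fps: still significant
            return min(1.0, 0.95 + 0.05 * (fps - 60) / 60)  # >60: diminishing fast

        # A weighted value metric could then be playability_weight(fps) / price.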

      • flip-mode
      • 11 years ago

      I think it is something of a moot point considering the resolutions and settings they used – it’s not 8600GT territory. Besides that, they didn’t test some super cheapo card. Furthermore, since the frame rates are actually shown and it’s not just a frame-rate-per-dollar scorecard, all the information needed to make an informed decision is provided.

        • Peldor
        • 11 years ago

          • derFunkenstein
          • 11 years ago

          1680×1050 would have been a more realistic set of tests, but they would have come out largely flat – none of those cards are really limited at that resolution.

          • flip-mode
          • 11 years ago

          No, it’s still moot because he made reference to low-end cards, for one; and for another, all the data he wanted was already available in the test. He either wasn’t paying attention or was too lazy. That’s just my interpretation.

      • Anomymous Gerbil
      • 11 years ago

      Huh? The data is all in very easy-to-read charts, so you can take out of it whatever you want – price, performance, or both.

        • reactorfuel
        • 11 years ago

        Well, first of all, the data aren’t necessarily what would be useful to the average, value-oriented enthusiast. Testing video cards at very high resolutions and with a QX processor is great if you want to find the video cards’ limits. It’s less useful if you’re building a system with, say, an E8400 and a 20-22″ monitor and are trying to find out what’s the best value for you.

        As for the comments about weighting, it’s certainly possible to look at the scatter plots and figure things out for yourself. However, in the eternal quest to boil down “what is the best video card?” to a single number, a weighted function would be very useful. Playability at reasonable resolutions is mandatory; a card that can’t deliver that isn’t worth the price, even if it’s very cheap. Anything over 60 fps is wasted for anybody playing on an LCD (which is to say, most people these days), although some headroom is nice to ensure smooth action when things get hairy and the framerate dips.

        Ultimately, I really liked the article. Determining the best value in today’s market is a very worthy and sometimes confusing goal, the methodology was solid, and the results were presented pretty well. I’m not trying to point fingers and say that it sucks; I’m trying to look at ways it might be even better.

          • Usacomp2k3
          • 11 years ago

          He said in the article that they also tested at high resolutions because that would help simulate what would happen with future games.

    • DrDillyBar
    • 11 years ago

    Very interesting. Well done. I’m fairly pleased with how my video card plays out, considering the state the market was in when I last upgraded.
    Any way you can take power consumption into consideration in future versions of this without borking the system you have? (Maybe track each card’s individual power consumption times the number of cards, etc.)
    Edit: I should read the posts THEN post myself. *roll*

    • oldDummy
    • 11 years ago

    It’s been a long time since I’ve used graphs for anything constructive.
    But the Average performance graph shows a gap between the two “hero” SLI setups and the 9800 GX2 without a large jump along the X axis.
    How could that be read?
    In addition, as stated in the article, the platforms these cards were run on differ. I would like to see the delta between the nvidia chipsets and various Intel chipset motherboards running the GX2. I don’t know if that is what this data shows or not.

    • VILLAIN_xx
    • 11 years ago

    I know people hate this stuff, but some cheapskates may like this.

    $99 after mir.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814131096

    • vdreadz
    • 11 years ago

    Nice compilation of data for us readers although some readers would have liked to see other types of data added to the mix as well.

    • flip-mode
    • 11 years ago

    The conclusion mimics a statement I made in Fox’s “SLI debate” thread – the 9600GT and the HD3850 are the two configs that make multi-gpu worth it. It was pretty obvious looking at the most recent benches alongside some online prices. I’m sensing that it’s a pretty consistent trend with multi-gpu that there are really only one or two products on offer that you can team together and get some real value. Teaming two (or more) high end products together simply costs way too much, and teaming two low end products together is just a doubling of disappointing performance.

    Edit: and BTW, these value articles are just about the best things you (TR) guys can do in terms of what’s in the best interests of your readers. I hope they become a staple item.

      • vdreadz
      • 11 years ago

      I second what you say as well…. Got to give the TR team credit!

        • no51
        • 11 years ago

        Agreed! Hooray for the TR team!

      • ssidbroadcast
      • 11 years ago

      flip-mode is on point.

    • dragmor
    • 11 years ago

    Nice article.

    Since you’re basing this on the value angle, you could have included running costs. Use the idle and load power ratings and apply a formula similar to the following: the PC is on 8 hours a day, 80% of the time at GPU idle, the other 20% at GPU load (for a heavy gamer).

    Then add the cost of the power consumed by the cards in 1 year at whatever your local rates are.
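    Something like this, say (the wattages and electricity rate below are made-up examples):

        HOURS_PER_DAY = 8
        IDLE_SHARE, LOAD_SHARE = 0.8, 0.2
        RATE_PER_KWH = 0.10   # dollars; plug in your local rate

        def yearly_power_cost(idle_watts: float, load_watts: float) -> float:
            # Average draw weighted by idle/load time, converted to kWh per day.
            daily_kwh = (idle_watts * IDLE_SHARE
                         + load_watts * LOAD_SHARE) * HOURS_PER_DAY / 1000
            return daily_kwh * 365 * RATE_PER_KWH

        # e.g., a card idling at 60W and drawing 150W under load:
        print(f"${yearly_power_cost(60, 150):.2f} per year")  # about $22.78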

      • continuum
      • 11 years ago

      Running costs would definitely be interesting, if complicated… I’m not sure how feasible it’d be to add.

      I was tempted to go 8800GT 512MB or 8800GTS 512MB as I think the extra shader power will be useful in the near-future, but the 9600GT looks awfully good.

      • shank15217
      • 11 years ago

      That’s a slightly harder analysis: where are you getting your usage numbers from? You can extrapolate this information from the idle and load power of these cards; Techreport does publish this data. Finally, how do you think it would change the outcome of the selection? Cards that draw more power overall will have the highest cost of ownership? You can do this analysis yourself with the data from this site.

        • dragmor
        • 11 years ago

        I’ll give it a go myself tonight if no one else has done it.

        My guess is that the ATI cards will increase in value due to lower idle power. Of course, if you take into account Nvidia’s new motherboards, then the 9800GTX and GX2 cards will draw no power at idle, so it would give them a little better value as well.
