Can a sub-$100 graphics card get the job done?

We have not, I must admit, been great fans of sub-$100 graphics cards here at TR. Yes, we’ve reviewed them pretty faithfully over the years, but as I said in our review of the Radeon HD 2400 series: “If you’re spending less than [$100] on graphics, you’re getting the sort of performance you deserve, regardless of which brand of GPU you pick.” In other words, cheaping out will get you lousy frame rates and spotty gameplay. It wasn’t just spite to say so; it was definitely true.

But what if you could cheap out and get more than you deserve for it? What if, through the magic of technological progress, dropping 80 bucks on a video card could get you a GPU that will slice through the latest games with relative ease? What if it could help decode HD video streams perfectly, even on a slow CPU? If such a beast existed, should you consider spending more, or would it just be future-proofing and fluff?

Exhibit A in our quest for knowledge is the brand-spanking-new Radeon HD 4670 graphics card, which threatens to turn our assessment of the market on its ear. The 4670 inherits its GPU DNA from the Radeon HD 4800 series, which crams a tremendous amount of graphics power into a relatively small chip. AMD has scaled down that same basic design to fit into a budget-class GPU, and in doing so, it has brought unprecedented levels of graphics power to penny-pinchers everywhere. To put the 4670’s graphics power into perspective, this $79 card has twice the shader power and three times the texturing capacity of the most capable game console, the Xbox 360 (assuming the spotty info on game consoles I found out there is correct). If you’re more of a PC-oriented person, consider this: the 4670 has roughly equivalent shader power and over twice the texturing capacity of the Radeon HD 2900 XT, the first DirectX 10-class Radeon, a high-end card that debuted at under $400 some 18 months ago. And the 4670’s architecture is arguably more efficient.

There are mitigating factors here, of course. The biggest one is the 4670’s relatively anemic memory bandwidth, which is under a third of the 2900 XT’s. But the trends are favorable for cheapskates, for a variety of reasons. Better compression, smarter caching, and the proliferation of programmable shaders may mean memory bandwidth is at less of a premium, for instance. Not only that, but AMD’s competition over at Nvidia has responded to the 4670 by adding another cheap video card to its portfolio. The affordable entries in these two firms’ product lineups stretch from about 60 bucks to 170 bucks, with multiple increments in between. Even if the dirtiest of dirt-cheap video cards won’t cut the mustard, surely something in there will. The question is: how little can you get away with spending? Let’s take a look.

The Radeon HD 4670 steps up

The Radeon HD 4670 doesn’t look like much at first glance. In fact, it looks like pretty much any other low-end graphics card.

However, fitting that profile isn’t a bad thing at all, really. A card like this one will easily go into just about any PC, maybe even that cheapo HP that you picked up at Costco without realizing its built-in graphics sucked harder than a Dyson. The board itself is just over 6.5 inches long, and it’s content to draw power solely from the PCI Express slot—no auxiliary power connection needed. AMD rates the 4670’s peak power draw rather vaguely at “under 75W,” which is the max a PCIe slot can supply but still not terribly much. Even most cheap power supplies should be able to keep this puppy fed.

Lurking beneath that modest cooler is an RV730 GPU. If you’ll permit me to geek out a little bit, I’ll give you its specs. Like its big brother RV770, the RV730 is a DirectX 10.1-capable graphics processor with a unified shader architecture and a full suite of modern features. The RV730 chip is quite a bit smaller than its older sibling, though. Manufactured by TSMC on a 55nm process node, the RV730 has an estimated 514 million transistors stuffed into an area of 145 mm². In the RV730, AMD has cut down the RV770 design by halving the number of shader execution units per SIMD partition from 16 to eight and by reducing the number of SIMD partitions from 10 to eight. What’s left are 64 superscalar execution units, each of which has five ALUs. Multiply that out, and you have 320 ALUs or stream processors (SPs), as AMD likes to call them.

As I’ve said, that’s quite a bit of shader power, with just as many SPs as the Radeon HD 2900 XT, though the RV730’s SPs should be more efficient and have a few new capabilities, including DirectX 10.1 support. AMD has made one concession to the RV730’s budget status by removing native hardware support for double-precision floating-point math, a feature really only used by non-graphics applications tapping into AMD’s stream computing initiative. The rest of the compute and data sharing provisions built into the RV770 remain intact in the RV730, though.

The outlook gets even rosier for the Radeon HD 4670 when we consider texturing capacity, a weakness for prior Radeons but a strength here. Because this architecture aligns texture units with SIMD partitions, the RV730 has eight texture units, each of which is capable of sampling and filtering four texels per clock. That’s 32 texels per clock from a low-end GPU, not far at all from the 40 texels/clock of the Radeon HD 4850 and 4870.

The RV730 has only two render back-ends, each of which can write four pixels per clock to the frame buffer. Yet those render back-ends are quite a bit more capable than the ones in the Radeon HD 2000 and 3000 series, with twice the throughput for multisampled antialiasing, 64-bit color formats, and depth/stencil only rendering. In practical terms, the RV730 should be even more capable, relatively speaking, since the render back-ends in those older Radeon HD 2000- and 3000-series GPUs had to rely on shaders to help with some of the antialiasing work. The RV730 does not. The two render back-ends each sit next to a 64-bit memory controller, giving the RV730 an aggregate 128-bit path to memory. That’s half what you’ll get in a $149 video card, but twice what you might expect from a $79 one.
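
If it helps to see all of that arithmetic in one place, here is a quick back-of-the-envelope summary in Python. It simply restates the unit counts described in the preceding paragraphs; nothing here comes from an official AMD block diagram.

    # Back-of-the-envelope summary of the RV730's unit counts, restating the
    # figures described above (illustrative, not an official breakdown).
    simds = 8               # SIMD partitions (the RV770 has 10)
    units_per_simd = 8      # superscalar execution units per SIMD (RV770: 16)
    alus_per_unit = 5       # ALUs, or "stream processors," per execution unit
    texture_units = 8       # one four-texel sampler block per SIMD partition
    render_backends = 2     # each writes 4 pixels/clock, paired with a 64-bit memory controller

    stream_processors = simds * units_per_simd * alus_per_unit   # 320
    texels_per_clock = texture_units * 4                         # 32
    pixels_per_clock = render_backends * 4                       # 8
    memory_bus_width_bits = render_backends * 64                 # 128

    print(stream_processors, texels_per_clock, pixels_per_clock, memory_bus_width_bits)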

Wow, so I really geeked out there. Sorry about that.

Back on planet Earth, the Radeon HD 4670 will come in two versions. Both will have a 750MHz GPU and shader core, but they’ll differ in memory size and speed. The first version will have 512MB of GDDR3 memory clocked at 1GHz, for an effective 2GT/s. This is the version we have in Damage Labs for testing, and it’s probably the more sensible of the two. The second will have a full gigabyte of GDDR3 memory at a lower 900MHz clock and 1.8GT/s data rate. Either one should set you back just a penny shy of 80 bucks, according to AMD, and indeed, there’s a 512MB MSI card selling for exactly that price at Newegg right now.

(I guess, technically, I should say it’s a “512 MiB” card, but I’d rather claw my eye out with a fork.)

The cards come with a couple of dual-link DVI outputs and a TV-out port. Our sample came with a dongle to convert the TV-out port to component video and another to convert a DVI port to HDMI.

The other Radeons in the race

If $79.99 is too rich for your blood, you may be interested in another RV730-derived graphics card, the Radeon HD 4650. We had hoped to include one in this comparison and tried to make arrangements to receive one for testing, but unfortunately, it hasn’t made it here yet. Not that it likely matters much. The 4650 is much less powerful than the 4670 due to lower core (600MHz) and memory (500MHz GDDR2) clock speeds. It will save you ten bucks, though, since cards are available for $69.99. The 4650’s main appeal to PC enthusiasts may be for use in home theater PCs, since this card draws under 60W max and some versions may offer support for DisplayPort audio, which is a new feature in the RV730. Of course, Radeon HD cards have long supported audio over HDMI, as well.

Say you’re willing to really open up the piggy bank and move upmarket a little bit in order to get more performance than the 4600 series can offer. What’s next? Well, there are quite a few slightly older video cards hanging around in the market that once cost nearly 200 bucks but now hover not far from $100. These cards are last year’s stars, but today they’re selling at a generous discount.

A perfect example of such a beast is this Radeon HD 3850 from Diamond. Although the 3850 started out as a cheaper alternative to the Radeon HD 3870, the two products have essentially merged over time. 3850s added larger, dual-slot coolers, moved to 512MB of GDDR3 memory, and gained 3870-like clock speeds. Meanwhile, many 3870s have shed their expensive GDDR4 memory and reverted to cheaper GDDR3 memory, which is faster clock for clock. This Diamond card is a pretty good example of the breed overall, with a 725MHz GPU clock and 512MB of GDDR3 memory clocked at 900MHz. It’s selling for $119.99 at Newegg, along with a $20 mail-in rebate, taking the net price down to either $99.99 or $119.99 plus 20 bucks’ worth of shattered dreams, depending on the whims of the rebate gods.

The 3850 may be a little bit older, but the premium you pay for this card over the 4670 will net you a higher-end product with a 256-bit memory bus. In fact, the 3850 has nearly twice the pixel fill rate and memory bandwidth of the 4670. Unlike the 4670, you will need an auxiliary PCIe power lead for this card, but its requirements are still quite modest.

We could dilly-dally around with 3870 cards that feature slightly higher GPU core and memory clocks for a little bit more money, but it’s difficult to see the point when you have AMD’s own Radeon HD 4850 hanging out there for not much more.

Take, for example, this Asus Radeon HD 4850. Not only is it a cornucopia of graphical goodness, with roughly twice the memory bandwidth and shader power of the Radeon HD 4670, but it has an historically implausible archer chick surrounded by rose petals on it. Just looking at it, I feel calm and secure, though slightly aroused. Those feelings are only heightened by its $169.99 list price and $30 mail-in rebate. If the rebate works out, we’re talking 140 bucks net for an incredibly powerful graphics card, calling into question the whole rationale for this article because, well, why buy anything else? I dunno, but perhaps Nvidia has an answer.

The GeForce side of the equation

Truth be told, Nvidia has many answers for the low-end video card market—perhaps too many. Those answers start at under $60 with the GeForce 9500 GT, most directly a competitor for the Radeon HD 4650. Our example of the 9500 GT has one big thing going for it.

Yep, that’s a passive cooler. Zotac’s ZONE edition of the 9500 GT comes completely fanless, with an HDMI adapter and no need for an auxiliary power connection. It also has 512MB of relatively quick GDDR3 memory running at 800MHz (or 1600MT/s). This is a new product we’ve not yet found for sale online, but Zotac does expect something of a premium for its passivity: the suggested retail price is $79.99. I can see paying that for something this well suited to an HTPC build. Happily for the penny pinchers among us, though, you can grab a fan-cooled Zotac 9500 GT with the same 550MHz core clock and slower GDDR2 memory for $69.99 at Newegg, along with a $15 mail-in rebate, taking the pretend price down to $54.99. That’s pretend cheap!

The GPU that powers the GeForce 9500 GT is known as the G96, and you can see it pictured above. The G96 quietly debuted along with the 9500 GT a little while back, but this is the first time it’s made it into Damage Labs. The G96 has the same basic capabilities and functional blocks as the G84 chip that first saw duty in the GeForce 8600 series of graphics cards. That includes two thread processing clusters for a total of 32 stream processors and 16 texels per clock of texture filtering capacity, along with two ROP partitions (what AMD calls render back-ends) for a total of eight pixels per clock of output and a 128-bit path to memory.

The G96 is quite a bit smaller than the G84, though, thanks to a process shrink from 80nm to 65nm. In fact, with only about 314 million transistors, the G96 is smaller and less complex than the RV730. By my shaky measurements, this chip is roughly 121 mm². I believe we have the 65nm version of the G96 here, but Nvidia plans to transition the G96 to a 55nm fab process without renaming it, so some G96s may be even smaller.

As the G9x designation suggests, the G96 does have some improvements over its G8x predecessor, including support for PCIe 2.0, better shader instruction scheduling, improved color compression, DisplayPort support, and a couple of new PureVideo features—dynamic contrast enhancement and dual-stream video decoding for picture-in-picture commentary on Blu-ray discs.

Feature-wise, then, the G96 is pretty capable. But despite its similar size and 128-bit memory interface, the G96 is a much less potent GPU than the RV730 in terms of both shader power and texture filtering throughput. That is, perhaps, why Nvidia has chosen an old product with a new name to counter the Radeon HD 4670.

Here’s the GeForce 9600 GSO. This product was previously known as the GeForce 8800 GS, but it has been dusted off and renamed as part of the GeForce 9 series. Confusingly, the 9600 GSO isn’t based on the G94 GPU that GeForce 9600 GT cards are. Instead, it’s driven by the same G92 graphics processor that’s inside everything from the GeForce 8800 GT to the GeForce 9800 GTX, only here it’s been chopped down to six thread processing clusters and three ROP partitions. The net result is 96 stream processors, 48 texels per clock of filtering power, and a 192-bit memory interface. Strange, but true.

EVGA sells the card pictured above with a 555MHz core clock, 1350MHz shaders, and 384MB of GDDR3 memory clocked at 800MHz. Nvidia would like to position this product directly opposite the Radeon HD 4670, and in terms of basic capabilities, it’s quite close. But they’re playing pricing games to get there. You can buy the 9600 GSO at Newegg for $99.99, and it comes with a preposterous $50 rebate.

Hmmmmm. That’s over half the value of the product. What do you think they’re expecting the redemption rate to be on that one?

Nowhere in our price search engine can you buy the 9600 GSO for less than $99.99 straight up, and many places are offering a rebate of only $20. All of this seems dicey to me, but if you’re willing to take the risk of getting fleeced by a rebate company, I suppose the 9600 GSO is potentially competitive with the 4670. It will, however, require case clearance for a 10″-long card and an auxiliary PCIe power connection.

If you’re going to make those sorts of accommodations, you might well do better to drop 125 bucks straight up for a GeForce 9600 GT. That’s what the BFG Tech 9600 GT OCX pictured above costs, and it comes complete with a swanky “ThermoIntelligence” custom cooler and much higher-than-stock clock speeds of 725MHz core, 1850MHz shaders, and 972MHz memory. Since it’s based on the smaller, narrower G94 GPU, the 9600 GT doesn’t have quite as much shader or texturing power as the 9600 GSO, but it has vastly more memory bandwidth.

If you’re not yet bewildered by your choices, then what the heck, gaze upon this GeForce 9800 GT card from Palit. The 9800 GT is simply a renamed version of the venerable GeForce 8800 GT, complete with a set of core and memory clocks comparable to a bone-stock 8800 GT. As the model numbers indicate, this card is a little upscale compared to the 9600 GT, both in terms of price and performance. The card you see pictured above from Palit has an exceedingly quiet, shrouded custom cooler and 1GB of GDDR3 memory. For these things, you’ll pay a little more; this card costs $169.99 at the ‘egg and has a $20 rebate attached. However, you can get virtually the same thing with 512MB of RAM and no shroud for $129.99, too, which would seem to make the 9600 GT we mentioned above pretty much superfluous.

The fine gradations of Nvidia’s product lineup don’t end there, either. The final card that fits within our basic price parameters is the GeForce 9800 GTX+. The “plus” is there to designate that this is the 55nm version of the G92 chip, and the GTX+ is intended to do battle with the formidable Radeon HD 4850. Again, rather than price the GTX+ straightforwardly against the 4850, though, Nvidia has elected to create a byzantine pricing infrastructure that would make even a phone company jealous. You can find this card listed on Newegg at $199.99. When you click to put it into your shopping cart, though, the price shows up as $189.99. If you were to buy it, you’d then be entered into the great rebate lottery, in which winners will be awarded $30 each. Potential bottom line: $159.99, which is less than the list price of that Asus Radeon HD 4850, but more than its after-rebate net cost.

Phew. So those are the cards we’re considering. We’ve thrown a lot of specs at you, but don’t be daunted. We’re going to offer some direct comparisons in terms of theoretical capacities and then test performance, so you can see exactly how these various choices compare.

Test notes

So look, we’ve done the typical review site thing and compared this range of graphics cards using a test rig based on a very high end processor, the Intel Core 2 Extreme QX9650. Why would we do such a thing? Well, hear me out.

First, we wanted to be sure that we were pushing the graphics cards as hard as possible, so they would live up to their potential. Using a top-end CPU ensures that the processor doesn’t become the performance constraint. Second, in reality, the gap between our QX9650 and the actual processors you may find in many enthusiast systems isn’t as great as you might think. We’ve easily taken a $120 Core 2 Duo E7200 to over 3GHz on a basic air cooler. Yes, we’re using a quad-core CPU, but having more than two cores doesn’t tend to make much difference in gaming performance just yet. Third, our two GPU test rigs are outfitted with identical QX9650 processors, and honestly, without matching processors, our testing time would have been quite a bit longer.

One area where having a high-end CPU could skew our results is video playback testing, where we look at CPU utilization while playing a Blu-ray disc. For those tests, we swapped in just about the slowest Core 2 processor we could find, a Core 2 Duo E4300 clocked at 1.8GHz.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core 2 Extreme QX9650 3.0GHz
System bus: 1333MHz (333MHz quad-pumped)
Motherboard: Gigabyte GA-X38-DQ6
BIOS revision: F9a
North bridge: X38 MCH
South bridge: ICH9R
Chipset drivers: INF update 8.3.1.1009, Matrix Storage Manager 7.8
Memory size: 2GB (4 DIMMs)
Memory type: Corsair TWIN2X40966400C4DHX DDR2 SDRAM at 800MHz
CAS latency (CL): 4
RAS to CAS delay (tRCD): 4
RAS precharge (tRP): 4
Cycle time (tRAS): 12
Command rate: 2T
Audio: Integrated ICH9R/ALC889A with RealTek 6.0.1.5618 drivers
Graphics cards and drivers:
  Radeon HD 4670 512MB GDDR3 PCIe with Catalyst 8.53-080805a-067874E-ATI drivers
  Diamond Radeon HD 3850 512MB PCIe with Catalyst 8.8 drivers
  Asus Radeon HD 4850 512MB PCIe with Catalyst 8.8 drivers
  Zotac GeForce 9500 GT ZONE 512MB GDDR3 PCIe with ForceWare 177.92 drivers
  EVGA GeForce 9600 GSO 384MB PCIe with ForceWare 177.92 drivers
  BFG GeForce 9600 GT OCX 512MB PCIe with ForceWare 177.92 drivers
  Palit GeForce 9800 GT 1GB PCIe with ForceWare 177.92 drivers
  GeForce 9800 GTX+ 512MB PCIe with ForceWare 177.92 drivers
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX March 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you’d get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

The theory—and practice

You’ve already heard me ramble on at length about texture filtering and memory bandwidth, and you may be wondering why. Well, specs are pretty important in graphics cards, even to this day. No, they’re not destiny—a more efficient architecture might outperform a less efficient one, even if the latter had higher peak performance in theory. In fact, it happens all the time. But constraints like memory bandwidth do tend to dictate relative performance, especially among similar products from the same GPU maker. Below, I’ve compiled some key numbers for the cards we’re testing and a few of the higher-end ones we’re not, so you can get a sense of the landscape. Please note that these numbers are based on the actual clock speeds of the cards we’re testing, not the “stock” clocks established by the GPU makers for each GPU type.

Peak theoretical rates:

                      Pixel fill     Bilinear texel    FP16 texel        Memory
                      rate           filtering rate    filtering rate    bandwidth
                      (Gpixels/s)    (Gtexels/s)       (Gtexels/s)       (GB/s)
GeForce 9500 GT          4.4             8.8               4.4             25.6
GeForce 9600 GSO         6.7            26.6              13.3             38.5
GeForce 9600 GT         11.6            23.2              11.6             62.2
GeForce 9800 GT          9.6            33.6              16.8             57.6
GeForce 9800 GTX+       11.8            47.2              23.6             70.4
GeForce 9800 GX2        19.2            76.8              38.4            128.0
GeForce GTX 260         16.1            36.9              18.4            111.9
GeForce GTX 280         19.3            48.2              24.1            141.7
Radeon HD 4650           4.8            19.2               9.6             16.0
Radeon HD 4670           6.0            24.0              12.0             32.0
Radeon HD 3850          11.6            11.6              11.6             57.6
Radeon HD 4850          10.0            25.0              12.5             63.6
Radeon HD 4870          12.0            30.0              15.0            115.2
Radeon HD 4870 X2       24.0            60.0              30.0            230.4

Notice the tremendous range we’re looking at between the cheapest video cards and the most expensive. The Radeon HD 4870 X2 has nine times the memory bandwidth of the GeForce 9500 GT—and we’re using the more expensive version of the 9500 GT with GDDR3. There is a real sense in which you get what you pay for when you buy a graphics card. The key specs do tend to track with price.
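
For the curious, each row of the table is simple arithmetic: per-clock unit counts multiplied by the card’s actual clocks. Here is a rough sketch for the Radeon HD 4670’s row, using the unit counts described earlier; the half-rate FP16 filtering figure is my assumption based on how the table scales, not a published spec.

    # Rough derivation of the Radeon HD 4670's row in the table above.
    core_clock_ghz = 0.750     # GPU and shader clock
    mem_rate_gtps = 2.0        # 1GHz GDDR3, double data rate -> 2 GT/s
    bus_width_bits = 128

    pixels_per_clock = 8       # 2 render back-ends x 4 pixels
    texels_per_clock = 32      # 8 texture units x 4 bilinear texels

    pixel_fill = pixels_per_clock * core_clock_ghz     # 6.0 Gpixels/s
    texel_rate = texels_per_clock * core_clock_ghz     # 24.0 Gtexels/s
    fp16_rate = texel_rate / 2                         # 12.0 Gtexels/s (assumed half rate)
    bandwidth = bus_width_bits / 8 * mem_rate_gtps     # 32.0 GB/s

    print(pixel_fill, texel_rate, fp16_rate, bandwidth)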

The Radeon HD 4670 is an important baseline for us because, at 80 bucks, it has over twice the texture filtering capacity of the pricier Radeon HD 3850—and its new architecture is almost assuredly more efficient, too. Heck, the 4670 has nearly as much filtering capacity as the 4850. The 3850 and 4850, though, both have quite a bit more memory bandwidth, about twice as much. I’m intrigued to see whether the 4670 can overcome that deficit. If it can, at least in part, it will signal something important about the viability of low-end graphics cards.

The 4670’s would-be competition from Nvidia, the rebate-driven GeForce 9600 GSO, boasts slightly higher capacities in every category than the 4670. That will make for interesting times. Here’s what happens when we test these things with a directed benchmark.

3DMark’s color fill rate test measures the graphics card’s ability to draw pixels, essentially, and we’ve found that this test tends to be limited by memory bandwidth more than anything else. The cards are largely true to form here, with the exception that the Radeon HD 4850 edges ahead of the GeForce 9800 GTX+.

The texture fill test is arguably more important for performance in many games, and it’s typically less limited by memory bandwidth alone. In fact, the Radeon HD 4670 outperforms both the Radeon HD 3850 and the GeForce 9600 GT in this test, despite having less memory bandwidth. Still, the 4670 can’t keep pace with the 4850; the 4850 almost doubles the 4670’s texturing throughput, just as it has about double the memory bandwidth.

Peak shader arithmetic (GFLOPS):

                      Single-issue    Dual-issue
GeForce 9500 GT            90            134
GeForce 9600 GSO          259            389
GeForce 9600 GT           237            355
GeForce 9800 GT           339            508
GeForce 9800 GTX+         470            705
GeForce 9800 GX2          768           1152
GeForce GTX 260           477            715
GeForce GTX 280           622            933
Radeon HD 4650            384
Radeon HD 4670            480
Radeon HD 3850            464
Radeon HD 4850           1000
Radeon HD 4870           1200
Radeon HD 4870 X2        2400

And no, the units in the texture fill rate test don’t seem to track with our expectations at all—they seem to be off by miles. I’ve contacted the folks at FutureMark about this problem repeatedly, and they’ve told me repeatedly that the people who might address it are on vacation. This has been going on since, oh, June-ish, so I suggest you apply for a job at FutureMark today. The benefits are excellent.

The table above shows the next piece of the GPU performance picture, and increasingly the most important one: shader processing power. We’ve split these theoretical peak numbers into two columns in order to accommodate a quirk of Nvidia’s shader processors: they can issue an additional multiply instruction in certain cases, raising their theoretical peak shader arithmetic capacity by half. They can’t use this additional MUL in every situation, though, and older GeForces can’t use it as often as the newer GTX 200 series due to some architectural constraints.
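
Both columns boil down to multiplication: stream processor count, times FLOPS per SP per clock (a multiply-add counts as two, and the GeForces’ bonus MUL adds a third for the dual-issue column), times the shader clock. Here is a quick sketch using the clocks of two of the cards in this review; note that the Radeon figure assumes all five ALUs in each execution unit can be kept busy.

    # Peak shader arithmetic as tabulated above: SPs x FLOPS per SP per clock
    # x shader clock. A multiply-add counts as two FLOPS; the GeForces' extra
    # MUL adds a third for the dual-issue column.
    def geforce_gflops(sps, shader_clock_ghz):
        single_issue = sps * 2 * shader_clock_ghz
        dual_issue = sps * 3 * shader_clock_ghz
        return single_issue, dual_issue

    def radeon_gflops(sps, core_clock_ghz):
        # One MAD per ALU per clock, assuming the compiler fills all five slots
        return sps * 2 * core_clock_ghz

    print(geforce_gflops(96, 1.35))    # GeForce 9600 GSO: ~(259, 389)
    print(radeon_gflops(320, 0.75))    # Radeon HD 4670: 480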

That said, the Radeons have some constraints of their own, including the arguably more difficult instruction scheduling required by their five-ALU-wide superscalar execution units. So, of course, we’ll want to measure shader power with a few directed tests, as well.

To set the stage for that, note that the Radeon HD 4670 again is a potential overachiever. Its 480 GFLOPS of peak shader power match the single-issue numbers for the GeForce GTX 260, a much more expensive card. The GeForce 9600 GSO, the would-be competitor to the 4670, trails it by quite a bit, theoretically. However, the 4670’s relatively weak memory bandwidth could hold it back.

That appears to be just what happens. Despite having a little more theoretical shader prowess than the Radeon HD 3850, the 4670 trails the 3850 in each one of the shader tests. The GeForce 9600 GSO also has a leg up on the 4670 in each case.

All of this sets the stage nicely for what comes next: real game tests. Games most likely won’t hinge on any single one of these performance factors; instead, each one will stress its own distinct mix of them. The question is: how will our cheap graphics cards, with their obvious strengths and weaknesses, hold up overall?

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. We’ve chosen to test at display resolutions of 1280×1024, 1680×1050, and 1920×1200, which were the three most popular resolutions in our hardware survey.

We didn’t go easy on the cheap cards, either—we enabled image quality enhancements like antialiasing and anisotropic filtering where appropriate. As you’ll see, most of the cards handled them quite well. Because the cheapest cards can suffer quite a bit from having such things enabled, though, we did test at 1280×1024 with AA and sometimes aniso disabled, depending on the game, to coax frame rates well into playable territory on most cards.

As you can see, every single card tested except for the GeForce 9500 GT is able to crank out frames at a rate of over 60 per second in CoD4 at 1280×1024 with edge antialiasing disabled, and even the 9500 GT averages well above 30 FPS. That demonstrates why cheap video cards are somewhat interesting these days. As the display resolutions and image quality increase, the pack separates into several clear groups. The Radeon HD 4670, 3850, and the GeForce 9600 GSO bunch together, as do the Radeon HD 4850 and GeForce 9800 GTX+. The GeForce 9600 GT and 9800 GT form their own group between the other two, while the 9500 GT is alone at the back of the pack.

To give you a sense of what these numbers mean, I found the Radeon HD 4670 to be borderline playable at 1680×1050. You might be able to play through the single-player campaign at these settings, but you would probably want to dial back something—the resolution, AA, or aniso—in order to get it running quickly enough for multiplayer. Still, that’s quite good for an $80 video card.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested with most of Episode Two‘s in-game image quality options turned up, including HDR lighting. Reflections were set to “reflect world,” and motion blur was disabled.

The basic groupings we saw in Call of Duty 4 are also apparent here, but frame rates are higher overall. Despite having half the memory bandwidth, the Radeon HD 4670 matches the 3850 once again. And in Episode Two, in my experience, both the 4670 and the GeForce 9600 GSO are more than up to the task of delivering smooth gameplay at 1680×1050 with 4X AA and 16X aniso. In fact, they’re both pretty good at 1920×1200, as well.

Enemy Territory: Quake Wars

We tested this game with “high” settings for all of the game’s quality options except “Shader level,” which was set to “Ultra.” Shadows and smooth foliage were enabled, but soft particles were disabled. Again, we used a custom timedemo recorded for use in this review.

Our usual groupings are upset a little bit here by the relatively stronger performance of the Radeons. The 4670 opens up a big lead on the GeForce 9600 GSO, and the Radeon HD 4850 takes a commanding lead over the GeForce 9800 GTX+. The thing is, we’re again looking at quite acceptable frame rates at higher resolutions with some of the cheaper cards, including the 4670.

Crysis Warhead

Rather than use a timedemo, I tested Crysis Warhead by playing the game and using FRAPS to record frame rates. Because this way of doing things can introduce a lot of variation from one run to the next, I tested each card in five 60-second gameplay sessions. The benefit of testing in this way is that we get more info about exactly how the cards performed, including low frame rate numbers and frame-by-frame performance data. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
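
To be clear about what those summary numbers mean, here is one plausible way to boil five FRAPS sessions down to an average frame rate and a “median low,” sketched in Python with made-up data. The exact aggregation isn’t spelled out above, so treat this as an illustration rather than the actual tooling used.

    # Illustrative aggregation of five 60-second FRAPS runs for one card.
    # The per-second FPS samples below are invented for the example.
    from statistics import median

    def summarize(sessions):
        run_averages = [sum(run) / len(run) for run in sessions]
        run_lows = [min(run) for run in sessions]
        return sum(run_averages) / len(run_averages), median(run_lows)

    sessions = [[31, 28, 25, 30], [29, 24, 27, 33], [35, 26, 28, 30],
                [27, 25, 29, 31], [30, 23, 28, 32]]
    average_fps, median_low_fps = summarize(sessions)
    print(average_fps, median_low_fps)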

We used Warhead‘s “Mainstream” quality level for testing, which is the second option on a ladder that has four steps. The “Gamer” and “Enthusiast” settings are both higher quality levels.

As you may know, performance in the original Crysis was a bone of contention for many people, who thought the game ran too slowly. The folks at Crytek claim to have paid more attention to making sure Warhead runs well on most systems, and they’ve even introduced a Warhead-ready PC that costs just $700. That system is outfitted with a GeForce 9800 GT, and based on our tests, that’s a pretty good fit. The median low frame rate we encountered on the 9800 GT was 25 FPS, or about the speed of a Hollywood movie. Averages were well above that. While playing the game with these settings, the 9800 GT’s performance was very much acceptable.

In fact, most of the cards were able to handle Warhead reasonably well. The glaring exception in this case was the Radeon HD 4670, which just wasn’t up to the task. In the heat of the action, frame rates dipped into the teens, and performance felt sluggish. That may have been its relatively anemic memory bandwidth coming into play; the Radeon HD 3850 didn’t struggle nearly as badly.

Overall, the Nvidia-based cards fared better in this brand-new game, which seems to be typical these days. I wouldn’t be surprised to see AMD deliver a driver update in the coming weeks that substantially improves Radeon performance with Warhead, but the fact remains that more games seem to come out of the box well optimized for GeForce cards.

Blu-ray HD video decoding and playback

One thing a new graphics card will get you that an older card or integrated graphics solution might not is decode and playback acceleration for HD video, including Blu-ray discs. The latest GPUs include dedicated logic blocks that offload from the CPU much of the work of decoding the most common video compression formats. To test how well these cards perform that function, we used CyberLink’s new release 8 of PowerDVD, a Lite-ON BD-ROM drive, and the Blu-ray disc version of Live Free or Die Hard. Besides having the “I’m a Mac” guy in it, this movie is encoded in the AVC (H.264) format at a 28Mbps bit rate.

Video decode acceleration is particularly important for the miserly among us, because budget CPUs sometimes have trouble playing back compressed HD video without a little bit of help. Most of today’s dual-core processors can still handle it, but they won’t have many extra cycles left over to do anything else. Compounding the problem, we’ve found in the past that low-end video cards sometimes run into bandwidth limitations when assisting with HD video decoding, playback, and image scaling.

In order to really stress these cards, we installed a lowly Core 2 Duo E4300 processor (a dual-core 1.8GHz CPU) in our test rig, and we asked the cards to scale up our 1080p movie to fit the native 2560×1600 resolution of our Dell 30″ monitor. We then recorded CPU utilization over a duration of 100 seconds while playing back chapter four of our movie.
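
The article doesn’t name the tool that logged CPU utilization, but conceptually it’s just periodic sampling during playback. Here is a minimal sketch using Python’s psutil package as a stand-in; the one-second sampling interval is my choice, not necessarily what was used.

    # Minimal sketch: sample overall CPU utilization once per second for 100
    # seconds during playback. psutil is a stand-in for the actual tool used.
    import psutil

    samples = []
    for _ in range(100):
        # Blocks for one second and returns utilization across all cores.
        samples.append(psutil.cpu_percent(interval=1))

    print("average CPU utilization: %.1f%%" % (sum(samples) / len(samples)))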

Good news all around here. None of the video cards had any trouble playing our movie, with no apparent dropped frames or other playback glitches. As you can see, the Radeons tended to do a somewhat better job of unburdening the CPU than the GeForces, but all of the cards proved to be more than capable—even the lowly GeForce 9500 GT.

We considered also testing image quality using the HD HQV benchmark, but we decided against it for a number of reasons. HQV tests the efficacy of post-processing algorithms like noise reduction and edge enhancement, but the reality is that on a properly mastered Blu-ray disc, you really won’t need such things. In fact, Nvidia’s drivers leave both noise reduction and edge enhancement off by default. AMD’s do have noise reduction enabled, but not at a very aggressive setting. Nvidia suggests the reviewer should tune his video card’s noise reduction slider manually in order to achieve the best score in HQV, but I have a hard time imagining that many users would tune their video cards’ noise reduction algorithms on a per-video basis.

The big thing to take away from these tests is that even the least capable video card in the bunch is more than adequate at accelerating HD video playback.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Half-Life 2 Episode Two at 1680×1050 resolution, using the same settings we did for performance testing.

One nice benefit of a low-end graphics card is low power consumption, as illustrated by these results. The Radeon HD 4670’s power draw, both at idle and when running a game, is quite good, especially considering its performance. The GeForce 9600 GSO draws more power all around.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because we measured the noise level of the entire system, not just the graphics card. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

So what happened? Quite simply, most of the cards didn’t register above the ~40 dB volume threshold of our sound level meter. That’s good news overall, even though it means I need to buy a new sound level meter soon. Of course, the passively cooled Zotac card isn’t likely to show up on any sound level meter, but the reality here is that many of these coolers, especially the custom ones on the BFG Tech 9600 GT and the Palit 9800 GT, are exceptionally quiet. Some of the higher-end cards are louder, especially the Radeon HD 4850 and the GeForce 9800 GTX+, because they generate more heat and must deal with it.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. These temperatures were recorded while running the “rthdribl” demo in a window.

You might have expected the Zotac 9500 GT with the passive cooler to run hot, but AMD’s latest Radeons do, too. That seems to be the result of a conscious decision on AMD’s part to tune its fan speed controllers to allow higher temperatures. The tradeoff here is that the Radeons are relatively quiet. Some folks have raised longevity concerns about video cards that run this hot, but AMD insists its cards can handle those temperatures. We’re still waiting to get our hands on a Radeon HD 4850 with a custom cooler that might produce both lower temperatures and noise levels than the stock one. Again, the custom coolers from BFG and Zotac are exemplary on both fronts.

I should address a common misconception while we’re talking about these things. The fact that a video card runs at higher temperatures doesn’t necessarily mean it will heat up the inside of your PC more than a cooler-running card. You’ll want to look at power consumption, not GPU temperatures, to get a sense of how much heat a video card produces. For example, the Radeon HD 4670 is among the hottest cards in terms of GPU temperatures, but it draws less power—and thus converts less of it into heat—than almost anything else we tested. Its higher temperatures are simply the result of the fan speed/temperature thresholds AMD has programmed into the card.

Conclusions

So can you get away with spending less than $100 on a video card? In certain circumstances, you bet. If you have a monitor that’s 1280×1024 or smaller, a very affordable graphics card like the $80 Radeon HD 4670 will allow you to play many of the latest games with ease. Even at 1680×1050, in fact, the Radeon HD 4670 and GeForce 9600 GSO can produce acceptable frame rates. You may have to compromise a bit, dialing back features like antialiasing or in-game image quality settings, in order to get acceptable performance in the most demanding of today’s games, but the compromises probably won’t be too terrible. That’s particularly true for the many games ported to the PC or co-developed for game consoles. The limits of the Xbox 360 and PlayStation 3 establish a baseline that even some of the cheapest PC graphics cards can meet.

Among those cheaper cards, the Radeon HD 4670 sets a new standard for price-performance ratio and all-around desirability. Compared to the would-be competition from Nvidia, the GeForce 9600 GSO, the 4670 has slightly higher overall performance, lower CPU utilization during Blu-ray playback, less need for clearance inside of a PC chassis, and lower power consumption. Thanks to this last trait, the 4670 doesn’t require a separate PCIe power lead, either, so it should slot right into granny’s Dell (or yours) with very little drama. And you don’t have to rely on a mail-in rebate in order to get the 4670 at its list price of $79.99, unlike the 9600 GSO.

Still, you’ve seen the numbers in the preceding pages. Make up your own mind, but personally, I can’t get past the value proposition of cards like the Radeon HD 4850 and the GeForce 9800 GTX+. Especially the 4850. New games are coming out all of the time, and many of them, like Crysis Warhead, will make a bargain-priced GPU beg for mercy. Reaching up into the 4850’s range ($140 after rebate, $170 before) will get you roughly twice the real-world GPU power of a Radeon HD 4670. That’s ample graphics power to turn up the eye candy in most games, even at 1920×1200, and some honest-to-goodness future-proofing, too. That’s also value even a cheapskate like me can appreciate.

Comments closed
    • Damage
    • 11 years ago

    I’ve just updated the theoretical GPU capacity table in this review to correct the fill rate numbers for the GeForce GTX 260. The revised numbers are slightly lower. The performance results remain unaffected.

    • dolemitecomputers
    • 11 years ago

    The EVGA card is already sold out at Newegg. Hopefully they get more in soon, since that is a good deal.

    • xii
    • 11 years ago

    Not everyone plays games, or at least the kind of games that need serious graphical processing power. I understand the larger part of the audience of this website might favor the latest hardware, and so the article keeps this in mind, but price and power consumption (and associated heat) might be more important to a lot of people. While it’s true that just a few bucks more might give you a faster card that uses more power and produces more heat, there really isn’t much point in nudging people to buy more than they need.

    There’s some interesting information in this comparison, though. You get a lot for very little these days…

    • Kurotetsu
    • 11 years ago

    Is the need for auxiliary power in video cards because the power provided by the PSU is ‘cleaner’? Or because the PCI-E x16 slot doesn’t provide enough power on its own? Or some combination of both?

    I ask because if the 4670 winds up doing really well with OEMs and consumers, it could maybe lead to a shift toward greater power efficiency and the, probably very slow, phasing out of auxiliary power connectors on at least mainstream cards and below. But, depending on the need for aux power, that would either require a lot of collaboration with motherboard makers or it may not be possible at all (or at least it’d be restricted to sub-$100 cards).

      • MadManOriginal
      • 11 years ago

      It doesn’t require more than the 75W the PCIe slot can provide; if it did, it would have the connector 🙂 As for future cards, sure, lower-end ones may continue to not need an extra plug, but I don’t see that changing on the higher end.

        • Cyco-Dude
        • 11 years ago

        well, the pcie 2 spec provides 150w of power doesn’t it? it could be that future high-end cards won’t need them if they opt to use that. of course, they wouldn’t be backwards compatible with 1.0 slots. perhaps somewhere in the middle; the card will have the additional power plug for use with 1.0 slots, but wouldn’t need them for 2.0 slots.

          • UberGerbil
          • 11 years ago

          When PCIe 1.0 was introduced, one of the touted advantages was that because it delivered 75W it would eliminate the need for aux power connectors that were starting to appear on AGP cards. And indeed it did… for about one generation of video cards.

          I expect the same thing will happen with 2.0. We’ll be seeing video cards that need a connector because 150W isn’t enough.

          Note too that providing 150W within the motherboard is non-trivial, especially if you expect to provide a true 150W to each of multiple slots to support Crossfire/SLI. I wouldn’t be surprised if some mobos don’t conform to that part of the spec, whether the mobo makers publicly admit it or not (just look at all the USB ports that don’t provide a full 5W, especially on notebooks). It’s much easier to provide that much juice via dedicated cables than it is to route it through a multilayer PCB.

    • moritzgedig
    • 11 years ago

    128bit vs 256bit
    the HD38x0 is 256bit and the HD46x0 is 128bit
    but the HD38x0 does not get stronger compared to the HD46x0 when AA is turned up.
    why?

      • MadManOriginal
      • 11 years ago

      Because its AA implementation was just THAT bad. Base specs like memory amount or bus width or core speeds don’t guarantee one will be better than another.

    • matnath1
    • 11 years ago

    I just sold my second quad core system paired with an HD4850 and downgraded to a mid range core 2 with the HD 4670. It seems that I can pocket about $400 and maintain 95% of my other systems functionality with the only exception of CRYSIS gaming.

    Put simply any framerate speeds over 40 fps are probably not noticeable to the average person. Crysis and Warhead are the only games that my new system takes a noticeable hit on. Otherwise I’m just fine with the $500 in savings…

    OLD SYSTEM
    Dell Inspiron 530 with Q9300 3 gigs ram, HD 4850 250G H/D
    New system Dell Inspiron 530 with Core 2 E7200 @2.53 2 gigs and HD4670 250G H/D

    Both gaming on a 22″ DeLL LCD.

    Quad core is mostly a waste of money for gaming in the here and now and the extra frames per second resulting from the better card are not noticeable anyway in most games. (1680 x 1050 with 2x 8x) No need to upgrade the power supply and fool with surgery time and the extra money needed for it. Sure I may have to take it down a slight notch in eye candy here and there but so what!

      • Cyco-Dude
      • 11 years ago

      lies; the more the better. first-person shooters (online, even moreso) are definitely better with high fps (60+). you don’t need to be a pro gamer to feel the difference either.

    • Jambe
    • 11 years ago

    It’d be neat to see a new cooler review. I gather there are some VGA coolers on the market that’ll fit the 4850.

    • Forge
    • 11 years ago

    Brace for impact. Slashdot just linked.

      • willyolio
      • 11 years ago

      took them long enough.

    • Bummer
    • 11 years ago

    Damage, I think you guys should include more overclocking results, with GPU-Z screenshots. Most of the cards tested really do get good overclocks, especially with RivaTuner 2.10 coming out. Cheers.
    After all, this is an enthusiast site. Much appreciated.

    • Xenolith
    • 11 years ago

    Just replaced my 2600xt with a 4670. 3dmark05 score went from just over 10k to over 15k. The card is smaller, and uses less juice. Very pleased.

    • AMDguy
    • 11 years ago

    Damage, you identified those flowers as roses when they are most likely intended to be cherry blossoms.

    Back to Botany 101 for you! 😉

    • marvelous
    • 11 years ago

    I think the 4670 is great for granny computers with a 300-watt power supply, but that GSO is just cheaper after rebate and an overclocking beast. At least EVGA is great about rebates; I got my rebate within five weeks.

    I got the EVGA 8800gs when they were $110 after rebate and these cards are a steal at $50. Have it clocked 741/1836/1058 from 550/1375/800 with stock cooling. 8800gt performance for fraction of the cost.

    A glimpse of what a GSO is capable of at semi high clocks of 680/1700/950 vs 4670

    http://www.bit-tech.net/hardware/2008/09/11/amd-ati-radeon-hd-4670-512mb/6

      • Bombadil
      • 11 years ago

      The <12 W load difference between the 4670 and GSO is hardly going to change power supply requirements of a system. The idle power savings of the 4670 is huge, but it just isn’t worth >$30 to me. I’m sure the 4670 is going to be the better deal in the future.

      The stripped higher-end parts with “unnecessary” extra power connectors have always been nice for overclocking. My previous longest-held video cards: a softmod $140 Radeon 9500 non-Pro and a $121 Radeon X800GTO. I wonder how long I’ll keep this $50 GSO?

        • marvelous
        • 11 years ago

        Did you get any overclocking done yet? How high did you reach?

          • Bombadil
          • 11 years ago

          No, it plays Team Fortress 2 at 1600×1200 with 8X CSAA fine without it, but I will probably check out overclocking it tonight even if it is not needed. Oh, yeah, my 300W power supply is certainly overkill for the GSO and 3.2 GHz E2140–yes, I am cheap. 🙂

            • marvelous
            • 11 years ago

            I think you might need a better power supply before you overclock. You might just blow up your cheap computer. 😛

            • Bombadil
            • 11 years ago

            Haha, I have burned out a 350W Sparkle power supply before, but this 300W is better than some >400W supplies I’ve had. Its only real deficiency is fairly low efficiency with <50W loads, but it was $21. http://www.newegg.com/Product/Product.aspx?Item=N82E16817103149

          • Bombadil
          • 11 years ago

          Initial test seems 700 core/1750 shader/950 memory works just fine on this GSO. 16,710 on 3DMark05. ~100 fps 1600x1200x8CSAA in TF2–not bad for $50.

            • marvelous
            • 11 years ago

            I think you can definitely push a little more on the memory, like 100MHz more. That memory is supposed to be 1ns, rated at 1000MHz and easily pushed to 1050.

            • Bombadil
            • 11 years ago

            The memory starts to get flaky around 1018 MHz.

      • Xaser04
      • 11 years ago

      Judging by these posts, it seems that the 8800GS (and I assume the 9600GSO) really do like to overclock.

      I know my XFX Alpha dog 8800GS has gone from 580/1500/1400 to 750/1900/2100 without any issues.

      I haven’t bothered to try any further, but I have no doubts that I could push a little more from the card.

      The only issue now is the cooler which is stuck at 100% fan speed all of the time (it has no controller).

      The 4670 does seem to be a good little card; however, here in the UK, where it is priced at around £60, I can’t really see the point of it when you can get a 9600GT for £66-£70.

        • Bombadil
        • 11 years ago

        I am not so sure about overclocking on the GSO. Even turned down to 680/1700/1000 mine occasionally crashes in TF2 after half an hour or so. Perhaps the GDDR3 is getting flaky as it heats up. Overclocked Radeons definitely fail more gracefully.

    • SubSeven
    • 11 years ago

    Hmm. Nice review, thanks man. Is it me, or does it seem that the 8.8 Catalyst drivers have the 4850 performing quite a bit better than the initial drivers with which this card was originally reviewed? I don’t remember the 4850 whipping the 9600GT or the 9800GTX (let alone the GTX+) like that. At the risk of sounding like a fanboi and starting all kinds of unpleasantries, someone send this review to our good friend, Asif1924, and tell him to pay attention to the 4850 and its overall “inferiority to Nvidia counterparts” 🙂

    • DrDillyBar
    • 11 years ago

    Are they going to release a PCI version of this I can plug into my future Atom 330 secondary system? Only reason I’ve not gotten one of the $100 Atom 330 boards yet is I’d like a decent PCI video card to plug into it (current options are limited… GeForce 6200 or Radeon HD 2400pro based)

      • bthylafh
      • 11 years ago

      Do Atom systems really not come with PCIe? Ouch.

    • thermistor
    • 11 years ago

    The big loser here is the 9500GT…there are cheaper prev/gen cards that defeat it at every turn and you don’t give up either power consumption or noise. And there are tons of passive designs in lots of prev/gen…2600XT, 2600Pro, 3650, 8600GT all beat the 9500GT, and if you go back to DX9, the field widens further.

    And any card that requires a PEG connector where a comparably-performing card does not is the other big loser. As someone up-thread said, anyone with an OEM Dell or HP is gonna love the 4670.

    As an aside, I’m using an OLD HP chassis with a 200W PSU for a E6300/ATI3450 (HTPC) and it runs 24/7 recording shows and playback, and the PSU is already 6 years old. Don’t be too quick to knock OEM PSU’s.

    • Bombadil
    • 11 years ago

    I think the HD 4670 is a very nice card, but I am pretty happy with my new $50 AR 9600 GSO: http://www.newegg.com/Product/Product.aspx?Item=N82E16814130356. Evidently it can overclock a bit too: http://www.legitreviews.com/article/797/1/.

      • MadManOriginal
      • 11 years ago

      How does the memory overclock on your personal card? *Nevermind, I hadn’t read all the posts yet.

      • MadManOriginal
      • 11 years ago

      I’ve figured out, from a few links posted here, a rough way to gauge overclocking. This is really not accurate across different companies but seems OK within the same product line for sequential products. It seems as if a decent but reasonably attainable OC is almost equal to the next card up at stock speed. I am looking specifically at the 9600GSO-9600GT-9800GT.

    • Scrotos
    • 11 years ago

    Why no S3 Chrome included? It seems like it’d be a perfect fit for a sub-$100 graphics card review. And I’d have loved to see how it stacked up against the more mainstream offerings.

      • Meadows
      • 11 years ago

      Badly. It’s one of the worst bang-for-buck offerings in existence.

        • Chrispy_
        • 11 years ago

        I have to agree with the guy here.
        Even when you ignore the poor performance (The Chrome S27 offers similar performance to a Geforce4 in my experience), there are compatibility issues with the Chrome too, giving you rendering errors in several games that just make them unplayable – As a gaming GPU, it’s just not fit for purpose.

          • Scrotos
          • 11 years ago

          Yeah, but they are getting better. The latest is, what, a 400GT or 430GT or something? It’d just be nice to see it put to the same testing methodology that TR does. I’ve seen other random one-off type of reviews on random websites at random times, but I admire the way the tests are done here enough that I’d like to see how the card fares with these tests, even though the card might be a total pile of dung.

            • Meadows
            • 11 years ago

            I’m telling you, other review sites have tested the latest GTX too and included it in value charts, it’s the worst on the list.
            S3 needs to do better, they’re far from what they used to be.

    • gerryg
    • 11 years ago

    Edit: went back and re-read the testing notes.

    2 comments: I can buy the need to avoid super long testing times, but I still think a reasonable total system price for comparison is important for reviews like this one. If you took the time to use an E4300 for the video tests, why not use it for the whole review?

    Second, it would be very useful to have prices as tested on the parts chart under the Test Notes. The variation point (graphics card, hard drive, CPU, whatever) could just have its price next to it, and the reader has to add that part to the system cost for the other parts.

    • Suspenders
    • 11 years ago

    Personally, I think one of the best features of the HD 4670 is its low power consumption and that it doesn’t require any extra PCIe 6-pin power connectors. As you say, perfect for “granny’s HP”, or Dells, which come with woefully inadequate power supplies as standard. Not having to buy a better power supply makes upgrading a much less painful, and cheaper, proposition now, and you’re not sacrificing much in terms of performance.

    A few too many times in the past, my “potential PC gamer” friends have been turned off of getting a video card right after I start talking about power supplies….

    Good (and amusing!) article all around. One quibble is no 1280×1024 testing on Crysis Warhead?

      • Peldor
      • 11 years ago

      If someone comes out with a fanless option, the 4670 may well be my next card. The 9500GT is just too weak to suit a 1680×1050 monitor IMO.

      • tocatl
      • 11 years ago

      Yea, Crysis Warhead is the only game where that card seems weak, but maybe at 1280×720 the 4670 would push 25 on average and make the game playable. This card is great for people with 15.4″ or 16″ monitors; it gives a minimum of 50fps in most games at low resolutions. That’s great for the value…

    • Ruiner
    • 11 years ago

    Now which one of these will play Fallout 3?

    • Pagey
    • 11 years ago

    Thanks for the review! I’ve been waiting on this one. It’s good to see some great competition coming out of AMD/ATI on the graphics front.

    • BiffStroganoffsky
    • 11 years ago

    Thanks for showing us penny-pinching cheapskates some luv Damage.

    • derFunkenstein
    • 11 years ago

    OK, seriously, you could build a gaming machine around an X2 2.6GHz, an AMD 770FX motherboard (or an nForce if you prefer) and a 4670 for under $400. Games-capable PCs might finally get in line with console prices.

    If I wanted a games-capable PC, the price is certainly right.

      • Kurotetsu
      • 11 years ago

      There’s another article comparing $50 CPUs, so you could probably cut the price down even more. A 4200+ or a BE-2400 could also work (if you overclock).

      http://www.hardcoreware.net/reviews/review-370-1.htm

      • Corrado
      • 11 years ago

      Except that the 360 can be had for $199 brand new now, so you’re still at 2x the cost. I’m not disputing your point, though, since I’ve recently moved back toward PC gaming after about 2 years of 360 gaming. I built myself a new quad-core PC with dual 4850s and I’m loving the hell out of STALKER: Clear Sky. I credit STALKER: Shadow of Chernobyl with bringing me back to the PC fold.

        • Ruiner
        • 11 years ago

        How many Red Rings of Death do you get for your $199?

          • Usacomp2k3
          • 11 years ago

          They don’t cost you any more…

        • derFunkenstein
        • 11 years ago

        Normally, yes, it’d be $199 but if you want a reasonably large number of XBLA titles or DLC for your games, you still really need a hard drive. I think it’s more reasonable to compare it to the $300 version, the 60GB 360.

          • ew
          • 11 years ago

          I’d also throw in Xbox Live subscription and $10 more per game.

            • derFunkenstein
            • 11 years ago

            I’d skip the Live subscription and play by myself or local multiplayer; people on Xbox Live (and really, in any game for that matter) are USUALLY morons. Enough so that I won’t pay for another Gold subscription.

        • reactorfuel
        • 11 years ago

        Keep in mind, most 360 users also have computers. If their old system is getting sluggish, then it’s only a $100-150 premium to go from a decent new web/email/Office system to a gaming system more powerful than a 360. The only big difference between a competent office box and a decent gaming system these days is the video card, anyway. Even a pretty modest dual-core is capable of running any game on the market right now, and RAM’s so cheap it’s not likely to be an issue.

        Even a complete overhaul of an existing system isn’t very expensive – around $300-325 for a new motherboard ($90 GA-EP43-DS3L), CPU ($85 E5200), RAM ($30-40 of whatever 2 gig set of DDR2 is cheapest), and video card ($90-120, take your pick). Not only is that in the same price range as a standard 360 (with quite a bit more horsepower), it’ll probably show decent benefits to general system responsiveness, too.

        I’m not saying that the 360 (or any other console) is a bad deal at all. It’s got some great exclusives, and it’s a lot easier to have friends over for games, nachos, and beer when you don’t all have to haul your own PCs and worry about keyboard spills. 🙂 Still, from a gaming-power-for-money standpoint, the PC has improved dramatically. A couple of years ago, PC games carried a pretty big price premium when you factored in the required hardware. These days, the PC is arguably the best gaming value out there.

        • BobbinThreadbare
        • 11 years ago

        Neither of you is including the price of a screen, and a decent HDTV costs a lot more than a decent monitor.

          • Usacomp2k3
          • 11 years ago

          Unless you use your monitor as your HDTV like I do 😉 Dell 2407 ftw!

          • derFunkenstein
          • 11 years ago

          The 360’s VGA cable does pretty well, or if you prefer digital, HDMI->DVI cables aren’t bad, and you can do analog audio.

      • indeego
      • 11 years ago

      Why bother comparing PCs to consoles? You can get 100x more done on a PC outside of games; the value is immeasurably better for PCs.

        • derFunkenstein
        • 11 years ago

        Because I already have my computer and it’s plenty fast, though 3D performance is hobbled somewhat by the GMA950. The whole purpose of this exercise was leisure.

          • MadManOriginal
          • 11 years ago

          GMA950 is a little more than ‘somewhat’ crippled 😉 But if it does what you need that’s what matters.

        • ish718
        • 11 years ago

        Because there are plenty of people that won’t do the other 100x things outside of gaming…

    • sp1nz
    • 11 years ago

    Thanks for including 1680×1050! Good article.

      • eitje
      • 11 years ago

      agreed. definitely a positive, getting that 1680×1050!

    • Kurotetsu
    • 11 years ago

    Awesome article. I’m more than likely going to be switching to a cheaper, single-slot GPU come winter, and this article presents a lot of nice options. I’m just getting tired of fatass dual-slot cards that have more power than I need and block up my expansion ports.

    • vshade
    • 11 years ago

    Isn’t the 1GB 4670 fitted with DDR3 (the same as main system RAM) instead of GDDR3?

    • I.S.T.
    • 11 years ago

    I found a serious error in your review. The 9600 GSO has been out for quite some time (http://forum.beyond3d.com/showthread.php?t=48106&highlight=9600+GSO), damn near five months. The card was not put out to compete with the Radeon HD 4670.

      • Rza79
      • 11 years ago

      It is! nVidia has nothing else to compete with it actually.

        • I.S.T.
        • 11 years ago

        35, think about it. How could nVidia specifically release a card to compete with a product that wasn’t even available nearly five months ago?

        Now, they could start marketing it as a competitor, but that’s not the same, is it?

      • PRIME1
      • 11 years ago

      Not to mention the 9600GT, which has been out for a long time, is about the same cost and a lot faster than the 4670.

        • flip-mode
        • 11 years ago

        Is it just that simple? Probably not.

        4670:
        is smaller
        uses less power and…
        doesn’t need aux power
        is less expensive
        has “good enough” performance for many cases

        The 9600GT is faster and quieter.

        The 9600GT at 1280×1024 with 4xAA/16xAF averaged 14% faster in these tests, and its cheapest shelf price is 16% higher.

        It sounds like the products are pretty balanced given their prices. The 9600GT is roughly 15% more expensive, but it’s only roughly 15% faster too, so it doesn’t provide any more value than the 4670. Given time, the 4670 should drop in price, perhaps nearing $65 or $70 in just a few weeks.

        In other words, this isn’t nearly a one-sided win for either company.
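
        To put that balance in perf-per-dollar terms, here is a minimal Python sketch built only on the 14%-faster / 16%-pricier figures above; the absolute fps and prices are placeholders, not review data:

            # Value comparison: frames per second per dollar.
            # Only the relative ratios (14% faster, 16% pricier) matter here.
            hd4670 = {"fps": 100.0, "price": 79.0}
            gf9600gt = {"fps": 100.0 * 1.14, "price": 79.0 * 1.16}

            for name, card in (("4670", hd4670), ("9600GT", gf9600gt)):
                print(name, round(card["fps"] / card["price"], 2), "fps per dollar")
            # Both land within a couple of percent of each other, which is the
            # point: neither card is a lopsided win on value.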

    • TurtlePerson2
    • 11 years ago

    Well, I just bought a $50 AR 9600 GSO a couple of weeks ago, and I’m quite happy with it. I have a 1280×1024 monitor, and I just don’t understand why I’d pay 2x or 3x as much to get something that will max out every game. I can run any game and have it look good, even Crysis, and anything made before 2008 will run maxed out (except Crysis).

    I would argue that unless you have a 1920×1200 monitor there’s no reason to go past a 9600 GT. The price you pay for performance doesn’t scale linearly, so you end up paying a lot more for only slightly better performance.

    • flip-mode
    • 11 years ago

    Nice review Scott. Nice array of cards tested. Nice explanation of where the card stands with respect to the competition.

    I can’t say that I’d buy the card rather than look up-market. It’s one of those things where if my budget was $50 I’d try to stretch and get the card, if my budget was $80 I’d try to get the 9600GT, and if my budget was $100 I’d save up for the 4850.

    Even though it has been said billions of times before, it is always worth saying again: Nvidia has the most cluster-mucked product lineup possible. It does not seem possible to screw it up any worse. It is so confusing that, were it not for articles like these, I’d veer away from Nvidia’s cards simply because I would not know what was under the hood based on the product name.

      • MadManOriginal
      • 11 years ago

      Your post demonstrates how tight things are at the low end. When there’s an option every $20-25, not even counting sales or MIR deals that really throw a wrench into the pricing, the decision isn’t easy. “What’s $20 more?” Of course you can take that argument up pretty high, but when the differences in price amount to skipping a night at the bar, or the morning coffee, or mowing a lawn for the youngsters, it really makes one go back and forth between the choices.

        • mattthemuppet
        • 11 years ago

        Another way of looking at it is how I approach upgrades: I always go for the lowest product of the latest generation that’ll do what I need it to do now. When I need something better, there should be another generation or two out, and I’ll repeat the process. If there’s no money, there’s always overclocking 🙂

        My current rig is a Sempron 2400, a 7300GT (a nice one, though) and a 120GB HDD; my next (soon, hopefully, since this one’s creaking at the seams) will probably be an E5200, a 4670 and a WD 640GB. I could stretch $30 more on the CPU for an E7200 and $10-20 for a 9600GT (once prices for the 4670 have stabilised), but I doubt I’d notice much difference at 1280×1024.

    • zqw
    • 11 years ago

    I know your comparison was about shaders, but the actual strength of the Xbox 360 is the essentially unlimited bandwidth of its eDRAM (256GB/s). You can’t really stack it up against conventional video cards. You can with the PS3, but then you’re unfairly ignoring the Cell, which is often used for early stages of the graphics pipeline.

    • Fighterpilot
    • 11 years ago

    The 4670 is quite a nice card for the price.
    Good to see the HD4850 vs. 9800 GTX+ debate settled too.
    I guess #6 would rather that got hushed up, tho 🙂

    • masaki
    • 11 years ago

    Thanks Scott, a nice article. The conclusion really made up my mind on a 4850. I’ll surely get one later this year.

      • d0g_p00p
      • 11 years ago

      The 4850 is a fantastic card, and I’m glad I bought one. Pair it up with another 4850 and it’s even better. Just get aftermarket cooling to keep the temps down. I feel like the 4850 is the 9700 Pro of this generation.

        • toyota
        • 11 years ago

        You don’t need aftermarket cooling. All you need to do is take a few minutes and adjust the fan profiles.

        • Lianna
        • 11 years ago

        The Gigabyte GV-R485MC-1GH is passive and has 1GB of memory; as for aftermarket cooling, I added a super-quiet 80mm fan (am I paranoid?) and it keeps the card at or below 60°C.

    • shtal
    • 11 years ago

    My Radeon HIS HD4850 IceQ4 TurboX runs at 685MHz core / 1100MHz memory stock.
    I overclocked it to 700MHz core / 1100MHz memory.

    I ran the “rthdribl” benchmark and it hit 62°C max under load with the fan running at 65%, and 43°C at idle with the fan at 55%.

    This is the card I currently own: HIS IceQ4 TurboX HD4850
    http://www.techpowerup.com/reviews/HIS/HD_4850_IceQ4

    EDIT: And it is very quiet!

      • toyota
      • 11 years ago

        You say it’s quiet, but the very article you linked to says otherwise…

        • shtal
        • 11 years ago

        I understand what you are saying!

          But here’s what’s weird with my video card: at default the fan runs at 41% at idle, but when I increase it to 55% the noise level decreases for some reason. I don’t know; it’s weird.

          EDIT: But when I go to 65%, the noise level is higher than when it runs at 41%.

    • MadManOriginal
    • 11 years ago

    With prices as fluid as they are, the prices here are already outdated. There are some 9600GTs at prices similar to the 4670’s if the MIR fairy is on your side, but they aren’t huge $50 MIRs.

    Is there any point to the 9800GT? The 9600GT is virtually the same, or so close that you wouldn’t know the difference when playing. Is there a reason they’re so close in your testing? I’ll have to go back and check the clock speeds used. On a related note, I miss having the full graphs for the spec comparison chart. Just showing fill rates, etc., doesn’t help me remember where the cards fall in the lineups, especially with NV’s confusing naming.

    I’m considering a downgrade because I simply play games very rarely these days, but the long-time gamer in me doesn’t want to give up the possibility of playing a little bit. I am torn between the various cards around $100. The gamer side says get a 9800GT for ~$110-120; the saver side says get a 9600GT for $80-90 or even that $50 9600GSO. Yes, I believe in MIRs. I am at dual 1680×1050 right now but am pondering going to a single 1920×1200. This is why I ask the question above; your input on the relevance of the 9800GT would be appreciated, Damage.

      • tocatl
      • 11 years ago

      “Is there any point to the 9800GT? The 9600GT is virtually the same or so close that you wouldn’t know when playing.” That’s exactly what I thought when I saw the performance of the 9600GT in Crysis Warhead; even against the 4850 there’s a 5fps difference, which isn’t a lot…

        • MadManOriginal
        • 11 years ago

        Well, after going back and looking, the 9600GT in this test is OC’d a decent amount on the core and shaders; it probably gets about a 10% boost from that if the speed increase is linear with the overclock. I understand we’re looking at “what’s available in the market at certain prices,” but given how quickly prices change it would be nice to know exactly how a stock-clocked card stacks up.
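
        For what it’s worth, the “linear with clock” estimate works out like this; a minimal Python sketch with placeholder clocks, not the review’s actual test settings:

            # Expected uplift from a factory overclock, assuming performance
            # scales linearly with clock speed (a best-case assumption).
            stock_mhz = 650   # placeholder stock core clock
            oc_mhz = 715      # placeholder factory-OC core clock

            boost = oc_mhz / stock_mhz - 1.0
            print(f"~{boost:.0%} expected uplift over a stock-clocked card")  # ~10%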

    • mattthemuppet
    • 11 years ago

    Great article, been looking forward to it for ages. The value question here in Oz is a hard one: 9600GT cards have been around for a while, so they’re bumping around at $120, whereas the few 4670s that come up are $130 minimum. So I like the massively reduced power consumption of the 4670 (30W at idle is huge), but the 9600GT spanks it in games and probably will keep doing so for longer. Hmm.

    On a similar point: for your power consumption graphs, can you test at 1) idle, 2) CPU fully loaded (CPUBurn or similar) and 3) CPUBurn+ATITool? That way it’ll be easier to get separate idle and load figures for the graphics card. At the moment, the load power value is a mix of CPU and GPU load, though that’s also an informative, real-world value.

      • Chrispy_
      • 11 years ago

      There’s little point to adding a CPU load (but not GPU load) test.

      The CPU is kept the same in each case so they’d just end up duplicating the idle graphs (with another 60W or similar added to each for the CPU load increase)

        • mattthemuppet
        • 11 years ago

        Chrispy, check out http://www.silentpcreview.com/article870-page6.html to see what I’m talking about. At the moment there are idle and game-load measurements. Idle is obvious, but the game-load power consumption is due partly to CPU load and partly to GPU load, so it won’t tell you the max power consumption of the card alone. Measuring CPU load and CPU+GPU load lets you subtract the first from the latter and get the GPU’s load power consumption. As it stands, the testing methodology doesn’t tell us this.
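
        A minimal Python sketch of the subtraction being proposed, with made-up wattages purely for illustration:

            # Isolate the GPU's load power by subtracting a CPU-only load
            # reading from a combined CPU+GPU load reading.  Hypothetical numbers.
            idle_w = 110.0          # whole system at idle
            cpu_load_w = 170.0      # CPUBurn (or similar) alone
            cpu_gpu_load_w = 240.0  # CPUBurn + ATITool together

            gpu_load_w = cpu_gpu_load_w - cpu_load_w
            print(f"GPU load power above the CPU-loaded baseline: ~{gpu_load_w:.0f} W")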

    • SecretMaster
    • 11 years ago

    Wow. Great article Damage, I’ve been waiting for this one for a while now. Looks like I have a very affordable option to update one of our comps that has a 7600GT and a 1280×1024 monitor. Sweet.

    Oh, and I found one typo in the article:

    “In fact, most of the cards were able to handle Warhead reasonably well. The glaring exception in this case was the…”

    • grantmeaname
    • 11 years ago

    Why are you using the SLI motherboard for the nVidia cards? In previous reviews, you only used it for SLI…

      • Damage
      • 11 years ago

      Ugh, man. I used an X38 board for everything, just messed up the testing config table. I’ve corrected it. Sorry.

        • robspierre6
        • 11 years ago

        Great article, Damage. I’m getting a 4850 for my bro.

    • clone
    • 11 years ago

    At $79, it would have been nice to see some CrossFire tests just for shits and giggles.

    At that price point it’s a no-risk option and something I’d try just for the sake of it; then, when done after a few weeks, I’d just pull the cards and sell one or both, depending on whether or not I wanted to hang onto one as a temporary solution while parting out the current system for the next upgrade.

    I’m going to be stuck using an X800 GT this time while my motherboard, CPU, RAM, and 4850 leave….. I forgot to order a video card with the rest of the parts, and now, having seen the release of Nvidia’s 216 cards, I suspect a price adjustment or refresh is due out from ATI that might be worth waiting for.

    • adisor19
    • 11 years ago

    “Just looking at it, I feel calm and secure, though slightly aroused.”

    I LOLD

    Adi

    • Vrock
    • 11 years ago

    That 4670 looks like a fine card to replace my 7800GT, for pennies. My new monitor is 1280×1024, too…so no worries. Hmm.

    • PRIME1
    • 11 years ago

    You can get a 9600GT for $95:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814133239

    A far better card than the 4670. It seems to be the king of the sub-$100 cards based on the review.

      • indeego
      • 11 years ago

      Man, I just bought a similar card (ECS passive) for twice that about 6 months ago, too. The passiveness was worth it to me though… I don’t remember video cards coming down in price so much…

        • Corrado
        • 11 years ago

        You can thank AMD/ATI for that.

    • cobalt
    • 11 years ago

    Great article, will have to read more in-depth when I find time.

    FYI, you can get a 9600GT for $99 at ZipZoomFly right now, or $69 with the MIR! (EVGA’s dual-slot cooler edition; 10008655 is their SKU.) I think that’s a phenomenal deal….

    • Krogoth
    • 11 years ago

    HD 4650 and 9500GT are plenty fast. It is so sick that you can get so much performance for so little cash.

    They only suffer in AA due to the lack of memory bandwidth.
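
    As a back-of-the-envelope illustration of why MSAA leans on memory bandwidth, here is a rough Python model; it ignores the color compression these GPUs actually use and counts only a single pass of color and depth writes, so treat the number as a floor that multiplies quickly once blending, overdraw, and texture reads pile on:

        # Rough framebuffer write traffic for 4x MSAA at 1280x1024, 60 fps.
        width, height, fps, samples = 1280, 1024, 60, 4
        bytes_per_sample = 4 + 4  # 32-bit color + 32-bit depth/stencil
        bytes_per_frame = width * height * samples * bytes_per_sample
        print(f"~{bytes_per_frame * fps / 1e9:.1f} GB/s for one pass of color+Z writes")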

    • dolemitecomputers
    • 11 years ago

    I don’t understand the testing setup for these scenarios. I believe the $1000 processors make up a lot of slack, and it’s unrealistic that someone who can afford a CPU that expensive would have a $100 graphics card in their system. Maybe I’m not looking at it from the right point of view.

      • Krogoth
      • 11 years ago

      It is unlikely that any of these benches were CPU-bound. 😉

        • ish718
        • 11 years ago

        I disagree.
        So you’re saying that if they were to use an X2 4200+ in these benchies instead, it would make no difference since it’s not CPU-bound?

      • grantmeaname
      • 11 years ago

      This is the same platform they always use for testing graphics cards. I’m guessing it’s so they can reuse the results in later reviews.

      • UberGerbil
      • 11 years ago

      Let’s suppose you’re right, and the CPUs are making up a lot of the “slack” in these tests. Now imagine the tests were run instead with some low-end $40 CPU. Presumably that cheap-o CPU wouldn’t be “making up the slack,” so the GPUs would be sitting around waiting for it. The faster GPUs would be waiting around more, but essentially they’d all be twiddling their thumbs, because they can’t run on ahead rendering multiple frames if the CPU can’t feed them the data. The net result: the GPUs would all score within a few fps of each other. There would be no way to determine which GPU was faster and which was slower, because the CPU would be bottlenecking the tests.

      In reality, of course, the CPU isn’t “making up the slack.” It’s not like the CPU can help out the GPU by doing some software rendering in the background if it has the time. The CPU just has to be fast enough to handle the game logic and memory transfers. Beyond that, it will be the one twiddling its thumbs. Which is fine, because this isn’t a test of CPUs, it’s a test of GPUs. So that’s what we want — nothing else to be the limiting factor so the differences between the GPUs are as stark as they can be.

      You see the same situation in CPU tests, where the screen resolution is held “unrealistically low” in the tests so that the fill rate of the GPU doesn’t become the limiting factor. And we get people asking why these tests aren’t done with lesser GPUs or higher resolutions. It’s the same principle.
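
      The argument boils down to a simple min() model; a minimal Python sketch with invented per-component limits:

          # Observed frame rate is capped by whichever component is the bottleneck.
          def observed_fps(cpu_limit, gpu_limit):
              return min(cpu_limit, gpu_limit)

          gpus = {"slow GPU": 30, "mid GPU": 60, "fast GPU": 90}  # hypothetical caps

          for cpu_limit, label in ((40, "cheap CPU"), (200, "fast test-rig CPU")):
              print(label, {name: observed_fps(cpu_limit, cap) for name, cap in gpus.items()})
          # With the cheap CPU every card reads 30-40 fps and the chart is useless;
          # with the fast CPU the differences between the GPUs actually show up.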

      • Rza79
      • 11 years ago

      You’d like to see charts where all the cards are capped by the CPU? How would you know which one is faster?
      BTW, it’s a GPU review… not a platform review!

      • eitje
      • 11 years ago

      Tell you what. To resolve the issue, Damage can mail me the cards from his tests, and I’ll run the same scenarios on my SN10000EG. Or I can mail my board to him, and in his ample idle time, HE can re-run all of the tests.

      For that matter, these are sub-$100 video cards… maybe I’ll just go buy them all myself!

    • no51
    • 11 years ago

    The first graphics card I bought was a Ti4200. I think I got it for under $120. To me it was a big step up from the TNTs and Rages I had in my older computers. It’s interesting to see what that much money can give you today in terms of performance.
