AMD’s Radeon HD 5830 graphics card

AMD’s line of DirectX 11 graphics cards has been fleshed out rather nicely since its inception late last year, with products spanning from well under a hundred bucks to somewhere north of $600. Yet that product line has always had a metaphorical gaping hole right in the center, between the $170-ish Radeon HD 5770 and the $300-ish Radeon HD 5850. That’s huge. You could drive a metaphorical truck through it, metaphorically speaking. And a great many PC gamers like to buy their graphics cards precisely within that soft spot between 200 and 250 bucks, because a nice mix of price and performance traditionally can be had there.

At long last, AMD is endeavoring to address that vibrant segment of the market with its latest Radeon, the HD 5830. That’s probably a good thing, too, since Nvidia’s DX11 GPUs are later than closing time at Taco Bell—and, unlike Taco Bell, not really poised to address the more value-conscious segments of the market.

Now that we’ve successfully compared a GPU to an Enchirito, my work here is nearly done. All that remains is to evaluate the Radeon HD 5830’s performance and value proposition versus the other offerings on the market—and, for potential upgraders, against some rather older graphics cards from the same price range.

Less stuff, more speed, say what?

If you’ve read our Radeon HD 5870 review, you can learn everything you need to know about the Radeon HD 5830 in a few short paragraphs. The 5830 is based on the same “Cypress” GPU as the 5870, but it’s had various internal bits and pieces deactivated in the name of product segmentation. That may sound tragic, but GPU makers have long engaged in this practice, in part because it lets them salvage quite a few chips that don’t work perfectly. Heck, the Radeon HD 5850 has already followed that path, and the 5830 comes behind it.

The 5830 is a little strange, though. Not only have six of Cypress’s 20 SIMD cores been disabled—leaving it with 1120 ALUs (or stream processors, as AMD calls them) and 56 texels per clock of filtering capacity—but fully half of the render back-ends or ROPs have been nixed, as well. That means the 5830 has considerably less pixel throughput and antialiasing power than its elder siblings. Yet AMD has left its four 64-bit memory interfaces entirely intact, so that its 1GB of 4Gbps GDDR5 memory yields just as much memory bandwidth as the Radeon HD 5850. Freaky! That makes for a very different balance of resources than other Cypress-based cards. The evil geniuses in AMD product planning have somewhat compensated for this mass deactivation of ROPs by giving the 5830 an 800MHz core clock speed—that’s 75MHz higher than the 5850, believe it or not.

                    Pixel        Texel        Memory      Shader
                    fill rate    filtering    bandwidth   arithmetic
                    (Gpixels/s)  rate         (GB/s)      (GFLOPS)
                                 (Gtexels/s)
Radeon HD 5770      13.6         34.0         76.8        1360
Radeon HD 5830      12.8         44.8         128.0       1792
Radeon HD 5850      23.2         52.2         128.0       2088
Radeon HD 5870      27.2         68.0         153.6       2720

The math pretty much works out in the end, though, with the 5830 nestled between the 5770 and 5850 in the key rates for texture filtering and shader arithmetic. So long as its beefy memory bandwidth balances out its relatively low ROP throughput, the 5830 shouldn’t cause too many fights at the dinner table.
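For the curious, here’s how those peak figures fall out of the unit counts and clock speeds described above. This is a minimal back-of-the-envelope sketch, not an official AMD formula; it assumes the usual two FLOPS per ALU per clock for a multiply-add.

```python
# Back-of-the-envelope peak rates for the Radeon HD 5830,
# derived from the unit counts and clocks quoted above.
rops = 16             # half of Cypress's 32 render back-ends remain
texels_per_clk = 56   # filtering capacity with 14 of 20 SIMDs enabled
alus = 1120           # stream processors (14 SIMDs x 80 ALUs)
core_mhz = 800        # 5830 core clock
bus_width_bits = 256  # four intact 64-bit memory interfaces
mem_gbps = 4.0        # GDDR5 data rate per pin

pixel_fill = rops * core_mhz / 1000            # 12.8 Gpixels/s
texel_rate = texels_per_clk * core_mhz / 1000  # 44.8 Gtexels/s
gflops = alus * 2 * core_mhz / 1000            # 1792 GFLOPS (2 flops/ALU/clock)
bandwidth = bus_width_bits / 8 * mem_gbps      # 128.0 GB/s

print(pixel_fill, texel_rate, gflops, bandwidth)
```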

After deliberations that apparently continued right up until the eve of the 5830’s introduction, AMD has chosen to set the 5830’s suggested e-tail price almost equidistant between the 5770 and 5850 at $240, or in gas-station format, $239.9999.

We expect unusual variety from the Radeon HD 5830 on a number of fronts, in part because AMD hasn’t produced a reference design for this product. Instead, it has suggested board makers might want to base their cards on the Radeon HD 5870 board design. Board vendors are free to do otherwise, though, and they’ll have to come up with their own custom cooling solutions.

XFX’s version of the 5830 will be based on the same relatively compact board design as the firm’s 5850 card, with an angular custom cooler apparently intended to confuse radar systems. XFX expects these cards to hit online retailers later this week at a price around the $239 mark, bundled with a copy of Aliens vs. Predator, a new game fortified with DirectX 11 effects.

We don’t have too much information yet about Gigabyte’s offering, but it will apparently feature twin props for extra thrust, along with a much larger heatsink and longer PCB than the XFX card. Gigabyte looks to have used the 5870 reference board as its template, at least for this first attempt. The result appears to be a much longer card than even the standard Radeon HD 5850.

Sapphire has followed a similar path with its 5830, although it has stuck with a single, enormous fan for the cooler. This card is already listed at Newegg for $239.99, although it’s not currently in stock. Another version bundled with Modern Warfare 2 is apparently available now for $264.99. You can probably expect to see MW2 bundled with 5830 cards from a number of AMD’s partners, although we were kind of expecting to see cards priced right at $240 to include MW2. That $25 premium makes the game bundle less enticing.

I have to say that, at first glance, the selection of custom coolers above is a little bit disappointing. The reference coolers from both AMD and Nvidia these days use a blower situated at the end of the card. The blower pulls air in from the system, pushes it through a shroud across the heatsink and the GPU, and exhausts the heated air out the back of the case. This arrangement tends to work very well, even in cramped quarters and multi-GPU configurations. The cards pictured above might have spectacular thermals, amazingly low noise levels, and excellent adaptability—we don’t know, since we haven’t tested them yet. But we’ve had problems with similar coolers in the past, especially in multi-GPU configurations. Given the choice, I’d prefer a proper blower-and-shroud combo any day, especially since that sounds kinda racy.

Heck, we wound up with just such a combo, since the 5830 card we received from AMD for testing looks to be a Radeon HD 5870 with the appropriate clock speeds set and bits disabled. This card should do a fine job of representing the 5830’s performance, but noise levels, GPU temperatures, and even power consumption may vary on the actual products. That hasn’t stopped AMD from offering power consumption estimates of 125W at peak and 25W when idle, though.

Eyefinity to the sixth: Coming soon

There’s one more member of the Radeon HD 5800 family slated for release soon: the Radeon HD 5870 Eyefinity6 edition, the specialized card code-named “Trillian” that will feature six display outputs. This isn’t just a display wall setup intended for department stores, either. True to its name, the Eyefinity6 will allow for multi-monitor gaming across six displays, amazingly enough.

An early Trillian card. Can you say “connector density?”

This card will differ from the regular 5870 in several respects, the most obvious being the ominous array of six Mini DisplayPort connectors poking out of the expansion slot cover. Unlike the regular 5870, the Eyefinity6 card will require an eight-pin auxiliary power input in addition to a six-pin one, because it will draw more power when driving six monitors simultaneously. Another adjustment is the addition of a second gigabyte of memory on the board. Since six 30″ displays total approximately 24 million pixels, the additional on-board RAM will likely be needed, he said in a breathtaking understatement.
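The pixel arithmetic behind that statement is easy to check, and a rough framebuffer estimate suggests why 1GB could get tight. The bytes-per-pixel and buffering assumptions below are ours, not AMD’s:

```python
# Six 30" panels at 2560x1600 each
pixels = 6 * 2560 * 1600
print(pixels)  # 24576000 -- roughly 24 million pixels

# Rough color-buffer footprint, assuming 4 bytes per pixel and
# front + back buffers. Real usage adds depth/stencil buffers,
# AA samples, textures, and render targets on top of this.
color_mb = pixels * 4 * 2 / (1024 ** 2)
print(round(color_mb))  # ~188 MB before anything else is allocated
```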

Even a six-megapixel config benefits from 2GB of RAM, according to AMD.

The key to attaching more than two displays to today’s Radeons is DisplayPort, so that will be the Trillian card’s preferred connection type. In order to accommodate up to two older monitors, though, the card will ship with a grand total of five adapters: two passive Mini-DP-to-DVI adapters, one passive Mini-DP-to-HDMI adapter, and two Mini DisplayPort-to-DisplayPort plug converters. That should cover most eventualities, although I could see a lot of folks needing more Mini-DP-to-DP adapters.

A drool bucket, unfortunately, is not included, so you’ll have to make your own arrangements there.

Dead Space at over 24 million pixels

We don’t have exact pricing yet, but AMD expects the 5870 Eyefinity6 edition to ring up at somewhere between $400 and $500.

We had initially expected Trillian cards to be available way back in late September, not long after the 5870’s launch. A couple of things conspired against Trillian’s timely introduction, including supply problems with Cypress GPUs and, especially, an OS compatibility snag. Although AMD was able to present six displays to Windows Vista as a single, large surface ready for use in 3D accelerated games, making that happen in Windows 7 involved an additional technical hurdle. Win7 would allow for up to four 3D-accelerated displays, but not six. Implementing the software changes to work around this OS limitation took some time, and AMD elected to hold off on introducing the Eyefinity6 product until that problem was solved.

Happily, Trillian setups should now benefit from some of the major feature improvements in AMD’s newer Catalyst drivers, including the ability to define display groups, easier switching between different multi-monitor configs, and—thank goodness!—bezel compensation for Eyefinity displays.

One of those improvements is the ability to combine CrossFire multi-GPU setups with multi-monitor Eyefinity display surfaces. The appeal here is obvious, since pushing 24 megapixels with a single 5870 GPU is possible and sometimes quite workable, but not for every game. Generating that many pixels at the right quality levels would tax any single graphics chip. Making CrossFire work on this scale presents some challenges, however, as AMD readily admits. The core issue is the fact that the dedicated CrossFire interconnect used for passing completed frames between cards has “only” enough bandwidth to sustain a 2560×1600 display resolution. Even three 1080p displays will exceed its capacity. The alternative is to transfer frame buffer data via PCI Express, which is what AMD does when necessary. Using PCIe works, but it can limit performance scaling somewhat—don’t expect to see the near-linear scaling one might get out of a dual-card setup in the right game with a single display. That’s not to say mixing CrossFire with Eyefinity won’t be worth doing. Based on AMD’s performance estimates, frame rates could improve between about 25% and 75% when adding a second GPU with a 12-megapixel, six-monitor array.
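To see why the CrossFire bridge runs out of headroom, it’s enough to compare per-frame pixel counts. A quick sketch using the resolutions cited above:

```python
# AMD says the dedicated CrossFire interconnect can sustain frame
# transfers up to a 2560x1600 surface.
bridge_limit = 2560 * 1600  # 4,096,000 pixels per frame

configs = {
    "3x 1080p": 3 * 1920 * 1080,    # 6,220,800 -- already over the limit
    "6x 1080p": 6 * 1920 * 1080,    # 12,441,600 -- the 12MP case AMD estimated
    "6x 30-inch": 6 * 2560 * 1600,  # 24,576,000
}

for name, px in configs.items():
    status = "exceeds bridge" if px > bridge_limit else "fits"
    print(f"{name}: {px} pixels, {status}")
```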

In fact, this is something we’d like to test soon. You know, for science.

One might be tempted to pair up a regular Radeon HD 5870 with an Eyefinity6 version for maximum performance, but remember that each card in a CrossFire group takes on the attributes of the lowest-spec card in the bunch. In this case, pairing a Radeon HD 5870 1GB with a 2GB Eyefinity6 card would reduce the effective memory size of each card to 1GB—one step forward, two steps back, most likely. Just something to keep in mind if you’re looking to build such a setup.

Another possibility that would seem to make a certain amount of sense would be a dual-GPU Radeon HD 5970 board with six display outputs and, say, 4GB of memory onboard (2GB per GPU). AMD wasn’t ready to announce such a beast when we inquired about this possibility, but we were told to “watch this space.” The firm says it won’t prevent add-in board makers from concocting such a monster, so we could well see something along those rather intimidating lines on the market before long.

Grandpa laces up his skates

Our recent foray into providing some broader historical context for our hardware reviews proved sufficiently popular that we thought we’d try it again, this time for graphics cards. To make that happen, we had to dial back the number of games we tested and focus instead on a breadth of cards—all within the context of a very limited amount of time for testing.

At any rate, we have chosen to test a number of current graphics cards from just above, just below, and somewhat near the 5830’s price. We’ve also selected some older cards from about the same price class dating back over a number of years. Here are our key match-ups to watch:

  • Radeon HD 5830 vs. erm… — Finding a direct competitor to the Radeon HD 5830 isn’t easy. The natural candidate would have been the GeForce GTX 275, but those have essentially evaporated from e-tail listings. For the time being, they seem to have been replaced by higher-clocked variants of the GeForce GTX 260, which offer essentially the same mix of price and performance. We have chosen one such card—an Asus with 216 SPs, a 650MHz core clock, 1.4GHz shaders, and 2.3GHz memory—to represent the GTX 260 in our testing. Cards like this one will set you back about $220. Lower-clocked GTX 260 variants may cost you less, but they are all in alarmingly short supply these days.

    Although the 5830 replaces the Radeon HD 4830 in spirit, it’s more of a direct successor, price-wise, to the Radeon HD 4890. Due to limited testing time and our desire to situate the 5830 among its Radeon HD 5000-series brethren, we didn’t include a 4890 in our tests. An outrage, I know! But we did include the Radeon HD 4870, simply because we figure more folks own them and might be considering an upgrade. The 4890 should typically be about 10-15% faster than the 4870, for comparison.

    In a sense, the 5830’s most notable competition may be the 5770 and 5850, since they’re also DX11 products with potentially more appealing value propositions.

  • Radeon HD 4850 vs. GeForce 9800 GTX — We decided to go historical on this one a little bit, so rather than comparing the more current offerings based on these GPUs, the GeForce GTS 250 and the Radeon HD 4850 1GB, we reached back into the parts bin for the original items. Our Radeon HD 4850 512MB is an Asus model from the first wave back in June of ’08.

    At the time of the 4850’s launch, the incumbent offering from Nvidia was the GeForce 9800 GTX, dating back to April of 2008. Nvidia quickly countered the 4850 with higher-clocked variants of the 9800 GTX, including the 9800 GTX+ based on a newer 55-nm GPU. We probably should have tested that one, but when I reached into the Damage Labs parts bin, for some reason, I pulled out an original XFX 9800 GTX with default clocks of 675MHz (core), 1688MHz (shaders) and 2200MHz (memory), well below the 738/1836/2200MHz frequencies of the 9800 GTX+. Truth be told, GeForce 9800 GTX cards with their original clock speeds kept selling for quite a while after the introduction of the Radeon HD 4850 before the 9800 GTX+ overtook them, but I’m still kicking myself over this selection.

From left to right: Radeon HD 5770, Radeon HD 4850, and Radeon HD 3870

  • Radeon HD 3870 vs. GeForce 8800 GT — Now we’re getting a little older school. The GeForce 8800 GT ruled the $199-249 price range for quite a while, starting with its October 2007 unveiling. The 8800 GT’s price-performance ratio was a revelation at the time, and when AMD pulled back the curtain on the Radeon HD 3870 the following month, the new Radeon struggled to keep pace. Nevertheless, both were good values at the right price, and they were both quite popular, as was the Radeon HD 3850, a close relative of the 3870.

    We’ve chosen a couple of representatives left over from our massive comparo of mid-range graphics cards back in early 2008. Asus’s EN8800GT TOP has clocks of 700/1750/2020MHz, north of the GeForce 8800 GT’s base frequencies of 600/1500/1800MHz, which puts its performance alarmingly close to our GeForce 9800 GTX, as you’ll see. We gave it a TR Recommended award in that comparo. Asus represents the other team, too. The EAH3870 TOP’s 850MHz core clock is well above the Radeon HD 3870’s stock 775MHz, while its 2.25GHz memory clock matches the stock rate.

From left to right: GeForce 8800 GT, GeForce 7900 GTX, GeForce 7900 GS

  • GeForce 7900 GS & GTX vs. the whippersnappers — Back in the fall of 2006, the new hotness at $199 was the GeForce 7900 GS. We reviewed it, liked it, and told folks “the GeForce 7900 GS stands alone as the best value in graphics.” The 7900 GS delivered performance levels closely comparable to its more expensive predecessors, including the GeForce 7800 GTX and the GeForce 7900 GT, for hundreds less. ATI soon countered with the Radeon X1950 Pro, and competitive balance was restored.

    Since we had quite a few more Radeons than GeForces to manage from newer generations, I decided to test older cards from the Nvidia camp. The 7900 GS seemed like a perfect candidate, but as we got into the test results, I realized something: with only 256MB, the 7900 GS was likely to be performance-limited by its video memory size. To rectify that situation, I pulled out an old GeForce 7900 GTX 512MB, from May of ’06, and put it through its paces, too. So we’ve included not one, but two four-year-old video cards in the bunch.

Test notes

Our GPU test rigs have been pretty much the same for a while now, and since they’re based on Core i7-965 Extreme processors and Gigabyte X58 motherboards, we see little need to upgrade the core components. However, Corsair recently hooked us up with some new memory kits for these systems that take us all the way to 12GB in six DIMMs. Like so:

These are high-grade Dominator DIMMs, and we expect the extra RAM could be helpful when we start installing multiple 2GB video cards in these systems and driving three or more displays. Although these Dominators come with auxiliary DIMM cooling fans, we found that our open-air test rigs were perfectly stable without them, impressively enough. Our thanks to Corsair for providing the memory.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor           Core i7-965 Extreme 3.2GHz
Motherboard         Gigabyte EX58-UD5
North bridge        X58 IOH
South bridge        ICH10R
Memory size         12GB (6 DIMMs)
Memory type         Corsair Dominator CMD12GX3M6A1600C8
                    DDR3 SDRAM at 1333MHz
Memory timings      8-8-8-24-2T
Chipset drivers     INF update 9.1.1.1015
                    Matrix Storage Manager 8.9.0.1023
Audio               Integrated ICH10R/ALC889A
                    with Realtek 6.0.1.5919 drivers
Graphics            Asus EAH3870 TOP Radeon HD 3870 512MB
                    Asus EAH4850 Radeon HD 4850 512MB
                    Diamond Radeon HD 4870 1GB
                    Gigabyte Radeon HD 5770 1GB
                    Radeon HD 5830 1GB
                    Radeon HD 5850 1GB
                    Asus EAH5870 Radeon HD 5870 1GB
                    (all Radeons with Catalyst 8.703-100210a-095560E drivers)
                    XFX GeForce 7900 GS 480M 256MB
                    GeForce 7900 GTX 512MB
                    Asus EN8800GT TOP GeForce 8800 GT 256MB
                    XFX GeForce 9800 GTX 675M 512MB
                    Asus ENGTX260 TOP GeForce GTX 260 896MB
                    Asus ENGTX285 TOP GeForce GTX 285 1GB
                    (all GeForces with ForceWare 196.34 beta drivers)
Hard drive          WD Caviar SE16 320GB SATA
Power supply        PC Power & Cooling Silencer 750 Watt
OS                  Windows 7 Ultimate x64 Edition RTM

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Diamond, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

  • 3DMark Vantage
  • DiRT 2
  • Borderlands
  • Left 4 Dead 2
  • Call of Duty: Modern Warfare 2
  • Left 4 Dead (for power consumption testing)

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

We’ve already looked at the theoretical peak numbers for the Radeon HD 5830 compared to its closest relatives, but the table below will put it into a bit broader perspective.

                           Peak pixel   Peak bilinear   Peak bilinear   Peak memory   Peak shader arithmetic
                           fill rate    INT8 filtering  FP16 filtering  bandwidth     (GFLOPS)
                           (Gpixels/s)  (Gtexels/s)     (Gtexels/s)     (GB/s)        Single-issue  Dual-issue
GeForce 7900 GS            7.7          9.6             --              44.8          --            --
GeForce 7900 GTX           10.4         15.6            --              51.2          --            --
GeForce 8800 GT            11.2         39.2            19.6            64.6          392           588
GeForce 9800 GTX           10.8         43.2            21.6            70.4          432           648
GeForce GTS 250            12.3         49.3            24.6            71.9          484           726
GeForce GTX 260 (216 SPs)  18.2         46.8            23.4            128.8         605           907
GeForce GTX 275            17.7         50.6            25.3            127.0         674           1011
GeForce GTX 285            21.4         53.6            26.8            166.4         744           1116
Radeon HD 3870             13.6         13.6            13.6            73.2          544           --
Radeon HD 4850             11.2         28.0            14.0            63.6          1120          --
Radeon HD 4870             12.0         30.0            15.0            115.2         1200          --
Radeon HD 4890             14.4         36.0            18.0            124.8         1440          --
Radeon HD 5750             11.2         25.2            12.6            73.6          1008          --
Radeon HD 5770             13.6         34.0            17.0            76.8          1360          --
Radeon HD 5830             12.8         44.8            22.4            128.0         1792          --
Radeon HD 5850             23.2         52.2            26.1            128.0         2088          --
Radeon HD 5870             27.2         68.0            34.0            153.6         2720          --

These theoretical capacities don’t correspond directly to performance, of course. Much depends on the quirks of the GPU architectures and their implementations. We can measure some of these things with directed tests, though, to give us a sense of how the cards compare. Sadly, we’ve not been able to include the older, DirectX 9-only graphics cards in these tests, because 3DMark Vantage requires DirectX 10.

I’ve only included partial information in the table above for the two GeForce 7-series cards, in part because of some limitations of these older architectures. For example, the G71 GPU could filter FP16 texture formats, but it couldn’t do so in conjunction with multisampled antialiasing. Counting FLOPS on a non-unified shader design is also a little tricky, so I’ve abstained. Nonetheless, progress in the past four years has been substantial. The Radeon HD 5830 has 4.6 times the texture filtering capacity and 2.8 times the memory bandwidth of the GeForce 7900 GS. Similarly, the Radeon HD 5870 has 4.4 times the filtering rate and triple the memory bandwidth of the GeForce 7900 GTX.

We’ve often thought that GPU performance in 3DMark’s color fill rate test seems to be limited primarily by memory bandwidth. Notice how much faster the Radeon HD 4870 is than the Radeon HD 5770, for instance. The 5770 has a slightly higher theoretical peak fill rate, but the 4870 has nearly twice the memory bandwidth and proves markedly faster in this directed test.

The 5830, however, breaks that trend by delivering a much lower measured fill rate than the 5850, though their memory bandwidth on paper is identical. Heck, the 4870 outscores the 5830, too, even though it has slightly less theoretical peak fill rate and memory bandwidth. Something about the way AMD pruned back the Cypress GPU’s render back-ends produces unexpectedly poor results in this test.

3DMark Vantage was released in April 2008, and only in the past few weeks has Futuremark fixed the units output by its texture fill rate test. I’m not sure why this obvious bug, about which we exchanged e-mails with Futuremark several times back in ’08, took so long to squish. At least we now have our first set of 3DMark texturing results that make intuitive sense.

Those results show us something we’ve long known: that AMD’s recent GPUs score much better than Nvidia’s in this benchmark. Being able to put units to them, though, gives us some additional insight. Notice how the Radeons reach very close to their theoretical peaks for INT8 filtering, while the GeForces are just as close to their half-rate FP16 peaks. We’ve long thought this was a test of FP16 texture filtering rate. What’s going on here?

When we asked Nvidia to explain why its GPUs were only reaching about half of their potential, we received an interesting answer. Turns out, Nvidia told us, that this test does indeed use FP16 texture formats, but it doesn’t filter the textures, even bilinearly. It’s just point sampled, believe it or not. The newer Radeons, it seems, can point-sample FP16 textures at their full rate, even though they can’t filter them at that rate. Nvidia’s GT200 samples FP16 textures at half of the INT8 rate, hence the disparity. Interestingly, Nvidia says the upcoming GF100 can sample FP16 textures at full speed, so it should perform better in this test, once it arrives. Trouble is, we’d really rather be measuring the texture filtering rates, which matter more for games, than the raw texture sampling rates of these GPUs.

For what it’s worth, the Radeon HD 5830 does sample FP16 textures at a much higher rate than the Radeon HD 4870 or 5770. In theory, it should be able to filter them faster, as well.

Performance on these shader power benchmarks tends to vary quite a bit from one GPU architecture to the next. As a result, the 5830 exchanges victories with its closest rival, the GeForce GTX 260, from one test to the next. Meanwhile, the 5770 and 5850 tend to bracket the 5830 exactly as one would expect. The more interesting result may be the fact that the 5830 is between two and four times the speed of the Radeon HD 3870.

DiRT 2

This excellent new racer packs a nicely scriptable performance test. We tested at the game’s “high” quality presets with 4X antialiasing in both DirectX 9 and DirectX 11 modes (DiRT 2 appears to lack a DX10 mode). For our DirectX 11 testing, we enabled tessellation on the crowd and water. Because this automated test uses computer A.I. and involves some variance, we tested five times at each resolution and have reported the median results.

This is one very good-looking game, but astoundingly, even the Radeon HD 3870 is able to play it pretty fluidly—minimum frame rate: 29 FPS—at 1920×1080. Everything faster is more than capable, including the 5830.

The match-up between the GeForce GTX 260 and the new Radeon comes down to performance scaling at different resolutions. The 5830 is faster at lower resolutions, but the GTX 260 is increasingly more competitive as the demands on the GPU grow. At 2560×1600, the 5830 trails, though by a trivial margin of a few frames per second.

The GeForce 7900 cards are nothing if not consistent. The answer appears to be: 13 FPS. No matter what. The 7900 GS does run out of video memory at 2560×1600, though, and can’t start the game. I did test the GeForce 7 cards at 1366×768, as well, and guess what? 13 FPS for the 7900 GS and 14-15 FPS for the 7900 GTX.

DiRT 2‘s extra DirectX 11 effects don’t change the look of the game too terribly much, but they do tax the GPUs quite a bit more. Here, the gap between the Radeon HD 5830 and 5770 is negligible, while the 5850 is about 10 FPS faster at each res.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

Embarrassingly, the 5830 is barely any faster than the 4870 in Borderlands—and in this case, we can’t blame it on the reduction in anti-aliasing power caused by the 5830’s ROP-ectomy. We’re not even using AA. The GeForce GTX 260 is substantially faster in this game, and overall, the Nvidia cards tend to have higher minimum frame rates than the Radeons.

Even without antialiasing, Borderlands is off the menu for the GeForce 7 cards. We could probably scale back the resolution and image quality quite a bit and get this game to run acceptably, but the settings we’re using just overwhelm their abilities. Meanwhile, the GeForce 8800 GT looks to have been a much wiser choice than the Radeon HD 3870 in the light of this contemporary game. The 3870’s average frame rate of 28 FPS at 1680×1050 matches the 8800 GT’s minimum frame rate. In fact, even at 1920×1080, you may not need to upgrade from an 8800 GT, given the frame rates it’s producing.

Left 4 Dead 2

In Left 4 Dead 2, we got our data by recording and playing back a custom timedemo comprised of several minutes of gameplay.

Here’s another example of the 5830 barely outperforming the Radeon HD 4870, which isn’t really a good omen for a $240 graphics card. Once again, the GTX 260 is faster at the highest resolution, too.

On the historical front, the GeForce 8800 GT continues to spank the Radeon HD 3870 in newer games. The 3870 can’t really handle this game at these quality levels. And yeah, the GeForce 7-series cards are overmatched yet again. Purely in terms of frame rates, at 1920×1080, the 5830 is seven times the speed of the GeForce 7900 GS. Yeah, it might be time to upgrade.

Call of Duty: Modern Warfare 2
Modern Warfare 2 generally runs pretty well on most modern PC hardware, but it does have some parts where lots of activity and heavy use of shader effects can slow it down. We chose to test performance in one such area, where you’re in a firefight inside of an office building. This close-quarters fight involves lots of flying debris, smoke, and a whole mess of enemy soldiers cooped up with your squad in close proximity.

To test, we played through this scene for 60 seconds while recording frame rates with FRAPS. This firefight is chaotic enough that there’s really no hope of playing through it exactly the same way each time, although we did try the best we could. We conducted five playthroughs on each card and then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.
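In code terms, our reporting boils down to something like the sketch below. The frame rates are made up for illustration; only the median-of-five-runs logic mirrors our actual method:

```python
from statistics import median

# Hypothetical FRAPS results from five 60-second playthroughs on
# one card: (average FPS, minimum FPS) per run.
runs = [(52.1, 31.0), (49.8, 28.5), (53.4, 30.2), (50.6, 27.9), (51.7, 29.4)]

avg_fps = median(avg for avg, _ in runs)  # median of the five averages
min_fps = median(mn for _, mn in runs)    # median of the five minimums
print(avg_fps, min_fps)                   # 51.7 and 29.4
```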

We had all of MW2’s image quality settings maxed out, with 4X antialiasing enabled, as well. We could only include a subset of the cards at this resolution, 2560×1600, since the slower ones couldn’t produce playable frame rates.

At last, the 5830 acquits itself reasonably well in one of our gaming tests. The GTX 260 is a tad quicker, but the two cards offer essentially equivalent performance, and the 5830 clearly outperforms the 5770 and 4870.

Power consumption

Because we don’t yet have our hands on a production version of the Radeon HD 5830, and because our testing time was limited, we’ve truncated the last few bits of our usual test suite. We’ve deferred GPU temperature and graphics card noise measurements until we have a production card, and I chose to draw on the results from our Radeon HD 5700 series review to give you a sense of the 5830’s relative power consumption. Although we used older drivers for most of the cards in that review, we don’t expect that to affect power consumption dramatically. Only the results from the 5830 are new here, and yes, we popped out three of the DIMMs so our test rig’s RAM config matched the one from our 5700 review.

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution.

These results largely fit our expectations for a Cypress-based graphics card. Either a modification to the 5830 reference card or some change to our graphics test config (such as newer drivers) has allowed the 5830-based system to shed six watts at idle versus the Radeon HD 5870. I’m not at all surprised to see the 5830 drawing a little more power under load than the 5850. Although the 5830 has more units disabled, its higher clock speed probably requires higher voltage flowing to the chip, and voltage is the single biggest determinant of power draw. Regardless, the 5830’s power consumption is quite reasonable and manageable.
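The intuition about voltage has a textbook basis. To a first approximation, dynamic power in CMOS logic follows

P ≈ α · C · V² · f

where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency. This is a general rule of thumb rather than anything AMD has specified for Cypress, but because voltage enters as a square while frequency enters only linearly, even a small voltage bump to sustain 800MHz could outweigh the savings from the 5830’s disabled SIMDs and ROPs.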

The value proposition

Before we conclude, we can take a quick analytical look at the Radeon HD 5830’s value proposition. We’ll be taking the same basic approach we have in our past looks at GPU value, but we’ll be basing our results on current pricing and performance.

Per our custom, we’ve averaged pricing for the graphics cards from Newegg, paying careful attention to the higher prices often associated with higher-clocked variants of specific cards, like our GeForce GTX 260. We did our best to sample prices appropriate to the cards as tested. The exception, of course, is the Radeon HD 5830, where we’ve used AMD’s suggested e-tail price.

For performance, we used the average of the frame rates across the four games we tested. In cases where we tested at multiple resolutions, we focused on the highest resolution, since that’s where performance is most strictly GPU-constrained. For the math geeks among us, we considered using a harmonic mean of the frame rates, but we decided against it. We’re aware of the conventional wisdom on this point and are considering how to treat this issue in these little value exercises going forward.
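For the curious, here’s the difference in a nutshell, using made-up frame rates. The harmonic mean weights slower results more heavily, which is why it’s often argued to be the fairer way to average FPS:

```python
from statistics import mean, harmonic_mean

fps = [45.0, 60.0, 90.0]  # hypothetical per-game average frame rates

print(mean(fps))                     # 65.0 -- the simple average we report
print(round(harmonic_mean(fps), 1))  # 60.0 -- pulls toward the slowest results
```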

Mashing up price and performance together allows us to produce a simple scatter plot where the best values will tend to be closer to the top left corner of the plot area.
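Conceptually, the plot reduces to something like this sketch. The price and frame-rate pairs here are placeholders, not our measured data:

```python
import matplotlib.pyplot as plt

# Placeholder (price, average FPS) points -- not our measured results.
cards = {
    "Radeon HD 5770": (170, 38),
    "GeForce GTX 260": (220, 46),
    "Radeon HD 5830": (240, 47),
    "Radeon HD 5850": (300, 58),
}

for name, (price, fps) in cards.items():
    plt.scatter(price, fps)
    plt.annotate(name, (price, fps))

plt.xlabel("Price (USD)")
plt.ylabel("Average FPS across the test suite")
plt.title("Better values sit toward the upper left")
plt.show()
```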

The Radeon HD 5830’s combination of a $240 price tag and performance that’s not much better than a Radeon HD 4870 doesn’t add up to a tremendous GPU-buying value, from a sheer price-performance perspective. If you’re not concerned about power consumption and a DirectX 11 feature set, you’re easily better off with a GeForce GTX 260.

Conclusions

The case against the Radeon HD 5830 was made quite clearly in the value scatter plot on the preceding page. This graphics card’s price-performance proposition just isn’t terribly attractive. That matters a lot in a product like this one, which is essentially a negotiation between the GPU maker and the consumer: we’ll cut the price this much, the performance that much, and then see what you think. The fact that you’re still better off on an FPS-per-dollar basis with a GeForce GTX 260 than a Radeon HD 5830 is a rather disappointing development in a market where we’re used to seeing practically uninterrupted progress.

The case in favor of the Radeon HD 5830 demands our consideration, as well, and is surprisingly multi-faceted. The Radeon HD 5000 series has many benefits, including the highest quality texture filtering and the fastest, best antialiasing capabilities on the market. DirectX 11 is still a bit of an unknown, but game developers do seem to be adopting it. Compared to an older card like the GTX 260, the 5830 may offer higher performance, superior visual effects, or some combination of the two in upcoming games. Beyond that, the 5830 has the ability to drive three monitors at once and play games on them, thanks to AMD’s Eyefinity feature. And we can’t forget that the peak power draw of our 5830-based system was about 25W lower than our GTX 260-based one.

Potential buyers will also have to consider the value of the Modern Warfare 2 game bundle that AMD and some of its board partners are offering. That’s a $60 game, after all, and it should be packaged with many of these cards, as will other games in some cases. We’re not always big on game bundles, but MW2 is a heckuva lot of fun. Of course, if you already own it like many of us do, that won’t matter much to you, I suppose. And if card makers really charge an extra $25 for the MW2 bundle, we’ll be less than compelled.

One question we can’t yet answer is whether the 5830’s lower power draw will translate into lower noise levels than your typical GeForce GTX 260 or Radeon HD 4890. That will depend, of course, on the coolers selected by the board guys, just as the game bundles will. As a result, the ultimate value proposition of a given 5830 card isn’t something we can precisely gauge.

Overall, though, here’s what I think. The Radeon HD 5830 fills a gap in AMD’s lineup that desperately needed filling. The fact that AMD decided to address that need in this exact way, however, is ultimately disappointing. Given the current shortage of viable alternatives and the 5830’s richer feature set, we may end up tentatively recommending the 5830 as the card to choose in this price range, but we can’t do so with any great enthusiasm. Perhaps once the final products have reached the market, we’ll have some more positive indications. The 5830 had a chance to win our unqualified recommendation, though, and it simply hasn’t done so.

Comments closed
    • mutarasector
    • 10 years ago

    “Although AMD was able to present six displays to Windows Vista as a single, large surface ready for use in 3D accelerated games, making that happen in Windows 7 involved an additional technical hurdle. Win7 would allow for up to four 3D-accelerated displays, but not six. ”

    This is not quite correct. It isn’t so much of a…

    • Suspenders
    • 10 years ago

    I’d just like to request that Napoleon: Total War (or Empire) be included in the next graphics card review. It would be a nice hat tip for all of us who aren’t huge fans of first person shooters 🙂

    Otherwise, very interesting round up. It’s interesting how far things have come in just a few years…

    • sigher
    • 10 years ago

    $240 (as a starting price at release) might be JUST doable, but in Europe they all sell for €250, which even with the current low euro equals $339, and I think you can see that’s just totally unacceptable, especially since that’s very close to the €270 they ask for the HD 5850.

    And incidentally the HD5850 is still poorly available according to listings, and they mock nvidia’s issues, hah, and everybody turns a blind eye to these ATI manufacturing issues, even though the cards would have been at least 30% cheaper now if it wasn’t for that.

      • khands
      • 10 years ago

      No, they’d just be making more money faster, until there’s a competitor these will be expensive.

        • sigher
        • 10 years ago

        I’m sure there’s some of that too.

    • flip-mode
    • 10 years ago

    Just wanna say to Mr. Wasson: I really appreciate you taking the time to throw the old card in there. And I really appreciate the value chart. Very, very awesome stuff, Mr. Wasson.

    • Vaughn
    • 10 years ago

    I picked up a 4890 in oct 09 for $189 and after seeing this review i’m still laughing!

    • Rakhmaninov3
    • 10 years ago

    Still like my 9800 GTX

    • flip-mode
    • 10 years ago

    boo boo

      • LoneWolf15
      • 10 years ago

      Need a band-aid for that? 😉

    • LoneWolf15
    • 10 years ago

    I waited for the 5830 to come out. Until a couple of days ago, the jury was still out on what I thought of it. Now I’m just disappointed.

    Combine that with the fact that prices on ATI cards from the 5770 on up just jumped $10-20 across the board at the end of this past week, and I decided to throw out one of the things I wanted (lower power usage) and upgrade from a 4850 to a 4890. In every test I’ve seen, it compares favorably to a 5830, and I was able to get one new for about $160 (current 5770 pricing has shot up from $159 to $179-189).

    I wanted a 5850, but at the current price of $299-309, I’m just not willing to budget for it (I might have paid the original MSRP of $259). I think at 1920×1200, the 4890 should do well most of the time, and I don’t see DirectX 11 being a huge thing for awhile longer.

    Thanks Scott, for the review. You and others helped me make the decision to save some coin on my purchase.

    • jokinin
    • 10 years ago

    I don’t really know why this card has even been released, at least at this price point. It barely outperforms my year-and-a-half-old Radeon 4870, and it costs almost the same as my 4870 did.
    And if that wasn’t enough, they plan to increase prices even more. Thanks AMD, now I know I must skip this whole 5K generation and wait until the next one is out and is more affordable.

      • geekl33tgamer
      • 10 years ago

      It’s called AMD DX11 Tax. On March 26th (ish), AMD’s prices will fall slightly when the green team release (using that word carefully) their new cards. It’s also barely faster than a GTX 260, and that can’t be avoided as they cost about $90-100 less to buy.

      This card on a performance vs cost perspective would be a lot better if it was $190-200 instead.

    • Sunburn74
    • 10 years ago

    GTX 260 > HD 5830 at high resolutions? Outrageous son! Outrageous. I bought my gtx 260 for $160 flat and its outperforming a $240 card by that much? To tell the truth, the HD 5850 doesn’t look so good either when you stack the high res gtx 260 scores vs it

      • khands
      • 10 years ago

      This is a factory OC’d GTX 260 (which I really wish they’d mentioned on each of the tests), it’s effectively a GTX 275.

        • JustAnEngineer
        • 10 years ago

        Amen to that. Using a rare overclocked card and not labeling it as such just leads to confusion.

    • gbcrush
    • 10 years ago

    Well, thanks a lot. Reading that has just re-enflamed the buyers guilt I feel over pulling the trigger on an 5850.

    But the fact that this report was able to do so, and that I’d so willingly accept it, only speaks to how well done I think this article is. Even if you come to the same conclusion as other sites (say, Anand’s)…there’s real meat in the writing here, and of course the humor is pretty thick icing on the cake.

    I definitely like what you’ve done though, both with the Value/Performance graphing, and the comparisons going way back. I tend to read these articles more intently when I’m itching to make an upgrade…and of course, I’m not trying to upgrade from a contemporary card…but from one a good several seasons ago. Keep it up.

      • Freon
      • 10 years ago

      “Well, thanks a lot. Reading that has just re-enflamed the buyers guilt I feel over pulling the trigger on an 5850.”

      Did we read the same article? O_o

    • flip-mode
    • 10 years ago

    The longer the Radeon 5000 series is with us, the less impressive it seems. If anything saves it, it is the fact that any 58xx or 57xx card is fast enough to run today’s games, and probably tomorrow’s. But pricing turns things sideways. I feel like the 5770 should be $125 max, and the 5850 should be $225 max. That’s just me…. but I don’t see this happening any time soon… Nvidia is likely to launch a very limited number of cards and will price them high, which will essentially give AMD no reason to lower prices.

    It’s going to be Christmas 2010 before I think there’s a chance of the situation improving.

      • MadManOriginal
      • 10 years ago

      By this time we should be expecting to see rumors and info on the imminent release of the 6-month respin cards, but that doesn’t look to be happening. It would certainly be a bit of a price-drop driver. At this point AMD may just be holding off on that in favor of the Islands cards in Q4 of this year. There’s your Christmas price shift. 😉

      In the end the 5k series is really the 4k series bulked up on steroids (in some cases; in some cases they haven’t even added performance) with DX11, and the underlying base goes back to the R600. It’s much like NV used the G80 as a base for a long time, and Fermi is NV’s truly ‘new’ unified architecture.

        • khands
        • 10 years ago

        Everything save the 5870s and their derivatives has been effectively beefed up…

      • d0g_p00p
      • 10 years ago

      I am not a cheap ass like most of the TR readers seem to be. ~$300 for the 5850 is a fantastic deal to me (which is why I own one). It will run everything today at max resolution, as well as upcoming games well into 2012. If performance starts to suffer, lower the rez by a notch.

      People are forgetting that you used to have to spend $400+ consistently to have a very fast graphics card that could play current titles at max rez and high settings. The last few years we have seen that trend break, with very fast cards selling for not that much money. However, people seem to want to purchase the top-end card for $99 and will always complain about the prices.

      Yes, ATi spoiled us with the 4XXX series but that was an exception and not the norm. For a high end card you will pay. Remember spending $400 for a Matrox Millennium, a Voodoo 2 8MB, 9700 Pro, etc? I do.

      What really kills me are the people who don’t game on their PC’s complaining. For a tech enthusiast hardware site, there sure are plenty of cheapskates and whiners.

      Yes, I’ll take some cheese with my whine.

        • flip-mode
        • 10 years ago

        You make a valid point, on the one hand. On the other, computer enthusiasts have gotten used to more for less in almost all areas – CPUs, mobos, HDDs, and even video cards. This is the first time in my memory that video cards are being released that offer LESS performance while costing MORE money (than a cheaper but higher-performing card).

          • rUmX
          • 10 years ago

          I would say the 5830 is an exception to the norm. I really don’t know why this card was even released. The 5830 is just a hack job to fit a price point. I don’t even think these will sell well. Probably, this card will just end up on the retail shelves of B&M stores selling to your average joe.

      • Freon
      • 10 years ago

      I agree more on the 5770 and below, but less on the 5850 and above. The 5850, even at $290-300, seems to stand on its own.

      It’s a rough time for the consumer, though. Little competition, low supply, price premiums. Good time to just ride it out, but if you really need to upgrade I don’t think you’ll be kicking yourself too hard if you pick up a 5850 right now.

      Sub-5770 cards are a tougher call, since the DX10 cards seem to be better values on a price/performance basis.

        • Firestarter
        • 10 years ago

        Yeah, dirt-cheap quad cores, cheap RAM, terabytes for cheap, reasonably priced SSDs that make 15k SCSI arrays eat dust, IGPs that decode blu-ray and even play COD4, man it’s killing me to be a consumer right now. Especially since all this cheap stuff is around and I still cannot afford to ditch my 3 year old laptop!

          • Rakhmaninov3
          • 10 years ago

          Amen, brother.

          • MadManOriginal
          • 10 years ago

          Screw that, where’s my flying car??!

            • khands
            • 10 years ago

            Also, laser guns.

            • highlandr
            • 10 years ago

            I’m willing to wait a couple of years on the flying car if somebody can get me a personal rocket pack instead.

            I promise to keep out of commercial airspace.

          • flip-mode
          • 10 years ago

          Cheaper, faster, better is the normal pattern for computer hardware. In this case, I just feel that the “cheaper” part didn’t quite materialize as well as it has in the past. Or else the faster. The 5770 has been the main target of my ambivalence, and now the 5830 joins it and even makes the feeling more acute. The 5850’s and 5870’s original launch prices were actually pretty good, and I think when I said “$225 max” earlier for the 5850 I was undercutting it a little. Whatev… just typing what comes to mind…

          • Freon
          • 10 years ago

          This was only a comment with regards to video cards.

          I’m unsure how you would take that as a blanket statement for the computer industry. Why did you stop there? Why not talk about 50″ plasma TVs, smart phones, front loading clothes washers, reverse osmosis water systems, etc.

    • DaveJB
    • 10 years ago

    If I didn’t know better, I’d say the price gap between the 5770 and the 5830 (combined with the amount of cards ATI has already released this generation) means that AMD are planning a “5810” to fit in that gap, but I’m struggling to think of how that’d work from a specification point of view – maybe the same number of cores as the 5770, but with a higher clock speed and the 256-bit memory bus?

    • flip-mode
    • 10 years ago

    Thanks for the review Mr. Wasson. Your assessment of the card seems dead on.

    Maybe it’s the benchmark you’ve chosen, but as others have said, if you have an 8800GT you can still get by, and if you have a 4850 or 9800GTX or better then you’re essentially in great shape.

    And unless you’re coming from something less than an 8800GT, anything less than a 5850 doesn’t seem like an exciting upgrade.

    It’ll be a great day when the 5850 can be had for $200 or even less.

    • Joerdgs
    • 10 years ago

    Kind of missed the Geforce 7 series in the power consumption graphs, wanted to have a final laugh at them. I like how they constantly pumped out 13 fps in DiRT 2

      • derFunkenstein
      • 10 years ago

      Yes, let’s laugh at 5-year-old hardware running current games and how they can’t handle it. Feel sorry for them!

    • dpaus
    • 10 years ago

    I think the best part of the article is the scatter plot, and the best part of that is the ability to graphically show what each card…

    • indeego
    • 10 years ago

    It’s amusing all the effort TR puts into these $60-$600 cards but they don’t cover AT ALL the 30″ monitors that really take advantage of them. Ah that’s right. They are 2-30x as expensive.

      • d0g_p00p
      • 10 years ago

      Not sure what you are trying to say but Scott includes 2560×1600 results for every video card review. Unless you are implying that these cards need to be tested with dual 30″ displays at 2560×1600 resolution each.

        • indeego
        • 10 years ago

        I’m saying what is the point of testing at that 30″ resolution when a) The vast majority of users do not even have a 30″ LCD. b) TR doesn’t even review LCD’s at that size to give us a “complete picture” of the available resolution.

        In fact, now that I look back, they completely ignore resolutions that are much more common for much more “mainstream” LCD’s (1900/1920) at the 24″ area, with few exceptions.

          • thecoldanddarkone
          • 10 years ago

          They have 1920×1080 benches. The only exception is MW2 which only had 2560×1600.

          • JustAnEngineer
          • 10 years ago

          https://techreport.com/discussions.x/12678

          I game at 2560×1600, so those benchmark results are definitely of interest to me.

            • indeego
            • 10 years ago

            Are you linking me to a post that proves my point? The 24″ is much more affordable than the 30″. The 30″ hasn’t dropped a dime since that was posted.

            • sigher
            • 10 years ago

            Due to both ATI and nvidia pushing multiple displays, you end up with high overall resolutions built from a few relatively cheap monitors. At that point, using one large monitor is just a quick way to test if a card can push the required number of pixels – unless there’s some kind of bottleneck that makes multiple monitors slower than one large monitor with the same total number of pixels.
            Would be interesting to do a quick test on that actually; either one could be slightly faster than the other, depending on architecture, or it could all be equal.

    • LordEkim
    • 10 years ago

    Nice review. Now I’m more confident than before that my two-year-old 8800GT will serve me another 2 or more. Played through Dark Void (2nd-hand sale from a friend, and sold again) with almost all settings on high, and the minimum frame rate was 40. So no need to upgrade until the 5850 is around 90€ and affordable to me.

    PS. don’t bash me on my practice of reselling games because this is the only way I can afford them.

      • djgandy
      • 10 years ago

      Yes my conclusion was the 8800GT is still one of the best value cards around!

    • can-a-tuna
    • 10 years ago

    What has happened to Techreport? They are constantly using over-clocked nvidia cards as comparison. No Crysis or Far Cry 2? I don’t understand having Borderlands as a benchmark. You should have taken Mirror’s Edge too with PhysX enabled and shown how crappy ATI cards are if you go down that line. At least Tom’s has finally learned to under-clock all their over-clocked cards.

      • flip-mode
      • 10 years ago

      ^ Post smells fishy.

        • YellaChicken
        • 10 years ago

        Waka, waka,waka. 🙂

      • grantmeaname
      • 10 years ago

      They explained why they used overclocked cards in the article. They also explained it in every other GPU review on the site in which they used a factory-overclocked card, in case you missed it. If that’s what’s available on the market for that price, there’s no reason to pretend it doesn’t exist and test at the “base” or “reference” model’s clock speed, because the overclocked cards (available for the same amount or slightly more, and with better availability to boot) are what customers actually buy and thus what readers need to see information on.

        • khands
        • 10 years ago

        They really should just put an “OC” after the overclocked cards though.

          • JustAnEngineer
          • 10 years ago

          Agreed. Using an expensive and rare overclocked card and not labeling it as such is not very objective.

          • grantmeaname
          • 10 years ago

          Well, yes, that would be an improvement. But all the people crying Tom’s on us are so clueless…

    • brucect
    • 10 years ago

    What’s up with overclocking that 5830?

      • Freon
      • 10 years ago

      It’s on a 5870 board so power shouldn’t be an issue, but already at a clock speed close to the 5870 (800 vs 850 I think?). Luckily so far all the cards appear to have beefy coolers, basically 5870 coolers so until they figure out they can save a few bucks on a smaller cooler…

      It will be interesting to see how much you can overclock a flawed chip. Or perhaps more interestingly, if someone figures out how to enable some of the disabled parts of the chip.

        • brucect
        • 10 years ago

        I will get one of these 5830s when it comes down under 200 bucks. Maybe overclock it, too.

    • MadManOriginal
    • 10 years ago

    Here’s a question for you to pass along to XFX, Scott. It appears that the XFX 5830 actually uses a stock 5750 cooler – the one that’s basically just a piece of aluminum – is that true? If so I’d think it’s a pretty big stretch for that cooler, even the 5770 v2 cooler is better than that and it uses quite a bit less power.

      • LawrenceofArabia
      • 10 years ago

      My sense says photoshop mockup that somehow made it as an ‘official’ photo. 1. There is no way a stock 5750 cooler can handle the extra 50 watts of power, and 2. I doubt XFX has the fairy magic to shrink an 11″ 5870 reference board down to a 7″ 5750 board.

      Of course, if XFX does possess such fairy magics, they’ve certainly got an edge over the competition.

        • MadManOriginal
        • 10 years ago

        Yeah, the much more compact board is another big question, but a reference 5870 board was a ‘suggestion’ by ATi, not a requirement; there are in fact no reference designs for the 5830, as mentioned in the article. But the heatsink shroud does say 5830 right on it… weirdness.

        *Mid-post I was inspired to check on XFX’s website. It seems that the picture here is clearly off; the board is not that short, but the cooler is representative of the type used (not large, single fan w/ shroud). However, I *think* I can see the edge of a heatpipe in the second picture, which would make it similar to the 5770 v2 heatsink. It’s hard to tell, though, and there’s no zoom. http://www.xfxforce.com/en-us/products/graphiccards/hd%205000series/5830.aspx

    • ssidbroadcast
    • 10 years ago


      • flip-mode
      • 10 years ago

      You want outrage? Go look at [H]’s pathetic example of a review!

      • Voldenuit
      • 10 years ago

      Hopefully it’s not too late for ATI to release an upgraded version, much like nvidia did with the GTX 260 SP216?

      I, for one, would like to see more ROPs.

      Though personally I’d really like to see the 5850s back at their original MSRP of $259 or lower.

      • BobbinThreadbare
      • 10 years ago


        • khands
        • 10 years ago

        They’re otherwise dead weight, they’re getting something for almost nothing with these cards.

    • rastaman
    • 10 years ago

    Scott,
    Great writeup, I can understand why you needed some extra time away from the review. Also, a big thanks for including the 8800GT in the mix. This is a welcome sight to those of us who skip video card generations and want to see an apples to apples comparison with the latest hardware out there. Looks like I’ve got to agree with post #2 & #7!

    One question tho, why did you run Call of Duty: Modern Warfare 2 only at 2560×1600? Were all the lower end cards unplayable at 1680×1050 on that title too?

    ~Rasta

      • toyota
      • 10 years ago

      I bought my gtx260 about 16 months ago for well under $199 which was barely more than a 4850 at that time.

    • DrDillyBar
    • 10 years ago

    “The GeForce 7900 cards are nothing if not consistent.” – I laughed.
    My 4870 trucks along quite nicely at 2.4MP.

    • potatochobit
    • 10 years ago

    maybe I missed it
    where did you talk about the overclock in the review?

    talking about value is fine, however, most people who choose AMD put their value in a fun overclock.

    the 5xxx series is supposed to get much better numbers with a voltage tweak, correct?
    the 5770 supposedly passes the 4870 and is close to the 4890?

    so the only thing I want to know from any 5830 review is simply this:

    when overclocked, will a 5830 surpass a 4890 that has also been overclocked?

      • Veerappan
      • 10 years ago

      l[

    • BoBzeBuilder
    • 10 years ago

    Wow. I thought the 7900GTX would do better than that, at least at 1680×1050.

    As for the 5830, they should’ve just made it a highly overclocked 5770. It would be cheaper to produce, and might have performed better too.

      • FuturePastNow
      • 10 years ago

      Cheaper is relative. If they’ve got a ton of Cypress GPUs that are only good for up to 1120 SPs and/or 16 ROPs, then it’s cheaper to sell those and get some return on them.

      • OneArmedScissor
      • 10 years ago

      A higher clocked 5770 would use a lot of power and require more expensive memory, as well.

      If high speed GDDR5 were in greater abundance, and if 40nm had turned out to be a bit more efficient, that would make sense, but it’s not quite there just yet.

      I imagine that will be a viable option for the midrange of the 6000 series.
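
      To put rough numbers on that, here’s a sketch of what a hot-clocked 5770 would have needed to match the 5830. The stock specs are the published ones; the “needed” figures are simple ratios I derived, not anything AMD or its board partners announced:

      # What a 5770 (800 ALUs @ 850 MHz, 128-bit @ 4.8 Gbps GDDR5) would
      # need to match a 5830 (1120 ALUs @ 800 MHz, 256-bit @ 4.0 Gbps).
      def gflops(alus, mhz):
          return alus * 2 * mhz / 1000.0  # 2 FLOPs (multiply-add) per ALU per clock

      def bandwidth_gbs(bus_bits, gbps):
          return bus_bits / 8.0 * gbps

      needed_core = 1120 / 800 * 800                     # MHz to hit 1792 GFLOPS
      needed_mem = bandwidth_gbs(256, 4.0) * 8.0 / 128   # Gbps on a 128-bit bus
      print(gflops(800, needed_core), needed_core, needed_mem)
      # -> 1792.0 GFLOPS at a 1120 MHz core (a ~32% overclock) and 8 Gbps
      #    memory, far beyond the GDDR5 speeds shipping in early 2010.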

    • grantmeaname
    • 10 years ago

    I’d argue that the 1920*1080 results are a more accurate depiction of the overall trends than the 2560*1600 results.
    That being said, I can’t argue with the scatterplot. The 5830 is not good.

    • StuG
    • 10 years ago

    I feel like the issue was they tried to cram too many cards into the same general area of performance. What they should have done was make the 5770 and 5750 with 256-bit buses and then scale the prices appropriately. That would have made the whole line-up look more complete and more desirable.

    That way the 5770 could have been 180-190 and the 5750 could have been 140-150. Filling in the price gap fine.

    • herothezero
    • 10 years ago

    Great review, and a great example of how these reviews should be done.

    No contest–the 5770 is a far better choice.

      • khands
      • 10 years ago

      If you’re going that route, I’d rather save up a little more and grab a 5850 personally, especially if they come back to earth at the end of the month.

        • BoBzeBuilder
        • 10 years ago

        Except 5850s are selling for around $310, which is ridiculous.

          • khands
          • 10 years ago

          Which is why I said wait a month and hope the 470’s are semi-competitive.

            • BoBzeBuilder
            • 10 years ago

            Which is why I’m saying that even if the 470s are competitive performance-wise, I don’t see ATI dropping 5850 prices by much. After all, the GTX 470 is supposed to compete with the 5870, so I’m guessing it’ll cost ~$400. At best, we’ll see the 5850 at $290, basically where it was a month ago, which is frustrating.

            • poulpy
            • 10 years ago

            Well, that’s a rather different post from the first one-liner; that, or you’ve found a post-compression algorithm that you need to share with the rest of us.

            Edit: On a side note, if we’re not in the Golden Age of graphics cards, price-wise, we’re still quite far from the outrageous prices we’ve seen here and there in the past. Things will probably stay the way they are if Fermi is a flop or ships in mere 10k-unit quantities, though…

            • BoBzeBuilder
            • 10 years ago

            Last generation was the golden age. Prices were amazing dude, top of the line 4890 for $200. Nice.

            • poulpy
            • 10 years ago

            Yes, that’s what I meant; last gen was, thanks to a very competitive market.
            Although the 4890 launched at $250, IIRC, if you want to compare apples to apples.

            • khands
            • 10 years ago

            I think it really depends on how many units Nvidia wants to sell. If they’re going to keep them scarce (entirely possible), then yeah; if they really want to push it, it may go down to ~$250.

    • PRIME1
    • 10 years ago

    I think the reason ATI did so poorly last quarter, is that no one sees these cards as compelling upgrades.

      • grantmeaname
      • 10 years ago

      $250 graphics cards are a very, very, very tiny proportion of the business any graphics company does, much less a graphics, CPU, and chipset company. Some of the bigger parts of AMD’s revenue: the Xenos Xbox 360 chip, mobile CPUs, desktop CPUs, server CPUs, chipsets for any and all of the above, laptop IGPs, desktop IGPs, cheap OEM-only desktop AIBs, cheap laptop discrete GPUs, professional GPUs, and midrange desktop AIBs. Don’t kid yourself.

      Besides, how could a card coming out this week magically go back in time and hurt AMD’s financials for a quarter that’s already over? Derp. Derp.

    • kpo6969
    • 10 years ago

    I’m very happy with my HD5770 and sticking with it.

      • Dposcorp
      • 10 years ago

      q[

      • Nutmeg
      • 10 years ago

      Me too. 5770 does great.

    • FuturePastNow
    • 10 years ago

    It just isn’t faster than the 5770 by enough to justify spending $80 more, and it isn’t faster than the GTX260 by enough to justify spending $40-50 more.

    Unfortunately, I don’t think we can trust Nvidia to actually deliver real midrange Fermi derivatives this year, which means competition won’t increase and prices won’t decrease any time soon.

    • MadManOriginal
    • 10 years ago

    Hey Scott, in the interest of ‘looking at old hardware,’ there’s something that’s been on my mind for a while. I haven’t really played any truly modern games (the newest being at least two years old) and have actually picked up some older games with low-by-modern-standards system requirements thanks to Steam sales; the sysreqs are like a 6600GT or lower. Now, it’s easy to say ‘this card is faster than an old one,’ but what isn’t so easy is knowing what new card or *especially* what IGP might be equal to an old one. I say IGP especially because I’ve been thinking of going with an IGP (AMD DX11/Llano, Sandy Bridge IGP, whatever NV comes up with) for the little lightweight gaming I do in my next new system build.

    The problem is that most any IGP review uses more modern games, but at stupidly low resolutions and/or settings. Great, we know IGPs are bad for newer games… not the most useful tests. It would, however, be very useful to know which older cards today’s low-end GPUs or IGPs are equivalent to for the sake of playing older games. Even just one such article would be useful, because then we could reference off of it in the future, extrapolating for newer IGPs.

    • mczak
    • 10 years ago

    Too bad it’s much closer in performance to the HD 5770 than the HD 5850. Probably has something to do with the strange 3DMark color fill result: this card basically acts like it has a 128-bit memory interface as far as the ROPs are concerned (also compare this to http://www.hardware.fr/articles/783-3/preview-radeon-hd-5830.html which has color fill rate results for a couple of formats; some are way lower than what you’d expect…).
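
    The spec math bears that out. A minimal sketch using the shipping ROP counts and core clocks (my own sanity check, not benchmark data):

    # Theoretical pixel fill rate = ROPs x core clock.
    cards = {
        "HD 5770": (16, 850),  # (ROPs, core MHz)
        "HD 5830": (16, 800),
        "HD 5850": (32, 725),
        "HD 5870": (32, 850),
    }
    for name, (rops, mhz) in cards.items():
        print(f"{name}: {rops * mhz / 1000:.1f} Gpixels/s")
    # The 5830 lands at 12.8 Gpixels/s, below even the 5770's 13.6,
    # despite carrying the 5850's 128 GB/s of bandwidth; hence the
    # 128-bit-like behavior in ROP-bound color fill tests.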

    • ClickClick5
    • 10 years ago

    Nice review, still one problem…

    Why use console-ported games? Remove MW2 from the test list.
    Use a game that is MADE for the PC, or at least tuned past the Xbox 360 copy > paste design.

    • Krogoth
    • 10 years ago

    If you want DX11, get a 5770. It is almost as fast, while using less power and space.

    Otherwise, stick with the 260 and 4870 for the best bang for the buck.

    • Palek
    • 10 years ago

    A couple more articles like this from other respected tech sites and I can see the price tumbling down to $200 very fast.

    Still, some *khm* damage is already done.

    • Fighterpilot
    • 10 years ago

    The best under $250 DX11 card available and it uses far less power both at idle and load than pretty much every card tested.
    Green fans whine but….where’s the Nvidia beef?
    ok I know, I know…. (Still) “Waiting for -[

      • PRIME1
      • 10 years ago

      PhysX > DX11

      So far DX11 is a disappointment. Certainly not a must have.

        • CampinCarl
        • 10 years ago

        Because PhysX is such a required feature.

        *heavy eye roll*

        • OneArmedScissor
        • 10 years ago

        And clown wigs are generally more useful than monocles, but oddly enough, I don’t need either one.

        • Palek
        • 10 years ago

        http://www.nzone.com/object/nzone_physxgames_home.html

        Pretty underwhelming… The DirectX 11 list looks a lot more appealing (to me at least). Games (upcoming titles included) with DirectX 11 support:

        Battleforge (out now)
        The Lord of The Rings Online (January 2010 ??)
        DIRT 2 (out now)
        F1 2010 (March 2010)
        Alien Vs. Predator (Q1 2010)
        STALKER: Call of Pripyat (January 2010 ??)
        Supreme Commander 2 (Q4 2010)
        Crysis 2 (Q4 2010)
        Battlefield 3 (Q4 2010/Q1 2011)
        Dungeons and Dragons Online (Q1 2010)
        Race Driver: Grid 2 (TBA)
        Metro 2033 (16 March 2010)

        http://www.digitalbattle.com/2009/10/27/top-10-upcoming-directx-11-games/
        http://en.wikipedia.org/wiki/List_of_games_with_DirectX_11_support

          • PRIME1
          • 10 years ago

          Funny how you include “upcoming games” to make the list look fuller. So you really have one or two games available.

          Also, DX11 is closed and proprietary and is used to add nothing more than visual effects.

          You really have to bend the truth a lot (break it, even) for your response to be even close to valid.

            • grantmeaname
            • 10 years ago

            DX11 is proprietary to _[

            • PRIME1
            • 10 years ago

            It’s still proprietary.

            Also, there are probably 20-50 times as many PhysX-capable GPUs out there as there are DX11 GPUs.

            • Palek
            • 10 years ago

            Yes, and 14 out of 15 of those PhysX-capable cards slow to a crawl when you enable PhysX. Yay!

            • MadManOriginal
            • 10 years ago

            DX11 is ‘proprietary’ to virtually every ‘gaming’ PC out there, though; the same just isn’t true for PhysX. Plus, a better comparison is adoption rate: it’s been 1.5+ years since NV bought Ageia, and much longer since Ageia introduced PhysX, and the adoption rate is pathetic.

            • rUmX
            • 10 years ago

            It’s actually less than 50%, because those same users won’t be using PhysX unless they enjoy shooting themselves in the foot with single-digit framerates.

            • Palek
            • 10 years ago

            First of all, I mostly copied and pasted from the sources so that is why “upcoming” titles are clearly labelled as such for the DirectX 11 games but not for PhysX. Some of those PhysX titles are also upcoming, though.

            And hey, at least I did not hide the fact that many of those games were still in development. I don’t have an agenda here – I don’t think you can say the same without bringing down the roof on your head for shameless lying.

            Yes, there may be fewer titles out now for DirectX 11. However, another way of looking at it: there just aren’t very many new and interesting games coming out for PhysX. DirectX 11, on the other hand…

            The “closed and proprietary” issue has been addressed by bit-tech: http://www.bit-tech.net/bits/2008/09/17/directx-11-a-look-at-what-s-coming

            By the way, did you know that the Compute Shader feature of DirectX 11 will make PhysX redundant? And it supports nVidia as well as AMD and Intel hardware! Good for everyone, right?

            • Meadows
            • 10 years ago

            g{

        • rUmX
        • 10 years ago
        • flip-mode
        • 10 years ago

        Opinion X > Opinion Y; Fanboy X > Fanboy Y; blah blah blah; blabber blabber.

      • HisDivineShadow
      • 10 years ago

      The best… card that costs $240.

      Certainly, I agree with their assessment that the modest performance advantage over last generation’s cards, which all sell for less (some significantly so), makes for a slim recommendation.

      And the review did suggest that if you’ve just got to have DX11 or a better thermal/power utilization, it might prove more beneficial.

      At $240, it’s overpriced for too little performance. You can get most of the performance from a much cheaper 5770 if thermals or DX11 are *that* important to you.

      Eyefinity on a card of this class would be a stretch.

      EDIT:

      And just think of all the price hikes to come when nVidia releases nothing in this segment or lower for months yet…

      Also, PhysX has been used more than DX11 so far. I expect this to change eventually.

        • grantmeaname
        • 10 years ago

        There’s not been a price hike in the past because of that. The only real price hike came when the big failure of TSMC’s 40nm process drastically cut supply; when the supply issues cleared up, prices headed back downwards.

          • OneArmedScissor
          • 10 years ago

          Sure, there was no price hike /[

      • derFunkenstein
      • 10 years ago

      by virtue of being the only one. There may as well still be a hole between the 5770 and 5850, because you’re WAY better off with either of those (either a bit more cash in your pocket for similar performance, or substantially better performance for not much more cash). This thing is awful.

      • SecretMaster
      • 10 years ago

      The best isn’t always desirable.

      I for one, will not settle for mediocrity.

    • not@home
    • 10 years ago

    I liked that scatter plot. It is an instant visual representation of what I wanted from the article.

    • MadManOriginal
    • 10 years ago

    “I’ve got a fever, and the only prescription is more ROPs”

    AMD crippled this one a bit too much. :/ Oh well.

      • OneArmedScissor
      • 10 years ago

      I bet you it would have run most games roughly as fast or even faster than the 5850 if they hadn’t done that. It’s quite possible it might have even been right up there with the 5870, depending on how little use the particular game makes of so many SPs.

      Not something they’d want to have showing up in benchmarks.

        • lethal
        • 10 years ago

        Pretty likely, given the clockspeed advantage. A 5850 running @ 5870 speeds is pretty close to the real thing:

        http://www.xbitlabs.com/articles/video/display/radeon-hd5850_5.html#sect3

        • MadManOriginal
        • 10 years ago

        At the faster clock speed, yes, perhaps. Of course, they could have just given it the same clock speeds as the 5850 but not cut *half* the ROPs. Even 24 ROPs would probably have made this card a lot more palatable at $240.

        It’s simple logic that in non-shader limited situations a card with just some shaders disabled but all else the same will perform the same 😉
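
        Fill rate is linear in ROP count at a fixed clock, so the hypothetical 24-ROP variant is easy to estimate (my arithmetic, not a real SKU):

        # 5830-style variants at the stock 800 MHz core clock:
        for rops in (16, 24, 32):
            print(f"{rops} ROPs -> {rops * 0.8:.1f} Gpixels/s")
        # 16 -> 12.8 (the shipping 5830); 24 -> 19.2, comfortably between
        # the 5770's 13.6 and the 5850's 23.2; 32 -> 25.6.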

          • OneArmedScissor
          • 10 years ago

          I’m sure it’s a cost thing. It’s probably easier to whack half of them.

    • bdwilcox
    • 10 years ago

    I think you can file the 5830 under the “What the hell were they thinking?” category. What a lame effort on AMD’s part. Disappointing to say the least. Guess I’ll hold onto my money until there’s some actual value in the $200 segment.

      • rhema83
      • 10 years ago

      AMD simply wants to recover some money from Cypress chips that didn’t make the mark for 5850. If the initial wave doesn’t sell well, prices will drop so that AMD at least makes back some money. Given that the 40 nm yields are still not optimal, there will be a constant stream of HD5830 chips into the market.

        • grantmeaname
        • 10 years ago

        It’s hard for me to imagine a chip so defective that only half the ROPs and 70% of the SIMD cores are intact, yet so good on the rest that it can run 75MHz faster without breaking a sweat. This is market segmentation, plain and simple, and defective Cypresses have nothing to do with it.

          • willyolio
          • 10 years ago

          It’s not hard to imagine. Engineers have dealt with imperfect processes for decades and have long designed GPUs with the ability to shut down sectors with defects while running other areas at full speed.

            • grantmeaname
            • 10 years ago

            That’s not my argument. Clearly engineers are able to do that; it’s precisely what every modern graphics card (except each chip’s halo card) and many CPUs do.
            Defects in the wafer and thus the chip are randomly distributed, and it is just _[

            • willyolio
            • 10 years ago

            Well, I doubt they’re going to create a new product for every single combination of defective SIMDs and ROPs.

            The 5870 has 20 SIMDs, the 5850 has 18, and the 5830 has 14. Is it so hard to believe there are a large number of chips being made with anywhere between 14 and 17 functioning SIMD units? They can still pick and choose the cream of that particular crop.

            Or are you saying that AMD should release a 5835 with 15 SIMDs, a 5840 with 16, and a 5845 with 17 SIMD units? Do you believe that every 5850 released has exactly two defective SIMD units? Of course they’re segmenting the market.

            • OneArmedScissor
            • 10 years ago

            “is it so hard to believe there are a large number chips being made with anything between 14-17 functioning SIMD units?”

            Yes lol. There really can’t be many if it took them this long to stockpile enough of them.

            At the price they’re asking, they obviously don’t want it to be a terribly compelling option, and that’s very likely because there aren’t many to go around.

            • grantmeaname
            • 10 years ago

            /[

          • Freon
          • 10 years ago

          Nothing would indicate all 5830 chips have that many flaws. A 5830 chip need only have just enough flaws to make it nonviable as a 5850. Maybe they have some 5830 chips where all of the ROPs are good, but they were one shader core short of a 5850.

          Picking the bins could be fairly arbitrary from that perspective. I’m sure it comes down to salvageable parts and some accounting work.

            • grantmeaname
            • 10 years ago

            That’s what I was arguing. This is clearly a marketing and accounting decision, not an engineering necessity.

            • poulpy
            • 10 years ago

            No, everybody seems to agree that it’s a bit of both.

            They must have chips with different degrees of impairment that they recycle as whatever they please. Then, given the volume needed for any given model, they can pick perfectly fit chips and disable units themselves instead of having a 5870-capable chip sitting on a shelf.

            It’s hard to know the proportions definitively, given that a) we have no inside data whatsoever, and b) the chips have barely started selling, so it could still be 100% recycling of the 5-10k units sold so far…
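
            In that spirit, a toy Monte Carlo shows how the harvest-vs.-segmentation split swings with the assumed defect rate. Every number below (the defect rate, the uniform defect placement, the bin rules) is an illustrative assumption, not AMD data:

            import random

            SIMDS, ROP_PARTS = 20, 4   # Cypress: 20 SIMDs, 4 ROP partitions
            AVG_DEFECTS = 2.0          # assumed mean defects per die
            TRIALS = 100_000

            def sample_die():
                # Poisson-distributed defect count (via exponential gaps on a
                # unit interval), scattered uniformly over functional blocks.
                n, t = 0, random.expovariate(AVG_DEFECTS)
                while t <= 1.0:
                    n += 1
                    t += random.expovariate(AVG_DEFECTS)
                dead_simds, dead_rops = set(), set()
                for _ in range(n):
                    blk = random.randrange(SIMDS + ROP_PARTS)
                    (dead_simds if blk < SIMDS else dead_rops).add(blk)
                return SIMDS - len(dead_simds), ROP_PARTS - len(dead_rops)

            bins = {"5870": 0, "5850": 0, "5830": 0, "scrap": 0}
            half_rops_dead = 0
            for _ in range(TRIALS):
                s, r = sample_die()
                if s == 20 and r == 4:
                    bins["5870"] += 1           # fully intact die
                elif s >= 18 and r == 4:
                    bins["5850"] += 1           # 18+ SIMDs, all ROPs
                elif s >= 14 and r >= 2:
                    bins["5830"] += 1           # salvageable as a 5830
                    half_rops_dead += (r == 2)  # genuinely lost half its ROPs
                else:
                    bins["scrap"] += 1

            for name, count in bins.items():
                print(f"{name}: {100 * count / TRIALS:.1f}%")
            print(f"5830-bin dice with half the ROPs truly dead: "
                  f"{100 * half_rops_dead / TRIALS:.2f}%")
            # Plenty of dice miss the 5850 cut and land in the 5830 bin, but
            # almost none naturally lose two whole ROP partitions; shipping
            # every 5830 with half its ROPs off means fusing good units too,
            # consistent with "a bit of both."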

    • Sunburn74
    • 10 years ago

    The 5830 is an absolutely terrible card. Nothing to offer.

    BTW, who buys three monitors only to pair them with a 5830? I mean, if you can afford three monitors, why can’t you afford the extra 50-60 bucks to get the substantially more powerful 5850?

      • Sargent Duck
      • 10 years ago

      If you can afford three monitors (decent ones, not crappy ones), then forking out for a 5870 will be no problem at all.

      • CampinCarl
      • 10 years ago

      I might do something like that. I currently have a ‘crappy’ Acer 22″ and my LG 32″ TV. I was originally thinking I was going to get money back from taxes (that turned out to be FALSE), and was going to spring for a 5850 and another ‘crappy’ 22″ Acer. This would give me two desktop monitors plus the ability to watch movies on my TV over sound-included HDMI. Winnar.

    • OneArmedScissor
    • 10 years ago

    Finally, a set of cards you can actually make comparisons between!

    Thank you.

      • dpaus
      • 10 years ago

      Seconded, and ditto!

    • PRIME1
    • 10 years ago

    l[

      • Vasilyfav
      • 10 years ago

      I wish the GTX260 still went for $160 with free shipping on Newegg.

      Guess I’ll have to wait for Fermi derivatives.

      PS: Oh yeah, 5830, terrible card. It plugs the price hole, but not the performance hole.

        • OneArmedScissor
        • 10 years ago

        An artificially created “price hole,” at that.

      • LaChupacabra
      • 10 years ago

      “If you’re not concerned about power consumption and a DirectX 11 feature set, you’re easily better off with a GeForce GTX 260.”

      Fixed that for ya =)

      • grantmeaname
      • 10 years ago

      No, the hands-down best card last generation was the 4850, if we’re talking price-performance (or maybe even the 4670, which performed like a 3870 for half the price of whatever nVidia had renamed the 8800GT to). Good try.

        • PRIME1
        • 10 years ago

        The 9800GTX/GTS250 was far better than the 4850. You should cry for how biased you are.

          • OneArmedScissor
          • 10 years ago

          That’s rich.

          • clone
          • 10 years ago

          The 4850 and 4870 made the 260 a failure… a dated product the day after the 4xxx series launched, one that needed its price cut in half just to be noticed.

          The complete lack of any new Nvidia product is the only thing getting it honorable mention in benches today.

          • MadManOriginal
          • 10 years ago

          They are about equal broadly speaking. Don’t be stupid, er, never mind I guess you can’t help it when it comes to NV.

          • rUmX
          • 10 years ago

          The GTX260 and HD4870 are essentially identical in performance. And the GTX260 tested in this article wasn’t the original GTX260 with 192 SPs and lower clocks.

          • Da_Boss
          • 10 years ago

          I think what people are trying to say is that, from a price/performance perspective, the 4850 was the card to beat last generation.

          At $199, it offered 90% of the performance of the original GTX 260 at 60% of the price. As I recall, the 4850 and 4870 were doing so well at their respective price points that NV had to re-release the GTX 260 with more shaders, because not even they could justify charging what they were for that card.

          You can’t really deny that happening, can you? Last generation’s winner was the 4850. Hands down.

            • PRIME1
            • 10 years ago

            I got my 260 for $200, so your point is dull.

            • scpulp
            • 10 years ago

            You are my favorite troll of all time.

            Every post is golden.

            • Da_Boss
            • 10 years ago

            I think you missed the point.

            You claimed the 260 to be the best card of last generation. On the merit of what you get for what you spend, most would agree that the 4850 was the card, for the reasons I already stated.

            Hypothetically, of course, a 260 would be a better deal at $200. But a 4850 would be an even better deal at $110 or less, would it not?

            In the end, it comes down to the best value available at the time. Anecdotal rambling about the great deal you got doesn’t really add up to the 260, in general, being a better value, assuming both cards are available at or slightly below MSRP.

            • poulpy
            • 10 years ago

            I still don’t understand why people try to post intelligent replies and reason with him.
            I mean, even if you have time to spare and/or got some community service to do, by all means don’t waste it like that!
            Save your energy: nothing good _[

            • SubSeven
            • 10 years ago

            I bow to your intelligence. <humbly bows>

            • MadManOriginal
            • 10 years ago

            I see it as an attempt at interspecies communication and understanding. Trolls and humans must learn to live together!

            • SubSeven
            • 10 years ago

            But wait… trolls are humans too, aren’t they? I just think Prime belongs to a different species altogether… Equus africanus asinus. They share many similar characteristics (stubborn stupidity or stupid stubbornness); I just haven’t decided which yet.

            • khands
            • 10 years ago

            The official “dinosaur” of Illinois. Dangit.

            • BobbinThreadbare
            • 10 years ago
            • NeelyCam
            • 10 years ago

            Priceless!

            • grantmeaname
            • 10 years ago

            It’s like paying taxes. The whole community wants to absorb his knowledge, so we all have to take turns provoking him or else his wisdom will never come out and we’ll all just never know! Think of it as a sacrifice for your community of gerbils.

            • jss21382
            • 10 years ago

            Prime is a pretty reasonable person in person; come to the BBQ, you’ll see.

            • NeelyCam
            • 10 years ago

            Address?

            • derFunkenstein
            • 10 years ago

            It’s in the Back Porch forum.

            • DreadCthulhu
            • 10 years ago

            I got my 260 for nothing, but that does not negate the awesome Radeon 4850/4870 price performance ratio.

            • Freon
            • 10 years ago

            My anecdotes nullify your argument! *facepalm*

          • Joerdgs
          • 10 years ago

          Re-read the HD4800 series review. The HD4850 beat the 9800GTX on pretty much every front when it came out.

        • Lazier_Said
        • 10 years ago

        Interesting that 21 months ago when it was released the 4850 showed a pretty convincing performance lead over the 9800GTX.

        Yet in the current tests the 4850 is clearly inferior.

      • flip-mode
      • 10 years ago

      Released overpriced at $399, just like the 5870, so the card you’re talking about could be the 5870.
