AMD’s Radeon HD 3850 and 3870 graphics cards

As you may know if you follow these things, AMD’s Radeon HD 2900 XT graphics processor, also known as the R600, wasn’t exactly a rousing success in all areas. The chip brought big gains in performance, features, and image quality, but it was late to market. When it arrived, it was fairly power-hungry and simply couldn’t match up to the performance of Nvidia’s top GeForce 8800 GPUs. Worse yet, the gaping hole between the $149 and $399 price points in the Radeon HD lineup left many enthusiasts wanting more.

Fortunately, graphics chips have relatively short lifespans, and the regular introductions of new chips bring ample opportunity for redemption. Today is one such opportunity, as AMD pulls the curtain back on its latest creation: a chip modeled on the R600 architecture that promises similar clock-for-clock performance, yet is under half the size and draws a fraction of the power. Better still, the new graphics cards based on this chip, the Radeon HD 3850 and 3870, look to be very affordable. That’s progress on all the right fronts, and it sets up PC gamers beautifully at a time when we’re seeing more good games released at once than, well, maybe ever.

Here’s the question: is this progress sufficient to allow AMD to catch up with Nvidia’s just-introduced and dauntingly formidable GeForce 8800 GT? Have the AMD guys engineered an even better stocking stuffer? Perhaps. Keep reading for some answers.


The RV670 GPU

If you’ll indulge me, I’d like to start out with the chip geekery, and then we can move on to the graphics cards themselves.

The subject of our attention today is the RV670 graphics processor, which is basically a revised version of the R600 that’s been converted to a smaller chip fabrication process and tweaked in numerous ways. The R600 itself was manufactured on an 80nm fab process, and it packed roughly 700 million transistors into a die area of 408 mm². That’s one big chip, and it was part of a growing trend in GPUs—or shall I say, a trend of growing GPUs. The things were adding transistors faster than Moore’s Law allows, so physical chip sizes were rising like crude oil prices.

This latest generation of GPUs is throwing that trend into reverse. Nvidia’s G92, which powers the GeForce 8800 GT, shoehorns an estimated 754 million transistors into a 324 mm² die (by my shaky measurements) via a 65nm process. The RV670 goes even further; its 666 million transistors occupy only 192 square millimeters, thanks to a 55nm fabrication process. The move to smaller chip sizes means several things, including cheaper chips, lower power consumption, and less heat production. In the case of the RV670, AMD says the advantages of a 55nm process over a 65nm one are mainly in size and power. They can fit 30% more transistors into the same space with a 10% reduction in power use, but with no real increase in transistor switching speed over 65nm.
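
To put some quick numbers on that reversal, here’s a back-of-the-envelope density calculation based on the figures above. Keep in mind the die areas are approximate, and the G92 measurement is my own:

# Rough transistor density from the (approximate) figures quoted above.
chips = {
    "R600 (80nm)":  (700e6, 408.0),   # transistor count, die area in mm^2
    "G92 (65nm)":   (754e6, 324.0),
    "RV670 (55nm)": (666e6, 192.0),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / 1e6 / area_mm2   # millions of transistors per mm^2
    print(f"{name}: {density:.2f}M transistors/mm^2")

# Prints roughly 1.72, 2.33, and 3.47 M/mm^2, respectively—the RV670 packs
# about twice the transistor density of the R600.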

Here’s a quick visual on the chips, to give you a sense of relative size.



The RV670



Nvidia’s G92

You can see quite readily that the RV670 is a fairly small GPU—more of a wide receiver than a fullback like the G92. Yet the RV670 packs the same mix of 3D graphics processing units as the R600, including 320 stream processors, 16 texture units, and 16 render back-ends.

As you may have noticed, though, the RV670’s transistor count is down from the R600, despite a handful of new features. The primary reason for the reduction in transistors is that AMD essentially halved the R600’s memory subsystem for the RV670. Externally, that means the RV670 has a 256-bit path to memory. Internally, the RV670 uses the same ring bus-style memory architecture as the R600, but the ring bus is down from 1024 to 512 bits. Thus, the RV670 has half as many wires running around the perimeter of the chip and fewer ring stops along the way. Also, since the I/O portions of a chip like this one don’t shrink linearly with fabrication process shrinks, removing half of them contributes greatly to the RV670’s more modest footprint.

Of course, the obvious drawback to this move is a reduction in bandwidth, but we noted long ago that the R600 underperformed for a chip with its prodigious memory bandwidth. AMD says it has tweaked the RV670 to make better use of the bandwidth it does have by resizing on-chip buffers and caches and making other such provisions to help hide memory access latencies. On top of that, higher speed memories like GDDR4 should help offset some of the reduction in bus width.

The RV670 GPU — continued



A block diagram of the RV670 GPU. Source: AMD.

The RV670’s 3D graphics portions aren’t totally cloned from the R600. AMD has added some new capabilities and has even convinced Microsoft to update DirectX 10 in order to expose them to programmers. The upcoming DirectX 10.1 and Shader Model 4.1 add support for some RV670-specific features and for some capabilities of the R600 that weren’t available in DX10. These enhancements include, most notably, cube map arrays that allow multiple cube maps to be read and written in a single rendering pass, a capability AMD touts as essential for speeding up global illumination algorithms. DX10.1 also exposes much more direct control over GPU antialiasing capabilities to application developers, allowing them to create the sort of custom filters AMD provides with its Radeon HD drivers. Microsoft and AMD have worked to tighten up some requirements for mathematical precision in various stages of the rendering pipeline in DX10.1, as well, in addition to various other minor tweaks.

Because being DX10.1 compliant is an all-or-nothing affair, the RV670 is the world’s only DX10.1-capable GPU, at least for the time being. I wouldn’t get hung up on the differences between DX10 and 10.1 when selecting a video card, though. I’m pleased to see AMD moving the ball forward, especially on antialiasing, but these aren’t major changes to the spec. Besides, only now are we starting to see differences in game support between DX9’s Shader Models 2.0 and 3.0, and many of those differences are being skipped over in favor of moving wholesale to DX10. (I would be more optimistic about DX10.1 support becoming a boon for Radeon HD 3800-series owners at some point down the line were it not for the fact that nearly every major game I’ve installed in the past two months has come up with an Nvidia logo at startup. That says something about who’s investing in the sort of developer-relations programs that win support for unique GPU features.)

Speaking of arcane capabilities changes, here’s an interesting one for you: RV670 adds the ability to process double-precision floating-point datatypes. This sort of precision isn’t typically needed for real-time graphics, but it can be very useful for non-graphics “stream computing” applications. AMD says the RV670 handles double-precision math at between a quarter and a half the speed of single-precision math, which isn’t bad, considering the application. In fact, they’ve already announced the RV670-based FireStream 9170 card.
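
To put a rough number on that claim, you can apply AMD’s quarter-to-half figure to the HD 3870’s single-precision peak of 496 GFLOPS (a number we’ll derive in the table a few pages from now). This is just an implied range, not a spec AMD quotes:

# Implied double-precision throughput for the HD 3870, derived from its
# single-precision peak and AMD's "one quarter to one half" claim.
sp_peak_gflops = 496.0
dp_low, dp_high = sp_peak_gflops * 0.25, sp_peak_gflops * 0.5
print(f"Implied DP peak: {dp_low:.0f} to {dp_high:.0f} GFLOPS")  # ~124 to 248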

So the RV670 includes mojo for many markets. One of those markets is mobile computing, and this time around, AMD is pulling in a mobile-oriented feature to make its desktop chips more power-efficient. The marketing name for this particular mojo is “PowerPlay,” which wraps up a number of power-saving measures under one banner. PowerPlay is to GPUs what Intel’s SpeedStep is to CPUs. At the heart of the mechanism is a microcontroller that monitors the state of the GPU’s command buffer in order to determine GPU utilization. With this info, the controller can direct the chip to enter one of several power states. At low utilization, the GPU remains in a relatively low-power state, without all of the 3D bits up and running. At high utilization, obviously, the GPU fires on all cylinders. The RV670 also has an intermediate state that AMD calls “light gaming,” where some portions of the graphics compute engine are active while the rest are disabled in order to save power. PowerPlay can also scale core and memory clock speeds and voltages in response to load. These things are handled automatically by the chip, and niftily, AMD has included a GPU utilization readout in its driver control panel.
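
To make the mechanism a bit more concrete, here’s a purely hypothetical sketch of the sort of decision the PowerPlay microcontroller makes. The thresholds, state names, and utilization metric are my inventions for illustration only—AMD hasn’t disclosed its actual firmware logic, which also manages the clock and voltage scaling mentioned above:

# Hypothetical illustration of utilization-driven power-state selection.
# Thresholds and state names are invented; AMD's real controller works from
# command-buffer occupancy and adjusts clocks and voltages as well.

def select_power_state(gpu_utilization: float) -> str:
    """Map a 0.0-1.0 utilization estimate to an engine power state."""
    if gpu_utilization < 0.10:
        return "low_power"     # most 3D blocks powered down, reduced clocks/voltages
    elif gpu_utilization < 0.60:
        return "light_gaming"  # only part of the graphics compute engine active
    else:
        return "full_3d"       # everything up and running at full clocks

# A controller would re-evaluate something like this every polling interval:
for util in (0.02, 0.35, 0.95):
    print(f"{util:.2f} -> {select_power_state(util)}")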



The Overdrive section of AMD’s control panel now includes a GPU utilization readout

We will, of course, test the RV670’s power consumption shortly.

The RV670 improves on the R600 in a couple of other areas. One of those is its support for HD video playback. AMD’s UVD video decoder logic is fully present in the RV670, unlike the R600, so the RV670 can do most of the heavy lifting required for playback of high-definition video encoded with H.264 and VC-1 codecs. We’ve tested UVD’s performance with Radeon HD 2400 and 2600 series cards and found that those cards couldn’t scale video to resolutions beyond native 1080p (1920×1080). AMD claims the RV670 has sufficient bandwidth and shader power to scale movies up to 2560×1600 resolution, if needed.

To help make that possible, the RV670 includes support for HDCP over dual-link DVI connections and, like the R600, has a built-in digital audio controller it can use to pass sound over an HDMI connection. AMD offers a DVI-to-HDMI converter, as well.

The last bit of newness in the RV670 is the addition of PCI Express 2.0 connectivity. PCIe 2.0 effectively doubles the throughput of PCIe connections, with very little drama. PCIe 2.0 devices like the RV670 remain backward-compatible with older motherboards.

We’ve heard very little in the way of hype for PCIe 2.0, but AMD expects the faster interconnect to become quite useful when it enables “CrossFire X” via new video drivers slated for this coming January. When combined with the new RD790 chipset, RV670-based video cards will be able to run in two, three, or four-way configurations, and the four-way config would involve four expansion slots fed by eight PCIe 2.0 lanes each. In order to make such madness feasible, the RV670’s CrossFire interconnect has been boosted to double the pixel rate per connector, so that only a single physical connector is needed for dual-card CrossFire configs. The second connector on each card could be used for some daisy-chaining action in three- and four-way setups.
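
For the curious, here’s the quick per-direction bandwidth math behind that four-way arrangement, using the usual usable per-lane rates after 8b/10b encoding overhead:

# Per-direction bandwidth for an x8 link, before and after PCIe 2.0.
# Usable per-lane rates after 8b/10b encoding: ~250 MB/s (PCIe 1.x),
# ~500 MB/s (PCIe 2.0).
lanes_per_card = 8
pcie1_gbs = lanes_per_card * 0.25
pcie2_gbs = lanes_per_card * 0.50
print(f"x8 PCIe 1.x: {pcie1_gbs:.1f} GB/s per direction")  # 2.0 GB/s
print(f"x8 PCIe 2.0: {pcie2_gbs:.1f} GB/s per direction")  # 4.0 GB/s
# Eight PCIe 2.0 lanes give each card as much bandwidth as a full x16 slot
# did under PCIe 1.x, which is what makes the four-way config plausible.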

AMD is betting in a big way on CrossFire, and plans to address some long-standing complaints with the tech in upcoming drivers. One planned change is the ability to support multiple displays “seamlessly,” a capability one might have expected from the beginning out of multi-GPU acceleration. AMD’s Overdrive utility now allows the overclocking of multiple GPUs, too. The biggest change, though, will be this one: rather than producing another high-end chip to replace the Radeon HD 2900 XT at the top of its lineup, AMD plans to introduce the Radeon HD 3870 X2 this winter, a dual-GPU-on-a-stick card reminiscent of the GeForce 7950 GX2. Given the challenges to multi-GPU performance scaling we’ve seen lately, even with only two GPUs, I’m not sure what to think of this new emphasis on CrossFire. The history of the 7950 GX2, after all, is not a happy one. Time will tell, I suppose.

The cards

Now that I’ve filled your head with ethereal bits of RV670 theory, here’s a look at the hardware.



The Radeon HD 3850

This thin little number is the Radeon HD 3850, the lower end of the two RV670-based graphics cards. This puppy will feature a 670MHz GPU core and 256MB of GDDR3 memory running at 830MHz (or 1.66GHz effective data rate). AMD says this board uses 95W of power and rates its cooler at 31 dBA.



The Radeon HD 3870 sports a dual-slot cooler

And this beefier specimen is the Radeon HD 3870. Unlike the GeForce 8800 GT, this beast packs a dual-slot cooler, which is both a curse and a blessing. Yes, it eats more slots, but it also exhausts hot air out the back of the case and should be able to provide more cooling with less noise than a single-slot design. Aided by this cooler, the 3870 reaches core speeds of 775MHz, and it runs its 512MB of GDDR4 memory at 1125MHz. The big cooler belies relatively modest figures, though: a rated board power of 105W and rated noise of just 34 dBA.
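
Incidentally, those memory clocks, combined with the 256-bit bus, are all you need to derive the peak bandwidth numbers you’ll see in the table a couple of pages from now. Here’s a minimal sketch of the arithmetic; the 2900 XT’s 825MHz GDDR3 clock (from its launch specs) is included for comparison:

# Peak memory bandwidth = (bus width / 8 bits per byte) * effective data rate.
def peak_bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: float) -> float:
    effective_rate_hz = mem_clock_mhz * 1e6 * 2   # DDR-type memory: two transfers per clock
    return (bus_width_bits / 8) * effective_rate_hz / 1e9

print(f"Radeon HD 3850:    {peak_bandwidth_gbs(256, 830):.1f} GB/s")   # 53.1
print(f"Radeon HD 3870:    {peak_bandwidth_gbs(256, 1125):.1f} GB/s")  # 72.0
print(f"Radeon HD 2900 XT: {peak_bandwidth_gbs(512, 825):.1f} GB/s")   # 105.6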



Both 3800-series cards have dual CrossFire connectors



Even the 3870 needs only a single six-pin PCIe power connection

Both cards have twin dual-link DVI connectors and HDTV-out ports. They’re pretty much, uh, what you’d expect from modern video cards.

One nice touch about AMD’s new naming scheme: no suffixes like “XT” and “Pro” anymore. In the 3800 series, only numbers denote higher or lower performance, so things are much easier to decode. The folks in AMD graphics seem to have picked up this idea from the AMD CPU people, interestingly enough. Imagine that.

Doing the math—and the accounting

Those of you who are familiar with this GPU architecture may be jumping ahead. With a 775MHz core clock and only a 256-bit memory interface, how will the Radeon HD 3870 match up to the GeForce 8800 GT? Let’s have a look at some of the key numbers side by side, to give you a hint. Then we’ll drop the bomb.

                     Peak pixel    Peak bilinear     Peak bilinear     Peak memory   Peak shader
                     fill rate     texel filtering   FP16 texel        bandwidth     arithmetic
                     (Gpixels/s)   rate (Gtexels/s)  filtering rate    (GB/s)        (GFLOPS)
                                                     (Gtexels/s)
GeForce 8800 GT      9.6           33.6              16.8              57.6          504
GeForce 8800 GTS     10.0          12.0              12.0              64.0          346
GeForce 8800 GTX     13.8          18.4              18.4              86.4          518
GeForce 8800 Ultra   14.7          19.6              19.6              103.7         576
Radeon HD 2900 XT    11.9          11.9              11.9              105.6         475
Radeon HD 3850       10.7          10.7              10.7              53.1          429
Radeon HD 3870       12.4          12.4              12.4              72.0          496

Here are some of the key metrics for various enthusiast-class cards. We already know that the 3800 series’ ostensible competition, the GeForce 8800 GT, pretty much comprehensively outperforms the Radeon HD 2900 XT. As you can see, the HD 3870 slightly surpasses the 2900 XT in terms of pixel and texture throughput and peak shader capacity, but doesn’t have the same mammoth memory bandwidth. And, notably, the HD 3870 trails the 8800 GT in terms of texturing capacity and shader arithmetic. (Be aware that simple comparisons between GPU architectures on shader throughput are tricky. Another way of counting would reduce the GeForce 8-series cards’ numbers here by a third, and justifiably so.)
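
If you’re wondering where those peak shader numbers come from—and where the “reduce by a third” caveat originates—here’s a rough sketch of the arithmetic. I’m assuming the usual counting: two FLOPS (a multiply-add) per Radeon stream processor per clock, and three FLOPS per GeForce 8 SP per shader clock (a MAD plus a mostly-elusive extra MUL), with the stock 8800 GT’s 112 SPs running at 1.5GHz:

# Peak shader arithmetic = stream processors * FLOPS per SP per clock * shader clock (GHz).
def peak_gflops(sp_count: int, flops_per_sp_per_clock: int, clock_ghz: float) -> float:
    return sp_count * flops_per_sp_per_clock * clock_ghz

print(f"Radeon HD 3870:  {peak_gflops(320, 2, 0.775):.0f} GFLOPS")  # 496 (MAD per SP, 775MHz core)
print(f"Radeon HD 3850:  {peak_gflops(320, 2, 0.670):.0f} GFLOPS")  # 429
print(f"GeForce 8800 GT: {peak_gflops(112, 3, 1.500):.0f} GFLOPS")  # 504, counting MAD + MUL
print(f"GeForce 8800 GT: {peak_gflops(112, 2, 1.500):.0f} GFLOPS")  # 336 if you count only the MAD--
                                                                    # the "reduce by a third" case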

AMD apparently looked at these numbers, thought long and hard, and came to some of the same conclusions we did: doesn’t look like the 3870’s gonna perform quite as well as the 8800 GT. So here are some additional numbers for you: the Radeon HD 3850 should show up at online retailers today for $179, as should the HD 3870 at $219. This presents an interesting situation. The first wave of 8800 GTs largely sold out at most places, and prices rose above Nvidia’s projected “$199 to $249” range as a result. If AMD can supply enough of these cards and keep prices down, they may offer a compelling alternative to the 8800 GT, even if they’re not quite as fast overall. That certainly seems to be the hope in the halls of the former ATI. Whether that will come to pass, well, I dunno. Let’s see how these things actually perform.

Test notes

You may notice that I didn’t engage in a lot of GPU geekery this time around. I decided instead to focus on testing these new video cards across a range of the amazing new games and game engines coming out this fall. These GPUs are basically “refresh” parts based on existing technology, and their most compelling attributes, in my view, are their incredibly strong price-performance ratios.

In order to give you better perspective on the price-performance front, I’ve included a couple of older video cards, in addition to a whole range of new ones. Roughly a year ago, the Radeon X1950 Pro faced off against the GeForce 7900 GS at $199. This year’s crop of similarly priced GPUs has some substantial advantages in terms of specifications and theoretical throughput, but as you’ll see, the gains they offer in real-world performance are even larger—and they deliver image quality that’s sometimes quite noticeably superior to last year’s models, to boot.

You’ll find results for both the X1950 Pro and the 7900 GS in several of our gaming tests and in our power and noise measurements. I’ve had to limit their participation to scripted benchmarks because these cards were generally too slow to handle the settings at which we tested manually with FRAPS. Also, please note that these older cards are using DirectX 9 in Crysis, since they can’t do DX10.

That leads me to another issue. As I said, the older cards couldn’t handle some of the settings we used because, well, they’re quite intensive, with very high resolutions, quality levels, or both. We tested at these settings because we wanted to push the cards to their limits in order to show meaningful performance differences between them. That’s hard to do without hitting a CPU or system-level bottleneck, especially with cards this fast running in multi-GPU configurations. We did test at multiple quality levels with a couple of games in order to give you a sense of performance scaling, which should help. But please don’t take away from this review that a card like the Radeon HD 3850 can’t run most of these games at more common settings. Quite the opposite is true, which is why this new breed of cards is a nice fit for the new wave of games coming out. Most folks won’t run at 2560×1600 resolution, of course. We intentionally pushed the boundaries in order to tease out performance differences.

Also, please note that many of the GeForce cards in the tables below are clocked at higher-than-stock speeds. Nvidia’s board vendors have made a practice of selling their products at multiple clock speeds, and some of our examples are these hot-clocked variants. For instance, the 8800 GTS cards all run 575MHz core clocks (or, in the case of the one XFX 320MB card, 580MHz) and correspondingly higher shader clocks. Obviously, that’s going to change the performance picture. We think it makes sense to include these cards because they’re typically fairly plentiful and available for not much of a premium over stock-clocked versions. They’re what we might buy for ourselves.

The one exception to that rule, at least right now, may be the GeForce 8800 GT. The first wave of these cards looks to have sold out at many online vendors, and all variants are going for something of a premium right now—especially the higher-clocked ones. We have included one “overclocked” version of the 8800 GT (from MSI) in our tests in order to show you its performance. This card is very fast, but be aware that it is not currently a $199 or even a $249 option.

Finally, in the graphs, I’ve highlighted the results for the Radeon HD 3800 series cards in bright yellow so they’re easy to spot. I’ve also highlighted the GeForce 8800 GT in pale yellow, so the 3800s’ closest competition is easy to pick out.



Gigabyte’s X38-DQ6 served as our new CrossFire test platform

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

                          nForce 680i SLI system             X38 CrossFire system
Processor                 Core 2 Extreme X6800 2.93GHz       Core 2 Extreme X6800 2.93GHz
System bus                1066MHz (266MHz quad-pumped)       1066MHz (266MHz quad-pumped)
Motherboard               XFX nForce 680i SLI                Gigabyte GA-X38-DQ6
BIOS revision             P31                                F5h
North bridge              nForce 680i SLI SPP                X38 MCH
South bridge              nForce 680i SLI MCP                ICH9R
Chipset drivers           ForceWare 15.08                    INF update 8.3.1.1009,
                                                             Matrix Storage Manager 7.6
Memory size               4GB (4 DIMMs)                      4GB (4 DIMMs)
Memory type               2 x Corsair TWIN2X20488500C5D      2 x Corsair TWIN2X20488500C5D
                          DDR2 SDRAM at 800MHz               DDR2 SDRAM at 800MHz
CAS latency (CL)          4                                  4
RAS to CAS delay (tRCD)   4                                  4
RAS precharge (tRP)       4                                  4
Cycle time (tRAS)         18                                 18
Command rate              2T                                 2T
Audio                     Integrated nForce 680i SLI/ALC850  Integrated ICH9R/ALC889A
                          with RealTek 6.0.1.5497 drivers    with RealTek 6.0.1.5497 drivers

Graphics cards tested on the nForce 680i SLI system, all with ForceWare 169.01 drivers:
  XFX GeForce 7900 GS 480M 256MB PCIe
  Dual XFX GeForce 7900 GS 256MB PCIe
  GeForce 8800 GT 512MB PCIe
  Dual GeForce 8800 GT 512MB PCIe
  MSI NX8800 GT TD512E 512MB PCIe
  XFX GeForce 8800 GTS XXX 320MB PCIe
  XFX GeForce 8800 GTS XXX 320MB PCIe + MSI NX8800GTS OC 320MB PCIe
  EVGA GeForce 8800 GTS SC 640MB PCIe
  Dual EVGA GeForce 8800 GTS SC 640MB PCIe
  MSI GeForce 8800 GTX 768MB PCIe
  Dual GeForce 8800 GTX 768MB PCIe

Graphics cards tested on the Gigabyte GA-X38-DQ6 system, all with 8.43 drivers:
  Radeon X1950 Pro 256MB PCIe
  Dual Radeon X1950 Pro 256MB PCIe
  Radeon HD 2900 XT 512MB PCIe
  Dual Radeon HD 2900 XT 512MB PCIe
  Radeon HD 3850 256MB PCIe
  Dual Radeon HD 3850 256MB PCIe
  Radeon HD 3870 512MB PCIe

Hard drive                WD Caviar SE16 320GB SATA
OS                        Windows Vista Ultimate x86 Edition
OS updates                KB36710, KB938194, KB938979, KB940105, DirectX August 2007 Update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you’d get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Enemy Territory: Quake Wars

We’ll start with Quake Wars since this game’s simple “nettimedemo” allows us to record a gaming session and play it back with precise repeatability on a range of cards at a range of resolutions. Which is what we did. A lot.

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though, to be somewhat merciful to the 256MB and 320MB cards. Shadows, soft particles, and smooth foliage were enabled where possible, although the Radeon X1950 Pro wasn’t capable of handling soft particles.

Mommy.

Sooo… much… data.

Where to start? I suppose by pointing out that we didn’t test the Radeon HD 3870 in CrossFire because AMD only supplied us with a single card, despite our best efforts. We’ll try to get a second one soon.

Beyond that, the HD 3800-series cards come out looking reasonably good here. The HD 3850 utterly trounces the GeForce 8600 GTS, Nvidia’s lower-end offering with 256MB of memory. The “overclocked” version of the 8600 GTS we tested is selling for $169.99 at Newegg, but it’s clearly outclassed by the HD 3850. Last year’s models, the GeForce 7900 GS and Radeon X1950 Pro, are also quite a bit slower than the HD 3850. The 3850 doesn’t fare too well in CrossFire, though, where it struggles to keep pace with a single HD 3870 and just doesn’t appear to scale well.

As for the Radeon HD 3870, it shadows the GeForce 8800 GT from a distance of about five FPS at our two higher resolutions. That’s pretty close, and the HD 3870 is delivering eminently acceptable frame rates at 1600×1200 with 4X AA and 16X aniso—no mean feat. Although it has substantially less memory bandwidth than the Radeon HD 2900 XT, the 3870 performs almost exactly the same, even up to 2560×1600 resolution. Perhaps that 512-bit memory interface was overkill, ya think?

Crysis demo

Crytek has included a GPU benchmarking facility with the Crysis demo that consists of a fly-through of the island in which the opening level of the game is set, and we used it. For this test, we set all of the game’s quality options at “high” (not “very high”) and set the display resolution to—believe it or not—1280×800 with 4X antialiasing. Even that was a little bit rough on some of the cards, so we tried again with antialiasing disabled and the game’s post-processing effects set to “Medium.” At these lower settings, we expanded the field to include some older and lower-end graphics cards, to see how they compare.

The HD 3850 and the GeForce 8800 GTS 320MB—especially the GeForce—both seem to suffer here because of their smaller amounts of onboard memory. Beyond that, the results are mixed. The HD 3870 again performs almost exactly like the 2900 XT. With 4X antialiasing and high-quality post-processing enabled, the HD 3870 hits the same median low score as the GeForce 8800 GT, though with a slower average. Without AA and enhanced post-processing, though, the HD 3870 trails the GT significantly.

By the way, I’ve excluded multi-GPU configs from this test because the Crysis demo and these driver revisions don’t appear to get along. Nvidia has just released some SLI-capable drivers, and we’re expecting a patch for the full game to enable better multi-GPU support. We’ll have to follow up with results from the full game later.

Unreal Tournament 3 demo

We tested the UT3 demo by playing a deathmatch against some bots and recording frame rates during 60-second gameplay sessions using FRAPS. This method has the advantage of duplicating real gameplay, but it comes at the expense of precise repeatability. We believe five sample sessions are sufficient to get reasonably consistent and trustworthy results. In addition to average frame rates, we’ve included the low frame rates, because those tend to reflect the user experience in performance-critical situations. In order to diminish the effect of outliers, we’ve reported the median of the five low frame rates we encountered.
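
Here’s a minimal sketch of how those reported numbers are assembled from the five sessions—with made-up sample data, not actual results:

# How the reported numbers are put together from five FRAPS sessions.
# The sample data below is invented purely for illustration.
from statistics import mean, median

sessions = [  # each 60-second session yields an average FPS and a low (minimum) FPS
    {"avg": 58.2, "low": 31},
    {"avg": 61.0, "low": 35},
    {"avg": 57.4, "low": 29},
    {"avg": 60.1, "low": 33},
    {"avg": 59.3, "low": 34},
]

reported_avg = mean(s["avg"] for s in sessions)
reported_low = median(s["low"] for s in sessions)  # median damps outlier lows
print(f"average: {reported_avg:.1f} FPS, median low: {reported_low} FPS")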

Because the Unreal engine doesn’t support multisampled antialiasing, we tested without AA. Instead, we just cranked up the resolution to 2560×1600 and turned up the demo’s quality sliders to the max. I also disabled the demo’s frame rate cap before testing.

Once more, the HD 3870 does its best 2900 XT impression, turning in very similar frame rates. The 3870 is a little slower than the 8800 GT here, but you’ll be hard-pressed to feel the difference subjectively.

The poor HD 3850 is a bit overmatched at this resolution, which is to be expected from a 256MB graphics card. Pardon my impulse to stress things. To keep things in perspective, the HD 3850 will run the UT3 demo as smooth as glass at 1920×1200 with these same quality settings, averaging 52 frames per second and hitting a low of 30 FPS.

Call of Duty 4

This game is about as sweet as they come, and we also tested it manually using FRAPS. We played through a portion of the “Blackout” mission at 1600×1200 with 4X antialiasing and 16X aniso.

Here the HD 3870 again performs pretty much identically to the Radeon HD 2900 XT. Are we detecting a pattern? (Lightbulb appears in thought balloon. Ding!) Unfortunately for AMD, that’s not enough performance to keep up with the 8800 GT.

Meanwhile, the HD 3850 continues to struggle with CrossFire scaling. The 2900 XT doesn’t scale well here, either, though.

TimeShift

This game may be a bizarrely derivative remix of Half-Life 2 and F.E.A.R., but it’s a guilty-pleasure delight for FPS enthusiasts that has a very “action-arcade” kind of feel to it. Like most of the other games, we played this one manually and recorded frame rates with FRAPS. We had all of the in-game quality settings maxed out here, save for “Projected Shadows,” since that feature only works on Nvidia cards.

Another game shows us a similar pattern. The HD 3870 is fast, but not as fast as the 8800 GT—yet it’s delivering what feels to me like playable performance at 1920×1200 with 16X aniso. The HD 3850 can’t quite handle this resolution as gracefully.

BioShock

We tested this game with FRAPS, just like we did the UT3 demo. BioShock’s default settings in DirectX 10 are already very high quality, so we didn’t tinker with them much. We just set the display res to 2560×1600 and went to town. In this case, I was trying to take down a Big Daddy, a generally unsuccessful effort.

The HD 3870 again slots into things about where you’d expect, and the cards with less than 512MB of RAM onboard again suffer here due to my penchant for testing at high resolutions. I had to exclude the Radeon HD 3850 CrossFire here because it was painfully slow—like three frames per second. I believe this was just a memory size issue. Dropping to a lower resolution did seem to help.

Team Fortress 2

For TF2, I cranked up all of the game’s quality options, set anisotropic filtering to 16X, and used 4X multisampled antialiasing at 2560×1600 resolution. I then hopped onto a server with 24 players duking it out on the “ctf_2fort” map. I recorded a demo of me playing as a soldier, somewhat unsuccessfully, and then used the Source engine’s timedemo function to play the demo back and report performance.

Unfortunately, I wasn’t able to complete my TF2 testing before Valve pushed out an update over Steam and rendered my recorded demo incompatible with the latest version of the game. I decided to go ahead and give you what results I have, even though the Radeon HD 3870 isn’t included. I really like Steam, but I sure don’t like its weak-to-useless user-side change control.

Moving on….

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running BioShock in DirectX 10 at 2560×1600 resolution, using the same settings we did for performance testing.

Check that out! Although the Radeon HD 3870 performs almost exactly like the 2900 XT, it does so while drawing vastly less power. Our HD 3870-equipped test system draws 98W less than our otherwise-identical 2900 XT-based rig does while running BioShock. That’s a massive honking reduction in power consumption versus AMD’s previous-gen chip. Not only that, but the HD 3870 system pulls 21W less at idle and 39W less under load than the GeForce 8800 GT system.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 14″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

These measurements probably aren’t precise enough that it’s worth worrying over minor differences like tenths of a decibel. The things to take away from this are the big differences, because those are the ones you’ll notice. Such as: the Radeon HD 2900 XT is rather loud while running games, as is the GeForce 7900 GS. The Radeon HD 3850 and 3870, meanwhile, are both as quiet as they come.

Conclusions

AMD has made tremendous strides with this generation of GPUs. The Radeon HD 3870 delivers almost exactly the same performance as the Radeon HD 2900 XT, yet the chip is under half the size and brings an astounding near-100W reduction in power use while gaming. Since less power is expended as heat, the HD 3870 can be vastly quieter, as well. Honestly, I didn’t expect these sorts of efficiency gains from what is essentially still an R600-derived design—and it certainly appears now that AMD overshot in a major way when it gave the 2900 XT a 512-bit memory interface. That error has been corrected, and R600’s core architecture suddenly looks much nicer than it had before.

AMD needed every inch of that progress in order to come within shouting distance of the outstanding GeForce 8800 GT. Fortunately, they’ve done it, and the Radeon HD 3870 looks like a reasonable alternative, provided AMD can make good on its $219 price target. No, the HD 3870 isn’t as fast as the GeForce 8800 GT, but we tested the latest games at very high resolutions and it still achieved some decent frame rates in today’s games—except for, you know, Crysis, which bogs down any GPU. If you’re using a display with a more common resolution like 1600×1200, 1680×1050, or 1920×1200, the HD 3870 will typically allow you to turn up the eye candy and still get fluid performance. Some folks will probably be willing to take that deal and pocket the difference in price versus the 8800 GT. I can’t say I’d blame them.

And, as complex as these GPUs are, the issues really do boil down to price, performance, and adequacy at the end of the day. The DirectX 10 spec has firmed up the requirements for image quality, and partially as a result, the latest Radeons and GeForces produce very similar, great-looking output. Although I think AMD has something of an edge in terms of HD video playback filtering and noise reduction in some ways, even that is largely a wash. Most HD movies are going to look gorgeous on either card, regardless. I do plan to test HD video playback on these new cards soon, so we can get a closer look, though.

I’m not sure what to make of the Radeon HD 3850. The price is right, and it certainly delivers an awful lot of GPU power for the money. Yet the 256MB memory size in many “enthusiast value” graphics cards increasingly feels like a mismatch with the available processing power as GPU throughput increases. The HD 3850’s memory size may not prevent folks from having a good experience in most of today’s games, especially if they’re playing at display resolutions of 1600×1200 or less. But some games have higher memory requirements than others, and such requirements are always growing as newer games arrive. Features like antialiasing and anisotropic filtering require additional memory, as well. Owners of the HD 3850 may have to turn down some settings and compromise on image quality in order to play some games smoothly, even when the GPU has power to spare.

The precise impact of that compromise is hard to gauge, but I can say with confidence that the HD 3850 is a poor choice for use in CrossFire configurations. Adding a second card can nearly double your effective GPU power, but it does nothing for your effective memory size, which remains the same as with one card. In fact, the overhead for coordinating with another GPU in CrossFire probably consumes some video memory, which may be why CrossFire was actually slower on the HD 3850 in some cases, yet faster on the 2900 XT. That makes an HD 3850 CrossFire rig a pretty extreme mismatch between GPU power and video RAM.

I should mention that budget-minded folks who like the idea of a GPU-RAM size mismatch will have another option soon in the form of the GeForce 8800 GT 256MB for between $179 and $199. It’s no accident that range starts at the HD 3850’s list price, of course, and given what we’ve seen today, the 8800 GT 256MB ought to be faster than the HD 3850. I’d really rather have either company’s 512MB card, though, personally.

The next big question, I suppose, is how pricing and availability of 8800 GT and Radeon HD 3800-series graphics cards will play out over the next little while. I don’t think there are any bad choices here, especially among the 512MB cards. AMD will need to maintain its price advantage, though, in order to offer as compelling a product as the 8800 GT.

Comments closed
    • Damage
    • 12 years ago

    I’ve updated this article to fix some botched labels in the TimeShift graph. Should be correct now.

    • Tommyxx516
    • 12 years ago

    Do we actually care if the GeForce 8800GT hits 42.6 fps while the Radeon HD 3850 hits 37.9 fps @ 2560×1600 with Unreal Tournament 3?

    Doing a benchmark at only 1, and the highest possible, resolution is a flawed review. Most of us don’t play at that resolution nor can we since most monitors don’t even go that high.

    I think whenever the name Crysis comes up, every reviewer feels the need to mention it’s ‘high requirements.’ I’m sort of over it. Yes it has the best looking graphics to date, but it’s certainly not VASTLY superior to Unreal Tournament 3 or Call of Duty 4. Like most pc games, it could be a poorly put together graphics engine or bad programming that makes it such a hardware hog.

    You also need to stop adding in that the 512meg model is VASTLY superior to the 256meg model. Any search over the web shows the gains to be very minimal.

      • danny e.
      • 12 years ago

      you are mistaken in many ways. 512MB is VASTLY superior to 256MB if you play at any relatively high res.

      Also.. Crysis is way different than UT3. Anyone saying “it doesnt look much better” doesnt seem to understand that Crysis is putting way more on the screen. Yeah, UT3 looks great. I could draw you a super high-res image of my wall and it would look great.
      It’d be flat and contain very few polys.. but it wouldnt look much worse than Crysis. right?

        • muyuubyou
        • 12 years ago

        Yep 512MB will get you better unusable FPS than those unusable FPS you get at 256MB. Definitely worth the extra money 😉

          • charged3800z24
          • 12 years ago

          Have you played Crysis…? Everything is tangible. You can shoot everything blow up almpst anything and pick-up/throw etc anything in the game. Bump into a bed.. it moves. etc. I wonder if this Might be a reason it is so harsh on a system.

          I would like to see the review done on the new 8.2 drivers. I have noticed a lot of improvement on my system running Crysis on all High Dx9@1440×900 on my Phenom BE @2.7ghz and HD 2900pro @ 750core 1000(2000)mem. I can play with no glitch. I do hit a low when there is a huge whole map explosion, but other then that it is definently playable. Much more then the previous driver versions.

    • alex666
    • 12 years ago

    The 3870s are turning out to be as scarce as the 8800GTs. Where are the many many thousands of these that supposedly were being released in contrast to the supposedly only 40K 8800GTs? The 3870 is a very compelling card for its price, but if it’s not available . . .

    C’mon AMD, a lot of us are rooting for you, want you to succeed, keep the market competitive, but this is ridiculous. It’s just one screw up after another. I can’t believe I’m saying this, as I border on being a fanboy (though my last two systems have been c2d). But enough is enough.

    • Kent_dieGo
    • 12 years ago

    The best innovation? No more GT/Pro/XT/GTS/PE/GTX/LE/SE guess work. Just one relative number to rule them all. About time.

      • flip-mode
      • 12 years ago

      I’m betting they won’t be able to resist themselves and they’ll tack somthing on the end sooner or later.

        • mortifiedPenguin
        • 12 years ago

        lets hope not.

    • Prospero424
    • 12 years ago

    I really like the 3870, but the 3850 is just too hurt by a lack of memory.

    512MB really should be standard for any mainstream part at this point. I would /[

      • green
      • 12 years ago

      i’m not sure whether to be impressed by the power consumption
      on one hand it leaves the x2900 in the dust and lets hope we never see consumption that high again

      on the other hand it is outperformed by the 8800gt which is at most 40W more power but has 12% more transistors on a slightly larger process with double the ram

        • Prospero424
        • 12 years ago

        I dunno. The 3870 has the same amount of memory with a 14% advantage in power consumption over the 8800GT.

        You’re right about the process being smaller, but remember: the clock rates for the 3870 are 23% higher than the 8800GT as well.

        Either way, I’m glad to see power consumption rates dropping. The 3870 draws less power than the 7900GS, which is roughly a third the speed and has half the memory. Good stuff.

      • provoko
      • 12 years ago

      Powercolor 3850 512mb was reviewed today, looks like it’s performance is as good as 3870 and 8800 GT. It’s hard to tell because there’s so many mixed reviews.

      I’m an accountant, and I’m seriously getting tired of looking at charts, haha.

        • Prospero424
        • 12 years ago

        Wow, they’ve already got 512MB 3850s rolling out? Not bad for $199 MSRP.

    • sigher
    • 12 years ago
    • derubermensch
    • 12 years ago

    If I read right, the main limiting factor is the 256MB of memory? So the 512MB versions to appear on Newegg solve the problem?

    • ew
    • 12 years ago

    666 million transistors
    + red color scheme
    + DAMIT nick name
    = AMD is the devil! OMG!

      • provoko
      • 12 years ago

      Hahaha.

      • Dagwood
      • 12 years ago

      Was beginning to wonder if I was the only one who noticed. Could have adding another million transistors really have hurt? Or maybe they had to sell something to make up for all the red ink on the books?

      Or maybe Gratuitous already said that but I missed it.

      • liquidsquid
      • 12 years ago

      Lol! It must really be a “beast!”

    • elmopuddy
    • 12 years ago

    Great review, although yeah, I’d like to see results at 1650×1080, I think that’s a pretty common rez.
    EP

    • flip-mode
    • 12 years ago

    One of the two 3850s is now sold out at Newegg. Edit, make that two of the three.

    Given the price/performance I can’t say I’m surprised.

    • sigher
    • 12 years ago

    I think dismissing DX10.1 is the same as dismissing patches for vista users.

    The changes basically fix the bugs in DX10, and therefore I think it will be rapidly adopted by gamemakers, especially since they know nvidia will go 10.1 very very soon, and the move from 10.1 when you already have to deal with 10.0 anyway is much easier than the move from 9 to 10 obviously.
    Plus the game makers get pre-releases of DX10.1 from MS and hardware from the big players early.

    I’m also going to venture a guess that the people that dismiss 10.1 now will suddenly herald it as very important once nvidia’s DX10.1 parts are released.. smoothly forgetting what they claim now.

    • sluggo
    • 12 years ago

    It seems to me that AMD/ATI have pretty much hit the target with this release. Sure, the high-end board doesn’t outperform the 8800GT, and if the expectation was that it would then this is a disappointment. But I think ATI has correctly read the landscape.

    The installed base of monitors running at resolutions 1680/1050 and below is substantial. A product that targets that market, in the price/performance gap between the 8600 and 8800, looks very much like a winner to me.

    • BlockheadBrown
    • 12 years ago

    It’s hard to justify a purchase without a lifetime warranty. Most Nvidia MFGs have them.

      • flip-mode
      • 12 years ago

      Yeah, that’s a huge knock against AMD’s cards. IMO.

      • Kent_dieGo
      • 12 years ago

      The lifetime warantee is worth $50 to $100 premium. I have RMA’ed many video cards. Now that ATI has cut to 1 year and all partners only offer “return to place of purchase” RMA service, I only buy eVGA, XFX or BFG cards. An ATI card would have to be very cheap to consider.

        • thecoldanddarkone
        • 12 years ago

        Well, I just purchased a 8800gt with crysis for 300 (was going to buy the game anyways).

        It was deffinitely part of my decision process.

        • flip-mode
        • 12 years ago

        Huh? No way, man. Not for me at least. I mean, it’s great to have and all, but it ain’t worth nowhere near a $50 premium.

      • provoko
      • 12 years ago

      If a card won’t die in a year, it’ll never die.

        • Kent_dieGo
        • 12 years ago

        Fans die after a year. I have RMA’ed several 9800s at 2-3 years with boot failures. Almost every old 8500 I own has dead fan but they seem to work without the fan fine. ATI’s new 1 year RMA service is too short. The ATI partners “return to place of purchase” lack of support unacceptable.

          • tfp
          • 12 years ago

          Have you ever considered replacing the dead fans?

            • JustAnEngineer
            • 12 years ago

            Arctic Cooling’s VGA Silencer is the answer to dead GPU fans.

        • flip-mode
        • 12 years ago

        I had an ATI Radeon 9800 Pro die after 3 years.

          • provoko
          • 12 years ago

          I’ve never had a card die on me. I had a geforce 2 card for years. My 6800 vanilla and 6800 GS cards are kicking hard. As for ATI, my cousins have a 9800 card and it’s been years.

      • Kent_dieGo
      • 12 years ago

      VisionTek has lifetime warranty and USA support. It is about time an ATI partner offered lifetime warranty. They seem to be same price as the non-supported “return to place of purchase” cards.

    • Xenolith
    • 12 years ago

    Anyone know when the mobile modules based on the RV670 chip will be released?

    • NeRve
    • 12 years ago

    Great article Damage – btw I noticed you were running a PCI-Express 2.0 X38 with the HD 38XX… Can you tell a performance with the new high-speed PCI-Express 2.0 bus or does neither card really utilize that bandwidth at all?

    • Dposcorp
    • 12 years ago

    Excellent review as usual, Scott. Many thanks for the hard work.

    The 3870 looks like a sweet deal if it is widely available at a fair price.

    Also, although the 3870 is a bit slower then the 8800GT, it still comes with the onboard audio controller.

    Interesting note:
    The review at overclockersclub of the Sapphire HD 3870 states:
    ” included is a free copy of Steam’s “Black Box” which has games in it such as Portal, Team Fortress 2, and Half-Life 2: Episode Two.”

    (cue the late night info-commercial voice)
    “NOW HOW MUCH WOULD YOU PAY?”

    If the card is available, priced well, performs close to a 8800GT AND Includes the “black box,” I am not sure I see a reason NOT to get it.

      • cygnus1
      • 12 years ago

      they’re sold out of all the 3870’s

    • marvelous
    • 12 years ago

    3850 is plenty for people who have 19″ monitors or even 22″.

      • Deli
      • 12 years ago

      and now, to get the gf to try newer games. lol
      great review as always *bows*.

      Now, on to 790FX and Phenom.

      • End User
      • 12 years ago

      As long as you ignore Crysis.

        • Deli
        • 12 years ago

        no FPS for her.

    • marvelous
    • 12 years ago

    5-10fps less than gt but nearly $100 cheaper than 8800gt at the moment. If they keep the prices down they can surely take a big chunk in the midrange market.

    • matnath1
    • 12 years ago

    Scott:

    Your reviews are always the best written and most entertaining of all of the other sites I check. (Hard/ocp Anandtech cough TOMS and Hexus).
    And this was no exception.
    I do want to cast a vote in favor of more in depth 16 x 12 res tests that show different graphical settings rather than jumping to uber high resoutions which the overwhelming majority of us can’t support anyway. In fact I don’t even look at them because they are simply irrelevant to me.
    Also, I’d love to see the suggested or required power supply for these just like you did in older reviews so I know whether a PSU upgrade is necessary. What happend to the overclocking section. I really wanted to see if the HD 3850 overclocked would surpass the stock 3870 and 2900xt (and see how much it nullifies it’s memory buffer handicap). Ditto on the 3870 ( but to see how much it closes the gap on the 8800 GT).

    Hexus made a VERY strong case that NVIDIA is spending some very smart dollars working with game publishers so that they will optimize their hardware:
    “NVIDIA has realised this is the case and thrown considerable support – money and expertise – in ensuring that games-engine programmers code for its forward-looking architecture.”
    §[<http://www.hexus.net/content/item.php?item=10415&page=14<]§

    I’d LOVE to hear your take on this. Perhaps this means we’ll see nice gains as more driver releases come from ATi?

    I’m about ready to buy one of these but can’t see any yet. Are they in stock anywhere today? (I just want to support ATI, and the differences at my 1680 x 1050 22″ are negligible vs. the 8800 GT.)

    Edit: Bought the Newegg Sapphire 3870. May come with Valve’s Black Box!

      • sluggo
      • 12 years ago

      Just an FYI – Connect3D’s USA operations declared bankruptcy several months back. Any purchase from them would have to be from a European distibutor..

    • alex666
    • 12 years ago

    Anyone have thoughts on how updated drivers for these new RV670 cards might impact performance? Or are these simply “die-shrunk updated R600s” (or words to that effect) as described by at least one other reviewer, and that the current drivers are as optimized as can be? I guess I’m wondering if new drivers that are really optimized for these cards could put the 3870 right up there with the 8800gt.

    • Dude-X
    • 12 years ago

    Hey Damage, the TF2 benchmark is missing the score for the Radeon 3870.

    Also I would like a follow up review on the video capabilities (picture quality and cpu utilization) between the latest GeForce and Radeon cards.

      • totoro
      • 12 years ago

      If you read the article, he explains that he could not finish the benchmarks due to a Steam update.

    • ReAp3r-G
    • 12 years ago

    now would this be enough for prices to go down on the 8800GT?

    • alex666
    • 12 years ago

    So if I’m reading this right, the new AMD 3850 and 3870 cards were tested with current AMD drivers, these were stock cards, while the 8800GT used here was an overclocked MSI (I don’t have the specs on exactly how much it is overclocked) using the most recent nvidia drivers.

    At the very least, it would have been a bit more interesting (and perhaps more equitable) to have compared stock cards across the board.

      • Damage
      • 12 years ago

      Read again. The MSI card was included, as was the stock 8800 GT. My reasons are explained in the test notes, too.

        • alex666
        • 12 years ago

        Yep, I stand corrected.

    • herothezero
    • 12 years ago

    Great review.

    Finally, a worthy competitor and a card that I can recommend to customers. It’s about time DAAMIT got itself together.

    As for the resolution debate, the reason I bought an 8800GTX at $550 was so I /[

    • PRIME1
    • 12 years ago

    So this is basically the same as a GTS 320

    • mongoosesRawesome
    • 12 years ago

    Interestingly, anandtech’s numbers show the 3870 drawing more power at load, while TR shows it drawing substantially less. Could there be a big difference in power draw between samples?

    • mongoosesRawesome
    • 12 years ago

    couldn’t you just run steam in offline mode, in order to keep the same settings while testing TF2 or other steam games?

      • Entroper
      • 12 years ago

      Probably, if you had the clairvoyance to realize that an automatic update would invalidate your demo halfway through testing.

      • eitje
      • 12 years ago

      i bet he does that from now on. 🙂

      • Prospero424
      • 12 years ago

      Well, the point is kind of to test it in a real-world scenario; with a populated server. Without bots, there’s really no other way to do that than to play online.

      As to the update management issue: I can understand how it would be frustrating for those doing benchmarks, but I’m glad they’re pushing the updates out and fixing some really annoying exploits I’ve personally seen in the wild.

      It’s no fun to join up and charge out of the spawn point on your favorite server only to be cut down by a sentry gun that’s been placed in a freaky invisible position underground.

      However, Valve should really offer a realistic, offline benchmarking feature/tool for TF2.

        • Entroper
        • 12 years ago

        q[

    • Krogoth
    • 12 years ago

    Heh, AMD’s graphical division finally delivers a solid product.

    HD 3850 and 3870 are certainly good buys if you want a good performing card, but are on the a limited budget. I wonder how they will fare against Nvidia’s mid-range refresh? (87xx, G92-based)

    I had found it amusing that Damage needed to go all the way to 2560×1600 with some AA in order to bring any of those GPUs to their knees. Crysis is just a pig to itself, but it will likely be the expectation for a while.

    It is just absurd how mid-range GPUs have so much power. It is very hard to justify the cost of the higher-end GPUs unless you already got one or an unlimited budget.

    • flip-mode
    • 12 years ago

    Nice Review. Thank You. Thanks for throwing the older cards in there.

    Regarding the cards:

    This is why it’s best to wait for the refresh. Both these cards as well as the 8800GT are extremely compelling. Given my own 1280×1024 resolution constraints, the 8800GT would be quite excessive. In fact, either of the AMD cards would be more than enough. If I were still into gaming then I’d be gunning for the 3870, no question. It may be too early to say but I think that AMD might have its first compelling product since the x1950pro, CPU products not withstanding. Hopefully these cards will have the staying power of the 9800pro and people can pick one up and keep it as their primary card for the next 2 years.

    Regarding the review:

    On the one hand I wholeheartedly agree with Vrock about the resolutions used for testing. How many people are out there gaming on a 30″ 2560×1600 panel? Then there’s the fact that this is a review of midrange cards. If this were a test of high end cards then I could understand using high end resolutions. Maybe it would be best to limit most tests to 1600×1200 (at the most really, since that’s probably a higher resolution than most are using) and then run just a few tests at higher resolutions than that if you’re trying to make a point.

    On the other hand, I don’t see the harm in running high-res tests since a card that is playable at those resolutions will certainly be playable at the common resolutions.

    In the end I’d urge testing at common resolutions. The reason is that, first off, people have to extrapolate to figure out how the card would perform at their resolution. The second reason is that the high-res tests could cause pronounced differences between the cards that won’t occur at lower resolutions and no amount of extrapolation will correct that.

    As a final note: I picked a terrible time to quit my gaming past time. Who knows, maybe I should give it one last hurrah.

      • Damage
      • 12 years ago

      I only tested two games, UT3 and BioShock, solely at 2560×1600. (I actually tested TF2 at three resolutions for most configs, but then Valve’s update broke my demo.) Yes, that’s very high res, and I agree that it’s something of an atypical case. Please note a few things.

      One, both of those games run the Unreal Engine 3 and don’t support multisampled antialiasing, which makes it harder to push them at lower resolutions.

      Two, I had to define some of our testing specs ahead of receipt of the HD 3800-series hardware, with all relevant configs in mind. I was fearful of running into a situation where the RV670 512MB in CrossFire and 8800 GT in SLI weren’t being pushed. Then it turned out the RV670’s clocks were a little lower than first expected and AMD got shy about sending a second HD 3870 card for us to test. Had I known AMD was going to focus on the 3850 at the expense of the 3870, and had I known 775MHz would be the top clock rather than the rumored 825MHz, I might have chosen a lower res for testing. You, lucky dawg, get to come into this with the benefit of hindsight.

      Three, I reported frame rates for the 3850 in UT3 at 1920×1200 in the text of the review, to give some perspective. I was thinking of you when I ran that additional test at 2AM last night, oh whiner extraordinaire. 😉 I thought such info might be helpful to folks wanting to understand the relative power of the 3850.

        • flip-mode
        • 12 years ago

        l[

      • JustAnEngineer
      • 12 years ago

      q[

        • Usacomp2k3
        • 12 years ago

        Damage and I 😉

    • Fastfreak39
    • 12 years ago

    I’m surprised AMD finally made a quiet, low-power card. If the 8800GT prices remain high, I think we’ll see a lot of people opt for the 3870. There’s no noticeable difference at average resolutions.

    • Vrock
    • 12 years ago

    Something about this otherwise quality review bothers me: for the most part, you guys aren’t showing results for resolutions that most gamers actually play at. Four-megapixel resolutions are nifty and all, but with most folks running monitors at 1280×1024 or so, it’d be nice to see how the cards perform for them, too. Even 1600×1200 or 1680×1050 is getting on the high side IMHO.

      • apkellogg
      • 12 years ago

      Not to bicker with you, but I’d like to see more testing at 1920×1200. A lot of the benchmarks seem to go from 1600×1200 to 2560×1600, where the FPS for the first are fine but the FPS for the second are not (I like closer to 60 FPS than 30), and I’m left wondering which side 1920×1200 falls on.

      • provoko
      • 12 years ago

      I agree; mine is 1440×900. But I think TR’s reasoning is that the highest resolution will show a card’s true power. That, and they probably don’t have the time. They did test ET at 1280×1024.

      • Klyith
      • 12 years ago

      Eh, I kinda agree, because not all of the cards scale the same way across resolutions (particularly the 3850 or other 256MB cards).

      But on the other hand, it looks like *[

      • MadManOriginal
      • 12 years ago

      A fair range of resolutions for ‘typical gamers’ use would be 1280×1024 (19″ 5:4), 1680×1050 (20″ and 22″ WS), 1600×1200 (20″ 4:3), and 1920×1200 (24″ WS). 2560×1600 is pretty rare when you scan the system signatures on most enthusiast forums, while 1920×1200 is somewhat common. It’s not bad to include the super-high resolution, but the other resolutions should be the focus. 1680×1050, which is what I have, could even be skipped in favor of 1600×1200, even though most gamers will have widescreens, because one can just take the 1600×1200 results and add 5-10% frames per second (the rough pixel-count arithmetic behind that figure is sketched below).

      I am happy I got my 8800GT to replace a GTX (lol) for only $236 shipped 🙂 I hit up the Frys.com backorder as soon as the cards were launched, since they were selling out everywhere and prices were being jacked. It put money in my pocket, and for the resolution I run, the 8800GT provides just as good gameplay; I just can’t do max AA i[
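
      A back-of-the-envelope check on that 5-10% figure, as a minimal sketch assuming frame rate scales roughly inversely with the number of pixels rendered (which only holds when the GPU, not the CPU, is the bottleneck):

      # Rough pixel-count comparison between common gaming resolutions.
      # Assumption: FPS scales roughly with 1 / (pixels rendered), which is only
      # true when the GPU (fill rate/shading), not the CPU, limits performance.
      resolutions = {
          "1280x1024": 1280 * 1024,
          "1600x1200": 1600 * 1200,
          "1680x1050": 1680 * 1050,
          "1920x1200": 1920 * 1200,
          "2560x1600": 2560 * 1600,
      }

      base = resolutions["1600x1200"]
      for name, pixels in resolutions.items():
          # e.g. 1680x1050 has ~8% fewer pixels than 1600x1200,
          # so expect roughly 8-9% more FPS under this assumption.
          print(f"{name}: {pixels / 1e6:.2f} MP, ~{base / pixels:.2f}x the FPS of 1600x1200")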

        • flip-mode
        • 12 years ago

        I’d say even 24″ panels are nowhere near “common”.

          • End User
          • 12 years ago

          24″+ displays are out there in substantial numbers – http://www.steampowered.com/status/survey.html

            • provoko
            • 12 years ago

            Flip-mode is obviously right. I did the math, according to Steam 26% of all monitors are 20″ or more, and *[

            • alex666
            • 12 years ago

            And that’s a Steam population, not necessarily representative of the larger population of PC users or even PC gamers.

            • End User
            • 12 years ago

            You may be right. It could be higher. :-p

            • End User
            • 12 years ago

            9% share earns 1920×1200 a spot in the benchmarks.

      • odizzido
      • 12 years ago

      I agree 100%. I run 1280X1024 myself, and 1680X1050 will probably be my next monitor. TR’s reviews lately seem to just skip over pretty much anything I am interested in running at.

      Another thing I would love to see: for a couple of games where the 320MB or 256MB cards choke at 2560×1600 or other insane resolutions beyond anything I run, how do those same cards fare at 1280×1024 or even 1680×1050?

      I mean, just look at the BioShock test. One resolution was tested: 2560×1600. The GTS 320 is behind the 640 by a fair amount… but what happens when you drop the res to 1280×1024, or even 1680×1050? Is 320MB enough for that? I have no idea.

      I also don’t use AA, as I don’t care about jaggies. TR uses AA in all tests possible, so combined with my “low” resolution, games are pretty much never tested at settings I would ever use.

      Of course, I can still compare cards and get a general idea of what I might expect, but it would still be nice to see, since I don’t have the ability to test all this out myself.

        • Damage
        • 12 years ago

        Guys, thanks for the input. I’ll keep this feedback in mind in the future. As I mentioned in my earlier post, one of the problems here was anticipating the exact mix of hardware we’d be testing and estimating its likely performance beforehand. I kind of expected to be testing dual 3870s at 825MHz core when I chose the UT3 and BioShock resolutions. I wasn’t really worrying about the 256MB cards at that time.

        That said, another thing I think I’m taking away from this is that you guys may want less real-world testing with FRAPS and more timedemos, like we used in Quake Wars. That’s really the only feasible way to do broad comparisons at multiple resolutions. FRAPS is just too time-consuming.

        We did try to do TF2 at multiple resolutions, but ran into this Steam update issue. Ugh.

        Would you like to see less FRAPS with average/minimum scores and more scripted stuff, or just more FRAPS at lower resolutions, where the high-end cards will probably hit a game engine/CPU bottleneck?

        One interesting note on scaling: Nvidia’s drivers won’t let one select 1680×1050 resolution on a higher-res (but still 16:10) monitor. If I could have, I would have tested Quake Wars at 1280×800, 1680×1050, 1920×1200, and 2560×1600. Which would be neato.

        Another thing I’ll take from this discussion: 2560×1600 should be used sparingly. Like bacon. It may be tasty, but too much is unhealthy. (Nitrates cause cancer, you know.)

        Having said all of that, if you’re really only looking to run most games at 1280×1024 with no antialiasing or aniso, you shouldn’t be worrying yourself over reviews like this one. Just go buy a $120 video card and be done with it. Seriously.

          • Vrock
          • 12 years ago

          I’d prefer more timedemos with more realistic resolutions, personally. FRAPS is still better, of course, but I understand that it takes up much more time.

          Also, a question: We’ve seen the x1950, 7900GS, etc. in nearly all new graphics card reviews. Are you guys re-running benchmarks on those cards every time, or are you using results you’ve already taken?

          As for the $120 card that plays almost everything fluidly at 1280×1024 (minus AA and AF), where can I get one of those? Seriously, ’cause my 7800GT sure can’t do it. 🙂

            • Damage
            • 12 years ago

            We tested the 7900 GS and X1950 Pro fresh right along with everything else. There’s no other way to get results for new games, drivers, etc. 😉

            • Vrock
            • 12 years ago

            That’s what I thought. I figured you guys would want to do it right, even though it’s a pain. 🙂

            Which brings me to another question: have you guys considered eliminating some of those cards from the comparison? I realize that you do this to show how last year’s mid range cards perform, but we have so much info already available from the 8800GT, 2900XT reviews, etc…seems like maybe you could save some time and devote it to running the review subject(s) in other resolutions?

          • flip-mode
          • 12 years ago

          LOL at the $120 video card remark. I’d love to ditch my craptastic 1280×1024 monitor. Honestly, when you put it that way it *almost*, and I say this only half facetiously, makes it sound like a better idea for the rest of us to go out and get respectable size monitors.

          A question for you though Damage – do you think there’s any reason not to go with scripted benches? Do you think the scripted benchmarks give an accurate representation of real world gameplay?

          Anyway, the most direct answer I can give to your overarching question is that I think you should test the cards at the resolutions that correspond to the cards’ market segment, i.e. test midrange cards at midrange resolutions and high-end cards at high-end resolutions. If the system ends up bottlenecking somewhere else (a.k.a. the CPU), then I’m tempted to say “so be it,” or just include one video card scaling bench per review. Of course, then you’re going to have whiner extraordinaires that will complain about the game you used to perform the scaling test. No matter what you do, you know you’re not going to please everyone 😉

            • Vrock
            • 12 years ago

            q[

            • Damage
            • 12 years ago

            Seriously, if you’re running a 1280×1024 monitor, spend more of your budget on a new display before reaching above 200 bucks for a video card.

            I think timedemos will show you your video card’s frame rates, which is a key part of the gameplay equation. Frankly, no number will summarize the whole experience, which is why TR isn’t just a big database of results. I do prefer when scripted tests show me low frame rates as well as averages, though. That’s better data. I happen to think FRAPS produces even better info yet, and I like that in testing it, I’ve played the game on each card.

            Your idea of testing with a class bias is inherently offensive to my democratic ideas, but the concept of aiming for lower resolutions with cheaper cards isn’t entirely loony, I suppose. We just would have to throw out the concept of broad-based comparisons and only test a tiny slice of competitors at a given price range against each other. And no multi-GPU, which you hate anyway. Heh. That would feel awfully limiting, though, for folks who appreciate the broad comparisons like those we offer in this review.

            • Bensam123
            • 12 years ago

            I play with 1280×1024 and 1600×1200 when I can turn everything up.

            Then again, I still use a CRT because I’m still not satisfied with LCDs. My friends are hardcore gamers as well, and they still refuse to buy LCDs for their computers. My friend, however, recently bought a new 42″ LCD TV, which is cool, but it’s easy to see where the TV falls short in comparison to CRT TVs.

            I, too, would like to see games tested on lower end resolutions such as these (1280×1024 at the least).

            Mid-range and high-end cards should be tested on the same benchmark or you lose the broader picture of the differences between the cards.

            • flip-mode
            • 12 years ago

            Sheesh. I don’t hate SLI-Fire nearly as much as some others do, though I’m not a fan of it.

            As for “class bias”, well, I don’t see how that remark was very useful. And the SLI hater remark for that matter.

            It seems to me I’m just irritating the hell out of you so I’ll just drop it. WTF.

            • Damage
            • 12 years ago

            Just kidding, man.

            • flip-mode
            • 12 years ago

            My bad for being all emo. You make good points about the fact that the scope becomes so narrow. It’s crazy how complicated it is just to choose which res to run the benches at, not to mention choosing the benches themselves, then all the test hardware, yadda yadda yadda.

            • gratuitous
            • 12 years ago
            • Usacomp2k3
            • 12 years ago

            It has happened lots of times. When it comes to presenting wrong information, TR is always quick to point out their own faults. More power to them.

            • gratuitous
            • 12 years ago
            • Entroper
            • 12 years ago

            Dude… he was /[

            • gratuitous
            • 12 years ago
            • JustAnEngineer
            • 12 years ago

            Did you have something to say? Speak up, man!

            • Vrock
            • 12 years ago

            Go away, weirdo. You’re just going to delete your post later. What’s the point of posting at all?

            • sluggo
            • 12 years ago

            Now I understand why you go back and delete all your posts.

            • Dposcorp
            • 12 years ago

            Damage said, q[
            http://members.arstechnica.com/x/dposcorp/LCDSb.jpg
            http://members.arstechnica.com/x/dposcorp/LCDSc.jpg

          • Entroper
          • 12 years ago

          Well, you can get that $120 video card, but you’ll need to get another one in 12 months.

          I have a 19″ monitor that does 1280×1024 native, and the last video card I bought was a 6600 GT. I feel like it’s not worth spending money on something that will suck in a year’s time or less. What I want is a graphics card that runs beautifully at 1280×1024 today, with all options cranked up, and still runs fine 18 months down the road.

          Realizing that you can’t test games that will be out 18 months from now… I do pay attention to higher resolution tests, because they’re indicative of the cards’ relative performance under more demanding workloads. But I also would like to see results at 1280×1024. It would be nice to know if I’ll be locked at my 75 Hz refresh with everything maxed. 🙂

          • DrDillyBar
          • 12 years ago

          If FRAPS really takes that much time for you, and you have to use 5 runs for reliable data, then it’s likely best to use it sparingly. But I would say it’s still a valuable addition to the review process you have built. And the use of scripted demos is ultimately easier to reproduce for those who want to quantify against their current hardware.
          On the debate about resolution, I’ve always been a fan of comparing resolutions by megapixels, since there used to be a real performance delta when you hit 2-megapixel resolutions. At this point, 1280×1024 performance could be omitted as long as you’re going to test at least two resolutions, say 1600×1200 and 2560×1600. Not ideal, but you’ve noted problems getting games to run at 1680×1050. (Unless there’s a way to program in the resolution in the nVidia software… that’s how I got 16×10 on my old 7800GS AGP.)
          Either way, I thought the review was well done, and your 30″ needs something to do, right?

          • odizzido
          • 12 years ago

          That’d be great if a $120 card could run the games I play at 1280×1024 at good settings, but it can’t. Even my 8800GTS 640 can’t. I do use 16X AF, though, as clear graphics are very important to me.

          For me, I don’t like my games to run choppy. Ever. That’s why my favourite benchmarks here are the ones with minimum frame rates.

          A lot of people say 30FPS is smooth, but I find it pretty choppy myself. If the minimum frame rate drops below 40-50FPS, I start turning down the graphics.

          Some games, like Crysis, totally thrash my system. Most of my settings are on medium. So yeah, on an E6300, 2 gigs of RAM, and an 8800GTS 640, I play Crysis at 1280×1024 on medium with no AA, and it’s still a little choppy to me. Fortunately, I guess, I think Crysis sucks so I don’t care.

          I will admit I am a little… different… when it comes to displays/graphics. I notice fuzzy/unstable images very easily. 60Hz on a CRT burns my eyes, tearing annoys me, and anything below 40FPS is choppy to me. A lot of people can’t even see it.

          There… that’s my rant…

          ————–
          slightly more on topic with regards to your questions…

          FRAPS/timedemos – I don’t care. If I see you did it with FRAPS, I will just keep in mind the scores will be slightly off, because it’s pretty much impossible to do the same run every time. FRAPS gives real-world data, though, which is nice. But really, either way I don’t really mind.

          l[

          • eitje
          • 12 years ago

          so, you’re saying… tech report is like bacon…

          mmm…. tasty report.

          • JustAnEngineer
          • 12 years ago

          I *am* looking for a graphics card upgrade specifically to improve 3D gaming performance at 2560×1600 with 2X or 4X AA. I may be in the minority, though.

          Seriously, 1280×800, 1440×900, 1680×1050, 1920×1200 and 2560×1600 would be a good spread of 16×10 resolutions. These represent the most common sizes for new monitors. There’s a gap between the last two at around 2240×1400, but I haven’t seen many monitors to fill the 3.1 MP slot.

          1920×1200 or 1920×1080 is likely to be extremely popular for the next decade because of the widespread proliferation of HD video devices and content at the 1920×1080 resolution. If I had to pick only one resolution for benchmark comparisons, this would be the one.

        • ReAp3r-G
        • 12 years ago

        I can’t agree more. I run all my games at 1280×1024… nothing higher. I don’t mind that resolution at all; I don’t even mind 1024×768…

        My monitor has a max of 1440×900, but I don’t even run anything close to that… I try to compromise between eye candy and performance.

          • shalmon
          • 12 years ago

          My feelings are along the same wavelength.

          I run a 19″ ViewSonic VP191b, and its viewable screen size is perfectly suited for my viewing needs and my physical desktop real estate. I bought it right when it came out and spent about $600 on it. Hell, it’s still under warranty. Add to this the fact that its image quality still seems to be regarded as some of the best all around compared to other monitors released even today. All things considered… why the hell would I want to upgrade?

          At a stretch, I’ve thought about keeping the monitor and simply finding the cheapest video card that will enable me to run Crysis at 1280×1024… if I blindly upgrade to a new/larger monitor, I’ll need more horsepower to push more pixels. Which card is sufficient depends on your tastes, the game (some are faster than others and some are more demanding than others), and what resolution you’re expecting to run at (i.e., your monitor type and size). This is why I like to see how cards scale… and I think how well a card scales becomes more important as you climb down the performance ladder.

          • JustAnEngineer
          • 12 years ago

          1) Always run your games at the correct aspect ratio for your monitor. A 4:3 CRT should run at 1280×960, *[

            • DreadCthulhu
            • 12 years ago

            Nitpick – you can run an LCD at half of its native resolution (i.e., running a 2560×1600 panel at 1280×800) without any scaling issues, since you then have a block of four real pixels acting as one large pixel.
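
            A minimal sketch of that integer-scaling check, assuming a display scaler that maps each target pixel onto a whole block of physical pixels (illustration only, not anything from the review):

            def is_integer_scale(native, target):
                # True when the target resolution divides the native one evenly
                # on both axes with the same factor, e.g. 1280x800 on a
                # 2560x1600 panel uses exact 2x2 blocks of physical pixels.
                nw, nh = native
                tw, th = target
                return nw % tw == 0 and nh % th == 0 and nw // tw == nh // th

            print(is_integer_scale((2560, 1600), (1280, 800)))    # True: clean 2x scaling
            print(is_integer_scale((1680, 1050), (1280, 1024)))   # False: interpolation needed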

      • paulWTAMU
      • 12 years ago

      Agreed; I love tests at 2560×whatever, but that would require what, a 26″ or bigger monitor? I only know a few gamers with anything larger than a 20″, and most of us have 22s running at 1680×1050.

        • Usacomp2k3
        • 12 years ago

        I believe only 30″ monitors currently support 2560×1600.
        I have a 24″ monitor 😉

      • JustAnEngineer
      • 12 years ago

      Good point. I started a forum topic for this discussion:
      https://techreport.com/forums/viewtopic.php?t=54988&highlight=

    • Evan
    • 12 years ago

    If somebody does indeed make a 512MB AGP version of the 3850 as has been rumored, I’ll be putting one of those in my P4EE rig, since it’d be a substantial improvement over my current 7800GS. Hopefully, that’ll tide me over until Nehalem is available late next year…

    • provoko
    • 12 years ago

    Great review, thanks. Do you guys plan on testing 4 x 3870 gpus? Or will that be after Phenoms are released and AMD’s “Spider” platform is ready?

      • Damage
      • 12 years ago

      CrossFire X drivers are slated for early 2008. We can’t test 3- and 4-way CrossFire just yet.

    • El_MUERkO
    • 12 years ago

    Overclocking?

    Do we have to wait for AMD’s new CPUs and mobos to be released before we see overclocking and 4×3870 CrossFire results? I’m curious to see how the 55nm GPU handles when pushed past its factory settings!

    • danny e.
    • 12 years ago

    What this all shows is that those who purchased a GTX when it was first released actually didn’t get a half-bad deal. They’re still on top and will continue to be for the next 3 months or so.

    • Entroper
    • 12 years ago

    I like these cards for the sole reason that they’re decent competition for the 8800 GT, and may cause vendors to lower their price premiums on the 8800 GT if nVidia can keep them supplied. 🙂

    • Missile Maker
    • 12 years ago

    Nice review as usual, Damage. It all comes down to what you say: can AMD/ATi keep these babies in stock? Otherwise the price difference to step up to an 8800GT will be negligible to most (hey, a hundred clams ain’t what it used to be… been to the movies or a ball game lately?). However, the Green Team really StP on 8800GT availability. Whahhh… those fabs in Taiwan on vacation??

    • Bensam123
    • 12 years ago

    Erm, I was all for the next gen of Radeons till I saw the scant amount of memory on board. My Radeon X1800 XT 512, which I’m still using, has just as much memory as this current card. I’m not saying more memory always makes things faster, but with the current trend of games and things like megatextures coming out, I want close to 768MB/1GB on my next-gen card.

    512MB should be a midgrade variant of the card, and they should have something close to 640MB/768MB on the 3870. I guess I’ll just wait till a manufacturer releases one with more memory or Nvidia releases a next-gen card, which will probably be soon, and will probably smash AMD again. 🙁

    After all they’re just catching up at the same time NVidia is about to move on.

    It would be nice to know when the next iteration of products is about to be released; even a general quarterly estimate would be nice. Not based on trend (even I can tell Nvidia should release another variant soon), but somewhat based on fact.

    There is always something better just around the corner, but if I buy this then a week later something better is released it really makes me sad. :'(

    Perhaps TR should consider a calendar for hardware releases or general target dates. I know there are paper launches, lack of availability, and whatnot. It would be nice nonetheless.

      • poulpy
      • 12 years ago

      /[<"After all they're just catching up at the same time NVidia is about to move on."<]/ About to move on? I don't think NVIDIA is going to replace the 8800GT anytime soon, it's hardly it the market in mass quantities yet. It took both ATi and NVIDIA quite some time to fill that middle-ground market gap with products that are actually as good or better as the "same gen" high end parent which is quite a feat (and messes up their lineup IMO). Both of them are likely to refresh the high end design the way they did with those products though and seeing ATi more on the ball on both release and perfs is encouraging for the next iteration. As it should be cheaper, offers a better HD support and seems to scale quite well with xfire (even two HD3850 performs really well apparently) I could see myself with a ATI product this time around.

        • Bensam123
        • 12 years ago

        NVidia releases one really good-value card before moving on, like the 7800GS/6800GS. They usually have an entirely new lineup that’s a hair better than their original offerings, just like ATI does, about a year or so after the initial release.

        7800/7900, X1800/X1900. AMD’s was just released, but NVidia just released their really good value card and nothing like an 8900 series.

      • provoko
      • 12 years ago

      768MB/1GB is still useless, and will probably be useless for the next 3-4 years. 512MB is useless unless you go beyond one megapixel of resolution (1280×1024 and up).

      Megatexture was designed to save memory; hundreds of small textures total more than one megatexture.

        • Bensam123
        • 12 years ago

        Hmmm, I guess I had it flip-flopped (megatextures).

        I normally like running games at 1600×1200 whenever possible and using anti-aliasing. Currently that’s just enough, and my card is about 2-3 years old. Even the article states that he feels the amount of memory is a bit mismatched for the power of the cards.

        I don’t replace my card every year.

        • swaaye
        • 12 years ago

        Oh, I don’t think 768MB + is useless at all. Crysis uses it. And, if you’re an Oblivion fan, you can easily use it up if you install Qarl’s Texture Pack…

    • 2_tyma
    • 12 years ago

    I’m glad they put the X1950 Pro in for comparison in most of the tests.

    • Gerbil Jedidiah
    • 12 years ago

    I’m on the fence with this one. I want a dual-GPU setup, so the price and power consumption issues are twice as important to me. The 8800GT does indeed seem to be faster, and since developers have supported Nvidia more than AMD lately, that further distances the 8800GT from the 3870.

    BUT

    I have a 650W power supply. If the 3870 in CrossFire plays Crysis at 1920×1200 with eye candy, and has less power draw at a lower price point, I may just opt for the 3870.

    I think I will have to wait a few weeks to get more info.

    • Nullvoid
    • 12 years ago

    Thanks for a good review Damage. I just wish you would have maintained the same thoroughness of testing shown with ETQW, right through all the other games.

    Clearly the 8800gt is a superior card, but when gaming at smaller resolutions like 1024×768 or 1440×900 (yes, it’s hard to believe but not everyone has a 22″+ monitor), does it have sufficient edge to overlook the lower price and power consumption of the 3870?

      • dragmor
      • 12 years ago

      The 3870 has higher power consumption at load. TR’s graphs are out because they used different motherboards for Nvidia and ATI testing.

        • Nullvoid
        • 12 years ago

        Hmm yes, so I see at the other review sites. Maybe the following sentence:

        “Not only that, but the HD 3870 system pulls 21W less at idle and 39W less under load than the GeForce 8800 GT system.”

        could be clarified to make clear that the difference is solely down to the motherboard used.

        • MrJP
        • 12 years ago

        I’m sure Damage will confirm, but I think you’re mistaken. Reading the test configuration tables, the Intel-based motherboard was only used for the Crossfire configurations. Therefore all single-card tests (including power consumption) were done on the same platform. At least, that’s the way all previous TR reviews have been done.

        • Damage
        • 12 years ago

        Not true. We only used the X38 mobo for CrossFire testing. All cards were tested for power on the same board.

          • Nullvoid
          • 12 years ago

          There’s a disparity somewhere then since every other review of the 3870 listed in your thursday shortbread puts the AMD card either equal to or slightly higher than the 8800gt for power consumption.

          • dragmor
          • 12 years ago

          Thanks for the clarification; it wasn’t clear.

      • Klyith
      • 12 years ago

      /[

        • Spotpuff
        • 12 years ago

        Agreed. People are arguing about the logic of testing at 2560×1600 without even batting an eye at SLI/CrossFire, which, while seemingly great, has a ton of problems, from game compatibility to RAM size issues.

        SLI has always seemed like a waste of money (two mid range cards instead of one high end card, driver issues, game compatibility problems) compared to a larger screen (24″ monitors are <$500 now) that will last for several hardware refresh cycles.

          • flip-mode
          • 12 years ago

          SLI has been heavily criticized by numerous people every time TR does a GPU review over the last couple of years. I don’t think people aren’t batting an eye at it; they’ve just realized that TR must have its own reasons to continue to test it. TR probably believes that there are enough people interested in SLI-Fire to justify spending the time to test it. The debate has already been had.

          But the uber-high resolution debate is worthwhile to have again, because all of TR’s tests are being done that way instead of, rather than alongside, the more common resolutions, so it calls into question the value of TR’s tests to the vast majority of people who will be buying these cards. I’d wager that the majority of those people will be running 19″-22″ panels at or below 1680×1050.

            • StashTheVampede
            • 12 years ago

            TR tests SLI and the uber resolution to justify the expense (and to write it off for business expenses). SOMEONE has to buy the hardware and run it, no?

            SLI is good to keep in the mix — most users don’t have it, but if you want to have the UBER box, you should have an idea of how it performs.

            • flip-mode
            • 12 years ago

            Again, the term “uber box” doesn’t really fit the context of a mid-range card review.

    • Jigar
    • 12 years ago

    Excellent review.. BTW HD3870 is impressive…

    • gratuitous
    • 12 years ago
      • JustAnEngineer
      • 12 years ago

      I still can’t hear you.

    • TakkiM
    • 12 years ago

    still not on top…

    if only they can deliver these cards in quantity…

    • dragmor
    • 12 years ago

    Nice review.

    Any chance of a power-vs-time graph to see how AMD’s PowerPlay works in a couple of the games?
