AMD’s Radeon HD 4870 graphics processor


Not much good has happened for either party since AMD purchased ATI. New chips from both sides of the fence have been late, run hot, and underperformed compared to the competition. Meanwhile, the combined company has posted staggering financial losses, causing many folks to wonder whether AMD could continue to hold up its end of the bargain as junior partner in the PC market’s twin duopolies, for CPUs and graphics chips.

AMD certainly has its fair share of well-wishers, as underdogs often do. And a great many of them have been waiting with anticipation—you can almost hear them vibrating with excitement—for the Radeon HD 4800 series. The buzz has been building for weeks now. For the first time in quite a while, AMD would seem to have an unequivocal winner on its hands in this new GPU.

Our first peek at Radeon HD 4850 performance surely did nothing to quell the excitement. As I said then, the Radeon HD 4850 kicks more ass than a pair of donkeys in an MMA cage match. But that was only half of the story. What the Radeon HD 4870 tells us is that those donkeys are all out of bubble gum.

Uhm, or something like that. Keep reading to see what the Radeon HD 4800 series is all about.

The RV770 GPU

Work on the chip code-named RV770 began two and a half years ago. AMD’s design teams were, unusually, dispersed across six offices around the globe. Their common goal was to take the core elements of the underperforming R600 graphics processor and turn them into a much more efficient GPU. To make that happen, the engineers worked carefully on reducing the size of the various logic blocks on the chip without cutting out functionality. More efficient use of chip area allowed them to pack in more of everything, raising the peak capacity of the GPU in many ways. At the same time, they focused on making sure the GPU could more fully realize its potential by keeping key resources well fed and better managing the flow of data through the chip.

The fruit of their labors is a graphics processor whose elements look familiar, but whose performance and efficiency are revelations. Let’s have a look at a 10,000-foot overview of the chip, and then we’ll consider what makes it different.

A block diagram of the RV770 GPU. Source: AMD.

Some portions of the diagram above are too small to make out at first glance, I know. We’ll be looking at them in more detail in the following pages. The first thing you’ll want to notice here, though, is the number of processors in the shader array, which is something of a surprise compared to early rumors. The RV770 has 10 SIMD cores, as you can see, and each of them contains 16 stream processor units. You may not be able to see it above, but each of those SP units is a superscalar processing block made up of five ALUs. Add it all up, and the RV770 has a grand total of 800 ALUs onboard, which AMD advertises as 800 “stream processors.” Whatever you call them, that’s a tremendous amount of computing power—well beyond the 320 SPs in the RV670 GPU powering the Radeon HD 3800 series. In fact, this is the first teraflop-capable GPU, with a theoretical peak of a cool one teraflop in the Radeon HD 4850 and up to 1.2 teraflops in the Radeon HD 4870. Nvidia’s much larger GeForce GTX 280 falls just shy of the teraflop mark.
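
If you’re wondering where those teraflop figures come from, the arithmetic is simple enough to sketch. The snippet below is a back-of-the-envelope check, not anything official from AMD; it uses the 4850 and 4870 core clocks quoted later in this review and assumes each ALU counts for one multiply-add, or two flops, per clock, which is how both vendors quote their peak numbers.

```python
# Back-of-the-envelope peak shader arithmetic for the RV770, assuming
# each ALU retires one multiply-add (two flops) per clock.

SIMD_CORES = 10          # SIMD cores on the RV770
SP_UNITS_PER_SIMD = 16   # superscalar stream processor units per SIMD
ALUS_PER_SP_UNIT = 5     # ALUs per SP unit (four "regular" plus one "fat")

alus = SIMD_CORES * SP_UNITS_PER_SIMD * ALUS_PER_SP_UNIT   # 800 "stream processors"

for card, clock_mhz in (("Radeon HD 4850", 625), ("Radeon HD 4870", 750)):
    gflops = alus * 2 * clock_mhz / 1000
    print(f"{card}: {alus} ALUs x 2 flops x {clock_mhz}MHz = {gflops:.0f} GFLOPS")

# Radeon HD 4850: 800 ALUs x 2 flops x 625MHz = 1000 GFLOPS
# Radeon HD 4870: 800 ALUs x 2 flops x 750MHz = 1200 GFLOPS
```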

The blue blocks to the right of the SIMDs are texture units. The RV770’s texture units are now aligned with SIMDs, so that adding more shader power equates to adding more texturing power, as is the case with Nvidia’s recent GPUs. Accordingly, the RV770 has 10 texture units, capable of addressing and filtering up to 40 texels per clock, more than double the capacity of the RV670.

Across the bottom of the diagram, you can see the GPU’s four render back-ends, each of which is associated with a 64-bit memory interface. Like a bad tattoo, the four back-ends and 256 bits of total memory connectivity are telltale class indicators: this is decidedly a mid-range GPU. Yet the individual render back-ends on RV770 are vastly more powerful than their predecessors, and the memory controllers have one heck of a trick up their sleeves in the form of support for GDDR5 memory, which enables substantially more bandwidth over every pin.

Despite all of the changes, the RV770 shares the same basic feature set with the RV670 that came before it, including support for Microsoft’s DirectX 10.1 standard. The big news items this time around are (sometimes major) refinements, including formidable increases in texturing capacity, shader power, and memory bandwidth, along with efficiency improvements throughout the design.

The chip

Like the RV670 before it, the RV770 is fabricated at TSMC on a 55nm process, which packs its roughly 956 million transistors into a die that’s about 16mm per side, for a total area of roughly 260 mm². The chip has grown from the RV670, but not as much as one might expect given its increases in capacity. The RV670 weighed in at an estimated 666 million transistors and was 192 mm².

Of course, AMD’s new GPU is positively dwarfed by Nvidia’s GT200, a 577 mm² behemoth made up of 1.4 billion transistors. But the more relevant comparisons may be to Nvidia’s mid-range GPUs. The first of those GPUs, of course, is the G92, a 65nm chip that’s behind everything from the GeForce 8800 GT to the GeForce 9800 GTX. That chip measured out, with our shaky ruler, to more or less 18mm per side, or 324 mm². (Nvidia doesn’t give out official die size specs anymore, so we’re reduced to this.) The second competing GPU from Nvidia is a brand-new entrant, the 55nm die shrink of the G92 that drives the newly announced GeForce 9800 GTX+. The GTX+ chip has the same basic transistor count of 754 million, but, well, have a look. The pictures below were all taken with the camera in the same position, so they should be pretty much to scale.

Nvidia’s G92

The RV770

The die-shrunk G92 at 55nm aboard the GeForce 9800 GTX+

Yeah, so apparently I have rotation issues. These things should not be difficult, I know. Hopefully you can still get a sense of comparative size. By my measurements, interestingly enough, the 55nm GTX+ chip looks to be 16 mm per side and thus 260 mm², just like the RV770. That’s despite the gap in transistor counts between the RV770 and G92, but then Nvidia and AMD seem to count transistors differently, among a multitude of other variables at work here.
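
For perspective, here’s a rough transistor-density comparison based on the figures above. Keep in mind that the Nvidia die areas are our own ruler-derived estimates and that the two companies appear to count transistors differently, so treat this as illustrative at best.

```python
# Rough transistor density, in millions of transistors per mm^2, using the
# die sizes and transistor counts quoted above. Nvidia die areas are our
# own estimates, so these are ballpark figures only.

chips = {
    "RV770":             (956, 260),   # (millions of transistors, die area in mm^2)
    "G92 (65nm)":        (754, 324),
    "G92 shrink (55nm)": (754, 260),
    "GT200":             (1400, 577),
}

for name, (mtrans, area_mm2) in chips.items():
    print(f"{name}: {mtrans / area_mm2:.1f}M transistors/mm^2")

# RV770:             3.7M transistors/mm^2
# G92 (65nm):        2.3M transistors/mm^2
# G92 shrink (55nm): 2.9M transistors/mm^2
# GT200:             2.4M transistors/mm^2
```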

The pictures below will give you a closer look at the chip’s die itself. The second one even locates some of the more important logic blocks.

A picture of the RV770 die. Source: AMD.

The RV770 die’s functional units highlighted. Source: AMD.

As you can see, the RV770’s memory interface and I/O blocks form a ring around the periphery of the chip, while the SIMD cores and texture units take up the bulk of the area in the middle. The SIMDs and the texture units are in line with one another.

What’s in the cards

Initially, the Radeon HD 4800 series will come in two forms, powder and rock. Err, I mean, 4850 and 4870. By now, you may already be familiar with the 4850, which has been selling online for a number of days.

Here’s a look at our review sample from Sapphire. The stock clock on the 4850 is 625MHz, and that clock governs pretty much the whole chip, including the shader core. These cards come with 512MB of GDDR3 memory running at 993MHz, for an effective 1986MT/s. AMD pegs the max thermal/power rating (or TDP) of this card at 110W. As a result, the 4850 needs only a single six-pin aux power connector to stay happy.

Early on, AMD suggested the 4850 would sell for about $199 at online vendors, and so far, street prices seem to jibe with that, by and large.

And here we have the big daddy, the Radeon HD 4870. This card’s much beefier cooler takes up two slots and sends hot exhaust air out of the back of the case. The bigger cooler and dual six-pin power connections are necessary given the 4870’s 160W TDP.

Cards like this one from VisionTek should start selling online today at around $299. That’s another hundred bucks over the 4850, but then you’re getting a lot more card. The 4870’s core clock is 750MHz, and even more importantly, it’s paired up with 512MB of GDDR5 memory. The base clock on that memory is 900MHz, but it transfers data at a rate of 3600MT/s, which means the 4870’s peak memory bandwidth is nearly twice that of the 4850.

Both the 4870 and the 4850 come with dual CrossFire connectors along the top edge of the card, and both can participate in CrossFireX multi-GPU configurations with two, three, or four cards daisy-chained together.

Nvidia’s response

The folks at Nvidia aren’t likely to give up their dominance at the $199 sweet spot of the video card market without a fight. In response to the release of the Radeon HD 4850, they’ve taken several steps to remain competitive. Most of those steps involve price cuts. Stock-clocked versions of the GeForce 9800 GTX have dropped to $199 to match the 4850. Meanwhile, you have higher clocked cards like this one:

This “XXX Edition” card from XFX comes with core and shader clocks of 738 and 1836MHz, respectively, up from 675/1688MHz stock, along with 1144MHz memory. XFX bundles this card with a copy of Call of Duty 4 for $239 at Newegg, along with a $10.00 mail-in rebate, which gives you maybe better-than-even odds of getting a check for ten bucks at some point down the line, if you’re into games of chance.

Cards like this “XXX Edition” will serve as a bridge of sorts for Nvidia’s further answer to the Radeon HD 4850 in the form of the GeForce 9800 GTX+. Those cards will be based on the 55nm die shrink of the G92 GPU, and they’ll share the XXX Edition’s 738MHz core and 1836MHz shader clocks, although their memory will be slightly slower at 1100MHz. Nvidia expects GTX+ cards to be available in decent quantities by July 16 at around $229.

For most intents and purposes, of course, these two cards should be more or less equivalent, including performance. The GTX+ shares the 9800 GTX’s dual-slot cooler and layout, as well. As a result, and because of time constraints, we’ve chosen to include only the XXX Edition in most of our testing. The exception is the place where the 55nm chip is likely to make the biggest difference: in power draw and the related categories of heat and noise. We’ve tested the 9800 GTX+ separately in those cases.

Nvidia has also decided to sweeten the pot a little bit by supplying us with drivers that endow the GeForce 9800 GTX and GTX 200-series cards with support for GPU-accelerated physics via the PhysX API. You’ll see early results from those drivers in our 3DMark Vantage performance numbers.

Shader processing

Block diagram of a single SP unit.
Source: AMD.

Since the RV770 shares its core shader structure with the R600 family, much of what I wrote about how shader processing works in my R600 review should still apply here. The RV770’s basic execution unit remains a five-ALU-wide superscalar block like the one pictured above, which has four “regular” ALUs and one “fat” ALU that can handle some special functions the others can’t, like transcendentals.

AMD has extended the functionality of these SP blocks slightly with RV770, but remarkably, they’ve managed to reduce the area they occupy on the chip versus RV670, even on the same fabrication process. RV770 Chief Architect Scott Hartog cited a 40% increase in performance per square millimeter. In fact, AMD originally planned to put eight SIMD cores on this GPU, but once the shader team’s optimizations were complete, the chip had die space left empty; the I/O ring around the outside of the chip was the primary size constraint. In response, they added two additional SIMD cores, bringing the SP count up to 800 and vaulting the RV770 over the teraflop mark.

Most of the new capabilities of the RV770’s shaders are aimed at non-graphics applications. For instance, from the RV670, they inherit the ability to handle double-precision floating-point math, a capability that has little or no application in real-time graphics at present. The “fat” ALU in the SP block can perform one double-precision FP add or multiply per clock, while the other four ALUs can combine to process one double-precision add. In essence, that means the RV770’s peak compute rate for double-precision multiply-add operations is one-fifth of its single-precision rate, or 240 gigaflops in the case of the Radeon HD 4870. That’s quite a bit faster than even the GeForce GTX 280, whose peak DP compute rate is 78 gigaflops.
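
Here’s a quick check of that one-fifth ratio and the 240-gigaflop figure, counting per five-ALU SP block and per clock as described above. This is just our arithmetic, not an AMD-supplied formula.

```python
# Per five-ALU SP block, per clock:
#   single precision: 5 ALUs x 1 multiply-add = 10 flops
#   double precision: 1 DP multiply (fat ALU) + 1 DP add (other four ALUs) = 2 flops

SP_BLOCKS = 160    # 10 SIMDs x 16 SP units
CLOCK_MHZ = 750    # Radeon HD 4870

sp_gflops = SP_BLOCKS * 10 * CLOCK_MHZ / 1000   # 1200.0
dp_gflops = SP_BLOCKS * 2 * CLOCK_MHZ / 1000    # 240.0

print(sp_gflops, dp_gflops, dp_gflops / sp_gflops)   # 1200.0 240.0 0.2
```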

Another such accommodation is the addition of 16KB of local shared memory in each SIMD core, useful for sharing data between threads in GPU-compute applications. This is obviously rather similar to the 16KB of shared memory Nvidia has built into each of the SM structures in its recent GPUs, although the RV770 has relatively less memory per stream processor, about a tenth of what the GT200 has. This local data share isn’t accessible to programmers via graphics APIs like DirectX, but AMD may use it to enable larger kernels for custom AA filters or for other forms of post-processing. Uniquely, the RV770 also has a small, 16KB global data share for the passing of data between SIMDs.

Beyond that, the ability to perform an integer bit-shift operation has been migrated from the “fat” ALU to all five of them in each SP block, a provision aimed at accelerating video processing, encoding, and compression. The design team also added memory import and export capabilities, to allow for full-speed scatter and gather operations. And finally, the RV770 has a new provision for the creation of lightweight threads for GPU compute applications. Graphics threads tend to have a lot of state information associated with them, not all of which may be necessary for other types of processing. The RV770 can quickly generate threads with less state info for such apps.

Peak shader arithmetic (GFLOPS)

                      Single-issue    Dual-issue
GeForce 8800 GTX      346             518
GeForce 9800 GTX      432             648
GeForce 9800 GX2      768             1152
GeForce GTX 260       477             715
GeForce GTX 280       622             933
Radeon HD 2900 XT     475             -
Radeon HD 3870        496             -
Radeon HD 3870 X2     1056            -
Radeon HD 4850        1000            -
Radeon HD 4870        1200            -

Although most of these changes won’t affect graphics performance, one change may. Both AMD and Nvidia seem to be working on getting a grasp on how developers may use geometry shaders and optimizing their GPUs for different possibilities. In the GT200, we saw Nvidia increase its buffer sizes dramatically to better accommodate the use of a shader for geometry amplification, or tessellation. AMD claims its GPUs were already good at handling such scenarios, but has enhanced the RV770 for the case where the geometry shader keeps data on the chip for high-speed rendering.

The single biggest improvement made in the RV770’s shader processing ability, of course, is the increase to 10 SIMDs and a total of 800 so-called stream processors on a single chip. This change affects graphics and GPU-compute applications alike. The table above shows the peak theoretical computational rates of various GPUs. Of course, as with almost anything of this nature, the peak number isn’t destiny; it’s just a possibility, if everything were to go exactly right. That rarely happens. For instance, the GeForces can only reach their peak numbers if they’re able to use their dual-issue capability to execute an additional multiply operation in each clock cycle. In reality, that doesn’t always happen. Similarly, in order to get peak throughput out of the Radeon, the compiler must schedule instructions cleverly for its five-wide superscalar ALU block, avoiding dependencies and serializing the processing of data that doesn’t natively have five components.
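
By way of illustration, the table’s numbers fall out of the per-unit counts and clocks. In the sketch below, each GeForce SP is counted as one multiply-add (two flops) for the single-issue column and a multiply-add plus the extra multiply (three flops) for dual-issue, while each Radeon ALU is counted as one multiply-add. The GeForce shader clocks are the cards’ stock specifications rather than figures from this review, so treat them as assumptions; the older Radeons and the dual-GPU cards are left out for brevity.

```python
# Rough reproduction of the peak-GFLOPS table from unit counts and clocks.
# GeForce shader clocks are stock specs (an assumption on our part, not data
# from this review); Radeon figures use the core clocks quoted in the text.

geforces = {   # (stream processors, shader clock in MHz)
    "GeForce 8800 GTX": (128, 1350),
    "GeForce 9800 GTX": (128, 1688),
    "GeForce GTX 260":  (192, 1242),
    "GeForce GTX 280":  (240, 1296),
}
radeons = {    # (ALUs, core clock in MHz)
    "Radeon HD 4850": (800, 625),
    "Radeon HD 4870": (800, 750),
}

for name, (sps, clk_mhz) in geforces.items():
    single = sps * 2 * clk_mhz / 1000   # multiply-add only
    dual = sps * 3 * clk_mhz / 1000     # multiply-add plus the extra multiply
    print(f"{name}: {single:.0f} / {dual:.0f} GFLOPS")

for name, (alus, clk_mhz) in radeons.items():
    print(f"{name}: {alus * 2 * clk_mhz / 1000:.0f} GFLOPS")
```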

Fortunately, we can run a few simple synthetic shader tests to get a sense of the GPUs’ processing prowess.

In its most potent form, the Radeon HD 4870, the RV770 represents a huge improvement over the Radeon HD 3870—pretty straightforwardly, about two times the measured performance. Versus the competition, the Radeon HD 4850 outperforms the GeForce 9800 GTX in three of the four tests, although the gap isn’t as large as the theoretical peak numbers would seem to suggest. More impressively, the Radeon HD 4870 surpasses the GT200-based GeForce GTX 260 in two of the four tests and essentially matches the GTX 280 in the GPU particles and Perlin noise tests. That’s against a chip twice the size of the RV770, with a memory interface twice as wide.

Texturing, memory hierarchy, and render back-ends

A single RV770 texture unit. Source: AMD.

Like the shaders, the texture units in the RV770 have been extensively streamlined. Hartog claimed an incredible 70% increase in performance per square millimeter for these units. Not only that, but as I’ve mentioned, the texture units are now aligned with shader SIMDs, so future RV770-based designs could scale the amount of processing power up or down while maintaining the same ratio of shader power to texture filtering capacity. Interestingly enough, the RV770 retains the same shader-to-texture capacity mix as the RV670 and the R600 before it. Nvidia has moved further in this direction recently with the release of the GT200, but the Radeons still have a substantially higher ratio of gigaflops to gigatexels.

With 10 texture units onboard, the RV770 can sample and bilinearly filter up to 40 texels per clock. That’s up from 16 texels per clock on RV670, a considerable increase. One of the ways AMD managed to squeeze down the size of its texture units was taking a page from Nvidia’s playbook and making the filtering of FP16 texture formats work at half the usual rate. As a result, the RV770’s peak FP16 filtering rate is only slightly up from RV670. Still, Hartog described the numbers game here as less important than the reality of measured throughput.

To ensure that throughput is what it should be, the design team overhauled the RV770’s caches extensively, replacing the R600’s “distributed unified cache” with a true L1/L2 cache hierarchy.

A block diagram of the RV770’s cache hierarchy. Source: AMD.

Each L1 texture cache is associated with a SIMD/texture unit block and stores unique data for it, and each L2 cache is aligned with a memory controller. Much of this may sound familiar to you, if you’ve read about certain competitors to RV770. No doubt AMD has learned from its opponents.

Furthermore, Hartog said RV770 uses a new cache allocation routine that delays the allocation of space in the L1 cache until the request for that data is fulfilled. This mechanism should allow RV770 to use its texture caches more efficiently. Vertices are stored in their own separate cache. Meanwhile, the chip’s internal bandwidth is twice that of the previous generation—a provision necessary, Hartog said, to keep pace with the amount of data coming in from GDDR5 memory. He claimed transfer rates of up to 480GB/s for an L1 texture fetch and up to 384GB/s for data transfers between the L1 and L2 caches.

An overview of the RV770’s memory interface. Source: AMD.

The RV770’s reworked memory subsystem doesn’t stop at the caches, either. AMD’s vaunted ring bus is dead and gone, and it’s not even been replaced by a crossbar. Instead, RV770 opts for a simpler approach. The GPU’s four memory controllers are distributed around the edges of the chip, next to their primary bandwidth consumers, including the render back-ends and the L2 caches. Data is partitioned via tiling to maintain good locality of reference for each controller/cache pair, and a hub passes lower bandwidth data to and from the I/O units for PCI Express, display controllers, the UVD2 video engine, and the CrossFireX interconnect. AMD claims this approach brings efficiency gains, with the RV770 capable of reaching 95% of its theoretical peak bandwidth, up 10% from the RV670.

These gains alone wouldn’t allow the RV770 to realize its full potential, however, with only a 256-bit aggregate path to memory. For extra help in this department, AMD worked with DRAM vendors to develop a new memory type, GDDR5. GDDR5 keeps the single-ended signaling used in current DRAM types and uses a range of techniques to achieve higher bandwidth. Among them: a new clocking architecture, an error-detection protocol for the wires, and individual training of DRAM devices upon startup. AMD’s Joe Macri, who heads the JEDEC DRAM and GDDR5 committees, points out that this last feature should allow for additional overclocking headroom with better cooling, since DRAM training will respond to improvements in environmental conditions.

GDDR5’s command clock runs at a quarter of the data rate, which is presumably why the Radeon HD 4870’s memory clock shows up as 900MHz when the actual data rate is 3600 MT/s. Do the math, and you’ll find that the 4870’s peak memory bandwidth works out to 115.2 GB/s, which is even more than the Radeon HD 2900 XT managed with a 512-bit interface or what the GeForce GTX 260 can reach with a 448-bit interface to GDDR3. And that’s with 3.6Gbps devices. AMD says it’s already seeing 5Gbps GDDR5 memory now and expects to see 6Gbps before the end of the year.
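
The bandwidth arithmetic here is worth spelling out, since GDDR5’s quad data rate is what lets a 256-bit bus punch so far above its weight. Below is a quick sketch using the memory specs quoted in this review; nothing here is anything other than multiplication.

```python
# Peak memory bandwidth = data rate (MT/s) x bus width (bits) / 8, in GB/s.

def peak_bandwidth_gbs(data_rate_mts, bus_width_bits):
    return data_rate_mts * bus_width_bits / 8 / 1000

# Radeon HD 4870: 900MHz GDDR5 command clock, quad data rate, 256-bit bus
print(peak_bandwidth_gbs(900 * 4, 256))        # 115.2 GB/s

# Radeon HD 4850: 993MHz GDDR3, double data rate, 256-bit bus
print(peak_bandwidth_gbs(993 * 2, 256))        # ~63.6 GB/s

# AMD's claimed 95% delivered efficiency for the RV770's memory subsystem
print(0.95 * peak_bandwidth_gbs(3600, 256))    # ~109.4 GB/s
```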

An RV770 render back-end unit.
Source: AMD.

The final element in the RV770’s wide-ranging re-plumbing of the R600 architecture comes in the form of heavily revised render back-ends. (For the confused, Nvidia calls these exact same units ROPs, but we’ll use AMD’s term in discussing its chips.) One of the RV770 design team’s major goals was to improve antialiasing performance, and render back-ends are key to doing so. Looking at the diagram above, the RV770’s render back-end doesn’t look much different from any other, and the chip only has four of them, so what’s the story?

Well, for one, the individual render back-end units are quite a bit more powerful. Below is a table supplied by Hartog that shows the total render back-end capacity of the RV770 versus RV670, both of which have the same number of units on chip.

RV670 versus RV770 total render back-end throughput. Source: AMD.

According to this table, the RV770’s render back-ends are twice as fast as the RV670’s in many situations: for any form of multisampled AA and for 64-bit color modes even without AA. Not only that, but the RV770 can perform up to 64 Z or stencil operations per clock cycle. Hartog identified the Z rate as the primary limiting factor in the RV670’s antialiasing performance.

That’s not the whole story, however. Ever since the R600 first appeared, we heard rumors that its render back-ends were essentially broken in that they would not perform the resolve step for multisampled AA—instead, the R600 and family handled this task in the shader core. Shader-based resolve did allow AMD to do some nice things with custom AA filters, but the R600-family’s relatively weak AA performance was always a head-scratcher. Why do it that way, if it’s so slow?

I suspect, as a result of the shader-based resolve, that the numbers you see for RV670 in the table above are, shall we say, optimistic. They may be correct as theoretical peaks, but I suspect the RV670 doesn’t often reach them.

Fortunately, AMD has confirmed to us that the RV770 no longer uses its shader core for standard MSAA resolve. If there was a problem with the R6xx chips’ render back-ends—and AMD still denies it—that issue has been fixed. The RV770 will still use shader-based resolve for AMD’s custom-filter AA modes, but for regular box filters, the work is handled in custom hardware in the render back-ends—as it was on pre-R600 Radeons and on all modern GeForce GPUs.

Testing RV770’s mettle

So how do the rearchitected bits of RV770 work when you put them all together? Let’s have a look. First, here’s a quick table showing the theoretical peak capacities of some relevant GPUs, which we can use for reference.

                      Peak pixel     Peak bilinear     Peak bilinear     Peak memory
                      fill rate      texel filtering   FP16 texel        bandwidth
                      (Gpixels/s)    rate (Gtexels/s)  filtering rate    (GB/s)
                                                       (Gtexels/s)
GeForce 8800 GTX      13.8           18.4              18.4              86.4
GeForce 9800 GTX      10.8           43.2              21.6              70.4
GeForce 9800 GX2      19.2           76.8              38.4              128.0
GeForce GTX 260       16.1           36.9              18.4              111.9
GeForce GTX 280       19.3           48.2              24.1              141.7
Radeon HD 2900 XT     11.9           11.9              11.9              105.6
Radeon HD 3870        12.4           12.4              12.4              72.0
Radeon HD 3870 X2     26.4           26.4              26.4              115.2
Radeon HD 4850        10.0           25.0              12.5              63.6
Radeon HD 4870        12.0           30.0              15.0              115.2
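
For reference, the Radeon entries in that table are just per-clock throughput multiplied by core clock. The 40-texels-per-clock figure comes straight from the text; the 16-pixels-per-clock color rate (four render back-ends at four pixels each) is our inference from the table, so consider it an assumption.

```python
# Peak rates for the new Radeons: per-clock throughput x core clock.
PIXELS_PER_CLOCK = 16       # 4 render back-ends x 4 color pixels (inferred)
TEXELS_PER_CLOCK = 40       # 10 texture units x 4 bilinear texels (stated)
FP16_TEXELS_PER_CLOCK = TEXELS_PER_CLOCK / 2   # FP16 filtering runs at half rate

for card, clock_mhz in (("Radeon HD 4850", 625), ("Radeon HD 4870", 750)):
    print(card,
          PIXELS_PER_CLOCK * clock_mhz / 1000,         # 10.0 / 12.0 Gpixels/s
          TEXELS_PER_CLOCK * clock_mhz / 1000,         # 25.0 / 30.0 Gtexels/s
          FP16_TEXELS_PER_CLOCK * clock_mhz / 1000)    # 12.5 / 15.0 Gtexels/s
```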

Oddly enough, on paper, the RV770’s numbers don’t look all that impressive. The Radeon HD 4850 trails the GeForce 9800 GTX in every category, and the 4870 isn’t much faster in most departments—except for memory bandwidth, of course, thanks to GDDR5. But what happens when we measure throughput with a synthetic test?

Color fill rate tests like this one tend to be limited mainly by memory bandwidth, as seems to be largely the case here. The Radeon HD 4850 manages to outdo the GeForce 9800 GTX, though, despite a slightly lower memory clock. As for the 4870, well, it beats out the GeForce GTX 260 and the Radeon HD 3870 X2, which would seem to suggest that its GDDR5 memory is fast and relatively efficient. The GTX 260 and 3870 X2 have similar memory bandwidth in theory, but they’re slower in practice.

This is a test of integer texture filtering performance, so many of the GPUs should be faster here than in our next test. The RV770 doesn’t look too bad, and its performance scales down gracefully as the number of filter taps increases. But Nvidia’s GPUs clearly have more texture filtering capacity, both in theory and in practice, with 32-bit texture formats.

This test, however, measures FP16 texture filtering throughput, and here, the tables turn. Amazingly, the Radeon HD 4850 outdoes the GeForce GTX 280, and the 4870 is faster still. Only the “X2” cards, with dual GPUs onboard, are in the same league. It would seem Nvidia’s GPUs have some sort of internal bottleneck preventing them from reaching their full potential with FP16 filtering. If so, they’re in good company: the Radeon HD 3870’s theoretical peak for FP16 filtering is almost identical to the Radeon HD 4850’s, yet the 4850 is much faster.

Incidentally, if the gigatexel numbers produced by 3DMark seem confusing to you, well, I’m right there with you. I asked Futuremark about this problem, and they’ve confirmed that the values are somehow incorrect. They say they’re looking into it now—or, well, after folks are back from their summer vacations. In the meantime, I’m assuming we can trust the relative performance reported by 3DMark, even if the units in which it’s reported are plainly wrong. Let’s hope I’m right about that.

Texture filtering quality

You’re probably looking at the images below and wondering what sort of drugs will produce that effect. It’s not my place to offer pharmaceutical advice, and I don’t want to dwell too much on this subject, but these are test patterns for texture filtering quality. My main purpose in including them is to demonstrate that not much has changed on this front since the debut of the DirectX 10 generation of GPUs. These are the same patterns we saw in our Radeon HD 2900 XT review, and they’re big, honkin’ improvements over what the DirectX 9-class GPUs did.


Anisotropic texture filtering and trilinear blending

Radeon HD 3870

Radeon HD 4870


GeForce GTX 280

GeForce GTX 280 HQ

The images above come from the snappily-named D3D AF tester, and what you’re basically doing is looking down a 3D-rendered tube with a checkerboard pattern applied. The colored bands indicate different mip-map levels, and you can see that the GPUs vary the level of detail they’re using depending on the angle of the surface.

The GeForce GTX 280’s pattern, for what it’s worth, is identical to that produced by a G80 or G92 GPU. Nvidia’s test pattern is closer to round and thus a little closer to the ideal, but we’ve found the practical difference between the two algorithms to be imperceptible.

On a more interesting note, the impact of Nvidia’s trilinear blending optimizations is apparent. You can see how much smoother the color transitions between mip maps are with its “high quality” option enabled in the driver control panel, and you’ve seen how that option affects performance on the prior page of this review. Then again, although the Radeon’s test pattern looks purty, AMD has a similar adaptive trilinear algorithm of its own that dynamically applies less blending as it sees fit.

The bottom line, I think, on image quality is that current DX10-class GPUs from Nvidia and AMD produce output that is very similar. Having logged quite a few hours playing games with both brands of GPUs, I’m satisfied that either one will serve you well. We may revisit the image quality issue again before long, though. I’d like to look more closely at the impact of those trilinear optimizations in motion rather than in screenshots or test patterns. We’ll see.

Antialiasing

The RV770’s beefed up texture filtering looks pretty good, but how do those new render back-ends help antialiasing performance? Well, here we have the beginnings of an answer. The results below show how increasing sample levels impact frame rates. We tested in Half-Life 2 Episode Two at 1920×1200 resolution with the rest of the game’s image quality options at their highest possible settings.

To get a sense of the impact of the new render back-ends, compare the results for the Radeon HD 3870 X2 and the Radeon HD 4870. The two start out at about the same spot without antialiasing (the 1X level), with the 3870 X2 slightly ahead. However, as soon as we enable 2X AA, the 3870 X2’s performance drops off quickly, while the 4870’s frame rates step down more gracefully. The 4870 produces higher frame rates with 8X multisampling than the 3870 X2 does with just 2X AA.

I’ve shown performance results for Nvidia’s coverage sampled AA (CSAA) modes in the graph above, but presenting the results from the multitude of custom-filter AA (CFAA) modes AMD offers is more difficult, so I’ve put them into tables. First up is the Radeon HD 3870 X2, followed by the Radeon HD 4870.

Radeon HD 3870 X2 – Half-Life 2 Episode Two – AA scaling

Base MSAA   Box filter        Narrow tent       Wide tent         Edge detect
mode        samples / FPS     samples / FPS     samples / FPS     samples / FPS
1X          1 / 98.0          -                 -                 -
2X          2 / 66.2          4 / 65.5          6 / 62.7          -
4X          4 / 65.0          6 / 47.5          8 / 46.2          12 / 37.7
8X          8 / 59.1          12 / 26.9         16 / 25.5         24 / 28.1
Radeon HD 4870 – Half-Life 2 Episode Two – AA scaling

Base MSAA   Box filter        Narrow tent       Wide tent         Edge detect
mode        samples / FPS     samples / FPS     samples / FPS     samples / FPS
1X          1 / 96.3          -                 -                 -
2X          2 / 84.4          4 / 69.3          6 / 66.3          -
4X          4 / 79.8          6 / 52.5          8 / 51.3          12 / 39.5
8X          8 / 73.1          12 / 31.6         16 / 29.2         24 / 28.8

The thing that strikes me about these results is how similarly these two solutions scale when we get into the CFAA modes. The 4870 is quite a bit faster in the base MSAA modes with just a box filter, where the render back-ends take care of the MSAA resolve step. Once we get into shader-based resolve on both GPUs, though, the 4870 is only slightly quicker than the 3870 X2 in each CFAA mode. That means, practically speaking, that RV770-based cards will pay a relatively higher penalty for going from standard multisampled AA to the CFAA modes than R6xx-based ones do. You’re simply better off running a Radeon HD 4870 in 8X MSAA than you are using any custom filter. That’s not a problem, of course, just an artifact of the big performance improvements delivered by the RV770’s new render back-ends. Many folks will probably prefer to use 8X MSAA given the option, anyhow, since it doesn’t impose the subtle blurring effect that AMD’s custom tent filters do.
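
To put numbers on that observation, here’s a quick comparison of the two cards’ frame rates in matching modes, pulled straight from the tables above. This is just our arithmetic on the data already shown.

```python
# Relative speed of the Radeon HD 4870 versus the 3870 X2, using FPS values
# transcribed from the two AA scaling tables above.

hd4870   = {"4X box": 79.8, "8X box": 73.1, "4X narrow tent": 52.5,
            "4X wide tent": 51.3, "4X edge detect": 39.5}
hd3870x2 = {"4X box": 65.0, "8X box": 59.1, "4X narrow tent": 47.5,
            "4X wide tent": 46.2, "4X edge detect": 37.7}

for mode in hd4870:
    print(f"{mode}: {hd4870[mode] / hd3870x2[mode]:.2f}x")

# 4X box: 1.23x, 8X box: 1.24x, 4X narrow tent: 1.11x,
# 4X wide tent: 1.11x, 4X edge detect: 1.05x
```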

Incidentally, the RV770’s performance also scales much more gracefully to 8X MSAA than any GeForce does. The Radeon HD 4870 outperforms even the mighty GeForce GTX 280 with 8X multisampling, and the 4850 practically trounces the 9800 GTX. Believe it or not, I’m already getting viral marketing emails from amdguyintoronto@hotmail.com asking me to test more games with 8X AA. Jeez, these guys are connected.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

                          Intel X38 system                       Nvidia nForce 780i SLI system
Processor                 Core 2 Extreme QX9650 3.0GHz           Core 2 Extreme QX9650 3.0GHz
System bus                1333MHz (333MHz quad-pumped)           1333MHz (333MHz quad-pumped)
Motherboard               Gigabyte GA-X38-DQ6                    EVGA nForce 780i SLI
BIOS revision             F9a                                    P05p
North bridge              X38 MCH                                780i SLI SPP
South bridge              ICH9R                                  780i SLI MCP
Chipset drivers           INF update 8.3.1.1009                  ForceWare 15.17
                          Matrix Storage Manager 7.8
Memory size               4GB (4 DIMMs)                          4GB (4 DIMMs)
Memory type               2 x Corsair TWIN2X20488500C5D          2 x Corsair TWIN2X20488500C5D
                          DDR2 SDRAM at 800MHz                   DDR2 SDRAM at 800MHz
CAS latency (CL)          5                                      5
RAS to CAS delay (tRCD)   5                                      5
RAS precharge (tRP)       5                                      5
Cycle time (tRAS)         18                                     18
Command rate              2T                                     2T
Audio                     Integrated ICH9R/ALC889A               Integrated nForce 780i SLI MCP/ALC885
                          with RealTek 6.0.1.5618 drivers        with RealTek 6.0.1.5618 drivers
Hard drive                WD Caviar SE16 320GB SATA
OS                        Windows Vista Ultimate x64 Edition
OS updates                Service Pack 1, DirectX March 2008 update

Graphics cards tested:

Radeon HD 2900 XT 512MB PCIe with Catalyst 8.5 drivers
Asus Radeon HD 3870 512MB PCIe with Catalyst 8.5 drivers
Radeon HD 3870 X2 1GB PCIe with Catalyst 8.5 drivers
Radeon HD 4850 512MB PCIe with Catalyst 8.501.1-080612a-064906E-ATI drivers
Dual Radeon HD 4850 512MB PCIe with Catalyst 8.501.1-080612a-064906E-ATI drivers
Radeon HD 4870 512MB PCIe with Catalyst 8.501.1-080612a-064906E-ATI drivers
Dual Radeon HD 4870 512MB PCIe with Catalyst 8.501.1-080612a-064906E-ATI drivers
MSI GeForce 8800 GTX 768MB PCIe with ForceWare 175.16 drivers
XFX GeForce 9800 GTX 512MB PCIe with ForceWare 175.16 drivers
XFX GeForce 9800 GTX XXX 512MB PCIe with ForceWare 177.39 drivers
Dual XFX GeForce 9800 GTX XXX 512MB PCIe with ForceWare 177.39 drivers
GeForce 9800 GTX+ 512MB PCIe with ForceWare 177.39 drivers
XFX GeForce 9800 GX2 1GB PCIe with ForceWare 175.16 drivers
GeForce GTX 260 896MB PCIe with ForceWare 177.34 drivers
GeForce GTX 280 1GB PCIe with ForceWare 177.34 drivers

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMM vendors.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs. Thanks to OCZ for providing these units for our use in testing.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Call of Duty 4: Modern Warfare

We tested Call of Duty 4 by recording a custom demo of a multiplayer gaming session and playing it back using the game’s timedemo capability. Since these are high-end graphics configs we’re testing, we enabled 4X antialiasing and 16X anisotropic filtering and turned up the game’s texture and image quality settings to their limits.

We’ve chosen to test at 1680×1050, 1920×1200, and 2560×1600—resolutions of roughly two, three, and four megapixels—to see how performance scales.

Aaaaaand…. wow. The Radeon HD 4850 beats out both its like-priced competitor, the GeForce 9800 GTX, and the slightly more expensive XXX Edition, which is also serving as our proxy for the 9800 GTX+. Doubling up on cards only accentuates the gap between the 4850 and 9800 GTX XXX. Meanwhile, the Radeon HD 4870 just edges out the GeForce GTX 260 at 2560×1600 resolution. At 1920×1200, the 4870 actually manages to outrun both the GTX 260 and 280, but the big GeForce chips come into their own at four-megapixel-plus display resolutions. Trouble is, two 4850s in CrossFire pretty much obliterate a single GeForce GTX 280, regardless—and they cost considerably less.

Half-Life 2: Episode Two

We used a custom-recorded timedemo for this game, as well. We tested Episode Two with the in-game image quality options cranked, with 4X AA and 16X anisotropic filtering. HDR lighting and motion blur were both enabled.

Nvidia’s back in the game a little more here, as the 9800 GTX XXX Edition hangs right with the Radeon HD 4850, in both single-card and dual-GPU configurations. The GeForce is even faster at 2560×1600. However, the Radeon HD 4870’s performance has to be disconcerting for Nvidia; it’s quicker than the GTX 260 in all but the highest resolution, and even there, the 4870 is less than three frames per second behind its pricier rival.

Two 4870s in CrossFire, which also cost less than a GeForce GTX 280, are miles ahead of anything else we tested.

Enemy Territory: Quake Wars

We tested this game with 4X antialiasing and 16X anisotropic filtering enabled, along with “high” settings for all of the game’s quality options except “Shader level” which was set to “Ultra.” We left the diffuse, bump, and specular texture quality settings at their default levels, though. Shadow and smooth foliage were enabled, but soft particles were disabled. Again, we used a custom timedemo recorded for use in this review.

This one’s a clean sweep for AMD. The Radeon HD 4850 is faster than either variant of the GeForce 9800 GTX, and the 4870 pumps out over 60 frames per second at 2560×1600, outrunning the GeForce GTX 260.

Crysis

Rather than use a timedemo, I tested Crysis by playing the game and using FRAPS to record frame rates. Because this way of doing things can introduce a lot of variation from one run to the next, I tested each card in five 60-second gameplay sessions.

Also, I’ve chosen a new area for testing Crysis. This time, I’m on a hillside in the Recovery level having a firefight with six or seven of the bad guys. As before, I’ve tested at two different settings: once with the game’s “High” quality presets and again with its “Very high” ones.

The 4850 trips up a bit in Crysis, where it’s just a hair’s breadth slower than the 9800 GTX. CrossFire scaling looks to be rather disappointing, too, compared to SLI scaling. The 4870, though, comes out looking good yet again by virtue of having beaten up on the hundred-bucks-more-expensive GeForce GTX 260.

Assassin’s Creed

There has been some controversy surrounding the PC version of Assassin’s Creed, but I couldn’t resist testing it, in part because it’s such a gorgeous, well-produced game. Also, hey, I was curious to see how the performance picture looks for myself. The originally shipped version of this game can take advantage of the Radeon HD 3000- and 4000-series GPUs’ DirectX 10.1 capabilities to get a frame rate boost with antialiasing, and as you may have heard, Ubisoft chose to remove the DX10.1 path in an update to the game. I chose to test the game without this patch, leaving DX10.1 support intact.

I used our standard FRAPS procedure here, five sessions of 60 seconds each, while free-running across the rooftops in Damascus. All of the game’s quality options were maxed out, and I had to edit a config file manually in order to enable 4X AA at this resolution.

The RV770 show continues with this unscheduled detour into controversial DX10.1 territory.

Race Driver GRID

I tested this absolutely gorgeous-looking game with FRAPS, as well, and in order to keep things simple, I decided to capture frame rates over a single, longer session as I raced around the track. This approach has the advantage of letting me report second-by-second frame-rate results.

Yowza. The Radeon HD 4870 is nearly twice as fast as the 3870, which is good enough to put it at the very top of the single-GPU solutions. Two 4850 or 4870 cards seem to scale well in CrossFire, as well.

For what it’s worth, I tried re-testing the 3870 X2 with the new Catalyst 8.6 drivers to see whether they had a CrossFire profile for GRID, like the 4800 series drivers obviously do, but performance was the same. I also tried renaming the game executable, but that attempt seemed to run afoul of the game’s copy protection somehow. Oh well.

3DMark Vantage

And finally, we have 3DMark Vantage’s overall index. I’m pleased that today’s games can challenge the performance of a new graphics card on their own, so we don’t have to rely on a synthetic benchmark like 3DMark to make educated guesses about possible future usage models. However, I did collect some scores to see how the GPUs would fare, so here they are. Note that I used the “High” presets for the benchmark rather than “Extreme,” which is what everyone else seems to be using. Somehow, I thought frame rates in the fives were low enough.

Since both camps have released new drivers that promise big performance boosts for 3DMark Vantage, we tested with almost all new drivers here. For the GeForce 8800 GTX and 9800 GX2, we used ForceWare 175.19 drivers. For the other GeForces, we used the new 177.39 drivers, complete with PhysX support. And for the Radeon HD 3870 and 2900 XT, we tested with Catalyst 8.6. Since the 3870 X2 seemed to crash in 3DMark with Cat 8.6, we stuck with the 8.5 revision for it.

I suppose the final graph there is the most dramatic. That’s where Nvidia’s support for GPU-accelerated physics, via the PhysX API used by 3DMark’s “CPU” physics test, kicks in. Obviously, the GPU acceleration results in much higher scores than we see with CPU-only physics, which affects both the composite CPU score and the overall 3DMark score.

I’m certainly as impressed as anyone with Nvidia’s port of the PhysX API to its CUDA GPU-computing platform, but I’m not sure that’s, you know, entirely fair from a benchmarking point of view. 3DMark has become like the Cold War-era East German judge at the Olympics all of a sudden. The overall GPU score may be a better measure of these chips, and it puts the Radeon HD 4850 ahead of the GeForce 9800 GTX XXX.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Half-Life 2 Episode Two at 2560×1600 resolution, using the same settings we did for performance testing.

The power consumption of the two Radeon HD 4000-series cards at idle isn’t bad, but it is disappointing in light of what Nvidia has achieved with the GeForce GTX cards. The 4870, in particular, is perplexing, because GDDR5 memory is supposed to require less power. When running a game, the new Radeons look relatively better, with lower power draw than their closest competitors.

Note that those competitors include the GeForce 9800 GTX+, based on the 55nm shrink of the G92 GPU. At the same clock speeds as the 65nm XXX Edition, the GTX+-equipped system draws 11W less power at idle and 25W less under load.

Noise levels

We measured noise levels on our test systems, sitting on an open test bench, using an Extech model 407727 digital sound level meter. The meter was mounted on a tripod approximately 12″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured, including the stock Intel cooler we used to cool the CPU. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

I wasn’t able to reliably measure noise levels for most of these systems at idle. Our test systems keep getting quieter with the addition of new power supply units and new motherboards with passive cooling and the like, as do the video cards themselves. Our test rigs at idle are too close to the sensitivity floor for our sound level meter, so I only measured noise levels under load. Even then, I wasn’t able to get a good measurement for the GeForce 8800 GTX; its cooler is just too quiet.

There you have it. Not bad. However, I should warn you that we tested these noise levels on an open test bench, and the 4850 and 4870 were definitely not running their blowers at top speed. They’re quite a bit louder when they first spin up, for a split second, at boot time. When crammed into the confines of your own particular case, your mileage will probably vary. In fact, for the 4850, I’d almost guarantee it, for reasons you’ll see below.

GPU temperatures

Per your requests, I’ve added GPU temperature readings to our results. I captured these using AMD’s Catalyst Control Center and Nvidia’s nTune Monitor, so we’re basically relying on the cards to report their temperatures properly. In the case of multi-GPU configs, I only got one number out of CCC. I used the highest of the numbers from the Nvidia monitoring app. These temperatures were recorded while running the “rthdribl” demo in a window. Windowed apps only seem to use one GPU, so it’s possible the dual-GPU cards could get hotter with both GPUs in action. Hard to get a temperature reading if you can’t see the monitoring app, though.

The new Radeons achieve their relatively low noise levels by allowing the GPU to run at much higher temperatures than current GeForces or past Radeons. The 4850, in particular, seems to get ridiculously hot, not just in the monitoring app but on the card and cooler itself—well beyond the threshold of pain. This mofo will burn you.

I’m hopeful that board makers will find some solutions. Shortly before we went to press, we received a poorly documented and possibly incomplete set of files from Sapphire that may allow us to flash a new BIOS revision onto the 4850, and I believe their aim is to reduce temperatures. I kind of worry about what they’ll do to the noise levels, but perhaps we can test that. Longer term, one hopes we’ll see 4850 cards with much better coolers on them, perhaps with dual slots and a rear exhaust setup, like the 4870. That would be a huge improvement.

Conclusions

The RV770 GPU looks to be an unequivocal success on almost every front. In its most affordable form, the Radeon HD 4850 delivers higher performance overall than the GeForce 9800 GTX and redefines GPU value at the ever-popular $199 price point. Meanwhile, the RV770’s most potent form is even more impressive, in my view. Onboard the Radeon HD 4870, this GPU sets a new standard for architectural efficiency—in terms of performance per die area—due to two things: a broad-reaching rearchitecting and optimization of the R600 graphics core, and the astounding amount of bandwidth GDDR5 memory can transfer over a 256-bit interface. Both of these things seem to work every bit as well as advertised. In practical terms, what all of this means is that the Radeon HD 4870, a $299 product, competes closely with the GeForce GTX 260, a $399 card based on a chip twice the size.

I have to take issue with a couple of arguments I hear coming from both sides of the GPU power struggle, though. AMD decided a while back, after the R600 debacle, to stop building high-end GPUs as a cost-cutting measure and instead address the high end with multi-GPU solutions. They have since started talking about how the era of the large, “monolithic” GPU is over. I think that’s hogwash. In fact, I’d love to see an RV770-derived behemoth with 1600 SPs and 80 texture units on the horizon. Can you imagine? Big chips don’t suffer from the quirks of multi-GPU implementations, which never seem to have profiles for newly released games just when you’d want to be playing them, and building a big chip doesn’t necessarily preclude a company from building a mid-sized one. Yes, Nvidia still makes high-end GPUs like the GeForce GTX 280, but it makes mid-range chips, too.

One example of such a chip is the 55nm variant of the G92 that powers the GeForce 9800 GTX+. If Nvidia can deliver those as expected by mid-July and cut another 30 bucks off of the projected list price, they’ll have a very effective counter to the Radeon HD 4850, nearly equivalent in size, performance, and power consumption.

At the same time, Nvidia is trying to press its advantage on the GPU-compute front by investing loads of marketing time and effort into its CUDA platform, with particular emphasis on the potential value of its GPU-accelerated PhysX API to gamers. I can see the vision there, but look: hardware-accelerated physics has been just around the corner for longer than I care to remember, but it’s never really happened. Perhaps Nvidia will succeed where Ageia alone didn’t, but I wouldn’t base my GPU buying decision on it. If PhysX-based games really do arrive someday, I doubt they’ll make much of an impact during the lifespan of one of today’s graphics cards.

On top of that, AMD has made its own considerable investment in the realm of heterogeneous computing—like, for instance, buying ATI, a little transaction you may have heard about, along with some intriguing code names like Fusion and Torrenza. We got a refresher on AMD’s plans in our recent talk with Patti Harrell, and they’re remarkably similar to what Nvidia is doing. In fact, AMD was first by a mile with a client for Folding@Home, and Adobe showed the same Photoshop demo at the press event for RV770 that it did at Nvidia’s GT200 expo—the program uses a graphics API, not CUDA. Nvidia may have more to invest in marketing and building a software ecosystem around CUDA, but cross-GPU standards are what will allow GPU computing to succeed. When that happens, AMD will surely be there, too.

Comments closed
    • Damage
    • 11 years ago

    FYI, I’ve just updated the theoretical GPU capacity table in this review to correct the fill rate numbers for the GeForce GTX 260. The revised numbers are slightly lower. The performance results remain unaffected.

    • ish718
    • 11 years ago

    274w load total power consumption for the whole system using a HD4870.
    448w load total power consumption for the whole system using HD4870 Crossfire.
    448w – 274w =174w

    So the power consumption of a single HD4870 @load is around 174w
    174w/12v= 14.5amps
    So it draws somewhere around 14.5 amps at load

    Of course that isn’t 100% accurate but I’m sure its close…
    AMD rates the HD4870 @ 160W TDP

    • Voldenuit
    • 11 years ago

    BTW Scott, did you ever get around to testing power consumption at 160/500 core/mem idle?

      • PrincipalSkinner
      • 11 years ago

      Are those default clocks at idle or is that downclocked a bit?

        • Voldenuit
        • 11 years ago

        Those are the supposed new idle profiles from several manufacturers. The first batch of cards shipped with 500/750 core/mem idle clocks.

          • PrincipalSkinner
          • 11 years ago

          That, then, changes things quite a bit. I don’t want my cards at idle using ~70W. 4870 now does.

            • Voldenuit
            • 11 years ago

            Then just flash the BIOS to the new version, or if you’re feeling more adventurous, edit the BIOS yourself.

            • sigher
            • 11 years ago

            It’s tricky to flash the BIOS, old flash utilities don’t work well and many tweaked BIOS’s are for the 4870 but they tell people to install them on the 4850 but that goes bad because the 4870 has GDDR5 not 3.
            Also I don’t think that the 3D clock/2D clock speed is the ‘advanced power management’ ATI spoke about, since that’s suppose to be new and hardware and complex, and the 3D/2D thing is just a software trick that’s been around for ages.
            It seems to me the temp thing is a fan profile issue, the fan on the (old BIOS) 4850 doesn’t spin up until the card reaches 70C and when the card runs at a spicy 88C the fan is at only 46%, but since they all work fine I guess ATI decided having a hot card is better than people bitching about fan noise.

Oh incidentally I’d like to point out that 3rd party coolers are reported to drop the temps by 30+ degrees Celsius!

    • ssidbroadcast
    • 11 years ago

    Dang guys, we’re pushing the 300-comment barrier.

    • Umbragen
    • 11 years ago

Derek Perez must be getting old, squishy brained. I can’t remember the last time a competitor was able to command mindshare for this long without a response from NVidia’s chief FUDmaster. Back in the day, the PR shitstorm would be knee-deep by now. The 9800 GTX+ was a half-hearted attempt at best; it hasn’t hit the shelves yet, and by the time it does it will need a price reduction. Could it be that NVidia really was caught off guard?

      • Meadows
      • 11 years ago

      Of course it could. Being unmatched for years (!) in the IT industry inevitably means loosening attention.

    • Fighterpilot
    • 11 years ago

    Thats a bit of a one eyed point of view there Meadows LOL
If anyone is known for cheating on 3D benchmarks…the name NVidia springs to mind first with most people, I’d wager.

    Quote wikipedia:
    Nvidia also began to clandestinely replace pixel shader code in software with hand-coded optimized versions with lower accuracy, through detecting what program was being run. These “tweaks” were especially noticed in benchmark software from Futuremark. In 3DMark03 it was found that Nvidia had gone to extremes to limit the complexity of the scenes through driver shader changeouts and aggressive hacks that prevented parts of the scene from even rendering at all.[10] This artificially boosted the scores the FX series received. Side by side analysis of screenshots in games and 3DMark03 showed noticeable differences between what a Radeon 9800/9700 displayed and what the FX series was doing.[10] Nvidia also publicly attacked the usefulness of these programs and the techniques used within them in order to undermine their influence upon consumers. It should however be noted that ATI also created a software profile for 3DMark03.[11] In fact, this is also a frequent occurrence with other software, such as games, in order to work around bugs and performance quirks. With regards to 3DMark, Futuremark began updates to their software and screening driver releases for these optimizations.

    Both Nvidia and ATI have optimized drivers for tests like this historically. However, Nvidia went to a new extreme with the FX series. Both companies optimize their drivers for specific applications even today (2008), but a tight rein and watch is kept on the results of these optimizations by a now more educated and aware user community

      • Meadows
      • 11 years ago

      Read the f-ing reply I made. You’re not even on topic as we were talking about image quality, where ATI might have just started to lack. On the other hand, I didn’t notice anything with my 8800 GT ever since I got it, and I’ve been updating the drivers regularly.

        • marvelous
        • 11 years ago

What proof do you have? ATI always has better contrast but that’s up to the eye of the beholder. Okay we get it, you love your 8800gt.

          • Meadows
          • 11 years ago

          Actually, contrast is higher on nVidia cards. In fact, it’s sometimes too pronounced while playing a video.

    • Voldenuit
    • 11 years ago

    It does seem to me as if shadows in Crysis are not as sharp on my 4850 as I remembered them from playing on my 8800GTS (G80) last year (everything on High).

    Shadows in Mass Effect also look splotchy on the 4850, but I am unable to compare with the 8800 on this game, so for all I know, that could be how they’re supposed to look.

    I do know that nv has always had the edge with hardware shadow support with stencil shadows.

    Can anyone confirm if there is any IQ difference in shadows between red and green?

      • Chaos-Storm
      • 11 years ago

      The shadows don’t look right on my 8800gts 320 either.

        • JustAnEngineer
        • 11 years ago

        Mass Effect’s shadows don’t look right on a Radeon HD3870X2 here.

          • DrDillyBar
          • 11 years ago

          I generally turn shadows down to begin with

            • sigher
            • 11 years ago

            On my x1900, shadow issues started to appear in Supreme Commander a few driver updates back; it’s a bug ATI themselves (re-?)introduced, it seems.
            Not that nvidia doesn’t have such bugs too, of course, but to have things working and then go sour after an update, for months, seems a bit strange.
            The issues got better, then worse, then much better over the last few drivers, but there are still noticeable issues if you specifically look at objects with shadows. In normal gameplay it’s doable now, although I do expect better from the driver coders.
            Oh, and the same issues are present on the latest cards, so it’s not because the x1900 is getting dated, as I thought might be part of it initially.

    • Bensam123
    • 11 years ago

    “We may revisit the image quality issue again before long, though. I’d like to look more closely at the impact of those trilinear optimizations in motion rather than in screenshots or test patterns.” !!!

    I don’t know if you read my post in the last news snip about AMD’s newly released drivers bringing performance updates, but I believe AMD is cutting corners later on, after people do all the image quality tests, as things seem to look worse and worse with each patch.

    Similarly, I own an HD 3870, and if you play WoW, look at the water in Stormwind or the lava in Ironforge under the bridge: there is this jagged, lightning-bolt-like pattern where the textures are just plain missing, and all the other textured water/lava looks to be of very low quality despite having all the settings on high. Turning AA on/off doesn’t change this.

    “In fact, this is the first teraflop-capable GPU, with a theoretical peak of a cool one teraflops in the Radeon HD 4850 and up to 1.2 teraflops in the Radeon HD 4870. Nvidia’s much larger GeForce GTX 280 falls just shy of the teraflop mark.”

    Wow, way to rub that one in. I’m sure that has someone at marketing cursing.

    “Meanwhile, the chip’s internal bandwidth is twice that of the previous generation—a provision necessary, Hartog said, to keep pace with the amount of data coming in from GDDR5 memory.”

    What, manufacturers actually increasing their chips’ performance in order to utilize faster memory, who would’ve thought?!?!

      • Krogoth
      • 11 years ago

      Son, just take off those green-shaded glasses and accept the cold, hard truth.

      Spreading around nonsensical FUD just makes you look foolish.

      Do not start with the driver and application-specific optimization BS; both companies are heavily guilty of it.

        • Meadows
        • 11 years ago

        You’ve just said that spreading around *[

        • Bensam123
        • 11 years ago

        WTF… What did anything I say have to do with supporting either ATI or Nvidia? I was simply mentioning issues I’m having with the card, agreeing that image quality should be revisited (for both companies) and commenting on the articles…

        Stop trolling.

        FYI the only cards I’ve bought are ATI/AMD over the years, although I’ve recommended NVidia to people when better or appropriate.

          • Meadows
          • 11 years ago

          You shouldn’t have even replied. Krogoth has an “[insert colour] -shaded glasses” fetish and will force it upon anyone he encounters.

          And he’s not even right. I do believe AMD are cutting corners with the ATI drivers while nVidia are not likely to do that so much. One example is the GeForce 177.19 release – seriously people, it’s a piece of garbage, don’t use it. Use 177.16 or get a modded .inf for 177.41 (latest beta) and enjoy.

          • Krogoth
          • 11 years ago

          I agree, it does seem strange that Damage has not pulled out the image quality test suite for the latest batch of cards.

          IIRC, wasn’t Damage’s rationale in the X2900XT review that DX10 GPUs from both parties have practically identical image quality?

          Sorry to get on your case. It seems there are tons of hardcore fanboys flaming from both camps who forget or ignore the fact that both companies have been caught red-handed with application-specific optimization cheats.

            • Bensam123
            • 11 years ago

            Not all of us are flaming fan boys. D:

            Yes, I agree both companies have cheated, and that’s why some simple image quality tests should be designed and done on an every-other-driver-release basis or something like that. Maybe an Anti-Image-Cheat-DB (or AICDB, or some other really long acronym that actually makes a word) where image quality could be compared over time.

            When they nuke the image quality, I strongly believe it’s card specific. I know from the 8xxx series and the 3xxx series forward they redid the image quality, but when I go on my x1800xt at a friend’s house it looks like ass. Not to mention going on my mobile Radeon on my laptop. It doesn’t matter if the image quality is set to ‘high quality’ in the control panel and all the settings are on high in game; it still looks like I’m wearing blurry glasses or the games are ancient.

            This is just speaking from personal experience. I’m sure it’s the same on the other side, but you /[

            • poulpy
            • 11 years ago

            I for one am all for application-specific optimizations as long as there’s no IQ degradation. It’s like tuning your bike for the track you’re going to race; it makes perfect sense.
            To catch any downside of the enhancements, it would be nice to have some sort of automated IQ testing against “gold” reference images from the games. Haven’t used it in ages, but didn’t 3DMark use to sport something like that?
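
            Riffing on that idea, here’s a minimal sketch of what such an automated check could look like, assuming Pillow and NumPy are installed; the file names and the threshold are made up for illustration:

            from PIL import Image
            import numpy as np

            def mean_abs_diff(reference_path, candidate_path):
                """Mean absolute per-pixel difference (0-255 scale) between two screenshots."""
                ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.int16)
                cand = np.asarray(Image.open(candidate_path).convert("RGB"), dtype=np.int16)
                if ref.shape != cand.shape:
                    raise ValueError("screenshots must share the same resolution")
                return float(np.abs(ref - cand).mean())

            # Hypothetical capture from the new driver vs. the stored "gold" frame.
            diff = mean_abs_diff("crysis_gold.png", "crysis_new_driver.png")
            print(f"mean per-channel difference: {diff:.2f}")
            if diff > 2.0:  # arbitrary threshold for "worth a human look"
                print("possible IQ regression - compare the frames side by side")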

      • sigher
      • 11 years ago

      Regarding that lava and water issue, did you experiment with setting Catalyst AI to disabled? Because that AI is specifically meant to ‘cut corners’ to improve speed in games.
      Not that I didn’t notice a decline in quality too, but then an upsurge again, so I’m thinking they are just experimenting with new code to fit new GPUs and it fails sometimes and needs new fixes in newer drivers, although it would be nice if they tested that stuff internally.

    • sigher
    • 11 years ago

    It’s funny how they sometimes mention that they fear competitors getting info on cards too early, but the one thing I notice is that the great anisotropic filtering nvidia added 2 or 3 generations back still hasn’t come to the ATI camp; they still have that severely angle-dependent stuff, not that it ever bothered me in games though.
    All in all, ATI really surprised me, as did nvidia: nvidia for not supporting DX10.1 and not moving forward as much as expected, and ATI for moving way further forward for a lower price than expected.

      • Thresher
      • 11 years ago

      I wonder how much UMAP had to do with that. I hate to see nVidia going down the same path as the consoles, Apple, etc. with this. UMAP should be illegal, it’s just another form of price fixing.

      • CampinCarl
      • 11 years ago

      I wish XFX and eVGA would get in on it. Those two companies are amazing.

    • Asomatous
    • 11 years ago

    l[

    • ew
    • 11 years ago

    Does anyone know what temperature will actually damage the card? If these relatively high temperatures don’t do any damage then I don’t see what all the fuss is about.

    And I don’t care about statements like “higher temps shorten lifetime” unless you have some quantifiable evidence to go with it.

    • matnath1
    • 11 years ago

    I just installed the HD 4850 into my Dell Vostro 200 tower w/ 500W PSU, 1.5 gigs of RAM, and a Core 2 Duo E4600 @ 2.4GHz, ran the in-game flyby GPU benchmark at 1680 x 1050, no AA, with everything set to High, and it hit 34.5 frames per second, a solid 10 frames per second higher than the 8800 GT setup I ran earlier! (Win XP, DX9)

    I just ran this again with 4x AA and got 24.45 FPS! This is considered playable by most. Finally the Crysis card is here!

    This card rocks!

    Dell Vostro, E4600 @ stock 2.4GHz, HD 4850 (VisionTek @ stock, bought for $150 @ Best Buy last Friday), 1.5GB RAM @ 667, 500W PSU, 22″ Dell LCD

      • Meadows
      • 11 years ago

      Not really. “The Crysis card” does the same in Dx10 with Vista.

    • shank15217
    • 11 years ago

    You know, I read a few comments in this thread that presume reviewers have a bias toward one company or another, but really it’s hard not to get excited when great products come out. I don’t think Damage’s review was tilted in any way, but he’s definitely excited; it’s news like this that brings readers to the site. Just look at the number of comments in this thread, 200+ in 2 days. Most of us have a soft spot for AMD because lately there haven’t been many players in mainstream computing. Matrox died a long time ago, VIA is hardly in the picture, S3 is trying to cash in on the lucrative sub-$50 graphics category, etc., etc., so it always comes back to Intel, AMD (ATi) and Nvidia. I for one am grateful we still have a choice.

      • swaaye
      • 11 years ago

      Yeah, I too find the bias claims strange, honestly. Strange and tiring.

      I read many of the major sites and none of them have ever come across as pushing some evil agenda. Usually things are quite nicely objective, even if the reviewer isn’t super thorough or very technically versed. These reviewers are just folks who try to understand the market landscape, interpret marketing propaganda, and attempt to judge performance and value. Everyone has a different viewpoint and no one is going to come to an identical conclusion. It doesn’t necessarily mean they are pushing something for some reason, it means that their perspective on things is simply different or they came to the wrong conclusion.

      IMO, the people out there screaming of biases are actually those with agendas and biases. Fandom knee-jerk stuff.

    • donkeycrock
    • 11 years ago

    Hurray for the underdog!

    1. Thank you for doing HL2 benchmarks. Many sites don’t bench that game any more, but it’s still the most relevant game because it’s the most played, in one mod form or another.

    2. Can you please bring back UT3 benchies? Not only does it support physics, but it is also the engine in many upcoming titles, plus it is its own engine.

    3. Thank you for including the 2900XT; I still have one.

    4. Would have liked to see 8800 GT SLI in the benchmarks.

    5. Can you CrossFire a 4850 with a 4870? If so, can you run the benchies?

    Btw, I think you had a better review than Anand this time.

    Can’t wait for Nehalem.

    • matnath1
    • 11 years ago

    I am not trying to be a nitpicking pain in the AZZZ, but why didn’t you guys show Crysis at 1680 x 1050… AND… I would have loved to see the 8800 GT in there… Otherwise, I already own the HD 4850 thanks to Scott’s mule-kickin’ analogy last week!

      • moritzgedig
      • 11 years ago

      Apparently the available hardware is too powerful for 1280×1024;
      the cards would not perform at their maximum.
      I don’t care for the 1680 x 1050 resolution, for I can’t / won’t afford a monitor or graphics card that supports it.
      The limiting factor will be shader power, not filter and fill rates,
      but playing at 1280×1024 with 16xAF and 8xAA will tap into this power.
      Thus I would have liked to see them tested with high AA and AF settings.

    • fpsduck
    • 11 years ago

    No one gives a yawn to ATI, err, AMD this time. 😉

    • Tarx
    • 11 years ago

    Now all ATI needs are great companies to supply cards & warranty & support & user-friendliness, etc., such as what Nvidia has with eVGA and XFX.
    i.e. Imagine eVGA or XFX putting out ATI HD 4850/4870 cards; then you could just change the GPU’s cooler (even WC) to get good temps on top of nice OCs… and the warranty would still apply… and it would remain under warranty for a long time, not just 1 year! (3 years I think is all that is really needed, but longer is always nice.)

      • Deli
      • 11 years ago

      That’s why I buy from Futureshop/Best Buy: buy their 2-year warranty for a mere $40 (when it’s on sale or on par with normal prices), then kill the card and upgrade, pay $40 for another warranty, kill it, another upgrade.

        • MadManOriginal
        • 11 years ago

        Intentionally killing it isn’t really within the bounds of any warranty afaik, even special additionally purchased ones. Of course you can get away with it by lying but that doesn’t make it covered.

        • VILLAIN_xx
        • 11 years ago

        What’s your method of killing it anyway?

          • Deli
          • 11 years ago

          How to kill it? I actually ran into this by accident the first time. I dislodged the HSF on my 9800pro while cleaning it (nudged it), but it seemed to be flat afterwards. Then I put it back in my machine and artifacts were everywhere. So I thought, hey, just shut down the fan and run 3DMark.

      • Kent_dieGo
      • 11 years ago

      VisionTek has a lifetime warranty and USA RMA service. Their cards are priced the same as the poor-warranty, no-RMA, return-to-seller cards.

      • Samlind
      • 11 years ago

      Asus and Gigabyte offer 3 year warranties on graphics cards. Including ATI based models.

    • Vaughn
    • 11 years ago

    Hmm, you might be correct. It’s a great cooler; I’m running it passively right now and didn’t bother installing the turbo modules since I have enough case fans.

    With my 3870 @ 825/1200 + S1… idle at like 36C and load around 57C.

    Based on the power figures in the review I should see slightly higher load temps with a 4870 + S1.
    I would probably have to install the modules for overclocking though.

    Good to see AMD back in the fight.

    • pixel_junkie
    • 11 years ago

    Thanks for another great review Scott. Just out of curiosity, are the Crossfire rigs hitting a CPU bottleneck on the two lower resolutions in COD4?

    • indeego
    • 11 years ago

    Man, lotta nerd angst in here. Reminds me a lot of the bumfights arguing about box tensile strength versus comfort I used to observe down under the docks.

    • MadManOriginal
    • 11 years ago

    I want to know if there will be a 4850 with GDDR5 or not. That would be the perfect price/performance, and with aftermarket cooling… 🙂

      • A_Pickle
      • 11 years ago

      For serious. I’d like to see a nice Arctic Cooling cooler for the 4850…

      …does it have the same cooler mount as the 3870…?

        • Vaughn
        • 11 years ago

        That is the question I have on my mind: can I take the S1 Rev 2 currently sitting on my 3870 and put it on a 4870? If I can do that, I don’t really care about the current temps. I never keep stock coolers; I always replace them.

        This card looks awesome and I’m very tempted to buy it. I usually don’t upgrade until the new card is twice as fast as my current one. This card certainly is, and I have $300 burning in my pocket right now!

          • d0g_p00p
          • 11 years ago

          The mounting holes are the same between the 3870 and the 4870. It *should* work.

      • ChronoReverse
      • 11 years ago

      Considering the chip is the same except for clock, a 4870 is practically a 4850 with GDDR5…

        • Usacomp2k3
        • 11 years ago

        He’s trying to save the $100 though.

          • Meadows
          • 11 years ago

          He probably won’t after the card is fitted with GDDR5 and a chunky aftermarket cooler to match.

      • Bensam123
      • 11 years ago

      …Or testing these cards with third party coolers on the site….!!! 😀

      Heatsink reviews are rather bland, but if you combine them with graphics card reviews, extending the noise/temperature measurements section, I think you might have a winner.

    • ish718
    • 11 years ago

    OK, it’s safe to say that ATI just pwned Nvidia (price/performance-wise).
    C’mon, HD 4870 for $300 and GTX 280 for $650.

    I wonder what Nvidia’s next move is; a die shrink for GT200 is a must lol.
    I’m pretty sure they’re working hard on the GTX 3xx series already after seeing what the HD 4800 series can do.

    • l33t-g4m3r
    • 11 years ago

    Not to be too nit-picky, but several things in this review really bothered me.
    I didn’t see a single mention of future Havok support, while PhysX practically got its own page.
    I guess it doesn’t matter since physics support isn’t mainstream yet.

    And what’s with 3DMark? I thought most review sites were staying away from it.
    Also, the game benchmarks used the 8.5 drivers when 8.6 is out.

    Honestly, I think Anand’s review was better. Less tainted.
    This review gave me the impression of having an nvidia undertone, in an ATI review.
    Seriously, a whole page on PhysX, and you don’t mention Havok once. w/e.

    Other than that, I thought the rest of the review was good.

    • SnowboardingTobi
    • 11 years ago

    Umm… page 3… first html link:
    “…street prices seem to jibe with that”

    uhh… I think you mean “jive”

    • Staypuft
    • 11 years ago

    Man, this has been one exciting year. We all have Nehalem to look forward to, and if AMD can get Shanghai right then boy are they back in business. nVidia can still take the fight to AMD, though. All they really need to do is die-shrink the 200s so they can lower prices, and slap some GDDR5 on there. Then I think those chips can be all they were truly meant to be.

    Also, I’ve been reading online that reseating the HSF on the 4850 with some new paste does wonders for it. Anyone want to confirm that?

      • sigher
      • 11 years ago

      Well, it has never failed to improve things by a few degrees with any graphics card before, so I’m guessing it’ll work for the 4850 too; my guess is those stories will pan out.
      Mind you, I’m expecting third-party coolers soon too, so I wonder if it’s worth the bother of messing with the card’s cooler already just for the paste; it’s hot but working, after all, and if you hear about a good 3rd-party cooler you might opt for that instead and do the paste thing at that point.

      As for GDDR5 on the G200: since it doesn’t even support DX10.1, I’m starting to wonder whether nvidia was even forward-looking enough to support GDDR5 on it. Perhaps they also failed in that department? I don’t recall reading details on that, so someone else would have to fill us in.

        • Meadows
        • 11 years ago

        And what’s the correlation between Dx10.1 and GDDR5?

          • sigher
          • 11 years ago

          I made that deliberately very clear trying to avoid such a question, but I’ll try again:
          You need to prepare a chip for future developments when you start to design it, so you would expect the design team to enable expected features like DX10.1 early, so they fit in the design without needing to completely redevelop the thing. Likewise, you need to look early at the future development of RAM when designing the GPU, to avoid having to re-design a lot of it for such future RAM. Now, since they seemingly did NOT incorporate DX10.1 at a point when it was already defined, it is a mistake on their part not to look at the future; and if they made such a mistake, they might also not have looked at the future of RAM and designed the chip to interface well with GDDR5, which you have to do early due to GDDR5’s specific capabilities.
          The link is the design team’s effort at looking forward to future developments (future at the time they started to design; by now they’re current).

          As an example: it was a long time ago now that Qimonda (the RAM supplier) announced they were skipping GDDR4 and going straight to GDDR5 development, because they thought the industry would move to it soon due to the suitability of its capabilities for expected future chips, and it seems they had a point. Now, that’s looking forward.

    • SubSeven
    • 11 years ago

    In all honesty, this was a superb review of a very anticipated product. Explanations were great, humor helped to spice things up here and there, and the charts/graphs were clear, crisp and very relevant. Great job on the color choices! So first order of business is a much deserved thank you, and again, good job.

    Secondly, and I know many others have inquired about this directly or indirectly, would it be possible to see the cards on an AMD-based system? I am really curious to see how things play out on the so-called Spider setup. I have considered the Spider setup several times but have yet to find something concrete to support doing so. Does anyone here currently run one who can shed some light on the pluses and minuses?

    • Hattig
    • 11 years ago

    Good review.

    The point of 3DMark using PhysX in the CPU tests is to test the CPU, so nVidia’s optimisation (which they are entitled to do) shouldn’t be enabled by 3DMark. There should be a separate physics test/score, however, where hardware PhysX or Havok is used if present.

    So right now I don’t think that 3DMark scores (CPU and overall) are valid for nVidia products.

    AMD have hit a home run with this chip though, congratulations are in order. With the CrossFireX Sideport on chip we could have far better dual-GPU solutions in the future, possibly not even requiring specific profiles to be present for games. It remains to be seen how this will be used though, but I’m looking forward to seeing such a card in the future.

    nVidia’s huge die will be costing them in yield and die price. Twice the size actually could mean 3x to 5x the cost to manufacture, even if they can recover dies for the 260, and possibly a future 240. 55nm will help a bit, but it is going to be hard for nVidia to drop their prices.
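
    To put rough numbers on that yield argument, here is a back-of-the-envelope sketch using the simple Poisson yield model (yield = exp(-area x defect density)). The die areas are only approximate, the defect density and wafer cost are illustrative assumptions, and it ignores salvaging partial dies as a GTX 260:

    import math

    def cost_per_good_die(die_area_cm2, defects_per_cm2, wafer_cost_usd,
                          wafer_area_cm2=706.9):  # ~300 mm wafer, edge losses ignored
        dies_per_wafer = wafer_area_cm2 / die_area_cm2
        yield_fraction = math.exp(-die_area_cm2 * defects_per_cm2)  # Poisson yield model
        return wafer_cost_usd / (dies_per_wafer * yield_fraction)

    small = cost_per_good_die(2.56, 0.2, 5000)  # roughly RV770-sized (~256 mm^2), assumed D0 and wafer cost
    large = cost_per_good_die(5.76, 0.2, 5000)  # roughly GT200-sized (~576 mm^2)
    print(f"the larger die costs about {large / small:.1f}x as much per good die")

    With those assumptions the ratio lands around 4x, which is in the same ballpark as the 3x-to-5x figure above.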

      • Meadows
      • 11 years ago

      Wrong. The CPU test takes PPUs into account as well, which you can disable with a checkbox if you’re that picky. With the latest beta/leaked drivers, nVidia videocards can masquerade as a PPU. There’s nothing rule-breaking in that.

      Move along.

        • cygnus1
        • 11 years ago

        That may be true, but that’s pretty F-ed up, just wrong. The test is intended to focus on CPU performance; there’s no stressful graphics processing going on while the test runs, so it’s useless the way it was run.

          • Meadows
          • 11 years ago

          Look (both of you who replied to me), complain to Futuremark then. They intended PPUs to count in the physics CPU test. It’s working as intended.

          There’s an option to disable and separate PPUs. What more do you want? Seriously, it’s tiring to keep explaining this.

          Check that checkbox and go to sleep in peace. “Disable PPU” isn’t an option for no reason.

        • Hattig
        • 11 years ago

        That’s the entire problem, the CPU tests are moved to the GPU by nVidia’s drivers when the 3DMark rules say you can’t do that. I think that PPU tests should be a separate category in the application with a (weighted) effect on the overall score. But when the physics are forming part of the CPU evaluation, they should run on the CPU. Reviewers will have to be careful here.

    • herothezero
    • 11 years ago

    As per usual, another well-written review from the TR crew. I don’t really bother going anywhere else these days (save occasionally Anand’s site), not just due to time considerations but because other review sites don’t seem to have mastered the English language the way the TR crew has.

      • indeego
      • 11 years ago

      And TR has *interesting* going for it, while Anand isn’t very interesting/humorous. Anand does have some nice preview/insider articles. I was kicked off their forums once for asking a question, so I rarely visit them much.

        • jobodaho
        • 11 years ago

        How dare you ask a question!

    • gerryg
    • 11 years ago

    I was hoping the idle power use would be lower. Looks like that’s the 4800 series’ only weakness. Now that they’ve figured out how to beat Nvidia on performance, hopefully they’ll go in and start optimizing the power usage and the CrossFire issues. The two go together, especially with multi-GPU cards, where you would want to power down at idle or at low load (an older game, for example, where you don’t need 150 fps). I’m assuming they’ll be tackling some of this as part of the new CPU/GPU combos, too, since they’ll probably want to do CrossFire between processor, chipset, and maybe an add-in card as well, and not have all of them powered up all the time.

      • BlackStar
      • 11 years ago

      It looks like Powerplay is not enabled in current drivers, which means idle power consumption can go quite a bit lower. No need to run that memory at 3.6GHz all the time. 🙂

      Edit: I actually downclock my X1950Pro manually when running older games – does wonders for heat and noise output.

        • swaaye
        • 11 years ago

        I’ve seen this said a few times, but some reviews mention that the core is clocking down already. So at least part of Powerplay is operational. There is a lot more than that to Powerplay though, so maybe there is hope. I certainly hope this isn’t the best it can do!

        Seeing a smaller GPU consume a lot more power at idle (RV770 vs. GT200) isn’t very impressive at all. The charts here show that this so-called svelte little GPU consumes as much as my 8800GTX. I was really hoping to see RV770 at least somewhat mimic RV670’s idle power.

          • Damage
          • 11 years ago

          Yeah, the cards do appear to clock down at idle, and AMD has said nothing about PowerPlay not being enabled yet. To the contrary, they talked it up as a feature.

            • swaaye
            • 11 years ago

            It is somewhat suspicious. But I have to change what I said a bit, in that 4850 is pretty close to 3870 which is good considering the increase in complexity while on the same process as RV670. This is somewhat deflated when you see DAAMIT claim that their new Powerplay is the best power management under the sun though.

            4870, on the other hand, really takes it up a notch. It looks like it could be running significantly higher voltage and/or having leakage issues maybe? Or, yet another new GDDR type isn’t all that frugal on the juice after all. Probably all of the above added together, I imagine.

            Relatively though, I think GT200’s idle power use is truly amazing and load power acceptable considering its characteristics.

            I may have to see if I can figure out what my 3850’s power use is prior to getting into Windows, before Powerplay kicks in. It definitely consumes a bunch more than when it’s sitting at the desktop. I’ve seen the numbers before when messing around while monitoring power use when tweaking but don’t recall exact numbers. I suspect it’s similar to the ~20W difference seen between 3870 and 4850.

    • glynor
    • 11 years ago

    Very, very nice review. Well done as always, Scott!

    • shank15217
    • 11 years ago

    This reminds me of the Radeon 1800 to 1900 series transition. Second generation vastly outperforming the first.

      • SPOOFE
      • 11 years ago

      One would have assumed the 2xxx – 3xxx transition woulda done that.

        • flip-mode
        • 11 years ago

        Why?

          • SPOOFE
          • 11 years ago

          Because what I described is more analogous? Duh much?

            • flip-mode
            • 11 years ago

            If you’re dumb enough to think the model on the box means something in and of itself. RV670 was primarily a die shrink.

            I suppose you think the 9800GTX should be double the 8800GTS?

            • SPOOFE
            • 11 years ago

            So you’re telling me the 3xxx series wasn’t a marked improvement over the 2xxx series? Or are you pulling Flip-mode’s Infamous Autistic Linguistics again?

            • flip-mode
            • 11 years ago

            Disregarding your penchant for hysterics, the HD387x improved in a few ways over the 2900XT, but not in performance. Basically, from what I remember, the point of the 387x was to make the card cheaper to produce, fix the UVD, and reduce the power consumption. Go back and read the review if you want; that’s the way I remember it.

            • DrDillyBar
            • 11 years ago

            agreed; well close…so yes.

            • eitje
            • 11 years ago

            you shouldn’t make fun of autistic people.

            • ish718
            • 11 years ago

            It was more than a die shrink; ATI also did some tweaking. That’s why the HD 3870 can outperform the 2900XT in most games even with half the bandwidth.

            • Deli
            • 11 years ago

            The 3870 and 2900XT were roughly equal in performance. The 2900XT can beat the 3870 at times too, but they are tied. The 3870 had every other advantage, however, with lower temps, lower power requirements and UVD.

        • shank15217
        • 11 years ago

        The 3800 was a performance die shrink; the 1800-to-1900 transition actually made the Radeons vastly superior to the 78xx-series Nvidia cards in modern games.

          • flip-mode
          • 11 years ago

          Huh? What is a “performance die shrink”? Heck, look through this very review – performance is slightly better on the 3870 but we’re talking slightly. If that is the result of a “performance shrink” then I’d say it largely failed.

            • shank15217
            • 11 years ago

            I guess I see performance as performance per watt. However sometimes die shrinks don’t increase performance/watt and so it falls into the category of “money saving die shrink”. In today’s gpu world power is a big concern, the less power your gpu uses the more gpus you can pile into your system. I think this is the way AMD is seeing the picture as well because they keep emphasizing performance/watt and execution resources per die area.

            • flip-mode
            • 11 years ago

            OK, well, you might want to be real specific about that from now on, ’cause I don’t think the way you’re talking about performance has anything to do with the way Spoofe and I are talking about performance. Less confusion please.

            • shank15217
            • 11 years ago

            That defeats the purpose of an in-depth review. Frankly editor’s choice or a score isn’t very useful. Read the review and learn a little about the technology behind what you are buying. I think you would appreciate it a lot more. After all you might not even need to buy a $200 video card.

            This was a response to another sub-thread.. which for some reason I cant find anymore.. weird..

            • flip-mode
            • 11 years ago

            “This was a response to another sub-thread.. which for some reason I cant find anymore.. weird..” Thank goodness, because when I first started reading I thought you may have been hit in the head by something large and heavy and moving fast.

            • FubbHead
            • 11 years ago

            Like a Geforce 280?

    • Price0331
    • 11 years ago

    They definitely ran outta gum Scott.

    • ssidbroadcast
    • 11 years ago

    Way cool review, guys. It feels like what I’ve been waiting for all this time.

    • albundy
    • 11 years ago

    looks like this card is a -[

      • Deli
      • 11 years ago

      LOL…..hey, where’s the GTX260? haha

        • ish718
        • 11 years ago

        lol GTX260? Nvidia probably threw it back in the bin after seeing how HD4870 smacks it around

    • Hdfisise
    • 11 years ago

    How long are these cards? I expect 9″ or around that, but I am interested to know.

      • Forge
      • 11 years ago

      The 4850 is the exact same length as my 8800GT and 7950GX2, if that means anything to you. It appears that the 4870 is also the exact same length, but I don’t yet have one in hand to check.

        • Hdfisise
        • 11 years ago

        My 8800GT is 9″ so I guess it’s the same. Shame; I really need a powerful 6″ card to fit in my case more easily.

    • PRIME1
    • 11 years ago

    r[

      • ChronoReverse
      • 11 years ago

      TEMPERATURE not HEAT.

      The heat output is equal to the power draw which, while high, isn’t extraordinary.

      The very high temperatures are caused by the inadequate cooler though.

      • flip-mode
      • 11 years ago

      It’s not late if it arrives when it was said to arrive.

      • Deli
      • 11 years ago

      how the hell is it late? HUH?

    • bogbox
    • 11 years ago

    One thing this card needs is more memory, at least 1GB.
    An OC’d 4870 with 1GB of GDDR5, and the R700 is no longer needed to beat the 280, or at least draw equal.

    • Da_Boss
    • 11 years ago

    I agree with the start of the conclusion. A large monolithic design based on this would’ve been killer! I imagine that a 1600-SP, 80-texture-unit monster would’ve had the same size and thermal characteristics as the GTX 280, but with MUCH higher performance.

    Personally, I can deal with huge, hot GPUs if the performance is right. Unfortunately, in the case of the GTX 280, it’s not.

      • ew
      • 11 years ago

      Why stop at 1600SP? Why not wish for 3200SP or 6400SP?

      At this point I don’t think AMD is in a position to risk a product like that. Look how things are with the GT200. They are way too expensive for what you get when compared to AMD’s new cards. If the positions were reversed and AMD had bet their last remaining chips on a big chip and it failed, they would be done for. Maybe once (if) AMD gets back on their feet they can try a big chip, but at this point it just doesn’t make any sense for them to take that kind of risk.

    • pixel_junkie
    • 11 years ago

    Thanks Damage for another excellent review. Just out of curiosity, are the crossfire rigs hitting a CPU bottleneck in COD4 at the two lower resolutions?

    • felixml
    • 11 years ago

    Single slot – no way;
    sticking with the 9800GTX, just overclocking it more and more.
    Do NOT see any ATIs in my future, based on a few older experiences with drivers, etc.

      • NeXus 6
      • 11 years ago

      The ATI driver problems died a long time ago.

        • ludi
        • 11 years ago

        Yep…the last ATi platform I had any serious driver problems with was a Radeon7000. The X800XT was a pretty clean runner and the X1950Pro was flawless.

        • PLASTIC SURGEON
        • 11 years ago

        Yes they did…..during the 8500 days. I can say that it’s Nvidia that has some driver issues now. ATI/AMD? Not even close

        • sigher
        • 11 years ago

        When I got an ATI card, after getting bloody angry at the nvidia driver issues I experienced, they had decent drivers. But since then it’s been getting progressively worse: they introduced issues with objects (instancing, I think; seems fixed now) and issues with shadows (again, but partly fixed now), and then there’s the installer, which is all messed up; every second driver update doesn’t seem able to install CCC in a working manner, after which a rollback doesn’t work either.
        (Seems it can’t overwrite old files it leaves after uninstalling previous drivers, so they basically make their own installer fail because of their own uninstallers.)
        Not to mention that their video converter isn’t even available for XP64, even though they had many, many months to make it available.

        So no, I’d not say that ATI has no issues anymore, nor do I for a second believe nvidia doesn’t have awful issues with their drivers; drivers for both of them are a living hell.

      • SubSeven
      • 11 years ago

      Well buddy, you will be missing out on one hell of a card. Once this card’s drivers are optimized and the heat/power issues are resolved… these will be the ideal cards to have, especially at their current prices.

        • Deli
        • 11 years ago

        IMO, ATI has better driver support than Nvidia at this point. And I’m running an 8800GT.

      • Anomymous Gerbil
      • 11 years ago

      Interesting how long-dead issues stay alive in some people’s minds.

        • TO11MTM
        • 11 years ago

        They’re still alive if they’re still there.

        I can’t for the life of me get my 2400XT to output anything other than 1280x720p or 1920x1080i to my HDTV, using either the HDMI output or the DVI->HDMI adapter. Which greatly sucks, because it means that if I want to output 1366×768 (my HDTV’s native resolution), the output is a little shimmery from the upscaling and interlacing done by the card.

        Also, if I’m running something in 640×480 or even 800×600 (Worms Armageddon, emulators) in full screen, it chokes scaling it up to 1280x720p.

        My 7600GT output every resolution to this TV without any issues. I’ve tried using PowerStrip to force it, but I could not get the settings right, and I don’t think I should have to anyway. It’s a very common size for these flat panels…

          • Krogoth
          • 11 years ago

          Blame the HDTV for using a non-standard resolution. >_<

          It is still not as bad as 5:4 aspect ratio (the crappy ratio that refuses to die).

          • A_Pickle
          • 11 years ago

          That’s… a really strange issue. I haven’t had any trouble with my ATI cards outputting resolutions for a good, long time now. As a matter of fact, I’m typing this on a wireless keyboard hooked to my HTPC, running an HD 3650 outputting 1920×1080 to a Sony Bravia 46″ HDTV.

          Catalyst Control Center is working great. 🙂

            • TO11MTM
            • 11 years ago

            It’s a relatively common problem, and not limited to the 2400 series, from the research I’ve done.

            It’s not a “proper” HDTV resolution, but it IS a popular one… and I think one could almost argue, since the NVidia card immediately and properly picked up every mode the TV could handle… shouldn’t the ATI card do the same? Or is it trying so hard to conform to a spec that it’s actually breaking things?

            I don’t know, except that I’m waiting for the next good “sweet spot” in graphics cards, like when I can get a 9600 for around $100 in a single-slot option (since it’s in a Shuttle…).

      • Convert
      • 11 years ago

      I remember when a dual slot card was a no-no to most.

        • Krogoth
        • 11 years ago

        Funny that it still holds today!

        Not everybody has a huge chassis or a motherboard layout that can fit dual-slot cards without clearance issues.

          • Meadows
          • 11 years ago

          Small form factor users are the minority in this and multiple GPUs are getting a serious foothold soon enough now, particularly on ATI’s side of the fence.

            • Krogoth
            • 11 years ago

            Foothold? ROFL. Multiple-graphics-card setups in both camps are going to remain niche markets at best.

            • Meadows
            • 11 years ago

            Your comments show you live in 2006 and will stay there forever.

            • swaaye
            • 11 years ago

            You think that gamers are, en masse, going to go out and buy $400-500 graphics cards now, huh? That’s just not at all how it is. Maybe in your group of buddies, but not out there in the majority. Hell, most of my friends are just starting to creep into the DX10 GPU era. They don’t buy GPUs over $200 and they upgrade only once every couple years.

            Just consider the average requirements of the popular MMOs out there. And other genres for that matter. I know folks gaming primarily with IGPs! I watched a guy play Civ4 on a Radeon Xpress for hours just last weekend. There are projects out there to get games running on older hardware. There are forums filled with people trying to run games on IGPs and they are seemingly happy if it runs at all and do actually play the game that way!

            What I maybe see happening is a lot of people buying Radeon 4850 because it offers incredible value, especially for the people who’ve been hanging on to G7x and R5xx and older. Certainly many of the educated gaming enthusiasts will go this route. But, even in this group, the number who consider SLI, CF, R680, R700, GX2 is going to be just so slim because the advantages do not really offer that much of a tangible benefit for the cost, heat, power use, driver profile catches, etc.

    • henfactor
    • 11 years ago

    Great review!

    I think I might have found a typo: l[

    • eitje
    • 11 years ago

    Great review, Scott!

    edited, since typos are now fixed!

    • tygrus
    • 11 years ago

    Would still love a larger version of RV770. Increase everything else but the SPs (i.e. texture units, ROPs, memory width, total RAM, dual UVD).
    Insane FPS, and the CPU in Crysis et al. can’t keep up.

      • 0g1
      • 11 years ago

      GT280 is gonna be pwnt when GDDR5 matures to 192GB/sec :D.

    • Laugh|nGMan
    • 11 years ago

    *[

      • severian64
      • 11 years ago

      L4 missions run fine. If you are getting 15 fps on a L4 then get a new computer.

    • ElderDruid
    • 11 years ago

    I skipped right to the Conclusions section in hopes of finding the TR Editor’s Choice award that would make my buying decision an easy one. Alas, it was not to be….

      • 0g1
      • 11 years ago

      Yeah, I was looking for that too. I think they are a bit confused.
      Quote:
      “AMD have since started talking about how the era of the large, “monolithic” GPU is over. I think that’s hogwash. In fact, I’d love to see a RV770-derived behemoth with 1600 SPs and 80 texture units on the horizon.”

      Such a ‘behemoth’ would be a waste of money. The core would be too expensive to make for the performance versus two cards in CrossFire. Sure, CrossFire doesn’t always work, but when it does, it gives frame rates much higher than the ‘behemoth’ simply because of the extra memory bandwidth. Sure, it would be nice to have a single GPU that gives a ~30% performance increase, and from a gamer’s point of view, less input lag is better. However, due to poor yields, more heat, lower clockspeeds, etc., the cost/performance ratio is too high: more than a 30% increase in cost. But two GPUs give almost a 100% performance increase, so it’s a lot better from a marketing/sales point of view.

      • Jigar
      • 11 years ago

      This time it’s not just the editor’s choice, it’s the choice of all the enthusiasts… so that icon was not required.

      • flip-mode
      • 11 years ago

      [H] gave Gold to both.

    • HiggsBoson
    • 11 years ago

    Does anyone know if those tweaks and optimizations are all manual?

    And does this imply that even with the design of something this complex, the human mind and human eye still have something to contribute?

    • Dposcorp
    • 11 years ago

    I see a $150 or less 4850 in my immediate future.

    Great review Scott. Nice to see AMD/ATI back in the game.

    Two things I always like to bring up with these ATI cards that Nvidia can’t yet touch:
    1) Built-in HDMI with multi-channel 5.1 surround audio.
    2) Better multi-monitor support.

    I’ll gladly give up a couple of FPS for that stuff.

    Also, I can see a 4870 X2, coupled with some new AMD chipsets and a new 4-, 6-, or 8-core CPU, and all of a sudden the Spider platform looks more like this: http://www.forgottenkingdoms.com/deities/lloth.shtml

      • Thresher
      • 11 years ago

      Personally, I think ATI’s image quality has been better for years, even though they haven’t been nearly as fast.

        • KeillRandor
        • 11 years ago

        Aye – as soon as I swapped from nvidia to ATI, I noticed an immediate difference just on the desktop, with it being much brighter and clearer compared to the nvidia card. I’ve heard that nvidia have been working on it, but I haven’t seen a modern nvidia card to compare it to…

          • TheEmrys
          • 11 years ago

          ATI has had better 2D than nvidia for what, 4 years? 5? 6? But it’s still second fiddle to Matrox. If you want to live in a 2D-only world, that is.

            • leor
            • 11 years ago

            ah, matrox, my old friend . . .

            • TheEmrys
            • 11 years ago

            I’d love to see a good retrospective on Matrox. Their ability to continue to “make it” is something I admire.

        • Anomymous Gerbil
        • 11 years ago

        Are you referring to image quality over VGA to analogue monitors? Over HDMI/DVI to LCD monitors, any differences should all be tweakable via settings?

      • A_Pickle
      • 11 years ago

      You forgot one.

      3.) Working drivers.

        • eitje
        • 11 years ago

        lol, you made a driver joke about ati! lol!

          • A_Pickle
          • 11 years ago

          Really? I thought I made a fairly pointed accusation rather than a joke, and it was about Nvidia…?

          If I missed it, I’m sorry. It’s 0730 and I haven’t yet slept. Goodness me.

            • eitje
            • 11 years ago

            it could also be that i’d just woken up.

            it was funny because ATI is usually the one that people say needs working drivers (even though that’s not the case anymore).

            on re-hash, it’s not as funny as i remember.

    • Thresher
    • 11 years ago

    Why would a company want to come out with a chip the size of the new 2xx ones from nVidia, with all the inherent issues that causes? Yields, heat, power, etc.

    Seems to me that because of the highly parallel nature of GPUs, it would be just as easy to stack a bunch of smaller ones on one card. ATI has already proven that drivers can be made that address many of the old issues multi-chip designs used to have.

    It’s much more cost-efficient for everyone, including the consumer, to put more chips on a card rather than come up with one huge one.

      • Flying Fox
      • 11 years ago

      That remains to be seen: separate power circuitry and a custom-designed cooler with multiple contacts for multiple GPUs are, for now, still more expensive than a single-chip design.

      What Damage said was that it shouldn’t be too much of a problem to produce a boutique part which is huge.

      And for sure your nice CF/X2 rig can’t play your favourite game the first day it’s out, since you need to wait for the profiles.

      • ludi
      • 11 years ago

      That’s usually a trade-off, not a solution. Distributing the power amongst several chips isn’t going to give less net output, which must still be dealt with, and any savings on the fabrication side immediately get eaten up with the increased board design and manufacturing costs.

      • flip-mode
      • 11 years ago

      Power issues? Nvidia did a fantastic job managing the power issues.

    • 0g1
    • 11 years ago

    I can’t believe that the current GDDR5 is at a 3600MHz effective data rate and AMD expects that by the end of the year it will be 6000MHz effective. Wow. 115GB/sec vs 192GB/sec. Can’t wait for those memory chips!
    By the time they come out, AMD will probably scale their core a bit further… probably add on 2 more SIMDs, and nV will have the GT280 on 55nm with higher clockspeeds (maybe a transistor reduction too) at $399.
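
    Those two figures check out if you assume the 4870’s 256-bit memory bus: bandwidth is just the effective data rate times the bus width divided by 8. A quick sanity check, with the rates taken from the comment above:

    def bandwidth_gb_s(effective_mhz, bus_width_bits=256):
        # bytes per second = transfers/s x (bus width in bits / 8), reported in GB/s
        return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

    print(bandwidth_gb_s(3600))  # ~115.2 GB/s, the shipping 4870
    print(bandwidth_gb_s(6000))  # ~192.0 GB/s, the hoped-for end-of-year chips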

    • Krogoth
    • 11 years ago

    Oh boy, Nvidia’s executives should be embarrassed for trying to stretch its architecture for so long to avoid R&D costs.

    At least now they are giving the engineers the green light to go develop something superior to the G8x and R6xx/R7xx families. 😉 We will not see its fruits for a while though.

    In the meantime, Nvidia can wage a price war, and it sorely needs to cut prices on the GT2xx line, especially the GT260 ($299) and GT280 ($399).

    • Mourmain
    • 11 years ago

    Hey Scott,

    Thanks for implementing the changes to the graphs to separate multi-card setups from single-card ones! It works. 🙂

    This was a really exciting review, very well done! I can’t begin to imagine all the work that it takes to do all those benchmarks…

    • Gerbil Jedidiah
    • 11 years ago

    I’d be all over the 4870 if I was in the market for a videocard. Kudos to ATI for getting it right this summer!

    • danny e.
    • 11 years ago

    Scores are good, prices are good… heat, not so much. Perhaps the fans need to spin a little faster on those cards. I don’t see how those temps would help lead to a long life.

    It’ll be interesting to see how the 4870 X2 does… hopefully ATI continues to improve the drivers.

    If I were in the market for a card right now I’d definitely be looking at the 4870. However, those temps do scare me.

    • Richie_G
    • 11 years ago

    Good to see AMD finally wake up. I imagine things will get a bit more interesting from here, from a consumer standpoint at least.

    • Chrispy_
    • 11 years ago

    What a fantastic result for DAAMIT!

    This harks back to the days of the radeon 9700 series where Nvidia got lazy and created the GeforceFX by expanding on dated architecture from the Geforce4 days, which in turn was a tweaked update of the Geforce3.

    New architecture is great, but efficient architecture is even better news. I don’t want to have to include the cost of a new PSU with every graphics upgrade.

    • 0g1
    • 11 years ago

    LOL @ donkeys in an MMA cage all out of bubble gum. Well done, TR… the best 4870 review I’ve read, because you explain everything very well. Well worth the ~5hrs wait after the other reviews.

      • oldDummy
      • 11 years ago

      Hey FP thanks for the link.
      My option is remounting the HSF that comes with it. (Not too much room in my SFF).
      That seems to make a difference.

    • Veculous
    • 11 years ago

    It just occurred to me that with Duke Nukem Forever taking so long in coming, there might only be so much longer that people will still understand the “all out of gum” reference!

    • oldDummy
    • 11 years ago

    How the heck am I going to cool those 4850 CF in a SFF?
    A good problem, but, a problem.

      • Darkmage
      • 11 years ago

      I’m thinking along similar lines. I run watercooled, but the additional $100 for a quality waterblock for this video card is making me pause. With these heat levels, there’s no way I would consider not going all out to cool it.

      Which makes me nervous.

        • oldDummy
        • 11 years ago

        When mine arrive I will first try to reseat the HSF if I can. After that, if no relief, more research.

          • TheEmrys
          • 11 years ago

            You run an SFF with a 650-watt PSU?

            • oldDummy
            • 11 years ago

            450 with a 80+ rating.

            Should be fine.

        • ChronoReverse
        • 11 years ago

        TEMPERATURE, not HEAT.

        Heat is simply the power usage. If you can cool a card with a higher power draw, you’ll be fine.

        Temperature is what depends on the cooling. The 4xxx series has a deliberately modest cooler, thus the high temperatures.
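
        To make that distinction concrete: a card’s temperature is roughly ambient plus power draw times the cooler’s thermal resistance. A tiny sketch with purely illustrative numbers, none of them measured values for the 4850:

        def gpu_temp_c(power_w, theta_c_per_w, ambient_c=30.0):
            # temperature rise = heat (power) x thermal resistance of the cooler
            return ambient_c + power_w * theta_c_per_w

        print(gpu_temp_c(110, 0.55))  # weak single-slot cooler -> ~90C
        print(gpu_temp_c(110, 0.30))  # beefier aftermarket cooler -> ~63C, same heat output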

    • Nictron
    • 11 years ago

    This is awesome for the consumer. I, however, will still hold on to my 8800 GTX SLI setup for the next few months, maybe even a year, since with the new 177.26 drivers I modified, Crysis plays much better and SLI is finally working.

    Even though these chips are great, they still do not improve much over the G80. It just shows you how much of a leap forward the G80 was; after 2 years it can still compete, and that is just incredible.

    I am glad that AMD/ATi finally got Nvidia by the balls again! I was never happy to pay over $500 for a high-performance card; it is just silly to expect that, especially considering that in SA you pay 12x the dollar price while the official exchange rate is 8x.

      • sigher
      • 11 years ago

      Yeah, when you have 8800 GTXs in SLI there’s really no big need to ‘upgrade’. 🙂
      Not to mention that running AMD cards in CrossFire would also require a new motherboard; ouch.

    • Jigar
    • 11 years ago

    Woo hoo … CF is going way beyond speed limits.. Reviewers should ticket them.. 😉

    • Fighterpilot
    • 11 years ago

    Meadows is an NVidia fan and has every right to be.
    Only “fanboys” trash the opposition’s products even in the face of obviously good performance and price… which is not what he does.
    There are a few of them here at TR trying to think of something clever to say at this point, I’ll wager.
    They deserve the “Hello Kitty” blind-worship award; true fans of either side will recognize that ATI has done a damn good job with this card, and will also applaud the tremendous impact AMD’s sensible mid-range pricing has had on the overall GPU market.
    Ultras and 280s etc. for $600 or more are just lame, and we are all better off without them.
    Also, Damage and the team must have spent a huge amount of time writing that article; the in-depth architecture stuff was fascinating and must have taken ages to digest and translate into a suitable article.
    Well done guys!

      • PRIME1
      • 11 years ago

      Pot calling the kettle black. Amazing!

    • Mystic-G
    • 11 years ago

    I like AMD’s new line, but I really have to disagree with their dual-GPU-age concept. Many people still want just one badass video card, for a list of reasons. If they really took advantage of their new chip, they could knock Nvidia down a notch on the totem pole.

      • Nictron
      • 11 years ago

      From what I have heard about the new 4870 X2, the bridge is improved, or the chips might even be moulded together, and the previous CrossFire limitations have been dealt with. Only time will tell, I guess, but let’s hope ATi brings out another winner, because as it stands the new 4870 is awesome and beats the 260 and 280 in some benchmarks!

    • no51
    • 11 years ago

    All these reviews have convinced me to get a 2nd G92 GTS.

    • mako
    • 11 years ago

    Great writing, as always. I think a 4870 may be in my future… although I’ll wait a couple months to see how things shake out.

    • ihira
    • 11 years ago

    TR should stick with real-world gameplay recorded with FRAPS, showing average and minimum FPS like in their AC, Crysis and GRID tests.

    The old recorded timedemo benchmarks need to go. They don’t represent real-world gameplay.

    Oh, btw, the 4870 looks like an awesome card.

    • stirker_0
    • 11 years ago

    WOW… should’ve waited for the 4870… don’t know WHAT to do with my 9800 GTX… wipe the floor sounds good. Oh well, time to save up and put a couple of 4870s into CrossFire…

    Officially hate nvidia now… well, until they come out and say, “oh whoops, we forgot, on the GTX 280 we were holding back a bunch of stuff; only half of the die is being used, the other half was just sitting there generating HEAT because our programmers got too lazy.” Then they can drop prices and maybe I’ll go back to nvidia again…

    …and no, I’m not anyone’s fanboy; I go for the best value, or just high enough performance at a decent price point.

      • mongoosesRawesome
      • 11 years ago

      Yup that 9800GTX is totally worthless now. You better just give it to me.

      • flip-mode
      • 11 years ago

      Sell it to me for a good price.

        • Meadows
        • 11 years ago

        Give for free or I pirate.

      • 0g1
      • 11 years ago

      Use it as a PhysX card. The 4870 is only about 40% faster anyway. I’m going to stick with my 9800GTX till the 4870 with 190GB/sec GDDR5 memory comes out, because it’s still pretty good.

        • SubSeven
        • 11 years ago

        Only 40% faster. Haha, that’s great. If memory serves correctly, there are many willing to spend an extra two to three hundred dollars to get 10-15% performance increases. Here you have far more than that for an extra 100 bux (and note, the drivers for this card are nowhere close to optimal yet, so there is much gain to be had in the future). In my books, that’s a steal.

          • Mithent
          • 11 years ago

          The 4870’s good value, certainly, but with all these things, one has to consider whether their current setup is already good enough to do what they want. I have an 8800GTS 512MB, and I could certainly upgrade to a 4870 if I wanted to, but it’s good enough for what I use it for; better to wait until I need a faster card than just to keep buying new ones because they exist.

            • SubSeven
            • 11 years ago

            And you are correct. I am not one of those people who grabs the latest goody to just hit the shelves. But take a look at things from my perspective… I am not so fortunate with my current video card as most users here. I am still running a 6600GT (I was a huge Nvidia fan back in the day), so yah, I’m way overdue for an upgrade. I hit some Doom III the other day just for the hell of it, and with some mild eye candy I got rates averaging in the 30s… which is really quite sad considering the current horsepower available.

            • Mithent
            • 11 years ago

            Oh, a 6600GT is almost certainly worth upgrading from if you do anything requiring 3D acceleration, yes; I was thinking mostly of people who have 8- and 9-series cards which are probably perfectly adequate for the moment, but who’ll be replacing them now. It would be a great time to upgrade from a 6600GT though.

            • flip-mode
            • 11 years ago

            It’s a great upgrade from anything less than an 8800GTS in my opinion, though it’d be a shame to already be upgrading from an HD3 card or an 8800GT card.

            • SubSeven
            • 11 years ago

            Yah. Part of the problem is that I have to do a massive overhaul. Can't just throw in a new card; I need a new rig. I'm running a Socket 754 board (AMD 3400+) with an AGP interface. Hehe, so neither of these cards will do me any good whatsoever at the moment. This is why I was curious about how the Spider platform fares. Thus far, the only things I have picked up are an Antec P182 and a Corsair 620HX. Still doing some more research on the rest. As much as I love AMD, I'm not entirely sold on the Phenom just yet.

            • ludi
            • 11 years ago

            Partly depends on when you bought. I just picked up my 8800 GT ($150 after MIR) about six weeks ago (replacing an X1950 Pro that had a good 20-month run in my hands), so although the 48x0 cards are pretty sweet, this would be a complete waste of money for me. This GT should tide me over for at least a year, if not longer.

            Someone who bought an 8800 GT back when it first launched might be a bit itchier.

            • Metalianman
            • 11 years ago

            Sorry, mate… I got my 8800 GT about three months ago… I actually had to because my graphics card was broken in two during my last move 🙁

            I was always an AMD fanboy, I have to admit… but I wasn't blind… it was either a 9600 GT or an 8800 GT, and I got the latter. Not bad, that's for sure, but when the 48xx came out I was, well, acting weird. I already had a Phenom CPU and an AMD board, so I just checked the balance in my account and went for it. I got two (yeah, I know) 4870s and I'm pleased with myself :-p I still keep running my 8800 GT on my older 4000+ Athlon, but I -love- the 4870s… My wallet obviously doesn't, but who cares (besides my girlfriend, of course!!)

      • ludi
      • 11 years ago

      I’ll trade ya for an 8800GT. Straight up.

        • FubbHead
        • 11 years ago

        No kidding.

    • jjj
    • 11 years ago

    Let's see how long it takes for Nvidia to lower prices on the GTX 260.
    This is fun!
    And if/when the 9800 GT comes out, it will be even better (more options for us).

    • Meadows
    • 11 years ago

    As I’ve said before, PhysX scores aren’t unfair.

    – If you want to benchmark a complete system: leave it on, or check the “disable PPU” box in 3DMark Vantage’s options section.
    – If you want to benchmark a CPU, disable PPUs.
    – If you want to benchmark videocards – well, you’ll be looking at the GPU subscore then anyway, so all this is moot in that case.

    Two HD 4850 cards in Crossfire is still the best bang/buck deal I’ve seen in a very long time.

      • bogbox
      • 11 years ago

      The last Nvidia fanboy, giving up too, betraying the green team…
      WOW! Even you? But I told you so…

        • Meadows
        • 11 years ago

        Your shitspeak never ends, eh?
        Thought so.

        I’m an nVidia fanboy, but I’m not blind.

          • flip-mode
          • 11 years ago

          "I'm an nVidia fanboy, but I'm not blind." Well spoken.

      • Mystic-G
      • 11 years ago

      It's not unfair, but it does not give a clear result, as current games aren't really taking advantage of physics the way 3DMark does.

      If you based which cards are better on 3DMark scores, the 4850 would be worse than the 9800 GTX and the 4870 would be worse than the GTX 260. This is …

        • Game_boy
        • 11 years ago

        It IS unfair because Futuremark rules state that the GPU cannot significantly influence the CPU score for it to be a valid 3DMark score. Any driver which does so would fail their driver validation process – they have not and cannot approve the PhysX-incorporating driver so it can’t be used in official tests.

          • Meadows
          • 11 years ago

          But back then, GPU acting as PPU wasn’t an option. I guess they just need to revise the terminology.

          You guys just check the “Disable PPU” box and you’ll get fair results.

      • Krogoth
      • 11 years ago

      Who cares about 3DMark? It has been pointless for years…

      At best, it is just a stress test and baseline that shows you how fast your system should be compared to systems of similar configuration.

        • poulpy
        • 11 years ago

        Isn't that the very definition of a benchmark? 🙂

          • Meadows
          • 11 years ago

          It is, but by the time Krogoth ends a post, he forgets what he started it with.

          • Krogoth
          • 11 years ago

          Unfortunately, the 1337 kiddies and the rest of the e-penis crowd swear by it as being reflective of real-world performance, when 3DMark has failed in that department for quite some time.

        • A_Pickle
        • 11 years ago

        Quite honestly, I don't even really consider it that. I don't buy computer parts to run 3DMark over and over and over again. If I did, well, I guess I'd be the one person for whom 3DMark is useful. But as 3DMark is not itself a real-world application, I don't give a damn what its scores say.

        About the only thing 3DMark has ever been good for is showing us what kind of FPS we can expect in future games (and then, with no real idea of the timeframe). That said, I really think Futuremark has some good, talented, motivated programmers, and it's nice to see their talents going places that aren't "run once" benchmarks, what with their upcoming game engine and what-have-you.

        I still think it’s mind-blowing how 3DMark Vantage can run at 3 FPS lower than Crysis, and look about nine hundred million times sexier…

      • cynan
      • 11 years ago

      /[

      • Saber Cherry
      • 11 years ago

      They are unfair because the 280 can have high frame rates, or do really fast physics, but it can’t do both at once. In other words, when you are actually playing a game with advanced graphics, you can’t dedicate the chip resources to physics without slowing down the game… so those numbers are misleading.

      And as others have said, it violates the rules of the benchmark.

        • Meadows
        • 11 years ago

        It was once shown that with upwards of 128 shader processors, your games will really not be shader-limited. Actually, the 9600 GT was proof that in many games you can ignore the difference: what did the 8800 GT have against it? Almost twice the shader power, but it didn't provide twice the performance. There were shader processors idling. With 240 of those in the GTX 280, you've got more than you need to accelerate physics without SLI, even if SLI is what's advertised for this.

      • asdsa
      • 11 years ago

      As I've said previously, PhysX scores *are* unfair. A benchmark that uses libraries owned, developed, and exploited by one GPU company is far from a fair scenario. It would be just as unfair to, for example, support the Radeons' hardware tessellators. Yeah, why not support it: it's a cool feature and very FUTUREmark-ish?

      And no, normal people ("dummy" users) and marketing people don't turn off any knobs. They run everything in the default configuration, whatever it is, and watch whatever comes out. Then they (the card manufacturers) boast about it and print the numbers on retail boxes to attract customers. Vantage is BS, but many people still take it seriously.

      Anyway, to end on a positive note: remarkable chip from ATI. Job well done!

      • YeuEmMaiMai
      • 11 years ago

      Who cares if they use a GPU to do physics? It will not work like the benchmark in real games anyway, since if the GPU is doing physics, it has LESS TIME to RENDER GRAPHICS.

        • Meadows
        • 11 years ago

        Wrong. Read my other reply (#150). Any reasonable new videocard has shaders to spare.

          • Sapien
          • 11 years ago

          I really doubt that you will be able to see a current GPU share "surplus" shaders between graphics tasks and physics tasks without a huge performance penalty for both.
          I have not seen any demonstrations or benchmarks where one card does graphics and physics at the same time; it has always been one dedicated card for each task.
          My guess is that you will not be able to share the GPU without a prohibitive overhead hit, due to program switches and memory flushes; remember that the GPU is basically SIMD.

          If you have any evidence or arguments to the contrary, I would love to see it.

            • Meadows
            • 11 years ago

            My UT3 fps increased with the new drivers and I only own a card with 112 SPs. Evidence.

    • Fighterpilot
    • 11 years ago

    I’m trrying not to vibrrrate off the computer chairrrrr 🙂
    ATI HD 4870 FTW!

      • flip-mode
      • 11 years ago

      This is Fighterpilot's day. Bask in it, FP; take it all in. Just be nice.

        • Jigar
        • 11 years ago

        he deserves it… he waited for this day…

          • zgirl
          • 11 years ago

          Why does he? Last I checked, he had no hand in the design of this card. Why should he get any accolades?

          This is my issue with fanbois acting like they had everything to do with something like this, when they had nothing at all to do with it.

            • Jigar
            • 11 years ago

            He is still with his old faithful X1900 XT… doesn't that imply that he deserves his day? He was waiting for this card… 😉

            • zgirl
            • 11 years ago

            Did you even read what you just wrote? Seriously? I've had as many ATI cards as I have Nvidia ones. I'm not patting myself on the back for having anything to do with this. The only people who deserve congratulations are the ATI engineers, who did an excellent job of putting together a very competitive card, and AMD, for pricing it right.

            Maybe you should look up the definition of what a fanboy is.

            • Nelliesboo
            • 11 years ago

            He is just happy. He stuck with a maker even when he could have gone with something better. That is a die-hard fan. Think of it as a sports team you like being in the gutter for a while and finally having a good season.

            • SPOOFE
            • 11 years ago

            And that attitude you described is bad for the industry.

            • shank15217
            • 11 years ago

            Jeez, looking up to engineers and scientists and supporting work that produces awesome new technologies is bad for the industry?

            • Anomymous Gerbil
            • 11 years ago

            No, continuing to buy shite products just because you have a hard-on for the company (i.e. you are a fanboi) is bad for the industry.

            • Jigar
            • 11 years ago

            Yes, think about it this way: why would someone support someone else in bad times?

            Well, because that person is talented and might be able to pull it off… 😉

            ATI did it… so Fighterpilot deserves to vibrate… ROFL…

            • eitje
            • 11 years ago

            and yet, somehow, all of the sports clubs in the world have managed to survive!

            • zgirl
            • 11 years ago

            Entertainment is different from corporate products. I am glad ATI has a good product back on the field. But if I wanted to be a fanboi like others here, I would still be clinging to my Matrox cards; they had the clearest picture in the industry. But I am not that blindly foolish. There isn't a poster here (to my knowledge) who had anything to do with the creation of these cards, so no one deserves any accolades.

            • Fighterpilot
            • 11 years ago

            Thanks for all those nice comments guys…TR members FTW! 🙂

            • flip-mode
            • 11 years ago

            Reading back, I didn’t see it said anywhere that Fighterpilot had anything to do with the creation or the success of these cards. I think the sentiment was:

            Fighterpilot, enjoy the fact that your team scored a touchdown.

            Coming from Cincinnati, home of the Bengals, I can attest to the fact that people celebrate when their team scores a decisive victory after a drought.

            • Convert
            • 11 years ago

            While you are entirely right in what you have said, z-man, I think it is going to fall on deaf and ignorant ears. Nowadays, for whatever reason, fanboys seem to be hailed as the industry's tide of fortune flows in their favor. It's like shintai all over again, but without any useful information entangled in it.

            I will say this, though: fanboyism such as this is bad for the …

            • Tamale
            • 11 years ago

            I respectfully disagree…

            Brand loyalty keeps the underdog in business, allowing them to survive until they are able to make another competitive product.

            If everyone always went straight to the top performer, AMD might've actually died out when the C2D and the 8800 series arrived within a few years of each other, but instead a lot of people like Fighterpilot have been holding off until they can purchase from the brand they love.

            • Convert
            • 11 years ago

            That doesn't make much sense; holding off does not put money in their pocket, it only keeps it out of their competitors' pockets. Unless, of course, that was going to be the only GPU purchase you were going to make.

            Furthermore, thinking that one fanboy, or heck, let's just say all of them, has any real impact on keeping a company afloat is absolutely nothing short of ridiculous.

            If AMD continues to make underwhelming products where it matters most, they will fail, no matter how many fanboys are bailing water.

            I was simply humoring you with the above. Nvidia fanboys cancel out …

            • A_Pickle
            • 11 years ago

            To be completely honest, I think there's such a thing as "a noble fanboy," i.e., someone who buys from the underdog because they're the underdog. They're not the typical, rabid, foaming-at-the-mouth CounterStrike addict who, despite pounds and pounds of benchmarks, insists that the Athlon 64 X2 is faster than an equally or higher-clocked Core 2 Duo, and who resists even after you've shown them how much faster your system is than theirs, replying with a frustrated assertion that their system just "feels" faster. That's not this guy: the noble fanboy buys his or her system fully aware of the fact that a company with a superior product exists. And they don't care.

            I also think that, despite all of the cries of doom and gloom for AMD, people have different interests in product choice. I can safely say that, a month ago, I would've purchased a Radeon HD 3870 X2. In fact, I think the only time in the past few months that I would've gone with an Nvidia card is when ATI only had single-chip offerings to compete with (HD 2900 XT, HD 3850/3870). I have built many systems for people, complete with Nvidia graphics. The drivers are bad enough (to me) that I don't really want to deal with them. I just want a nice, clean, set-it-and-forget-it experience.

            ATI also has some nice multimedia features with AVIVO HD. I really prefer the image quality of AVIVO versus PureVideo.

            • Convert
            • 11 years ago

            It doesn't make it any less foolish. It's a faceless …

            • Nelliesboo
            • 11 years ago

            I swear, some of the crybabies on here… taking a fun moment people were having (I laughed when I first read it) and making something out of nothing. I am one of those people who buys the best product that is out at the time (unless it is made by OCZ), but so what if someone else wants to buy a brand that they like? It is still a free world at the moment. Some of you really need to lighten up.

            • Convert
            • 11 years ago

            They are free to buy whatever they want.

            You completely missed the point of all this.

            It's not about anyone saying you don't have the right to make a personal choice; there are plenty of irrational things that I myself do. I don't, however, make up ridiculous BS arguments to try to pass off what I do as the right thing or the only way, or tout backing one pony over the other as the right decision simply because this product refresh turned out to be a good one.

            Arguing against BS arguments does not equal arguing against the right to make a personal choice.

            The world isn't crumbling because some ignorant fanboy posted on TR, contrary to how my posts might have sounded.

            • Nelliesboo
            • 11 years ago

            All I got from Fighterpilot was that he was happy ATI put out a winner. How was that ridiculous?

            • Convert
            • 11 years ago

            I don’t recall replying directly to FP’s original post, or any subsequent ones for that matter. Are you using flat view?

            Not to say some of it wasn't directed toward him; he certainly falls under the category I mentioned, but his past posts and behavior are what really put him there.

            • SubSeven
            • 11 years ago

            Some of you guys here need to get off the estrogen pills. Fanboyism is a modern phenomenon. While I understand your argument (you don't like BS arguments being made to justify irrational behavior), please note that you do the same thing EVERY DAY. We all do. The difference is that some of us do it vocally, while others do it silently or even subconsciously.

            Secondly, you stated before that fanboyism doesn't keep a company afloat. On this particular point you are entirely wrong. Having compiled an almost hundred-page report on one of the best-known brand names today (Apple), I have done quite a bit of research on the company's management, culture, and client base. Apple is one of the few companies out there fortunate enough to have a cult-like client base. I mean, if you want to see some fanatics… talk to some people who love Apple products (careful, criticism WILL get you killed!), as I have done. I do not like Apple at all, and every time I argue against the company and its products, I feel like I'd have more success arguing my points with a corpse. I can confidently tell you that the only reason Apple survived its financial drought was this cult-like following among its "fanboy" customer base (recall, Apple traded at a mere sub-$10/share only a few years ago). This base provided Apple with enough cash flow to float along until their wondrous iPod came out. You can take my word for it… or you can do your own research.

            • Convert
            • 11 years ago

            Apple makes compelling products; that is what keeps them afloat. Their fervent fanbase is a perk, not the reason.

            While I appreciate the analogy you used, it is entirely too complex and based on too many factors. IMO, the real reason people were buying Apple products when they weren't all that compelling is the …

            • SubSeven
            • 11 years ago

            When I said we all do, I meant we all make BS arguments to justify things. My point was that different people do it differently: some do it vocally, others do it mentally or even subconsciously. You practically admitted this yourself when you said you have your own irrational moments. Why did you engage in these behaviors if they are irrational? Simple… because somehow you justified them. The fact that these justifications are BS follows from your own statement that the behaviors are irrational. If the justifications were not BS, the behaviors wouldn't be irrational now, would they? I hope my explanation is not more confusing than it needs to be.

            With respect to Apple, I don't like to argue the issue because, as I have learned in the past, it is quite futile. I will just say the following: what was so compelling about the products Apple was selling? Remember, this was before the iPod. Apple didn't have anything appealing in my book. Not only did it not have anything appealing, but the stuff it did sell was so overpriced (for the hardware it used) it was ridiculous. We seem to agree on this point. The only difference is that you believe Apple managed to survive based on its image, while I believe it was its fan base. Now let me ask you something: how irrational is buying stuff based on image? At any rate, your point is taken.

            • Convert
            • 11 years ago

            I didn't practically admit to it, I did.

            I didn't say people shouldn't make these decisions; it's what we are. The only rationalization I make is …

            • ludi
            • 11 years ago

            Last I checked, "that attitude" has been in this industry since Nvidia first released the Riva128 as a competitor to the Voodoo2. It doesn't seem to be harming the industry …
