AMD’s Radeon HD 5870 graphics processor


The graphics game has been nothing if not interesting the past year or so. AMD’s Radeon HD 4800 series upended expectations by using a mid-sized chip to serve the bulk of the market and pairing two of them in an X2 card to create a high-end product. This strategy has worked out pretty well, in no small part because the Radeon HD 4870 GPU has proven to be very efficient for its size. The result? Fast graphics cards have become very affordable, with prices driven to almost-embarrassing lows over time.

Nvidia, meanwhile, has been relatively quiet in terms of truly new products. The last new GeForce we reviewed, back in March, was the GTS 250, a cost-reduced card based on a GPU that traces its roots back to the two-year-old GeForce 8800 GT. Nvidia has milked that G92 GPU as if it were a cow mainlining an experimental drug cocktail from Monsanto. The higher end of the GeForce lineup has been powered by the GT200 GPU, a much larger chip than anything AMD makes with only somewhat higher performance than the Radeon HD 4870.

All the while, folks have been buzzing about what, exactly, comes next for GPUs. Intel’s Larrabee project has been imminent for some time now, promising big things via the miracle medium of PowerPoint. In a sort of pre-emptive response, Nvidia employees have developed, en masse, a puzzling tick: speak to them, and they keep saying “PhysX and CUDA, CUDA and PhysX” after each normal sentence. Sometimes they throw in a reference to 3D Vision, as well, although they seem vaguely embarrassed to admit their chips do graphics anymore. For its part, AMD has been talking rather ambiguously about “Fusion,” which once stood for a combination of CPU parts and GPU parts into a future uber-processor capable of amazing feats of simultaneous sequential and data-parallel processing but now seems to have morphed into “We’d like to sell you a CPU and an integrated graphics chipset, too.”

In the midst of all of this craziness, thank goodness, work has continued on new and rather traditional graphics processors, which have become important enough to cause all of this fuss in the first place. Less than 18 months after the introduction of the Radeon HD 4800 series, AMD has produced a new chip that’s roughly the same size yet promises to double its predecessor’s power in nearly every respect, including shader processing, texturing, pixel throughput—and, yes, GPU-compute capacity. The Radeon HD 5870 is more capable, too, in a hundred little ways, not least of which is its fidelity to the DirectX 11 spec. And in a solid bonus for its target market, the card based on it looks like the Batmobile.

What’s under the Batmobile’s hood

Where to start? Perhaps with codenames, since they’re thoroughly confusing. The last-gen GPU that powered the Radeon HD 4870 was code-named RV770, a familiar number in a succession of Radeon chips. The rumor mill long ago began talking about its successor as the RV870, a logical step forward. Yet marketing types have hijacked codenames and proliferated them, just to make my life difficult, and thus the RV870 became known as “Cypress.” The official name now is the Radeon HD 5870. We’ll refer to it in various ways throughout this article, just to keep you on your toes.

Much like the RV770, the Cypress chip is the product of a three-year project conducted at multiple sites around the globe, directed from AMD’s Orlando office by chief architect Clay Taylor.

A logical block diagram of Cypress. Source: AMD.

The image above contains much of what you might want to know about the newest Radeon, if you squint right. What you’re seeing truly is a doubling of resources versus the RV770. Cypress has twice as many SIMD arrays in its shader core, twice as many texture units aligned with those SIMD arrays, double the number of render back-ends, and even two rasterizers. The big-impact number may be 1600, as in the number of shader processors or whatever AMD is calling them this week. 1600 ALUs, at any rate, bring a prodigious amount of compute power to this puppy.

This GPU is more than just a doubling of what came before, though. If you could zoom in a little deeper, you’d find refinements made to nearly every functional area of the chip. In fact, we hope to do just that in the following pages. But first, we need to scare off anyone who randomly wandered in from Google trying to figure out which graphics card to buy by talking explicitly about chips.

Sorting the silicon

Cypress is manufactured by TSMC on its 40-nm fab process, and it shoehorns an estimated (and breathtaking) 2.15 billion transistors into a die that’s 334 mm². That makes it a little bit larger than other mid-sized GPUs; both the RV770 and the G92b from Nvidia are about 256 mm². Because they’re both manufactured on a 55-nm process, they contain considerably fewer transistors—956 million for the RV770 and 754 million for the G92b, though counting methods sometimes vary. Chip size is important because it relates pretty directly to manufacturing costs. By delivering the first 40-nm product in this part of the market, and by cramming in a formidable amount of processing power, AMD has a good thing going.
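
For a rough sense of what the 40-nm process buys AMD, here’s a back-of-the-envelope density comparison using the figures above. This is a sketch only; transistor counts are vendor estimates, and counting methods vary, as noted.

    # Rough transistor-density comparison from the (vendor-estimated) figures above.
    chips = {
        "Cypress (40 nm)": (2150, 334),   # millions of transistors, die area in mm^2
        "RV770 (55 nm)":   (956, 256),
        "G92b (55 nm)":    (754, 256),
    }

    for name, (mtransistors, area_mm2) in chips.items():
        print(f"{name}: ~{mtransistors / area_mm2:.1f}M transistors per mm^2")
    # Cypress lands around 6.4M transistors/mm^2 versus roughly 3.7M and 2.9M
    # for the two 55-nm chips.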

RV770

Cypress

The 55-nm G92b

Comparisons to the GT200 chip on GeForce GTX 200-series graphics cards are more difficult, because Nvidia doesn’t like to talk about die sizes, and I’m too chicken to pry the metal cap off one of the chips and risk destroying a card in the process.

The 65-nm GT200 under its metal cap

We know the GT200 ain’t small. Its transistor count is roughly 1.4 billion, and credible reports placed the original 65-nm GT200’s die size at 576 mm². The 55-nm GT200b shrink probably made it just under the 500 mm² mark, according to the rumor mill, but that’s still, uh, hefty. I swear I saw Tom Cruise and Nicole Kidman racing to plant a flag in one corner of the thing.

Cypress is but one member of an entire Evergreen family of products in development at AMD, all of which will share a common technology base. Initially, two cards, the Radeon HD 5870 and 5850, will be based on Cypress. Another codename, Hemlock, denotes the multi-GPU card based on dual Cypress chips that will likely be known as the Radeon HD 5870 X2. Juniper is a separate, smaller chip aimed at the range between $100 and $200. Logic dictates AMD would slot Juniper-based cards into the Radeon HD 5700 series. All of these products are scheduled to be introduced between now and the end of the year, amazingly enough, some in rapid succession.

The rest of the Evergreens will fall after Christmas, in the first quarter of 2010. Redwood is slated to serve the mainstream market (i.e., really cheap graphics cards) and Cedar the value segment (really even cheaper, like $60 cards). When all is said and done, AMD should have a top-to-bottom family of 40-nm, DirectX 11-capable graphics card offerings.

Boarding up

The focus of our attention today, though, is the Radeon HD 5870. This is AMD’s fastest single-GPU implementation of Cypress, with all 1600 SPs enabled and cranking away at 850 MHz. The card has a gigabyte of GDDR5 memory onboard clocked at 1200 MHz, for a 4.8 Gbps data rate. Also, it’s rather long. Have a look:

Radeon HD 5870 (left) next to 4890 (right)

Twin dual-link DVI connectors, along with HDMI, DisplayPort, and CrossFire connections

Thankfully, two six-pin power plugs will suffice

The bare card

The 5870 card’s PCB is 10.5″ long, an inch longer than the 4890 before it and the same size as a GeForce GTX 260 or a Radeon HD 4870 X2. However, that fancy cooler shroud extends to roughly 10 7/8″, which means the 5870 might have fit problems in more compact PC cases. You’ll want to measure before assuming this beast will fit into your mid-tower enclosure, folks.

Despite its iffy dimensions, AMD has clearly paid attention to detail in the card’s design. The multi-colored, injection-molded cooler shroud with Bat-inspired intake vents is just part of that. Dave Baumann, the 5870’s product manager, told us the firm had listened to users’ worries about high idle temperatures in the 4800 series and adjusted the 5870’s cooling accordingly. The 5870 should also have lower fan RPMs than its predecessor, and the use of a different bearing in the blower should produce a lower-pitched sound that’s less obtrusive in operation. AMD has built in hardware detection of voltage regulator temperatures, as well, to avoid the sort of overheating that “an application that amounted to a power virus” caused on the RV770 and other cards. (FurMark, anyone?)

The single biggest improvement from the last generation, though, is in power consumption. The 5870’s peak power draw is rated at 188W, up a bit from the 4870’s 160W TDP. But idle power draw on the 5870 is rated at an impressively low 27W, down precipitously from the 90W rating of the 4870. Much of the improvement comes from Cypress’s ability to put its GDDR5 memory into a low-power state, something the 4870’s first-gen GDDR5 interface couldn’t do. Additionally, the second 5870 board in a CrossFire multi-GPU config can go even lower, dropping into an ultra-low power state just below 20W.

AMD says the plan is for Radeon HD 5870 cards to be available for purchase today at a price of $379. Nvidia appears to have cut prices preemptively in anticipation of the 5870’s launch, too, at least selectively. This GeForce GTX 285 is down to $295.99 after rebate at Newegg, and this MSI GeForce GTX 295 is reduced to $469.99 with free shipping, as I write.

The Radeon HD 5850. Source: AMD.

In all likelihood, the GTX 285 will find closer competition in the form of the Radeon HD 5850, the second Cypress-based product, due next week. The 5850 will have two of its SIMD arrays and texture units disabled, leading to a total of 1440 SPs and 72 texels per clock of filtering capacity. Also, clock speeds will be down, with the GPU at 725 MHz and the GDDR5 memory at 1 GHz or 4 Gbps. The 5850 will have the same suite of display outputs and CrossFire multi-GPU capabilities as the 5870, though, and will come with a visibly shorter PCB. AMD expects these boards to be available next week, most likely on Monday, for $259.
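
Those 5850 figures follow directly from the Cypress unit counts described earlier; here’s a quick sketch of the arithmetic. The four-texture-units-per-SIMD figure is implied by the 72-texels-per-clock number above rather than stated outright.

    # Radeon HD 5850: Cypress with two of its 20 SIMDs disabled, per the text above.
    simds         = 20 - 2
    sps_per_simd  = 16 * 5            # 16 thread processors x 5 ALUs each
    tex_per_simd  = 4                 # texture units aligned with each SIMD
    gpu_clock_ghz = 0.725
    mem_gbps      = 4.0               # GDDR5 data rate per pin
    bus_bits      = 256

    print(f"Stream processors: {simds * sps_per_simd}")                               # 1440
    print(f"Texels per clock: {simds * tex_per_simd}")                                # 72
    print(f"Bilinear filtering: {simds * tex_per_simd * gpu_clock_ghz:.1f} Gtexels/s") # 52.2
    print(f"Memory bandwidth: {mem_gbps * bus_bits / 8:.1f} GB/s")                    # 128.0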

To Eyefinity and beyond

You may have already read about AMD’s Eyefinity capability that it’s pushing with the Radeon HD 5000 series. Most members of the Evergreen family (with the exception of the smallest chip) will be able to support up to three different displays simultaneously, as the 5870 can with its four outputs. One may connect either two DVI displays and one DisplayPort or one DVI, one HDMI 1.3a, and one DisplayPort. At most, that means a single 5870 could drive three four-megapixel displays at once. AMD has demonstrated and plans to release the Eyefinity6 edition of the Radeon HD 5870, which breaks new ground in the use of superscript in product naming. The Eyefinity6 backs up that bravado with an array of six compact DisplayPort connections that will allow it to feed up to six four-megapixel displays at once with a single GPU.

Gulp.
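
For a sense of scale, here’s the pixel math behind those claims, assuming a “four-megapixel display” means a 2560×1600 panel (the common 30″ resolution of the day; that’s my assumption, not AMD’s wording).

    # Pixel counts for the Eyefinity configurations mentioned above.
    panel_pixels = 2560 * 1600                 # ~4.1 megapixels per display

    for displays in (3, 6):
        print(f"{displays} displays: ~{displays * panel_pixels / 1e6:.1f} megapixels")
    # 3 displays: ~12.3 megapixels (a standard Radeon HD 5870)
    # 6 displays: ~24.6 megapixels (the Eyefinity6 edition)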

The other key to Eyefinity is a bit of driver magic that makes multiple monitors attached to a card appear to the OS as a single, large display surface. AMD’s drivers support a multitude of different possible configurations with varying monitor sizes and portrait/landscape orientations, some of which involve multiple display groups and thus multiple virtual display surfaces. Because all of the monitors in a display group appear as one to the operating system and applications, many games can simply run across multiple displays without any additional tweaking. Here’s a look at six narrow-bezel Samsung monitors running DiRT 2 on a single GPU:

And here’s a more extreme configuration AMD had cooked up at the press event for the Radeon HD 5870, with 24 total displays connected.

You might see that picture and think Eyefinity already works with CrossFire multi-GPU configurations, but that’s not the case yet. AMD says it is working on that, though.

I’m not entirely sure yet what to think of Eyefinity. On the one hand, I’m a bona-fide multi-monitor enthusiast myself, sporting six megapixels on my desktop as I write these words. I expect AMD to make big inroads into financial trading firms and other places where multi-display configurations are common. I’m pleased to see AMD paying renewed attention to multi-monitor capabilities, and just the sheer thought of having over 24 megapixels of display fidelity pushes my PC enthusiast buttons. On the other hand, I tend to think that, for most of us, large-screen gaming might be better conducted on a big HDTV or via a projector, where you’re pushing fewer pixels (and less in need of GPU horsepower) across a larger display, uninterrupted by bezels.

But we’ll see. I intend to address Eyefinity and gaming in more depth in a future article. Perhaps I’ll find a use for all of those pixels.

The graphics engine

AMD refers to the front-end of Cypress as the graphics engine, encompassing as it does the traditional setup engine, the command processor, and the thread dispatch processor. Notable new additions here include a second rasterizer and a next-generation tessellation unit.

The Cypress graphics engine. Source: AMD.

Keeping with the theme of doubling resources, AMD added a second rasterizer to make sure the GPU can convert polygon meshes into pixels at a rate sufficient to keep up with the rest of the chip. There are two separate units here, and I wondered at first whether taking full advantage of them might require the use of DirectX 11 and its multithreaded command processing. But AMD says the geometry assembly and thread dispatch units have been modified to perform the necessary load balancing in hardware transparently.

The tessellator is capable of turning lower-polygon models into higher-poly ones by using mathematical hints, such as higher-order surfaces. Radeons have had hardware tessellation units for several generations, as does the Xbox 360 GPU, but they’ve not been widely used because prior versions of DirectX haven’t exposed their capabilities. That all changes with DirectX 11, which exposes the tessellator for programming via two new shader types: hull shaders and domain shaders. Not only that, but Cypress’ tessellator is improved from prior iterations, so it can handle popular (as these things go) algorithms like Catmull-Clark in a single pass. The tessellator can adjust the level of geometric detail in real time, too. We should see vastly more geometric detail in terrain, characters, and the like once hardware tessellation goes into widespread use.
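
To make the idea concrete, here is a toy illustration of what tessellation buys you: a small amount of input geometry amplified into much more on the fly. This is a conceptual sketch in the simplest possible form, not the DirectX 11 hull/domain shader pipeline or AMD’s hardware algorithm.

    # Conceptual tessellation: split each triangle into four by inserting edge
    # midpoints. A real tessellator would then displace or project the new
    # vertices (e.g., onto a higher-order surface) to add genuine detail.
    def midpoint(a, b):
        return tuple((a[i] + b[i]) / 2 for i in range(3))

    def tessellate(tri):
        v0, v1, v2 = tri
        m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
        return [(v0, m01, m20), (m01, v1, m12), (m12, m20, v2), (m01, m12, m20)]

    tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
    for _ in range(3):                              # three levels of subdivision
        tris = [t for tri in tris for t in tessellate(tri)]
    print(len(tris))                                # 64 triangles from a single input triangle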

Notable by their absence are the interpolation units traditionally found in the setup engine. In keeping with a long-term trend in graphics processors, these fixed-function interpolators have been replaced by the shader processors. AMD has added interpolation instructions to its shader cores as a means of implementing a new DirectX 11 feature called pull-model interpolation, which gives developers more direct control over interpolation (and thus over texture and shader filtering). The shader core offers higher mathematical precision than the old fixed-function hardware, and it has many times the compute power for linear interpolation, as well. AMD CTO Eric Demers pointed out in his introduction to the Cypress architecture that the RV770’s interpolation hardware had become a performance-limiting step in some texture filtering tests, and using the SIMDs for interpolation should bypass that bottleneck.
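
The work being shifted onto the shader core here is ordinary attribute interpolation. As a rough illustration of the per-attribute math involved, here is a conceptual sketch using barycentric weights; this is not AMD’s actual shader code, just the general form of the operation.

    # Conceptual attribute interpolation done in the shader core: given a pixel's
    # barycentric coordinates within a triangle, blend the per-vertex attribute
    # values (a texture coordinate, a color component, etc.).
    def interpolate(attr_v0, attr_v1, attr_v2, i, j):
        # i and j are the barycentric weights for v1 and v2; v0 gets the remainder.
        w0 = 1.0 - i - j
        return w0 * attr_v0 + i * attr_v1 + j * attr_v2

    # Example: interpolate a texture coordinate at a pixel sitting near vertex 1.
    u = interpolate(0.0, 1.0, 0.0, i=0.7, j=0.1)
    print(u)   # 0.7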

Shader processing

Not only has Cypress doubled the amount of computing power available on a single GPU, but AMD has also added refinements to improve the per-clock performance, mathematical precision, and fundamental capabilities of its stream processors.

Here’s another look at the basic layout of the chip. Cypress has 20 SIMDs, each of which has 16 of what AMD calls thread processors inside of it. Each of those thread processors has five arithmetic logic units, or ALUs. Multiply it out, and you get a grand total of 1600 ALUs across the entire chip, or 1600 “stream processors” or “stream cores,” depending on which version of AMD nomenclature you pick. “Stream cores” is the latest, and it seems to be a bit inflationary. My friend David Kanter argues that what makes a core in computer architecture is the ability to fetch instructions. By that measure, Cypress would have 20 cores, since the thread processors inside of each SIMD march together according to one large instruction word.
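
Either way you count them, the totals multiply out from the same numbers; here’s the arithmetic from the paragraph above restated, nothing more.

    # Cypress shader core, multiplied out from the figures above.
    simds                 = 20
    thread_procs_per_simd = 16
    alus_per_thread_proc  = 5

    print(simds * thread_procs_per_simd * alus_per_thread_proc)  # 1600 "stream cores"
    print(simds)   # 20 cores, if a core is defined by its ability to fetch instructions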


A thread processor block. Source: AMD.

The organization of the thread processors is essentially unchanged from the RV770 and traces its roots pretty directly back to the R600. The primary execution unit is superscalar and five ALUs wide. That fifth ALU is a superset of the others, capable of handling more advanced math like transcendentals. The execution units are pipelined with eight cycles of latency, but the SIMDs can execute two hardware thread groups, or “wavefronts” in AMD parlance, in interleaved fashion, so the effective wavefront latency is four cycles. Multiply that latency by the width of the SIMD, and you have 64 pixels or threads of branch granularity, just as in R600.
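
Spelled out, that 64-thread figure falls straight out of the SIMD width and the two-wavefront interleaving just described:

    # Wavefront size / branch granularity, per the description above.
    pipeline_latency_cycles = 8     # ALU pipeline depth
    interleaved_wavefronts  = 2     # two wavefronts execute in alternating cycles
    simd_width              = 16    # thread processors per SIMD

    effective_latency = pipeline_latency_cycles // interleaved_wavefronts   # 4 cycles
    wavefront_size    = effective_latency * simd_width                      # 64 threads
    print(effective_latency, wavefront_size)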

Despite this similarity to past architectures, AMD has made a host of improvements to Cypress, some of which are helpful for graphics, others for GPU compute, and some for both. Demers told us DirectX 11, DirectCompute 11, and OpenCL are fully implemented in hardware, with no need for performance-robbing software emulation of features. Demers stopped just short of asserting that Cypress would support the next version of OpenCL fully in hardware, as well, but gave the distinct impression that this chip would likely be able to do so.

Cypress adds a number of instructions to support DirectX 11, DirectCompute, and other missions this chip may have, including video encoding. One general performance improvement is the ability to co-issue a MUL and a dependent ADD instruction in a single clock, sidestepping a pitfall of its superscalar execution units.

On the dedicated compute front, Cypress continues to execute double-precision FP math at one-fifth its peak rate for single-precision, but AMD has upped the ante on precision in several ways. Demers claims the GPU is compliant with the IEEE 754-2008 standard, with precision-enhancing denorms handled “at speed.” The chip now supports a fused multiply-add instruction, which takes the result of a multiply operation and feeds it directly into the adder without rounding in between. Demers describes FMA as a way to achieve DP-like results with single-precision datatypes. (This FMA capability is present in some CPU architectures, but isn’t yet built into x86 microprocessors, believe it or not—though Intel and AMD have both announced plans to add it.) The lone potential snag for full IEEE compliance, Demers told us, is the case of “a few numerical exceptions.” The chip will report that such exceptions have occurred, but won’t execute user code to handle them.
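
To see why a single rounding matters, here is a minimal illustration of FMA versus a separate multiply and add, using exact rational arithmetic to stand in for the wider internal datapath. Python doubles stand in for the GPU’s single-precision values; this is a sketch of the concept, not of Cypress’s hardware.

    from fractions import Fraction

    def fma(a, b, c):
        # Fused multiply-add: form a*b + c exactly, then round once at the end.
        return float(Fraction(a) * Fraction(b) + Fraction(c))

    a, b, c = 1.0 + 2**-30, 1.0 - 2**-30, -1.0
    print(a * b + c)      # separate MUL, then ADD: the product rounds to 1.0, so we get 0.0
    print(fma(a, b, c))   # fused: the single final rounding preserves the tiny result, -2**-60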

A block diagram of Cypress as a stream processor. Source: AMD.

Peak shader arithmetic (GFLOPS)

                     Single-issue   Dual-issue
GeForce 9800 GT           339           508
GeForce GTS 250           484           726
GeForce GTX 285           744          1116
GeForce GTX 295          1192          1788
Radeon HD 4850           1088
Radeon HD 4870           1200
Radeon HD 4890 OC        1440
Radeon HD 4870 X2        2400
Radeon HD 5850           2088
Radeon HD 5870           2720

AMD continues to devote more transistors to compute-specific logic. The local data stores on each SIMD, used for inter-process communication, have doubled in size to 32KB, and AMD’s distinctive global data share has quadrupled from 16 to 64KB. The memory export buffer can now scatter up to 64 32-bit values per clock, twice the rate of RV770. Cypress supports 32-bit atomic operations, as well; hardware semaphores enable global synchronization in “a few cycles,” according to Demers. However, Demers wouldn’t reveal whether or not Cypress’s memory controller is capable of supporting ECC memory, a capability that could be crucial in the burgeoning markets for GPU computing.

Demers made no bones about the fact that the primary market for this chip is graphics and gaming, but he was quick to point out that Cypress is also the most advanced GPU compute engine in the world. Given the current state of things, that claim seems credible—at least for the time being. The Radeon HD 5870’s peak processing power is formidable at 2.7 TFLOPS for single-precision math and 544 GFLOPS for double-precision. That’s more than twice the peak theoretical capacity of the GT200b’s fastest graphics card variant, the GeForce GTX 285, even if we generously include Nvidia’s co-issue feature in our FLOPS count.
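
Those peak numbers are simple products of ALU count and clock speed, under the usual assumption that each ALU retires one single-precision multiply-add (two flops) per cycle:

    # Peak arithmetic rates for the Radeon HD 5870, from the figures above.
    alus          = 1600
    clock_ghz     = 0.850
    flops_per_alu = 2                  # one single-precision multiply-add per cycle

    sp_gflops = alus * flops_per_alu * clock_ghz
    print(f"Single precision: {sp_gflops:.0f} GFLOPS")       # 2720
    print(f"Double precision: {sp_gflops / 5:.0f} GFLOPS")   # 544, at one-fifth the SP rate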

Of course, as with almost any processor, peak throughput is only part of the story. We don’t yet have much in the way of standard GPU compute benchmarks or applications we can run, but we can look at the directed tests for shader performance in 3DMark.

These results range from disappointing—slightly slower than the GTX 285 in the GPU cloth test—to astounding—considerably faster than two Radeon HD 4870s in the parallax occlusion mapping and Perlin noise tests.

Texturing and memory

Cypress’ memory hierarchy has been massaged, too. The doubling of the number of SIMDs on chip means twice as many 8KB L1 caches onboard, so the total L1 size doubles to 160KB. All told, these caches give the GPU as much as one terabyte per second of bandwidth for L1 texture fetches, according to Demers—a staggering number. The four L2 caches associated with the memory controllers have grown from 64KB to 128KB each, and deliver up to 435 GB/s of bandwidth between the L1 and L2 caches.
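
A quick tally of those cache figures, plus a plausibility check on the terabyte-per-second claim. The per-fetch byte width in the bandwidth check is my assumption, not an AMD-supplied number.

    # Cache totals for Cypress, from the figures above.
    simds, l1_kb     = 20, 8
    l2_parts, l2_kb  = 4, 128
    print(f"Total L1 texture cache: {simds * l1_kb} KB")    # 160 KB
    print(f"Total L2 cache: {l2_parts * l2_kb} KB")         # 512 KB

    # Plausibility check on the ~1 TB/s L1 fetch figure, assuming each of the
    # 80 texture units can pull 16 bytes per clock (an assumption on my part).
    tex_units, bytes_per_clk, clock_ghz = 80, 16, 0.850
    print(f"L1 fetch bandwidth: {tex_units * bytes_per_clk * clock_ghz:.0f} GB/s")  # ~1088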

AMD has held steady at four 64-bit memory controllers, yielding an aggregate 256-bit path to memory. GDDR5 data rates are up from 3.6 Gbps on the Radeon HD 4870 to 4.8 Gbps on the 5870, an increase of only a third. This is one potential weakness of a chip that has doubled in nearly every other department, but AMD contends the RV770 was memory bandwidth-rich and compute-poor, relatively speaking, and thus not taking full advantage of the memory bandwidth available to it. Cypress, the firm claims, will be more balanced.
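
The comparison is easy to lay out once you remember that GDDR5 moves four bits per pin per memory clock:

    # External memory bandwidth: GDDR5 transfers 4 bits per pin per clock cycle.
    def bandwidth_gbs(mem_clock_mhz, bus_width_bits):
        data_rate_gbps = mem_clock_mhz * 4 / 1000       # per-pin data rate in Gbps
        return data_rate_gbps * bus_width_bits / 8      # bytes per second across the bus

    print(bandwidth_gbs(1200, 256))   # Radeon HD 5870: 153.6 GB/s (4.8 Gbps)
    print(bandwidth_gbs(900, 256))    # Radeon HD 4870: 115.2 GB/s (3.6 Gbps)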

AMD has made other provisions to make the best use of the bandwidth available to Cypress. Chief among them are new block texture compression modes, contributed by AMD to the DirectX 11 spec and now available to all GPU makers. These compression modes are purported to offer higher quality than prior standards, with better signal-to-noise ratios and better handling of transparency. The basic technology has been adapted to work with both standard 8-bit-per-channel integer and FP16 HDR texture formats, with compression ratios up to 6:1 possible. Texture sizes of up to 16k by 16k are now supported, as well.

Another tweak that gets my IQ-junkie juices flowing is the move to a new anisotropic filtering algorithm that does not vary the level of detail according to the angle of the surface to which it is being applied. This is a hardware-level change to the texture filtering units. AMD claims it has implemented an algorithm that achieves the same results as the Microsoft Direct3D reference rasterizer, but does so more efficiently, with no additional performance cost compared to its prior GPUs.

We can see the impact of this change using the infamous tunnel test, pictured below. The idea here is that you’re looking down a 3D-rendered cylinder, and the mip-maps are different colors in order to show you where one ends and the other begins. Some level of blending between them is being applied by the GPU, also—that’s trilinear filtering. The closer the colored shape is to a circle, the less the aniso filtering algorithm varies according to the angle of inclination. In other words, rounder is better. The smoother the blending between the colors, the more trilinear filtering is being applied. Smoother is better.
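
For reference, this is all trilinear blending is doing; the tunnel test just makes the blend visible by giving each mip level its own color. A toy sketch, with flat-colored mip levels standing in for real bilinear samples.

    import math

    # Trilinear filtering in miniature: take a sample from each of the two nearest
    # mip levels and blend them by the fractional part of the level of detail.
    def trilinear(sample_mip, u, v, lod):
        lower = int(math.floor(lod))
        frac  = lod - lower
        c0 = sample_mip(lower, u, v)        # sample from the nearer mip level
        c1 = sample_mip(lower + 1, u, v)    # sample from the next mip level
        return tuple((1 - frac) * a + frac * b for a, b in zip(c0, c1))

    # Toy mip "sampler": each level is a flat color, as in the tunnel test.
    colors = {0: (1, 0, 0), 1: (0, 1, 0), 2: (0, 0, 1), 3: (1, 1, 0)}
    print(trilinear(lambda lvl, u, v: colors[lvl], 0.5, 0.5, 1.25))
    # (0.0, 0.75, 0.25): 75% of mip 1's color blended with 25% of mip 2's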


Anisotropic texture filtering and trilinear blending


Radeon HD 4870

Radeon HD 5870


GeForce GTX 285


GeForce GTX 285 HQ

Up to now, as you can see, Nvidia has performed better on this test than AMD. I don’t want to overstate the importance of that; the reality is that these things are much easier to spot in a contrived test like this one than in a real game, where the differences are very tough to see. Still, the 5870 aces this test in pixel-perfect fashion, setting a new standard for anisotropic filtering.

Not only that, but generally we’d be handing you a caveat right now about trilinear filtering on the Radeons, because AMD has long used an adaptive trilinear algorithm that applies more or less blending depending on the contrast between the textures involved. In the case of a test like this one, that algorithm always does its best work, because the mip maps are entirely different colors. In games, it applies less filtering and may not always achieve ideal results. However, for Cypress, AMD has decided to stop using that adaptive algorithm. Instead, they say, the Radeon HD 5870 applies full trilinear filtering all of the time by default, so the buttery smooth transitions between mip-map colors you’re seeing in the image above are in earnest.

In games, the impact of these image quality improvements is subtle, but you can expect to see less high-frequency noise in the form of things like texture crawling and sparkle on the 5870. I need to play with it some more, frankly, in order to find some good examples of the differences. I can tell you now that they’ll likely be very difficult to capture in a static screenshot. We’ll try to look into this topic more when we have time, though.

                     Peak pixel    Peak bilinear     Peak bilinear     Peak memory
                     fill rate     texel filtering   FP16 filtering    bandwidth
                     (Gpixels/s)   (Gtexels/s)       (Gtexels/s)       (GB/s)
GeForce 9800 GT           9.8          34.3              17.1              57.6
GeForce GTS 250          12.3          49.3              24.6              71.9
GeForce GTX 285          21.4          53.6              26.8             166.4
GeForce GTX 295          32.3          92.2              46.1             223.9
Radeon HD 4850           10.9          27.2              13.6              67.2
Radeon HD 4870           12.0          30.0              15.0             115.2
Radeon HD 4890 OC        14.4          36.0              18.0             124.8
Radeon HD 4870 X2        24.0          60.0              30.0             230.4
Radeon HD 5850           23.2          52.2              26.1             128.0
Radeon HD 5870           27.2          68.0              34.0             153.6

One reason AMD was able to make these image quality improvements is this GPU’s embarrassment of riches in the texture filtering department, where it more than doubles the peak theoretical capacity of the Radeon HD 4870. Cypress also has twice as many render back-ends or ROPs as the RV770, with two attached to each memory controller, so it has substantially more peak pixel fill rate and antialiasing oomph.
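
The 5870 rows in the table above come straight from unit counts times the 850MHz clock. Here’s the arithmetic, assuming each render back-end outputs four pixels per clock as on prior Radeons and that FP16 texels filter at half the INT8 rate; both are assumptions on my part, though they match the table.

    # Peak theoretical rates for the Radeon HD 5870, from the unit counts above.
    clock_ghz  = 0.850
    tex_units  = 20 * 4        # 4 texture units per SIMD, 20 SIMDs
    pixels_clk = 4 * 2 * 4     # 4 memory controllers x 2 back-ends x 4 pixels each

    print(f"Bilinear INT8 filtering: {tex_units * clock_ghz:.1f} Gtexels/s")      # 68.0
    print(f"Bilinear FP16 filtering: {tex_units * clock_ghz / 2:.1f} Gtexels/s")  # 34.0
    print(f"Pixel fill rate: {pixels_clk * clock_ghz:.1f} Gpixels/s")             # 27.2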

This color fill rate test usually ends up being memory-bandwidth limited. The Radeon HD 5870 has a little bit less memory bandwidth, in theory, than the GeForce GTX 285, and it works out that way in practice, too.

This test measures filtering rates with standard integer texture formats. The 5870 falls a little shy of its theoretical peak of 68 bilinear filtered Gtexels/s, but most of these GPUs do. Interestingly enough, the 5870 also falls behind the Radeon HD 4870 X2 and the GeForce GTX 285 when we get to the higher levels of anisotropy. Those other cards both have more memory bandwidth than the 5870, which may play a part. But remember, also, that they’re producing lower-quality results than the 5870. Notice that in its high-quality filtering mode, which still can’t match the 5870’s output, the GTX 285’s performance drops below the 5870’s.

This is a test of FP16 texture filtering, which is probably where we want to focus more of our attention, since this is the hard stuff. However, I still don’t know what the heck is going on with the units here. At the very least they’re off by a factor of 100, since the 5870’s peak theoretical FP16 filtering speed is 34 Gtexels/s and 3DMark is reporting 1868. This has been a long-standing problem with 3DMark Vantage, and the folks at FutureMark have stopped answering my emails about it. I’m open to suggestions for alternate FP16 texture filtering tests.

In the meantime, we’re going to assume the relative differences here are meaningful, at least, and notice that the Radeon HD 5870 is alone at the top of the charts. This is likely one of the places where most GPUs are interpolation limited, and the 5870’s shader-based interpolation allows it to outpace even two Radeon HD 4870s on an X2 card.

The render back-ends and antialiasing

The render back-ends in Cypress haven’t escaped notice, either. Besides doubling up, the individual render back-end units have gained some new capabilities. A new read-back path lets the chip’s texture units read from the compressed color buffers for antialiasing, which should improve performance with AMD’s custom-filtered AA modes. Performance when using multiple render targets has purportedly improved, and comically, AMD has built in a provision for fast color clears because some software vendors were prone to clearing the screen many times, for whatever reason.

The larger L2 caches adjacent to the render back-ends should mean less of a performance hit when going from 4X to 8X multisampled antialiasing, according to AMD, although that seems a bit academic since the hit on the RV770 was really quite small.

The biggest news on the antialiasing front, though, is the return of supersampling. Yep, it’s back! Most antialiasing methods in recent years have focused on object edges alone, especially the dominant form of AA, known as multisampling. Supersampling is more of a brute-force method in which every pixel onscreen is sampled multiple times, not just object edges. It’s terribly inefficient, of course, but it has the potential to improve image quality everywhere, which makes it the obvious choice if you can afford the performance cost. (Supersampling is de rigueur among professional animators and the like.)

Because it touches every pixel on the screen, supersampling can address difficult cases that multisampled AA modes won’t address—visible edges created by sharp color transitions or alpha transparencies in textures, shimmering in object interiors caused by pixel shaders without sufficient internal sampling rates, or any high-frequency noise your texture filtering algorithm has failed to eliminate. Using it in a game, you may simply find that objects onscreen appear to have more solidity to them.

AMD has gussied up its supersampling mode by toying with the sample patterns, too. I don’t have an image of it, and conventional tools won’t produce one, but the sampling pattern varies within a 2×2-pixel block, in an attempt to defeat our eyes’ propensity to recognize regular patterns. Using four different patterns helps on this front.
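
To make the distinction concrete, here is a toy sketch of supersampling a single pixel: the scene is shaded at several positions inside the pixel and the results are averaged, which is why it cleans up shader and texture aliasing as well as edges. This is a conceptual illustration only; AMD’s actual sample positions and its per-2×2-block pattern variation aren’t published in a form I can reproduce here.

    # Toy supersampling of one pixel: shade at several sample positions inside
    # the pixel and average the results. Multisampling, by contrast, runs the
    # shader once and reuses that color for every covered sample.
    def shade(x, y):
        # Stand-in pixel shader: a high-frequency pattern that aliases badly
        # when sampled only at pixel centers.
        return 1.0 if int(x * 50) % 2 == 0 else 0.0

    def supersample(px, py, offsets):
        samples = [shade(px + dx, py + dy) for dx, dy in offsets]
        return sum(samples) / len(samples)

    # A simple 4X sample pattern within the pixel (positions are illustrative).
    pattern = [(0.125, 0.375), (0.375, 0.875), (0.625, 0.125), (0.875, 0.625)]
    print(supersample(10.0, 20.0, pattern))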

The Radeon HD 5870 supports 2X, 4X, and 8X supersampling via a simple switch in the Catalyst Control Center, and the traditional box filters can be combined with custom-filtered AA modes to ratchet up the effective sample count. I’d like to write more about it, but I’ve only had a week to spend with the 5870 so far, and I think these antialiasing methods deserve their own article, at some point, complete with a suite of comparative screenshots—not that screenshots can capture the full impact of supersampling on image quality.

I’m going to tip my hand on the 5870’s gaming performance in order to give you a look at the performance hit caused by various antialiasing methods. Hang on…

Yeah, so looking at the orange bars that represent 4X multisampled AA, a single 5870 is indeed faster than a Radeon HD 4870 X2. Yikes.

And, back on task, the performance hit when going from 4X MSAA to 8X MSAA is indeed smaller on the 5870 than on the 4870 X2—although, like I said, both are pretty much academic at this point.

Notice how much larger the performance hit is for 8X MSAA on the GeForce GTX 285. Nvidia’s ROPs or render back-ends just don’t handle 8X multisampling as well as AMD’s, for some reason. Notice, though, that Nvidia pretty much makes up for its poor 8X multisampling performance via its coverage-sampling AA modes, which store fewer color samples than conventional multisampling. Nvidia’s 16X CSAA mode employs more coverage samples and fewer color samples than 8X multisampling and delivers arguably comparable image quality with essentially no performance hit versus 4X MSAA.

For this reason, I’ve limited the bulk of my performance testing to 4X multisampled AA, where the GPUs are on common ground.

Oh, and yes, the performance hit with supersampling is brutal, but at 4X, the 5870 still achieves a very playable 47 FPS average in Left 4 Dead at 2560×1600 with 16X anisotropic filtering. If you have the power, why not use it?

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 6.4 GT/s (3.2GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F7
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers
Graphics:
Sapphire Radeon HD 4890 OC 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
Radeon HD 4870 X2 2GB PCIe with Catalyst 8.66-090910a-088431E drivers
Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
Dual Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
Dual Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
GeForce GTX 295 2GB PCIe with ForceWare 190.62 drivers
Hard drive: WD Caviar SE16 320GB SATA
Power supply: PC Power & Cooling Silencer 750W
OS: Windows 7 Ultimate x64 Edition RTM
OS updates: DirectX March 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

True to form, the 5870 tracks closely with the Radeon HD 4870 X2 here, matching the performance of two prior-gen chips with a single piece of silicon. That’s sufficient to put the 5870 comfortably ahead of the GeForce GTX 285, but not quite enough to push it past the dual-GPU GeForce GTX 295.

Wolfenstein

We recorded a demo during a multiplayer game on the Hospital map and played it back using the “timeNetDemo” command. At all resolutions, the game’s quality options were at their peaks, with 4X multisampled AA and 8X anisotropic filtering enabled.

AMD’s new hotness again tracks closely with the 4870 X2, but its performance lead over the GeForce GTX 285 narrows here—and pretty much disappears altogether in multi-GPU mode. Nvidia has a long history of relatively high performance in id Software’s OpenGL-based game engines, and that trend appears to be continuing here.

Of course, that’s all relative. The reality is that even the slowest card pushes a reasonably decent 46 FPS in this game at high quality levels and a four-megapixel resolution, and every other card we tested averages over 60 FPS. If we want to challenge the 5870, game developers will have to start making use of DirectX 11 and more advanced shader effects.

Left 4 Dead

We also used a custom-recorded timedemo with Valve’s excellent zombie shooter, Left 4 Dead. We tested with 4X multisampled AA and 16X anisotropic filtering enabled and all of the game’s quality options cranked.

The 5870 returns to form here, dominating both the single- and multi-GPU contests while performing almost exactly like a Radeon HD 4870 X2. Again, the game itself barely challenges these GPUs.

Tom Clancy’s HAWX

Last time we tested with HAWX, we used FRAPS to record frame rates while we played the game. Doing so does work, but I had some trepidation about its repeatability, because of one thing: when you take off straight up, pointed at the sky, frame rates tend to skyrocket. The amount of time you spend nose-up in the game will affect frame rates rather profoundly. And personally, I can’t play this game well without accelerating straight up from time to time. Otherwise, I run into the ground, or I just can’t get targets lined up quickly.

As a result, I decided this time to use the built-in benchmark tool in HAWX, which seems to do a good job of putting a video card through its paces. We tested this game in DirectX 10 mode with all of the image quality options either turned on or set to “High”, along with 4X multisampled antialiasing. Since this game supports DirectX 10.1 for enhanced performance, we enabled it on the Radeons. No current GeForce GPU supports DX10.1, though, so we couldn’t use it with them.

Hmm… so the 5870 isn’t much faster than the other single-GPU cards in the mix here, though it does scale nicely in CrossFire mode.

To give you some idea of the effect of DirectX 10.1 on performance here, the 5870’s frame rate at 2560×1600 dropped to 52 FPS with DirectX 10, a whole four frames per second. The 4870 X2 took a bigger hit, going from 73 to 56 FPS with the change.

Sacred 2: Fallen Angel

I must confess that I’ve spent the vast majority of my gaming time in the last couple of months playing Sacred 2. A little surprisingly for an RPG, this game is demanding enough to test even the fastest GPUs at its highest quality settings. And it puts all of that GPU power to good use by churning out some fantastic visuals.

We tested at 2560×1600 resolution with the game’s quality options at their “Very high” presets (typically the best possible quality setting) with 4X MSAA.

Given the way this game tends to play, we decided to test with fewer, longer sessions when capturing frame rates with FRAPS. We settled on three five-minute-long play sessions, all in the same area of the game. We then reported the median of the average and minimum frame rates from the three runs.

This game also supports Nvidia’s PhysX, with some nice GPU-accelerated add-on effects if you have a GeForce card. Processing those effects will put a strain on your GPU, and we’re already testing at some pretty strenuous settings. Still, I’ve included results for the GeForce GTX 295 in two additional configurations: with PhysX effects enabled in the card’s default multi-GPU SLI configuration, and with on-card SLI disabled, in which case the second GPU is dedicated solely to PhysX effects. It is possible to play Sacred 2 with the extra PhysX eye candy enabled on a Radeon, but in that case, the physical simulations are handled entirely on the CPU—and they’re unbearably slow, unfortunately.

In another strong showing, the new Radeon outperforms both teams’ dual-GPU cards, the 4870 X2 and the GTX 295. In CrossFire, it’s money.

You can see the performance hit caused by enabling PhysX at this resolution. On the GTX 295, it’s just not worth it. Another interesting note for you… As I said, enabling the extra PhysX effects on the Radeon cards leads to horrendous performance, like 3-4 FPS, because those effects have to be handled on the CPU. But guess what? I popped Sacred 2 into windowed mode and had a look at Task Manager while the game was running at 3 FPS, and here’s what I saw, in miniature:

Ok, so it’s hard to see, but Task Manager is showing CPU utilization of 14%, which means the game—and Nvidia’s purportedly multithreaded PhysX solver—is making use of just over one of our Core i7-965 Extreme’s eight front-ends and less than one of its four cores. I’d say that in this situation, failing to make use of the CPU power available amounts to sabotaging performance on your competition’s hardware. The truth is that rigid-body physics isn’t too terribly hard to do on a modern CPU, even with lots of objects. Nvidia may not wish to port its PhysX solver to the Radeon, even though a GPU like Cypress is more than capable of handling the job. That’s a shame, yet one can understand the business reasons. But if Nvidia is going to pay game developers to incorporate PhysX support into their games, it ought to work in good faith to optimize for the various processors available to it. At a very basic level, threading your easily parallelizable CPU-based PhysX solver should be part of that work, in my view.
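
For the record, the arithmetic behind that reading: Task Manager reports utilization across all hardware threads, so 14% on this CPU works out to roughly one busy thread.

    # What a 14% Task Manager reading means on a Core i7-965.
    hardware_threads = 8        # four cores, two threads each
    cores            = 4
    utilization      = 0.14

    print(f"{utilization * hardware_threads:.2f} hardware threads busy")  # ~1.12
    print(f"{utilization * cores:.2f} cores' worth of work")              # ~0.56 of one core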

Crysis Warhead

But will it run Crysis?

Although we’ve had a bit of a tough time finding games that will really push the limits of the Radeon HD 5870, this game engine is certain to do it. In a true test of GPU power, we turned all of the quality settings in Warhead up to their maximums using the cheesily named “Enthusiast” presets. The game looks absolutely gorgeous at these settings, but few video cards will run it smoothly. In fact, we chose to test at 1920×1200 rather than 2560×1600 because it appears at least some of the cards have serious trouble at the higher resolution, almost as if they were running out of video RAM. Anyhow, this is a pretty brutal test, tough enough to challenge even our fastest multi-GPU setups.

For this game, we tested each GPU config in five 60-second sessions, covering the same portion of the game each time. We’ve then reported the median average and minimum frame rates from those five runs.

Told you this would be a tough test. The 5870’s performance once again mirrors that of the Radeon HD 4870 X2. However, as you can see, the 5870 experienced a couple of odd performance dips in CrossFire mode at certain points during our session. This problem occurred in multiple sessions and had a real impact on playability, unfortunately. I expect AMD has some driver work to do on this front.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

Look at that. A single 5870 draws less power at idle than any other card we tested, besting even the prior champ, the GeForce GTX 285. And two 5870s in CrossFire draw less power at idle than a single Radeon HD 4890. Very nice.

The 5870 also draws the least power under load. Given its performance, the overall power efficiency is astounding.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The 5870 has best-in-class acoustics at idle and the second-lowest noise level under load.

GPU temperatures

For most of the cards, I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card. However, GPU-Z didn’t yet know what to do with the 5870, so I had to resort to running a 3D program in a window while reading the temperature from the Overdrive section of AMD’s Catalyst control panel.

The days of pushing 95°C on the GPU core are, happily, fading away. AMD adjusted its fan speed thresholds on the Radeon HD 4890, and it’s stuck to the same formula here. That big Bat-cooler holds the 5870 to a more comfortable 76°C, even though it’s relatively quiet. And I’m pleased to report that our 5870 eventually dropped back down to 39°C at idle after running this test.

Conclusions

Well, Sherlock, what do you expect me to say? AMD has succeeded in delivering the first DirectX 11 GPU by some number of months, perhaps more than just a few, depending on how quickly Nvidia can get its DX11 part to market. AMD has also managed to double its graphics and compute performance outright from one generation to the next, while ratcheting up image quality at the same time. The Radeon HD 5870 is the fastest GPU on the planet, with the best visual output, and the most compelling set of features. Yet it’s still a mid-sized chip by GPU standards. As a result, the 5870’s power draw, noise levels, and GPU temperatures are all admirably low. My one gripe: I wish the board wasn’t quite so long, because it may face clearance issues in some enclosures.

If there’s trouble brewing at all here, it won’t come immediately from the GPU competition, but from game consoles and the developers who have chosen them almost exclusively as their performance targets. Games need to move on and take advantage of the many, many multiples of console-class graphics and processor power now available on the PC. If they don’t, AMD may have trouble selling this incredibly fast chip to consumers simply because the applications don’t take full advantage of it.

AMD seems to have recognized this problem when it built support for multiple large monitors into its chip and tweaked its software to support gaming across multiple displays, along with high-quality texture filtering and advanced antialiasing features like custom filters, edge-detection, and supersampling. I’m happy to have all of it, personally. I’ll acknowledge that not everyone needs a GPU this powerful to play today’s games, but I’ll take all of the graphics power I can get, especially if the GPU can use it to produce higher quality pixels. In my book, the Radeon HD 5870 is a steal at $379.

As ever in PC hardware, though, you may find even better values a rung or two from the top of the performance ladder. In this case, I’m thinking of the Radeon HD 5850, which sure looks promising at $259. I’m curious to see what the next week will bring on that front.

Comments closed
    • Fighterpilot
    • 10 years ago

    The silence from Maroon1 and other rabid NV fanboys is deafening LOL
    ATi fans everywhere say “hi” 🙂

    • swaaye
    • 10 years ago

    Just to refresh my brain, I went back and looked at the 8800GTX review. NV’s big super-secret unified shader bombshell that received pretty much universal praise. The review says it launched at between $600 and $650. Heh.

    Good ‘ol G80 didn’t always win against 7950GX2, 7900GTX SLI or X1900XT CF, either.

    So I’m confused on the flak that I see 5870 get. 🙂

    • Fighterpilot
    • 10 years ago

    The “previous generation” referred to by TR was in respect to the 4870 not the 4890 which is dissimilar in both specification and performance.
    Given the similar performance of HD5870 to /[

      • flip-mode
      • 10 years ago

      Pot meet kettle. I’m not attacking your character. 1.37 does not equal 2

        • Meadows
        • 10 years ago

        An HD 5870 runs almost exactly as fast as an HD 4870 X2.
        I don’t see any “1.37”, but 2 equals 2, you see.

        I find the claims valid and acceptable, and the card itself impressive.

          • flip-mode
          • 10 years ago

          That’s cool. The thing I have said but that seems to have been ignored: it is petty and disagreeable to argue over it (see post 308: q[

            • SubSeven
            • 10 years ago

            Well said. All in all the 4870’s drivers are so optimized at this point that you are right, two often times do outperform because of scaling issues and such. Also, don’t forget the 5870 drivers are quite new and over time the performance of the cards will improve significantly (if AMD keeps up with the great work they have been doing thus far) and at maturity, the card will be 2x as fast or maybe even faster. All in all, the notion of 2x is very relative and the conclusion one reaches depends on the base of comparison and assumptions used, as well as configurations used, and tests that were performed. With the right tests, Scott could have made the 5870 be 3x faster; so again, it is all very relative and in my opinion not worth the multipost heated argument we had where each slapped the other with zingers and other snide comments (myself included). I just want to conclude by saying that how you view performance is up to you, and that Flip is on point by saying that the 5870 is currently reigning king of the hill with no equivalent. So let’s end this debate because we pretty much beat this horse into its 3rd death already.

            • swaaye
            • 10 years ago

            The thing is, the 4870 might be more different from the 3870 than the 4870 is from the 5870.

            We’re not talking a whole new architecture here. Not even close. RV770 involved lots of rather major changes. This chip is the fourth take on R600’s tech. I would frankly be surprised if the drivers are a mess. But who knows.

            I’m sure that DX11 can be improved because it’s basically at its earliest stage right now. But we aren’t comparing DX11 results.

    • kilkennycat
    • 10 years ago

    Scott:-

    ./[

      • ish718
      • 10 years ago

      I guess this is where Microsoft’s DirectCompute steps in O_O

      • Drive
      • 10 years ago

      No, the problem is nVidia offer CUDA PhysX with *[

        • ish718
        • 10 years ago

        Besides, Physx is nothing special and it isn’t better than the other physics APIs out there. At the end of the day, it’s up to the programmer.
        We have witnessed crappy implementations of physx, whether it looks stupid or it kills performance by a lot.

        BTW, Havok is used more than PhysX in console games.

    • flip-mode
    • 10 years ago

    300

      • Meadows
      • 10 years ago

      /[

        • flip-mode
        • 10 years ago

        Nice.

      • Krogoth
      • 10 years ago

      MADNESS? THIS IS AMD!

      • flip-mode
      • 10 years ago

      333

        • SecretMaster
        • 10 years ago

        334

    • ub3r
    • 10 years ago

    Great to see AMD released this card early.

    This means they have already started working on the HD68xx GPU.

    Meanwhile, Nvidia is dieshrinking their GT2xx and rethinking a new name for it.

    • Silus
    • 10 years ago

    Here’s a chart that someone put together, using most of the HD 5870 reviews in the web:

    http://www.madshrimps.be/vbulletin/f22/amd-radeon-hd-5870-hd-5850-performance-chart-66465/

    Using the HD 5870’s performance as the baseline (100%):
    - The HD 5850 has 88% of that performance
    - The GTX 285 has 81.6% of that performance
    - The HD 4890 has 74.1% of that performance

    As for dual-GPU cards:
    - The GTX 295 has 110.8% of that performance
    - The HD 4870 X2 has 104.1% of that performance

    So on average the HD 5870 is 25.9% faster than a HD 4890 and 18.4% faster than a GTX 285.

      • SubSeven
      • 10 years ago

      Are you still trying to rant about the claim that AMD doubled the performance of the previous-generation 4870s? Dood, no one really cares that it is not exactly 100%; get over it and accept the fact that AMD>>Nvidia, at least for the moment.

        • Silus
        • 10 years ago

        Where am I ranting ?

        It seems you don’t like those numbers. So touchy about a chart that just shows the average performance increase, collected through a bunch of reviews…

        And yes, TR’s conclusion is wrong. The HD 5870 in no way doubles the performance of the previous generation. The point of having a constant readership is precisely to point out not only that the review is good, but also that it needs correcting in some instances, if it needs it of course.

          • indeego
          • 10 years ago
            • Silus
            • 10 years ago

            I’m not talking about doubling specs. That much is quite obviously a fact. I’m talking about doubling actual performance (in actual games), which is why I asked Scott, in my very first comment in this thread, if he meant “theoretical performance”, because it certainly isn’t double in games.

            That’s all.

            • WaltC
            • 10 years ago

            I think it’s likely that most games, especially as tested on some sites, don’t come close to pushing everything possible out of the hardware. Pushing the hardware harder would equate to only looking at 8x FSAA testing, or thinking about the fact that the 58xx series is so strong that ATi felt comfortable bringing back SS FSAA, etc. Yet even looking at those things, I think it’s likely that we don’t yet have the software available that will show the real power of these gpus. A poor analogy might be comparing a modern 4-core cpu to an 8-year-old single core cpu when running a browser or word processor. The older cpu might look 91.2% as good as the modern cpu when running a browser or a word processor–but start throwing a lot of number crunching at the old single core running a multithreaded program, and the performance spread will become gigantic…;)

            Basically what these benchmarks to date are doing is looking at yesterday’s software running on tomorrow’s gpus.

            • ish718
            • 10 years ago

            Very good point.
            Most PC games now are console ports and are not optimized for latest PC hardware. So I guess games are not a definitive benchmarking tool for GPUs…

            O_O Hell, most PC exclusives are not optimized for or take advantage of the latest hardware.

            • flip-mode
            • 10 years ago

            I don’t see how it is a good point. Besides Silus’s math fail, the card still does not show twice the performance of either a 4870 or a 4890. But, whatev, I think it is a pretty negligible issue.

            • SubSeven
            • 10 years ago

            I have to respectfully disagree with you. First of all, comparing the 5870 to the 4890 is unfair. If you want to see a doubling of the performance, compare it to the 4870. There is no 4870 in the review but the 4870 X2 is there, so you can use that in your basis of comparison. The X2 and the 5870 are pretty much identical for all intents and purposes, so I don’t see why the 5870 is not twice as fast as the 4870.

            • flip-mode
            • 10 years ago

            Well, it’s simple math (which should be respected after all the criticism Silus has endured). Take the 4870 scores (fine, let’s use the 4870 instead of the 4890, for the sake of argument) and double them. That will give you a figure that is twice the performance of a 4870. Don’t start bending the rules of math.

            • Fighterpilot
            • 10 years ago

            Silus deserves all the criticism he “endured” and you are trying to argue that black is in actual fact …white.
            To quote Mr S.Wasson :”AMD has also managed to _[

            • flip-mode
            • 10 years ago

            q[

            • SubSeven
            • 10 years ago

            I don’t understand, why are you using the 4890? Why are you comparing an apple to an orange? Clearly your mathematical prowess is slightly higher than Silus’, but the logic behind the computations is as erroneous as Silus’ math. Flip, from the history of your replies here that I have read over my membership at TR, I have come to respect your opinion and enjoy your replies, as they are often insightful and clever. Right now, however, you are quite off your usual self. At the risk of sounding like a total asshole, I’ll just leave this as is, and agree to disagree.

            • flip-mode
            • 10 years ago

            I just need to back away from the keyboard, I suppose.

            • Silus
            • 10 years ago

            Thanks…I guess. That was the point I was trying to make ever since my very first post in this thread, despite my “dismal math skills” later on 🙂

            The HD 5870 does indeed double the THEORETICAL performance of previous gen in many instances (except in memory bandwidth), but that does NOT translate into actual real-world performance doubling. Not even close.
            And that’s why I asked Scott if he meant “theoretical performance”…

            • green
            • 10 years ago

            nice. and alternatively we could compare the 2 different generations by taking numbers from the same site, using an intersection of benchmarks from the following reviews:

            http://www.techreport.com/articles.x/16681/6
            http://www.techreport.com/articles.x/17618/9
            http://www.techreport.com/articles.x/16681/7
            http://www.techreport.com/articles.x/17618/11

            from which we find only Far Cry 2 and Left 4 Dead cross over (Warhead does as well, but different quality settings were used):

            FC2:
            1680x1050 = 76 / 43.1 = 1.76
            1920x1200 = 66 / 35.4 = 1.86
            2560x1600 = 46 / 11.0 = 4.27

            L4D:
            1680x1050 = 204 / 127.8 = 1.60
            1920x1200 = 182 / 107.5 = 1.69
            2560x1600 = 122 / 66.1 = 1.84

            Average = 2.17

            unfortunately though it’s not a valid comparison: one problem is that it’s comparing 2 different operating systems, another is that the max res of FC2 is skewing the average like crazy
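
            A quick sketch in Python, for anyone who wants to see how much that one outlier moves the average (the six ratios are the ones quoted above):

              # Ratios of HD 5870 to HD 4870 frame rates quoted above (FC2 and L4D, three resolutions each).
              ratios = [1.76, 1.86, 4.27, 1.60, 1.69, 1.84]

              mean_all = sum(ratios) / len(ratios)
              trimmed = [r for r in ratios if r != max(ratios)]   # drop the 2560x1600 FC2 outlier
              mean_trimmed = sum(trimmed) / len(trimmed)

              print(f"Average of all six ratios:       {mean_all:.2f}x")      # ~2.17x
              print(f"Average without the FC2 outlier: {mean_trimmed:.2f}x")  # ~1.75x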

            • Silus
            • 10 years ago

            Because it isn’t ?

            The HD 4870 X2 does not double the performance of a single HD 4870 in most cases, so why keep insisting that it does ? And the HD 4890 is not much faster than a HD 4870, as you seem to think it is for some odd reason. It’s the same GPU (with 3 million more transistors) that usually clocks higher. That’s it.

            • ish718
            • 10 years ago

            Until we have applications that can fully utilize the power of HD5870, it would be quite impossible to determine whether it’s actually twice the performance of HD4870. Wouldn’t you agree?

            Not to mention the HD5870 drivers are still new.

          • SubSeven
          • 10 years ago

          Ok, first of all I like those numbers; they seem quite reasonable to me. Second of all, I LOVE numbers, I deal with them every day and performance analysis is one of my work functions. What I don’t understand is why you have so many problems with that notion? Your own numbers suggest that the 4870X2 and the 5870 pretty much have identical performance. So if the 4870X2 is twice the performance of the 4870, and the 5870 is about the same as the 4870X2, then by simple chain-rule logic you should deduce that the 5870 is two times faster than the 4870. Are you comparing the 4890 to the 5870? Don’t mean to sound like a dick here, but to everyone else this makes sense and they agree. You are the only one that has a massive problem with that statement and is bitching about it.

            • Silus
            • 10 years ago

            You are the one having problems with a simple question I asked Scott. The conclusion states that the HD 5870 is double the performance of the previous generation. Which is incorrect, by Scott’s own benchmark results, which is why I asked if he meant “double theoretical performance”, in which case correcting the article would be necessary.

            As I said, a regular/constant readership isn’t about just praising everything the site owners do. It’s also about pointing out erroneous info that may appear (and needs correcting), to improve the content and value of the information present in the articles.

            • indeego
            • 10 years ago

            I think you are stating that you want semantics cleared up where really the only person that would satisfy is yourself.

            Simply based on the idle and load power draw and performance compared to the previous generation the conclusion likely still stands.

            We’ve pointed out to you other sites that have also used the “doubling” moniker, but that doesn’t appear to satisfy you… hence it’s time to

            /[

      • Meadows
      • 10 years ago

      Wrong. 35% faster than the HD 4890 and 22.5% faster than the GTX 285.

      Maths wasn’t your strong side, was it?

        • SubSeven
        • 10 years ago

        Meadows is on point.

        • Silus
        • 10 years ago

        And I would say your math sucks. Don’t even know where you got those numbers. The chart is very simple. Base line 100% is the HD 5870 and all others have their performance represented as a percentage relative to the HD 5870. The GTX 285 has 81.6% of the performance of the HD 5870 and the HD 4890 has 74.1% of the performance of the HD 5870. That makes the HD 5870 (on average) 18.4% faster than a GTX 285 and 25.9% faster than a HD 4890.

          • SubSeven
          • 10 years ago

          No sir, it is your math that is in error. You are saying 25.9% only because you are doing straight subtraction (100 – 74.1). You cannot do simple subtraction when the units are percentages, because a percentage is a function of something else. For example, if A is 10% and B is 5%, you cannot say A is bigger than B by 5%; that is WRONG. A is bigger than B by 100%. This is so because a percentage represents a relationship, or a ratio. In your case, based on the numbers provided, what the data states is that, on average, if the 5870 gets 100 FPS the 4890 would get 74.1 FPS. This is a difference of 25.9 FPS. 25.9 FPS is about 35% of 74.1 FPS, which is why Meadows was saying that, based on the data you provided, the 5870 is 35% faster than the 4890, not 25% faster.

            • Silus
            • 10 years ago

            Did you even SEE the graph, before making that comment ?

            The horizontal line represents percentages and what it says is that for the baseline of 100% (whatever the actual fps is), the GTX 285 is, on average, 81.6% of that.

            That means that if the HD 5870 scores 35 fps on a certain game, the GTX 285 will score, on average, around 28 fps. If the HD 5870 scores 100 fps, then the GTX 285 will score, on average, around 81 fps.

            And the creator of the graph explains in the description too: “The HD 5870 is however the fastest single GPU out there now, besting the GTX 285 by almost 20%!”

            • Rza79
            • 10 years ago

            Silus you need to go back to school!
            I did see the graph and Meadows and SubSeven are both right.
            If the 5870 were only 18.4% faster than the GTX 285, then simple maths would give: 81.6 * 1.184 = 96.6 != 100.
            Obviously the 5870 is 22.5% faster than the 285.
            You just can’t subtract numbers that are relative to each other.
            The GTX 285 has 81.6% of the performance of the 5870, which makes it 18.4% slower, and makes the 5870 22.5% faster than it.
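
            To spell that out with the chart’s own numbers (a minimal sketch in Python; 81.6 and 74.1 are the percentages quoted above, normalized to the HD 5870 = 100):

              # Relative performance from a chart normalized to the HD 5870 = 100%.
              # "X% slower" and "Y% faster" use different baselines, which is the whole argument here.
              hd5870 = 100.0
              cards = {"GTX 285": 81.6, "HD 4890": 74.1}   # percent of the HD 5870's performance

              for name, score in cards.items():
                  slower = (hd5870 - score) / hd5870 * 100   # gap relative to the HD 5870
                  faster = (hd5870 - score) / score * 100    # gap relative to the slower card
                  print(f"{name}: {slower:.1f}% slower than the HD 5870; "
                        f"the HD 5870 is {faster:.1f}% faster than it")

              # GTX 285: 18.4% slower; the HD 5870 is 22.5% faster
              # HD 4890: 25.9% slower; the HD 5870 is 35.0% faster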

            • ihira
            • 10 years ago

            Let me explain this in a simple manner:

            If A is 100 and B is 60, would A be 40% faster than B?
            No: 40% more than B is 60 + (60 * 0.4) = 84, not 100, so that would be wrong.
            This is what you’re doing here.

            • SubSeven
            • 10 years ago

            I give up; someone draw him a picture.

            • green
            • 10 years ago

            l[

            • Kharnellius
            • 10 years ago

            If you change the following statement:

            “The HD 5870 is however the fastest single GPU out there now, besting the GTX 285 by almost 20%!”

            To…

            “The HD 5870 is however the fastest single GPU out there now, besting the GTX 285 by almost 20 percentage points!”

            …people around here would not be getting their panties in nearly as much of a bunch. Even better would be:

            “The HD 5870 is however the fastest single GPU out there now, besting the GTX 285 by almost 20 percentage points (compared to the base)!”

            • Meadows
            • 10 years ago

            Or it should just be changed to 22.5%.

            • Silus
            • 10 years ago

            Yeah, but they prefer to continue with the “the HD 5870 is double the performance of previous gen” thing.

            I may have failed the math before, but they fail reality, because the HD 5870 is nowhere near double the performance of the previous gen 🙂

            • Meadows
            • 10 years ago

            Not near, because it’s exactly double. Check the benchmarks, maths wiz.

            • flip-mode
            • 10 years ago

            We’re all looking at the same benchmark Meadows. No reason to fan the flames. Your definition of “double”, in this case, is different from his. 😉

          • Meadows
          • 10 years ago

          g{

      • indeego
      • 10 years ago

      39% in the hizzouse! So behind the curve I am

        • Silus
        • 10 years ago

        What do you mean “No TR represented” ? They include the link to TR’s review and it was probably used to collect data for the chart too.

        And it seems they combined all the data, from the several different games, in several different reviews and computed an average performance increase, with the HD 5870 numbers as the baseline.

        I’m at 47.3% of the performance of a HD 5870 🙂

      • asdsa
      • 10 years ago

      So GTX 295 is only 10% faster than HD 5870. Impressive I say.

    • liquidsquid
    • 10 years ago

    I don’t know if anyone else said it but: Thank GOD those idiotic graphics stickers on the heat-sink case are gone on this. I have to say a chick holding pistols makes my stomach do flips on how annoying and childish it is. Just make sure the guts work well and it fits in my case, and I don’t give a rat’s patootie what graphic is strapped to it, just preferably not something childish. Also preferably something that does not compromise heat dissipation. (And no darned LEDs to light up the inside of my window-free case).

    It is the monitors and what is on THEM that should look good.

    -LS

    • alphaGulp
    • 10 years ago

    That single-core physX discovery is pretty big news, for all that nVidia’s exclusive physics engine approach is doomed to fail anyway IMO. With the average gaming machine having 2+ cores nowadays, with roughly half of the physics-capable GPUs being made by AMD (and soon Intel), and with physics simulations becoming commonplace in games, I don’t believe that nVidia can continue to ‘sell’ physX to game developers for much longer, regardless of how much nVidia might be willing to actually pay ’em for it through their sponsorship program.

    I always had a feeling that the original PhysX company was gimping their CPU version, and didn’t think that nVidia, with its history of poor ethics, would have made any changes to that, but I never imagined they would make it so obvious. Then again, in their defense, I imagine fixing this (i.e. actually having the product do what they have led people to believe it does) would cost a lot. I wonder sometimes if nVidia realizes how much their brand risks suffering each time they do something like this. Then again, with the vast majority of people ultimately not caring about even the Sony CD rootkit from a few years ago, maybe they’re right to do it, as far as their stockholders are concerned.

    As an aside, I wish MS had taken more of a lead on this: nobody gives a crap about directx 11, whereas an MS physx 1.1 or 2.0 would have been awesome. I know they are working on it, but as far as I’m concerned they started too late (dx10 should have given them the hint that they should focus elsewhere) and/or have been progressing too slowly (the parallelization of this type of computing is well understood and there are several frameworks already out there to be compared against).

    Incidentally, I wonder if MS will make that physics API win7 only – if so that’s pretty funny since it basically removes their monopoly-based advantage from any short/medium term game development decisions. If they’re smart, maybe they’ll make it single core or GPU for XP & above, with only the win7 version being multi-core, – meh: let’s see what it is when it comes out!

    ag

    P.S.: Kudos to TR for thinking of testing this (a quick search did not return anyone discussing this topic / fact). You guys are the best!

    P.P.S.: in case you were curious about it, I generally sign posts when I want to have a post-scriptum, since otherwise it feels weird. Makes sense, neh?

    P.P.P.S: this is the post-post-post-scriptum, in case you were also curious about that… I don’t know why, but I dig latin acronyms (eg, ie, vs, qed, etc…) & like knowing what they mean 😛

    Ok! I’ve spent waaaaaaay too much time on this post – bye!

      • Meadows
      • 10 years ago

      Microsoft already has physics. DirectCompute.

        • UberGerbil
        • 10 years ago

        DirectCompute isn’t a physics API. It’s an abstraction for doing GPGPU computation. It’s the equivalent of OpenCL, not PhysX. You certainly can build a physics API on top of DirectCompute, but that’s not what it does out of the box.

        Software APIs under the control of a single hardware vendor, no matter how freely licensed or “open,” generally fail. (See GLIDE, various sound APIs from Creative and others, even some Intel initiatives). There’s too much suspicion on the part of the non-owning IHVs, and too much temptation to tilt the field on the part of the owner. Microsoft historically has stepped into these situations by offering a 3rd-party solution (and throwing resources around for developer support) that everybody hates equally but supports as the least worst thing.
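
        To make the distinction concrete with a rough sketch (plain NumPy standing in for a GPU compute layer, since the point here is the abstraction level rather than any particular API): a compute layer hands you data-parallel primitives, and a physics engine is the thing someone still has to build on top of them.

          import numpy as np

          # Conceptual sketch only: NumPy arrays stand in for GPU buffers and the vectorized
          # ops stand in for compute dispatches. A compute API (DirectCompute, OpenCL, CUDA)
          # gives you this kind of data-parallel primitive; a physics engine (broad-phase,
          # constraint solvers, etc.) is a library layered on top of it.
          n = 10_000
          pos = np.random.rand(n, 3).astype(np.float32)            # particle positions
          vel = np.zeros((n, 3), dtype=np.float32)                 # particle velocities
          gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
          dt = np.float32(1.0 / 60.0)

          def step(pos, vel):
              """One explicit-Euler step with a crude floor plane at y = 0."""
              vel = vel + gravity * dt
              pos = pos + vel * dt
              below = pos[:, 1] < 0.0                              # particles under the floor
              pos[below, 1] = 0.0
              vel[below, 1] *= -0.5                                # bounce with damping
              return pos, vel

          for _ in range(120):                                     # simulate two seconds at 60 Hz
              pos, vel = step(pos, vel)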

    • Wintermane
    • 10 years ago

    I see the problem here the same as for Blu-ray players. Too few people give a crap.

      • SomeOtherGeek
      • 10 years ago

      You are on point. So, what is wrong with the picture? Too high-end? Not a marketable item?

        • Wintermane
        • 10 years ago

        If I knew why I’d tell you. I just know it seems like everyone is just drifting gaming-wise and doesn’t plan to stop until they hit something.

      • flip-mode
      • 10 years ago

      Huh? Give a crap about what – DX11? Well, that may be true, but it’s also kind of an irrelevant “problem”. /[

    • Suspenders
    • 10 years ago

    Anyone else notice XFX’s rather amusing ATI marketing tie in with the new 5800 series?

    http://www.xfxforce.com/2118AD/?c_camp=2118banner&c_des=TechReport
    "DirectX 67 with 262 zillion shaders per second" lol

    • SomeOtherGeek
    • 10 years ago

    Scott, great review as always!

    Ok, we have commented about everything under the sun. Seems like a lot of the focus is on games. Is that all DirectX is for?

    From Microsoft: “Microsoft DirectX is a group of technologies designed to make Windows-based computers an ideal platform for running and displaying applications rich in multimedia elements such as full-color graphics, video, 3D animation, and rich audio. DirectX includes security and performance updates, along with many new features across all technologies, which can be accessed by applications using the DirectX APIs.”

    So, I guess everyone needs to make every application use the DirectX technology to take advantage of this awesome card?

    But the Windows OS is nothing but one big fancy multimedia OS, true? So, the HD5870 should make Windows run all that much better? Especially for Win7. So, for most practical purposes Win7 will run like a charm with this card? If so, then everyone should be happy if they can afford this.

    For me, I guess I can’t afford it so I’ll stick with my old “slow” GTX260 and Vista for now. Then wait until next year when the prices are more affordable and I can buy a new “slow” 5870.

    • Richie_G
    • 10 years ago

    ‘Tis a thing of beauty.

    • lamparalaptopiaguita
    • 10 years ago

    normally I wouldn’t care for another high-end card, esp. in this console-first game industry

    but man….

    that batmobile design….

    *must resist*

    *losing resistance to temptation*……

    • gerryg
    • 10 years ago

    Looks like an impressive feature set… for the 5850. I personally vowed to never buy a full-size case again, so a long card like the 5870 is never going to happen in my systems. I can’t justify as much budget for graphics cards as I could several years ago, so I think I’ll wait for a 5830 or 5770. There are way too many good games out there I still haven’t had time to play, so top speed isn’t as important as other stuff like video playback quality, power usage, noise levels, and overall bang for the buck. Hopefully AMD can get the other stuff “out the door and into the store” quickly, so I can reasonably make a Christmas present request…

      • FuturePastNow
      • 10 years ago

      I have a P150…and I have a Dremel tool…hmmm…nah

    • madmanmarz
    • 10 years ago

    If you think the 285 holds a candle to this, you haven’t read the review.

      • Meadows
      • 10 years ago

      It holds a match to it, though.

    • Freon
    • 10 years ago

    Looks pretty good. Maybe a bit of spit and polish on the drivers, but it looks like a winner at its price point. Kudos to AMD on the power consumption. Good job on the article as well, Scott.

    • odizzido
    • 10 years ago

    Nice card, nice review. Makes me want to get one.

    • thermistor
    • 10 years ago

    Real questions here…

    Will 5870 saturate PCIe x 16 1.0 bus in a single card running solo?

    Will running 2 x 4850/512 in xfire on a P35 board be a problem with bus saturation? (P35 is PCIe 1.0).

    Just wondering if all PCIe 1.0 motherboards are now limiting these two very fast new cards.

    • Ashbringer
    • 10 years ago

    The lack of PC games that take advantage of its tech is going to be a deciding factor in the purchase of AMD’s Radeon HD 5XXX series cards. As it is, DX10 seems to have come and gone without anyone noticing. What do you expect when DX10 was Vista exclusive?

    Another problem is ATI’s drivers. Lately, I’ve been having wonky problems with them. I currently have a 4670, and the Catalyst Control Center won’t start up. I’ve tried uninstalling, and reinstalling the drivers. Even worse, running the latest Catalyst installation will give an error. The only way for me to get the drivers to install is to type “atisetup.exe -install -output screen”.

    On top of that, my older X1950 GT and X850XT cards aren’t getting new drivers either. Not like I don’t use them. The X1950 GT has severe lag when scrolling through websites that use a lot of flash, and it doesn’t matter if it’s IE or FireFox. This is a known issue that hasn’t been corrected.

    Personally, I’ve been buying ATI products forever, but the past 2-3 years have been nothing but trouble. Even worse, I have motherboards with ATI chipsets. Like their video cards, installing drivers can break things more than fix them. There have been moments when installing the latest Catalyst drivers has caused my PC to be unable to boot. That usually required half a day of running the Vista repair program, which usually ended with me having to reinstall the OS.

    BTW, I’m currently running Windows 7 X64. So if you think switching to Windows 7 will make you immune to those issues, you’re wrong. When I upgrade my video card, it won’t be an ATI product.

      • asdsa
      • 10 years ago

      Another foolish nvidia troll ranting the age-old “ATI drivers suck, boohoo” mantra (which actually has nothing to do with this article) because they seemingly cannot whine anymore about performance, power consumption or noise. My feature X doesn’t work with program Y in setup Z, and then it crashes if I literally throw my PC out the window. Are you so pissed off that ATI now has a better product than nvidia that you make up stories like that? You think that your “personal experience story” actually makes one bit of difference in anyone’s purchasing decision? My personal story is that ATI drivers and cards have never given me any serious trouble or broken critical functionality (not that anyone would care). The biggest gripe for me was that little white rectangle in the upper left corner in CCC 8.1. If CCC actually caused Vista/7 to stop booting or whatever random stupidity you made up, then I’d guess there would be an enormous amount of unsatisfied customers and a lot of forum movement, but I’ve heard or seen no such thing. God, I have to stop feeding trolls but some of them are so preposterous…

        • NeXus 6
        • 10 years ago

        You forgot to address his comment on the lack of games and DX11, which even Scott brought up in the conclusion of the article. While this card may be great, I don’t see it being a big seller for the reasons Scott gave. The GT300 will most likely fail as well unless by some miracle a load of DX11 games come to market in 2010 or the masses suddenly upgrade to 30″ monitors.

        As for drivers, it’s bad on both sides. I do agree that AMD tends to break things more often than NVIDIA with new driver releases, but lately NVIDIA appears to be going down this path as well.

          • shaq_mobile
          • 10 years ago

          I’m not a fan of nvidia drivers. The layout is pretty junky. Though AMD’s are getting worse 🙁

          I like to have control, but I don’t want to have too much; RivaTuner is a little overwhelming at times. Though it does have some super sweet functionality.

          I’m seriously disappointed with my GTX 275. It performs about the same as the 4890, yet it sounds reminiscent of the 5800 Ultra days. Can nvidia make me a card that doesn’t sound like an airliner when I play any 3D game? My 4890 had that weird passive electronic buzzing noise that I was told comes from the capacitors when they don’t get properly ‘coated’. They say the voltage flowing through them makes them expand and contract super fast, so it creates that buzzing noise that synchronizes with your frames. Extremely irritating.

          • Anomymous Gerbil
          • 10 years ago

          What? These cards won’t fail. They offer same-ish or better performance than existing high-end cards at lower prices, with less power and less noise. DX11 is almost irrelevant.

            • NeXus 6
            • 10 years ago

            They may fail because PC games aren’t pushing hardware to the extent that you need this amount of speed with the exception of running 30″+ or multiple displays. The vast majority of gamers are still using 22″ or smaller displays, so a HD4890 or GTX285 can handle nearly every game without breaking much of a sweat. Even cards older than this may be good enough for the average gamer.

            Let me repeat what Scott said in the review:

            "Games need to move on and take advantage of the many, many multiples of console-class graphics and processor power now available on the PC. If they don't, AMD may have trouble selling this incredibly fast chip to consumers simply because the applications don't take full advantage of it."

            • OneArmedScissor
            • 10 years ago

            You absolutely do not need a 4890 or GTX 285 for 1680×1050 lol.

            Our perception of what is “needed” is extremely distorted by all the benchmarks run at 2560×1600. Note how “low end” cards of today still tend to run most games fine, even at such a ridiculous resolution that no one with a $100 card would have.

            • NeXus 6
            • 10 years ago

            I only referenced those video cards because they are more recent. They might be a better value over the 5870 if you have plans to upgrade in the near future and have no need for that much power.

            Certainly you can get by with older generation cards depending on what games you play.

        • phez
        • 10 years ago

        Win7 + older ATI hardware = nightmare. My Mobility x700 doesn’t even work with their ‘legacy’ driver suite either.

          • A_Pickle
          • 10 years ago

          Somehow I doubt this will be a problem, given that Windows 7 and Vista share a driver model. Get the DriverHeaven Mobility Modder, download the Catalyst 9.3 drivers, and then use the Mobility Modder to modify them to work with your laptop.

      • indeego
      • 10 years ago

      DX10 came and went because it added nothing from the end-user’s perspective in terms of performance or visual goodies.

        • WaltC
        • 10 years ago

        Well, I think that DX10 suffered from a developer’s lack of enthusiasm because DX10 was initially restricted to Vista, and many people were still clinging to their guns, trucks, and XP (not that there’s anything worrisome about those things)…;) On the eve of Win7, this dynamic is going to change as XP will quickly recede in importance in terms of installed base, I expect. Developers will have a lot more incentive to support DX10. You’ve also got the dynamic of the number of DX10-capable gpus rapidly rising as well–so I think that DX 10 support will ramp very quickly with DX11 support generally coming later.

        Besides, it took a long time after DX9 shipped for DX9 support to finally take off as well, IIRC.

          • indeego
          • 10 years ago

          I don’t think XP will fade as quickly as people expect it to fade. I imagine 2.5-3.5 years before it is eclipsed by Vista/7. By then the next DirectX may be out anyway...

            • WaltC
            • 10 years ago

            Could be, but I think the number of Win7 adopters who jump from XP to W7, and who are also avid gamers, will be high, and I think it will be a fairly rapid adoption. Among current XP users who also do a lot of 3d gaming, I’ll bet there’s a lot of pent-up desire to move to a new OS. Especially for people with relatively new hardware, XP has got to be getting long in the tooth about now…;)

            • NeXus 6
            • 10 years ago

            You may be right, but I think most of those “XP gamers” have moved on to consoles and have ditched the PC as their main gaming platform or ditched it altogether. The remaining PC gamers may wait until they see any benefits of Win7 before upgrading. At this point, DX10/11 isn’t a reason to upgrade hardware; better performance is the reason.

            • odizzido
            • 10 years ago

            I am one of those “xp gamers” who hasn’t moved to console, and you are right. I will be holding off getting windows 7 till I know if it will do anything I find worth paying for.

            • Suspenders
            • 10 years ago

            Same here. Strategy games is what I love playing, so consoles are pretty much out for me by default. I’ll also be sticking with XP until I have a good reason to upgrade to 7.

            • cygnus1
            • 10 years ago

            If all you use your PC for is gaming, fine, Win7 won’t do much for ya (except crash less). But if you use it as a tool, the interface and security improvements are worth the money.

            • derFunkenstein
            • 10 years ago

            Then you’re part of the problem. People need to invest in the platform before developers are going to see reason enough to develop for it.

            • WaltC
            • 10 years ago

            /[

            • NeXus 6
            • 10 years ago

            It depends on the gamer and the games they play, but there’s been a shift to consoles in recent years and game sales show that trend. PC gaming sales are down for a reason, and piracy isn’t the only cause of it.

            If you’re building a new PC or have fairly new hardware then Win7 is the obvious choice, but I just don’t see a huge flock of gamers (still running XP) moving to it unless they see a need to. Win7 isn’t a huge leap over Vista, and DX10 brought little to the table. Let’s hope DX11 changes things in a big way.

            • SubSeven
            • 10 years ago

            Just wait till SC2 comes out… then we will see where the shift will be taking place.

            • WaltC
            • 10 years ago

            Well, the one thing we know is that consoles aren’t going to help much with either DX10 or DX11…;) I’ve never owned a console and never will for lots of reasons–not the least of which is the fact that consoles when they are released (that is) simply recycle yesterday’s PC technology inside a very cheap package. But that’s expected at the $300 price point, and there’s nothing wrong with that at all–provided the customer knows what he’s getting.

            PC games, despite being $10-$20 cheaper than their console counterparts, and generally supporting a lot more stuff, fluctuate often based on the kinds and quality of games that are released. Additionally, and I think most importantly, the only mention I’ve seen of PC games dropping in sales volumes is when /[

            • cygnus1
            • 10 years ago

            Agreed. The PC game download market seems to be flourishing. From GoG selling old games to new games on Steam, Gamersgate, Impulse, Direct2Drive, etc. It’s no wonder retail sales of PC games are down. Consoles, on the other hand, are in the infancy of download sales. The share of downloaded console games must be minuscule when compared to the downloaded PC games.

            The industry research companies need to either include digital sales in the numbers or stop producing sales numbers. The only thing retail sales numbers are doing is giving game companies a reason to focus on consoles. They see growth, or at least no decline, on the consoles but year after year drops in PC retail sales.

            • indeego
            • 10 years ago

            Game publishers know exactly what the distribution is between download/physical sales; they don’t need analysts/retail sales figures.

            • cygnus1
            • 10 years ago

            Well, big game publishers will know about their own digital/retail sales ratios, but they won’t know about the rest of the industry.

        • Kaleid
        • 10 years ago

        And (the non-successful) Vista was the only way to get DX10.

      • flip-mode
      • 10 years ago

      Strange, very strange. I have ATI X300s and X1300s on a dozen machines at work – all of them do an impeccable job of not only scrolling flash-laden websites, but doing CAD duty as well.

      I’ve never had a problem with ATI’s drivers, even “back in the day” of the 9500 Pro. Seriously, all the way to this day, I’ve never had a problem, though I did jump to Nvidia during the x8xx, x18xx, and x19xx years.

      Also, their chipsets: I have a 690g and a 785g. The 785g is pretty great; the 690g bothered me at times with slow SATA performance, but for the most part it has been good to me.

      I don’t know why you are having so many issues but I think you may be an uncommon case. FWIW.

        • thecoldanddarkone
        • 10 years ago

        His argument about the x1950 and x8xx on Windows 7 is somewhat justified, because ATI doesn’t support Win 7 on DX9 cards.

        • Ashbringer
        • 10 years ago

        I’ve owned ATI products since the AIW Radeon 7200. I’ve had driver problems in the past, but nothing like I’ve experienced in the past few years. I currently have 3 PCs, and 2 of them are using the 690g chipset. One is a Biostar, and the other is from Asus.

        When these machines had Vista X64, installing the latest Catalyst package would give an error for unsigned drivers. Nothing I did would ever fix this, and it occurred on both PCs. So it wasn’t specific to the motherboard.

        When I switched them to Windows 7 X64, it got better. That is, I don’t install the motherboard drivers. The ones that come with Win7 are better. Though, the latest Catalyst 9.9 now prevents me from starting up CCC. Trust me, I’ve tried to fix this. Something about the VS2005 C++ redistributable.

        The phrase “if it ain’t broke, don’t fix it” is something an ATI owner should literally consider before installing video drivers. Yet, with a policy of updating the drivers every month, one has to wonder what the point is. Isn’t getting driver updates every month supposed to be a selling point of buying an ATI product?

        BTW, I’ve only owned a GeForce 2 MX and a GeForce 6800 in my lifetime. Yet I’ve owned an AIW Radeon, AIW Radeon 8500, Radeon 9500 Pro, Radeon X850 XT, Radeon X1950 GT, and Radeon 4670. I am no Nvidia fanboy.

          • Ashbringer
          • 10 years ago

          Just going to post this, but I found a fix for the CCC. It is something to do with VC++ 2005. Some applications will install a version not compatible with CCC. So you gotta further update it to fix it.

          A link to the installer that’ll fix this issue.

          http://code.msdn.microsoft.com/KB961894/Release/ProjectReleases.aspx?ReleaseId=2067

            • asdsa
            • 10 years ago

            This long thread that has nothing to do with HD5870 but some broken VC++ version causing problems in 690G/Win7 64-bit config? Oh my, oh my. Now that you found the fix for ATI drivers all is OK again? We don’t need to, like, switch sides or anything anymore? Cos you almost got me putting my Radeon on ebay and getting nvidia… (sorry for the sarcastic humor)

          • flip-mode
          • 10 years ago

          That’s weird. I can’t remember ever getting the “unsigned driver” error with my 690g but to be honest I’ve never really given two shats whether the driver was signed or not. And also, I didn’t often update the 690g drivers. So I could have experienced those issues but not noticed them.

          As for older card support in Win7, is it just a matter of time? I can’t imagine ATI not supporting DX9 cards. Dunno. If true, I really could not blame them. They have to cut the tether at some point, and those cards are getting pretty long in the tooth. I wouldn’t blame Nvidia for dropping support for GF7 series cards either, though I don’t know what their plans are. Those cards are ancient in computer years. A $60 card would probably clobber an X1900XTX or GF 7900 GTX.

            • glynor
            • 10 years ago

            For what it’s worth… My X1900XT and my older X800XL AGP card both work just fine in Windows 7 x64. While they have officially moved these cards to “legacy driver support” (which means they’ll only be releasing new drivers for them to address “critical issues”), the existing Vista x64 drivers for both cards work just fine.

            How is this any different from Nvidia discontinuing support for the Nforce 2, 3, and 4 boards? Heck, the Nforce4 (for Socket 754, 939, and LGA 775) came out in late 2004 and you can’t get “official Windows 7” drivers for it either…

            • flip-mode
            • 10 years ago

            I am more concerned about stopping support for motherboards, but whatever. You can’t possibly buy a computer product of any kind and expect it to be supported for more than a few years. Stuff changes too quickly, and I would rather AMD / Nvidia / Intel devote their resources to new and recent products than to products that are more than a generation or two past.

            • thecoldanddarkone
            • 10 years ago

            You know except they still *[

            • glynor
            • 10 years ago

            Oh, please… CDW still sells brand-new, boxed Pentium 3 flip-chip CPUs. Does that mean Intel should still support them?

            That, and you’re acting like they abandoned the product. The most recent drivers that came out for the 690G chipset came out in *[

            • thecoldanddarkone
            • 10 years ago

            I’m talking about fully built machines, not a P3, not individual pieces. I’m talking about brand new LAPTOPS. GET THAT THROUGH YOUR HEAD!

            • derFunkenstein
            • 10 years ago

            i agree that a brand new laptop without ongoing driver support is awful.

            • glynor
            • 10 years ago

            And *[

            • thecoldanddarkone
            • 10 years ago

            You’re still not getting it: some of these laptops never existed until a few months ago… Your argument still holds no water, because these laptops were released recently. The HP dv2, the Gateway lt3103, and a few other laptops use the x1250. Heck, some of these vendors didn’t even sell laptops with the x1250 until recently. Who sold those chips? AMD.

            • Ashbringer
            • 10 years ago

            If you walked into a PC store and saw the Radeon 4XXX and 5XXX series cards, what would you buy? Some people don’t care and buy the cheaper 4XXX cards. Others like bigger numbers and will go for a 5XXX card.

            Then about 1-2 years later, ATI drops support for the 4XXX and 3XXX cards, but not the 5XXX series. Even though your product was on the shelf at the same time as 5XXX cards, you feel a little cheated.

            Driver support is important for video cards. Without it, you might as well have a paperweight. Because god forbid a new game like BioShock 3 gets released and it crashes or displays corruption because of drivers, even though your outdated card would otherwise have no problem running the game on low graphics settings.

            ATI seems to discontinue products based on DX technology. If you had a DX9 card, it was discontinued. In about 1-2 years from now, the 3000 and 4000 series will end up the same. So if you had to buy a new video card, it’s better to wait for the cheaper 5000 cards for their DX11 tech and support than it is to get the cheaper 4000 cards.

          • clone
          • 10 years ago

          I’ve sold about 30+ 690g motherboards and their replacement, the 740g, over the past 3 years (plus a pair of 780s) and had none of the issues you mention. They’ve been 1 Biostar, 8 Asus and 24+ Gigabytes; none have had any problems, none have failed, and they’ve all worked fine, at least so far.

          All are using XP and Vista; no more than 7 are using add-in video, which is mainly ATI 2600s along with a couple of 3650s and 2 Nvidia 8600 GTs.

          I’ve used both ATI and Nvidia, and neither is perfect, but in general, because Intel is more expensive when all costs are factored in, I sell almost exclusively AMD/ATI, and I’ve had no real problems worth mentioning in the past 3 years with regards to driver installs to get systems up and running.

          I have seen a couple more ATI video cards fail compared to Nvidia’s, but I sell a lot more ATIs than Nvidias for a variety of reasons.

          Also, I don’t sell Nvidia motherboards anymore: buggy in ways that you didn’t discover until later… nothing horrible, but tech support calls knocked them off my list, so now it’s either ATI or Intel for chipsets.

    • jackbomb
    • 10 years ago

    What I really like about this card is its ability to stream 7.1 LPCM without downsampling to 16 bits. Even better, ATI and Cyberlink are working on drivers/software that’ll allow HD 5xx0 cards to bitstream TrueHD and DTS-MA.

    So, my next HTPC/gaming box is gonna kick ass, and the TrueHD/DTS-MA indicators on my receiver are gonna light up–for the first time ever.
    Life don’t get much better than that…lol.

      • Divefire
      • 10 years ago

      +1 to that. As long as they get the software side right building a HD gaming HTPC will become so much simpler.

    • Faceless Clock
    • 10 years ago

    Great review. This is an important card for ATI, even though people are not that excited because they don’t see much need to upgrade from their older cards. ATI will now be able to compete on even the highest end, which until now was still Nvidia’s stronghold. GTX285/295 cards are going to get a lot cheaper over the next few months.

      • PRIME1
      • 10 years ago

      nvm, wrong reply.

    • DrDillyBar
    • 10 years ago

    Another great write up Damage.
    I think there’s life in my HD4870 yet. 🙂

    • asdsa
    • 10 years ago

    Very well written article, now that I finally had time to read it through. Makes you appreciate the chip beyond just plain benchmark bars.

    • marvelous
    • 10 years ago

    Techreport is always on point with their detailed reviews. That’s why this is the first site I come to when I want real information.

    The 5870 is a great card. It’s the fastest single-die card in the world, with power consumption that is greatly improved over previous products of this caliber. However, PC gaming has been ridden with console ports that don’t push the limits of modern GPUs. Besides Crysis and some badly coded games, there is really no game you can’t play with a $100 GPU.

      • Pettytheft
      • 10 years ago

      Quit it with blaming things on console ports constantly. You should really look at the hardware landscape of PC gaming. Half of the gamers out there still own budget cards that perform below or barely at the level of a console. Not every game will scale from budget to high end.

    • geekl33tgamer
    • 10 years ago

    Grrrrr, I really wanted to like this new card.

    By all accounts, I should be excited about wanting to upgrade my GeForce 8800GTs (SLI), which in graphics card years are considered pensioners. I’ve had them over 2 years now, and it’s the longest I have ever had the same graphics cards in my gaming PC!

    Think I will sit on the fence until the green side makes a move tho. Anyone else feeling that this card has impressive on-paper specs, but delivers little more than their older cards in reality?

    I *really* want AMD/ATI to do well, having used various 9700 Pro, X800XT and X1800XT cards in the past. I will buy whatever is best at my upgrade point, and don’t consider myself a fanboy of either brand, as I have owned and liked products from both companies…

    …but I can see Nvidia gaining the upper hand, sooner rather than later unfortunately 🙁

      • flip-mode
      • 10 years ago

      It feels like largely a pricing issue to me. The card would be much more impressive at $300 than at $380. As Convert said, hopefully the 5850 delivers a better sense of value at the $250 price.

      • coldpower27
      • 10 years ago

      It doesn’t feel fast enough over my GTX 280 to warrant an upgrade, I will have to see what nVidia brings to the table before I make a final decision.

      I do really like the idle and load values though, and the paper specs are impressive. 2 billion transistors, oh my!!!

    • xtremevarun
    • 10 years ago

    Great review, and it’s good to read about a new gfx card after a long time. AMD has done awesome work with power efficiency and temperature.

    • d0g_p00p
    • 10 years ago

    I think this is the first time in quite some years where I am not super excited to see the next gen graphics cards. Everyone knows why, it’s the stagnation of PC graphics because of the console as the target base system.

    I remember when each new version of an id engine (title) would crush your system and that was what made people upgrade. In some ways I think this might be a good thing. If you don’t need the best video card on the market to play the current titles then maybe devs will see the PC as a viable platform once again like the good old days.

    Don’t get me wrong though. I am waiting to see what nVidia pulls out with before I purchase a new video card. The enthusiast in me wants to upgrade, however knowing that whatever choice I make will be good for years to come is great to look forward to.

    • WaltC
    • 10 years ago

    What an excellent review!…:) I found the fact that nV’s PhysX is not really designed to utilize as much cpu power as available in software mode to be very interesting. I suspect, though, that nV is equally motivated by a desire not to have to compare its gpu directly to cpus in terms of PhysX processing throughput–as much as it is in creating the appearance that hardware PhysX on nV gpus “really flies”…;)

    Very interesting and thought-provoking observation, though, and one that I would like to see investigated. I see the whole PhysX thing as marketing related anyway, and this really puts a marketing spin on things.

      • swaaye
      • 10 years ago

      I was messing with Physx tests last night. I had been playing Mirror’s Edge for the first time and the hardware Physx option got me curious.

      My big-TV gaming sys is running a GF8200 mobo, Phenom II X4 940, and a 8800GTX. I discovered that I could use the IGP as a dedicated Physx processor and thought that was really cool. NV’s CP lets you pick available NV GPUs for Physx acceleration. For tests, I found a synthetic test called Fluidmark (from the Furmark folks) and NV’s own fluids demo.

      Results: GF8200 is a pathetic Physx processor. 🙂 But the CPU isn’t faster. HOWEVER, I noticed that the CPU usage was only on one core for software Physx. Now that, folks, is obvious tilting of the potential of the CPU. 3x 3GHz cores doing nothing.

      8800GTX on the other hand is quite the beastly Physx processor, assuming it has nothing else to do of course. It was over 10x faster than the 8200 in Fluidmark even though it did still have to render the (albeit simple) image for the bench.
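
      If anyone wants to check the one-core behavior on their own machine, something like this (a quick sketch using the third-party psutil package) will show whether a software-PhysX run is pegging a single core while the rest sit idle:

        import psutil  # third-party: pip install psutil

        # Print per-core CPU utilization once a second while a software-PhysX test
        # (e.g. Fluidmark in CPU mode) runs in the background. One core near 100%
        # with the others idle is the single-threaded behavior described above.
        for _ in range(30):                                   # ~30 seconds of samples
            per_core = psutil.cpu_percent(interval=1.0, percpu=True)
            print(" ".join(f"{p:5.1f}%" for p in per_core))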

        • WaltC
        • 10 years ago

        Thanks for the comment–very interesting. I wonder how the 8800GTX would do in comparison to the cpu if the software could make use of all the cpu cores.

          • geekl33tgamer
          • 10 years ago

          If NV didn’t tilt the software aspect of this towards needing another graphics card, I suspect any modern-day 4-core processor would trounce it.

          Take the Core i7 CPU for example: it can munch its way through a video conversion from VC-1 to WMV-HD without breaking a sweat (by all accounts, this is a demanding task). Of course this needs the program to be multi-threaded, but that’s not hard these days, and the results are generally impressive.

          Physics should be standardised through DirectX. It would work on all PCs then, regardless of GPU brand, and developers would be more likely to build in support for it.

            • WaltC
            • 10 years ago

            Good point, but honestly I’m not sure that the kind of physics PhysX does is really the sort of compelling, core feature that *ought* to become a part of D3D. Right now I see it more as a gimmick: you get more “rocks” and “smoke” and “particles” and “bricks,” etc., but nothing that really makes a game more playable and enjoyable. Certain other kinds of physics support, like deformation, might certainly be compelling features for the API. I think this will happen anyway, but probably through a different route.

    • TurtlePerson2
    • 10 years ago

    I would really like to see Folding@Home performance numbers on this thing. The review talked so much about the compute performance, but it only tested the card on games.

      • cygnus1
      • 10 years ago

      The Folding@Home client may not support the 5870 yet.

    • yogibbear
    • 10 years ago

    Hm….. i might actually consider getting rid of my 8800gt for one of these so i can max out Stalker:CS….

      • TurtlePerson2
      • 10 years ago

      I maxed out Stalker:SoC with a 4830 at 1920×1200. This thing might be overkill.

    • ish718
    • 10 years ago

    Great performance and power efficiency….
    All we need now is new games O_O

    • Convert
    • 10 years ago

    Hmm, the 4870 was more impressive compared to the prior generation than the 5870.

    Hopefully the 5850 fares better.

    The laundry list of features sounds more impressive than what it delivers in my book. Though one can never really complain about features and improvements, even if they don’t seem to matter.

      • charged3800z24
      • 10 years ago

      I thought I saw what appears to be the fastest single GPU and the price is less as well. What is not impressive about that?

        • Convert
        • 10 years ago

        Well, considering it is the high end next generation card it should be the fastest single card. If it wasn’t it would be a bit of a catastrophe provided they couldn’t offset with pricing.

        What is it cheaper than? Dual setups? It is currently more expensive than the 285 and 4890, which really doesn’t matter anyway, as the prices on those will drop. It is more expensive than what the 4870 launched for, and while the price jump from the 3870 launch to the 4870 launch was similar, the 4870 proved to be over 2x faster on a consistent basis.

        That isn’t to say it’s a bad thing, I think you are paying more for the bullet points than the FPS this launch. Well actually I guess you could say the efficiency is what makes it worth it.

          • charged3800z24
          • 10 years ago

          Edit: Newegg shows the thing for $379, which is only a smidge above what they sell the 285 for.

          Hmm, I swore the 285 was over $400 just not too long ago. But the 5870 still looks to be twice as fast as the 4870, and it is sometimes faster than the 4870X2. It is a single GPU that has much better power consumption than both those cards. That is a much better improvement over the previous generation. I didn’t expect this card to take the 295, which is a dual GPU. I still think it is where I thought it would be. It will probably show better once DX11 titles start to show up.

            • Convert
            • 10 years ago

            You can get a 285 for $325, that isn’t including the $30 MIR. Though, as I said, it really doesn’t matter as the prices will continue to drop on the older cards, as they always do.

            Looking at the 4870 review you see it besting the 3870 by well over 2x on quite a few occasions. The 5870 is dead even with the 4870×2 in almost all games.

            Though I guess I should be happy it beats it 2x, could be worse.

            • charged3800z24
            • 10 years ago

            That’s kinda my point: it often beats the 4870X2, which is the high-end card for the 4000 series, and the 5870 will have an X2 version which will probably be the high end. Look at the 5870 CrossFire config, it is pretty impressive. The drivers can only get better. I see your point though, the 4000 series had a better awe factor back then, but it didn’t take much to accomplish that.. ^_^

    • Shinare
    • 10 years ago

    Is it worth considering over an nVidia GPU for GPU2 folding@Home?

    • danny e.
    • 10 years ago

    So, it seems the HD 6870 will finally be able to play crysis?
    188W is a lot.

    It would be nice to have this power and be drawing 45W.
    I guess that will be a few more process shrinks down the road.

      • Convert
      • 10 years ago

      "I guess that will be a few more process shrinks down the road."

      Depends on what you mean, will you have a 5870 running at 45w a couple of process shrinks down the road? I suppose you could, but you can do something similar right now by buying a low end card. Can you have a 5870 *[

    • jinjuku
    • 10 years ago

    Just give me a discrete integrated IGP solution with the full AVIVO suite with uncompressed 8-channel LPCM and DTS-HD and Dolby-HD bitstreaming!

    • matnath1
    • 10 years ago

    I am puzzled as to why DAAMIT and NVIDIA still choose to launch their high end in this struggling economy, especially when all of that extra gaming muscle is completely unnecessary with today’s games.

    Wouldn’t it have made more sense to launch the midrange first, even the low end… say the HD 5670, which hopefully will perform on par with an HD4770 but will only need power from the PCIe x16 interface (NO SIX PIN). Launch this puppy at $125 and I’m all over it! That’s all the power we need til DX11 games are the norm. By then the HD5870 will be obsolete?

      • PRIME1
      • 10 years ago

      Same reason Chevy makes a Corvette. Marketing.

        • indeego
        • 10 years ago

        Yeah the Corvette did wonders for Impala sales.

      • Hattig
      • 10 years ago

      There’s a lot to be said for having the most powerful card on the market in terms of mindshare and consumer opinion. It’s why people diss AMD’s current CPUs despite the fact that they’re more than adequate, and for the price, very competitive.

      AMD will be releasing the 40nm mass-market variants very shortly as well.

      NVIDIA look to be 3 to 6 months away from their next generation general availability – it’s just that I don’t know who to believe online anymore. I saw a 2% yield figure for the test wafers which was 1/10th of what you would expect, but it’s from a site where you need a salt mine, not just a pinch!

      • Voldenuit
      • 10 years ago

      Launching a new series in the midrange cannibalises your high end card sales. If the 5770 came out at $125, no one would buy a 4890, 4870 or 4850. This way, there is wiggle room to price older inventory to clear, giving both gamers and ATI a good deal. Not to mention early adopters are more likely to be enthusiasts who want top end performance and are willing to pay for it.

      That’s probably why the high end SKUs are usually launched first – 9700 Pro, 6800, 7800, 8800 GTS/GTX, 4870/4850 and now 5870/5850. Not to mention that halo products such as these ship fewer units than mainstream, and allow time for production of new parts to ramp up.

      • poulpy
      • 10 years ago

      Wouldn’t make much sense the other way, as (off the top of my head) this way you:
      – create a new and higher price point with better performance
      – therefore don’t cannibalise existing product lines (or only slightly, the extremely low-volume X3/4 cards..)
      – are able to slowly ramp up production of the new line with a high-price, low-volume product
      – are able to keep on producing and slowly empty stocks of existing low-cost, high-volume products

      Edit: damn guys you’re quick! I leave this reply open for a minute or two and I end up #4

      • SPOOFE
      • 10 years ago

      Even in the depths of the Great Depression people still bought radios and went to the movies. When life sucks people want to be entertained. Bread and circuses, buddy.

    • Lazier_Said
    • 10 years ago

    Performance is no surprise. With 2.2 billion transistors and 20 parallel cores, of course it owns. The GT200 is a year and a half old with a third fewer transistors and no QDR memory. No contest at all.

    The real achievement is that the 40nm process let ATI do this in a reasonable power envelope. And even better (and thoroughly un-ATI like) it has reasonable idle power too.

    I still don’t see the killer app to make me want a $400 card, but this bodes pretty well for $150 cards in a couple months.

    Intel’s supply woes with G2 SSDs mean my new box is still sitting on the bench in pieces, which pissed me off to no end at the time, but perhaps it means I’ll get a better card for it than the GTX260-216 originally planned.

    • Suspenders
    • 10 years ago

    Great review, as always. I do have one minor quibble, though. All of AMD’s marketing refer to these cards as the “ATI Radeon HD 5870”, while the article itself does not. Anywhere. In fact, “AMD Radeon” is, as far as I can tell, an unofficial name tag for AMD’s ATI branded products that tech reviewers seem to adopt now and then.

    Since the card itself is still plastered with ATI logos, I think it would be better if we actually referred to the product by its proper name in reviews.

    One last thing, a request: would it be possible to put “Empire: Total War” in as one of the benchmarked games? It’s quite a GPU hog, and I know there’s a large community out there that bases their GPU purchasing choice on this one game (I’m one of them ;)).

    Thanks again for the review!

      • indeego
      • 10 years ago

      uh, what? I see “ATI Radeon” plastered on ATI’s site and AMD’s as well. You can’t realistically expect the full name to be used, with slogans, trademarks, etc., each and every mention.

        • cygnus1
        • 10 years ago

        I think he’s referring to the fact that even the article title says AMD, and not ATI.

        But I disagree with him as well; the way the title is written is correct. The card is produced by AMD. ATI is essentially only a brand name now, and not a company name. I don’t think dropping ATI from the card’s name, for the sake of brevity, is an issue.

        I would be entertained if they used the term DAAMIT though. Always did get a chuckle when the names were combined that way.

          • Suspenders
          • 10 years ago

          I certainly don’t think dropping ATI within the article for the sake of brevity is really a problem either. I certainly don’t want to advocate slavishly following tech companies naming guidelines to the letter every single time; the article would be 5 pages longer! But I do think it’s a problem that “ATI” isn’t in the article, anywhere, at all. The title would be a good place, but even a cursory mention of it at all would have sufficed. The card, the chip, the marketing materials are all plastered with ATI all over it. I think AMD is giving us a pretty big hint here as to what they want the card called. If AMD didn’t want the name, they would have dropped it by now.

          I really don’t see what the problem is with calling the thing by its official name in its own review.

      • DrDillyBar
      • 10 years ago

      DAAMIT

    • ironoutsider
    • 10 years ago

    This card is so awesome! Too bad I’m going with Nvidia on my next card. I have dual 4830s right now and Linux support is not very good. I had to do a lot of work to get just the basics running, though it wasn’t as much work as I had with my 8600 GT. Understandably, I guess, Linux isn’t much of a graphics platform, but having support for it is definitely a big plus for me. Though I have to admit that the newer drivers are doing a little better than when I bought the cards.

      • LovermanOwens
      • 10 years ago

      install Windows 7 and you will be all set

      • StashTheVampede
      • 10 years ago

      If you want great linux support, why don’t you go Matrox?

        • ironoutsider
        • 10 years ago

        WTF Matrox?

          • SHOES
          • 10 years ago

          You can expect good Linux support when they pull their arses out of the stone age and improve their market share… which, quite simply put, isn’t going to happen until they simplify their interface so that any old Windows user can actually get something important done without going into a command prompt.

      • Game_boy
      • 10 years ago

      Linux support on the open driver is progressing rapidly. The problem is that they are rewriting the entire Linux graphics stack (X.Org, DRI, KMS, Gallium, etc.) in parallel with the driver. The Linux stack is well behind Windows/Mac at its core and this effort will bring it up to an acceptable level without hacks. Once that is stable then the radeon driver will be at a good baseline to quickly support new hardware.

      http://www.x.org/wiki/RadeonFeature

      The Phoronix forums member bridgman is an AMD employee working on Linux drivers (they are funding both fglrx and radeonhd, and have written most of the open R700 3D code internally). He explains how AMD is doing all it can, far more than its current market share would warrant, and that it just takes time.

    • JdL
    • 10 years ago

    Some real talent working at AMD now. Amazing improvements across the board. They set very high standards and met them all — congrats to AMD, and I will be looking forward to the mid-range parts when they come.

    • PRIME1
    • 10 years ago

    You know what? Nevermind.

    No reason for me to be such a buzz kill.

    Nice review as always, Scott. Everyone else enjoy the rest of your day.

    (Original comment, struck out: http://www.microcenter.com/single_product_results.phtml?product_id=0317763 Amazing that the GTX295 is still the fastest card on the planet. Granted, the 5870 is a nice card; I was just expecting a lot more. I doubt many people will feel compelled to upgrade from a GTxxxx or 48xx series.)

      • OneArmedScissor
      • 10 years ago

      A lot more?!? What in the sam hill do you need that for? 3840×2400?!?!?

        • PRIME1
        • 10 years ago

        Game physics for one (if they ever get around to supporting it).

        There are new game engines on the horizon as well such as Rage from ID. You don’t buy a new card just for yesterday’s games.

          • indeego
          • 10 years ago

          And you should never buy a card for tomorrow’s games. HL2’s 6+ month delay, anyone?

        • tigen
        • 10 years ago

        Not just pixels but improved settings, like longer view distances and maximum quality.

      • mboza
      • 10 years ago

      LOL. A 285 keeps pace with the 5850 in much the same way the 4890 keeps pace with the 285 – close, but always a little behind.

      Anyone considering a 285 might want to wait a week for a 5850.
      http://www.gpureview.com/7-days-till-the-hd-5850-amp-5870-arrive-read-some-benchmarks-while-you-wait-article-807.html

      • Krogoth
      • 10 years ago

      Drop the green-shaded glasses. The GTX 295 consumes a lot more power and barely runs faster than the 285 and 5870. If anything, the 295 and 4870 X2 are both dead ends.

      The GTX 285 only outperforms the 5870 by a trivial amount in games that are “endorsed” by TWIMTBP. PhysX rendering has always been a joke and a marketing sham. Its real purpose is to promote SLI and higher-end NV solutions, because it performs like crap on mid-range and lower-end Nvidia GPUs (the majority of the gaming market).

      Eyefinity also falls into the same category, because it requires an expensive setup of DisplayPort monitors (still not cheap per unit), and the 5870 does not have enough pixel-pushing strength to handle more demanding games at uber-high resolutions.

        • PRIME1
        • 10 years ago

        Not everyone has your minuscule expectations/standards.

          • Krogoth
          • 10 years ago

          Physics is already pwning the semiconductor crowd. The GPU guys are feeling the burn as well. The days of leaps and jumps are long over (since G80).

          Gimmicks are worthless if they are only somewhat “useful” to a tiny minority of the gaming market. Developers have no reason or incentive to use them. That is, unless they get paid big $$$$ to use them (TWIMTBP and Eyefinity).

            • PRIME1
            • 10 years ago

            Look up MIMD. The GT300 will be the next G80.

            • Fighterpilot
            • 10 years ago

            More like the NV30 I suspect.
            err….having some problem with your green glasses there Primey?
            You mentioned GTX285 being almost as fast…..try reading the test results again.
            The 5870 pwns every single card in the Nvidia lineup and damn near beats two of them (GTX295)….and costs almost a hundred dollars less.
            GTX295 at Newegg = approx. $469 for the cheapest generic version.
            Radeon 5870 = $379, and they are flying off the shelves so fast they can’t even keep them in stock.
            NV fanboys in damage control mode=FAIL.

      • rUmX
      • 10 years ago

      Don’t be delusional. The Radeon 5870 owns the GTX295 hands down.

      Keep in mind that the 5870 is a “single GPU” card, while the GTX295 is a dual-GPU card. In games that support and scale with SLI, the GTX295 is just winning or losing by a hair. In everything else, the 5870 just creams it.

      Oh, and I own a GTX295 myself.

      • coldpower27
      • 10 years ago

      Actually, I am hoping for a sale on the GTX 295 now that this puppy has arrived. I would prefer to stay with Nvidia, and this doesn’t feel fast enough for me to warrant an upgrade yet.

      Maybe when Nvidia launches the GT300, or whatever they have in the works, and can consistently beat their GTX 295 in performance with a single card, I might be persuaded to upgrade.

        • OneArmedScissor
        • 10 years ago

        I’ve seen several people say they would get a GTX 295 when the price is dropped because of new ATI cards, but you have to recognize that they are still sold at a loss, even after the PCB revision.

        A price cut is just not going to happen. They hardly make/sell any of them. What’s already in stock is probably all there is ever going to be, much as was the case with the original dual-PCB version. It will continue to sell for the same reason it always has: being the most powerful “single card,” to people for whom that is worth money. And then it will be gone forever.

    • Tarx
    • 10 years ago

    This review was done on a reference board?
    I didn’t check all the reviews, but so far it seems like only Hardware Canucks did it on a retail Sapphire board. It seems similar to the reference board, but might give a better indication of noise. (Noise level seems to vary significantly depending on the review.)

    • flip-mode
    • 10 years ago

    If I had to guess, I’d say GT300 will outperform this card, but it will be with the same caveat that came with the GT200 beating the RV770: die size. So, when Nvidia does roll out the next card, it is going to have to fight any price war on the same battlefield. The die size issue that Nvidia is dealing with has to be very frustrating for them. Still all is speculation when it comes to GT300; performance and die size could fall anywhere. Hopefully it will perform very nicely and lead to a huge price war!!! All of us can be hopeful that a year from now the 5870 will be a $200 card.

    For the moment, there is nothing to compel me to spend a cent to upgrade from a 4850. Performance is great at my resolution; the power consumption is the only bone I have to pick with the card. It is very nice to see the 5870 has dealt with that issue in a very satisfying way.

      • SoulSlave
      • 10 years ago

      I’m in the same situation as you: I have a 4850 (lightly overclocked), and at 1680×1050 it just doesn’t make any sense to upgrade at this time.

      • MadManOriginal
      • 10 years ago

      It better be $200 a year from now – there ought to be at least a refresh of some sort by then :p

      Having said that, I’ll reiterate what I said before: this card performs nicely in absolute terms, but it simply fits right into the price:performance scale rather than altering it, which is what I expect from a new-generation card. From that viewpoint it’s rather *meh* to me no matter what card one currently uses. I’ll look forward to pricing once NV comes out with their DX11 cards and when ‘Juniper’ launches…the latter might be compelling if they provide HD4870/90-like performance at HD4770 pricing.

      Overall, I think what we can take away from this launch is that if you’ve got an upper-midrange or better card and aren’t having performance issues (not likely), there’s no reason other than ‘new and shiny’ to get one of these.

        • flip-mode
        • 10 years ago

        I dunno, I was guessing a year to $200 based on the 4870, which IIRC started life at $299 and took about a year to get to $200.

          • MadManOriginal
          • 10 years ago

          Yeah, the thing about the HD4000 series is they were fighting an uphill price-war battle with NV the whole way. The HD5000 series is starting out with a first-to-market price hike, so that’s got to count for something when considering future pricing. There’s also got to be a floor price below which it won’t drop, just based on manufacturing cost, so we’ll just have to see how it goes.

      • ssidbroadcast
      • 10 years ago

      flip-mode is on point.

        • BoBzeBuilder
        • 10 years ago

        ssidbroadcast is on point regarding flip-mode.

          • derFunkenstein
          • 10 years ago

          BoBzeBuilder is getting a bit recursive.

            • BoBzeBuilder
            • 10 years ago

            Because I want to fit in.

      • bfellow
      • 10 years ago

      The 4850 is only serviceable now. I sold a 4850 to get a 4870, then a 4890. Of course, I went 64-bit and got a larger monitor for increased res.

        • SoulSlave
        • 10 years ago

        I don’t know how much you sold your 4850 and your 4870 for, but you’d better have made a hell of a deal. Otherwise it just doesn’t make any sense at all…

      • shank15217
      • 10 years ago

      AMD’s DX11 performance is still an unknown, but over the years I’ve seen that it’s pretty substantial to double performance in one generation. I doubt Nvidia can double the performance of the GT200, which is what it would need to do to beat the 5870.

        • flip-mode
        • 10 years ago

        People always doubt Nvidia can double performance, but that is just what they did from G92 to GT200. If they doubled GT200, they’d actually get even farther ahead of the 5870 than the GT200 was from the 4870, since the 5870 didn’t double the 4870. As Silus said, the 5870 is more like 60% faster, not 100% faster, the latter being a true doubling.

        I won’t be surprised if Nvidia doubles performance, but I’ll be pretty surprised if they can do it with anything less than a horrendous die size. Pleasantly surprised.

          • LovermanOwens
          • 10 years ago

          The only way Nvidia could beat AMD’s performance lead is to have a chip with a die size that takes up a huge amount of real estate and could power several appliances…

          jk

          • shank15217
          • 10 years ago

          Well, I would wait for a few driver revisions before concluding that the 5870 isn’t double the performance of the 4870, as it has exactly twice the resources, plus some extra enhancements. Graphics chips don’t scale perfectly because other factors like the game engine, drivers, OS, API, and processor come into play.

      • Skrying
      • 10 years ago

      I’ll go out and say I think the GT300 will outperform the HD5870 as well. However, I don’t believe the GT300 will reach a true doubling of performance. Not because of theoretical limits, but because the games we have available are simply not scaling to that degree. The same situation appears to be the case with the HD5870. The hardware and clock speeds suggest it should be double the performance, or equal to or faster than an HD4870 X2; however, the nature of CrossFire/SLI appears to work around code-efficiency issues inherent in a game’s engine.

      • glynor
      • 10 years ago

      One would certainly hope the GT300 would beat this… If it doesn’t, then Nvidia is in serious trouble. Remember… Whatever Nvidia brings out isn’t going to be here until well after the Q4 season is over, probably 4-6 months out. Pricing on these AMD cards will likely be dramatically lower, and Nvidia will have a 5870X2 to deal with by then.

      Should be interesting… Nvidia’s new chip will almost certainly dwarf even the massive GT200. Sure, it’ll again re-take the “fastest single-GPU halo” for whatever that’s worth, but it’ll do it after Christmas, with enormous per-die costs. AMD will be able to respond quickly by dramatically cutting pricing on these cards, because they can afford to do so with lower per-die costs and lots of time “banked” selling these during Q4 for higher profit margins.

    • SoulSlave
    • 10 years ago

    Some reviews, including Tech Report’s, seem to be a tad disappointed with the performance of 5870 boards (might be just an impression, though).

    However, I’m not so sure that they were targeting the GTX 295 with this one (took that from Anand’s review). If they are keeping to their strategy of delivering great performance/price ratios, they would most likely be targeting the GTX 285; anything beyond that just doesn’t make any sense from this point of view.

    I mean, who really needs GTX 295 (or 4870 X2 for that matter)? Not to mention 5870 X2.

    Let’s not forget that the GT200 series from Nvidia is just temporary competition; GT300 is coming somewhere down the road (heard a few worrisome rumors, though), and it will all come down to how the 5870 (5850, X2, etc.) stacks up against those.

    I think it’s safe to say that NVidia will take the single GPU performance crown back, but at what cost?

    Maybe we will have a similar scenario in 2010 as we saw in 2008/2009.

    Until then, AMD (ATI, whatever…) should enjoy its upper hand, and bask in the sun a while (maybe a big while) longer.

    One last thought: “Boy, you’ve gotta love competition!”

    • derFunkenstein
    • 10 years ago


      • mako
      • 10 years ago

      Yeah, the 4.8 Gbps spec is the data rate per pin.

        • derFunkenstein
        • 10 years ago

        Oh, I see. Then multiply that by the width of the bus to get the full bandwidth: 4.8 Gbps x 256 = 1228.8 Gbps = 153.6 GB/sec.
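
        In code form, that back-of-the-envelope math works out like so (a minimal sketch, assuming the quoted 4.8 Gbps effective rate per pin and the 5870’s 256-bit bus):

        // Memory bandwidth from per-pin data rate and bus width.
        // Assumptions: 4.8 Gbps effective per pin, 256-bit memory interface.
        const dataRateGbpsPerPin = 4.8;
        const busWidthBits = 256;

        const aggregateGbps = dataRateGbpsPerPin * busWidthBits; // 1228.8 Gbps across the whole bus
        const bandwidthGBps = aggregateGbps / 8;                 // 153.6 GB/s (8 bits per byte)

        console.log(aggregateGbps, bandwidthGBps);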

    • BoBzeBuilder
    • 10 years ago

    Interesting. Anand’s review shows the 5870 to be rather inefficient under load, very loud, and with temperatures reaching as high as 89 degrees.
    And their testing methods are nearly identical to TR’s.

      • indeego
      • 10 years ago

      for temps:
      TR is 39 idle and 76 load.

      I checked a few other reviews and they are a tad lower than anand (aren’t they using open case?)

      tweaktown: 45.2 @idle (lowest of the bunch they tested. bad review site, can’t tell if it’s idle or not, but that is almost certainly idle)

      techspot: 38 idle (lowest) /
      87 under stress load (amongst the hottest.)

      Driverheaven:
      46/79

      HotHardware: “mid-40’s/Mid 80’s”

      So all in all I think TR is on point here. You’ll likely see 40-45 idle and 75-85 load. Man, there are a lot of crappy review sites out there, with really crappy article navigation systems.

        • Voldenuit
        • 10 years ago

        Anand is using OCCT for load power and temp measurements, whereas TR uses a game (L4D). As the Anandtech article itself mentions, OCCT and Furmark stress the cards beyond even the most demanding games, which probably explains the temperatures (and his noise measurements).

    • LurkingFool
    • 10 years ago

    This…

      • Tarx
      • 10 years ago

      NCIX had a bunch in stock from XFX and PowerColor (and maybe others too).

        • LurkingFool
        • 10 years ago

        Yeah, NCIX says in stock on their sales page, but when you check further they have none. Maybe they just sold out…like everyone else…everywhere…immediately. In my book, when you ship extremely limited quantity, it’s still a paper launch, no matter who does it. If, a week from now, there is still no stock, do we officially call it a paper launch? Two weeks from now? I hope I am totally wrong about this, but I have this gut feeling that this is a disguised paper launch. Just my opinion.

    • indeego
    • 10 years ago

    “the applications don’t take full advantage of it. ”

    which is sad. I won’t be getting this card, simply because I don’t need it.

    • Hattig
    • 10 years ago

    Great review of a great card that surely will go down in graphics card legends and folklore.

    This card hits aces across the board, and the 5850 will surely ace the perf/$ metric.

    How many shaders are used by the interpolators that are now not fixed function? That will account for some of the <100% scaling from the 4890.

    In addition, the next few months of drivers will probably find improvements of tens of percent, just like they did for previous card releases.

    • phez
    • 10 years ago

    Nvidia is going to be in for a world of hurt if their cards don’t make it for the holidays. $379 is borderline attractive for this card, but holiday pricing should definitely sweeten the deal.

    • Silus
    • 10 years ago

    Good review as always, but what’s up with this line in the conclusion ?

    “AMD has also managed to double its graphics and compute performance outright from one generation to the next”

    Do you mean theoretical compute and graphics performance? Because in your tests (and actually in any other review out there) the HD 5870 is not anywhere near double the performance of an HD 4890. It’s more like 50-60% on average.

      • khands
      • 10 years ago

      It’s not doubling from the 4890; it’s doubling from the 4870. A pretty important difference.

      The biggest issue I see is that you can get yourself a 4870 1GB Xfire setup for less than one of these bad boys; that’s where the real competition is going to come from until they either discontinue that card or drop 5870 prices.

        • Silus
        • 10 years ago

        No. It’s not even doubling the HD 4870. An HD 4890 is about 10% faster than a regular HD 4870, and if the HD 5870 is on average 50% faster than an HD 4890, it’s not going to double the performance of an HD 4870. That can be seen in other reviews that do use an HD 4870 in their tests.

          • Hattig
          • 10 years ago

          But it does pretty much equal a 4870X2.

          Maybe some people should wait for the 5890 eh?

          Anyway, AMD need to get some form of physics API implemented (is there an OpenPhysics?) and ray tracing (via an enhanced OpenRT?) so that the full potential of these systems can be explored in the future.

            • flip-mode
            • 10 years ago

            I’d say it’s not much harm done, but Silus is still on point: it hasn’t “doubled the performance” of the previous single chip.

            • Silus
            • 10 years ago

            Congratulations…You’re already trying to turn this into a fanboy discussion. Not surprising though…

            I said what I did, because it’s in the conclusion of the review. And I asked if Scott meant “theoretical performance”, since it sure isn’t double the performance, as you can confirm in ANY review out there, including TR’s.

      • YeuEmMaiMai
      • 10 years ago

      Wait till the drivers for this card mature a little; we will see some nice performance gains. It’s typical for cards to need a few driver revisions to get everything running smoothly.

    • idgarad
    • 10 years ago

    First off, why not use Lynnfield and find out if this card pins an 8x PCIe slot, so we can get the whole Lynnfield issue figured out with DX11 cards? There are hundreds of posts from people worrying about this.

    Two, why not an E8600 or typical Core 2 Quad configuration to see if all that GPU horsepower is choked by the processor?

    You have done a great “Let’s take the fastest system with the fastest video card and see how fast it goes,” but you haven’t bothered with “Let’s take a typical (median) system, throw in this new card, and see if it is even worth the money.”

    Face it, a sports car that can go 200 mph on a race track tells us little about how well it drives in rush hour.

      • Tamale
      • 10 years ago

      What? When was the last time you heard of a car reviewer telling us how a high-end sports car performs during rush hour?!

    • Rza79
    • 10 years ago

    Something seems to be wrong with the card tho.
    I expected it to beat the 4870X2 easily but it doesn’t.
    Being a single chip with 1600 shaders at 850MHz, it should have been much faster than it is today. I really hope it’s related to drivers and not something else. Hopefully it’s not some internal bandwidth constraint that can’t be fixed through drivers.
    Maybe TR can do a small update to the article with some memory OC’s to see how it impacts performance.

      • bittermann
      • 10 years ago

      What a weak a** argument for a monster of a new card…why don’t you just give congrats where it’s due? Of course drivers are not mature yet!

        • OneArmedScissor
        • 10 years ago

        And the games aren’t made for it, nor do they really need it, either.

        It has a 13% clock speed advantage. Why should it kill the 4870 X2?

        While it may add an enormous amount of processing power, that’s only because of how powerful the previous 55nm cards were. This is still just a die shrink, and a run of the mill step in GPU generations.

        It was never supposed to be the equivalent of Netburst to Conroe.

        • DieGo316
        • 10 years ago

        Will this card fit in a Cooler Master Scout?? Help plz!! I’m tryin’ to buy a new case =)

      • SoulSlave
      • 10 years ago

      Performance doesn’t scale that way. It’s like saying that a quad core is twice as fast as a dual core because it has twice as many cores (I know, I know, lousy comparison, but we’ve been on lousy comparisons from the beginning).

      It takes a fully optimized application to really scale with the addition of instruction processing units (cores, shader units, whatever). Even then, it won’t scale linearly.

      Of course, games are much easier to optimize, partly due to the nature of the OpenGL/DirectX instructions they use.

      But you just can’t expect a generation old game to show twice the performance in a new card, simply because it won’t use all it has to offer.

      The real comparison will be between Evergreen and GT300 in DX11 games.

        • Rza79
        • 10 years ago

        Seems you guys can’t read…
        I never said I expected double the performance, but shouldn’t it be faster than the 4870X2? Why is the 4870X2 (also 1600 shaders, but with lower clocks and dual chips as a disadvantage) still outperforming it in many cases? If you look at Anand’s review (which contains more games), you will see what I mean.
        I’m just saying, let’s hope it’s the drivers.

          • shalmon
          • 10 years ago

          Great review, as always…

          I say it’s a winner… I mean, it’s hard to be any more thorough:

          -twice the performance of the previous-gen single GPU (and sometimes better than dual-GPU/single-card rigs)
          -it’s quieter than previous single- or dual-GPU/single-card rigs
          -it consumes less power than previous single- or dual-GPU/single-card rigs at idle and at load
          -it runs a lot cooler while being quieter than previous single- or dual-GPU/single-card rigs
          -it consumes less chassis real estate than dual-GPU setups
          -all the performance comes without the scaling inconsistencies and config/setup problems of dual-GPU rigs
          -support for multiple displays and a new third display interface
          -oh, and let’s not forget it has support for an entirely new graphics API

          Normally after a launch like this we’re always praising one or a couple of those features BUT at the expense of another….all of the bases are rarely covered so nicely.

          And, well…I may be wrong, but ATI’s products tend to mature well the second time around; who knows what a “5890” might be like?

          My only disappointment was Crysis performance…what’s with that game? I wonder if anything will ever run it at a 60fps minimum.

          P.S. The temp/noise/power figures from Anand don’t jibe….hmph

      • Krogoth
      • 10 years ago

      You completely ignore power consumption.

      5870 competes against 4870X2 while consuming 40% of the energy when loaded.

      • swaaye
      • 10 years ago

      5870 has more computational power than 4870X2, but considerably less memory bandwidth. I think that’s the cause of the apparent perf deficit. You can see this in game tests where the 5870 usually wins at lower resolutions but loses somewhat at higher res.

      HAWX though is performing very strangely.

        • Rza79
        • 10 years ago

        Since the 4870X2 uses AFR, you can’t just add up their memory size and bandwidth.

          • swaaye
          • 10 years ago

          The review did. And the results seem to indicate a RAM bandwidth bottleneck. Yeah I know that AFR is not that hot on the efficiency, but still….

          The 5870 only has 33% more bandwidth than the 4870, and yet it’s twice the GPU.

          I believe that NV’s G300 is going to have a 512-bit GDDR5 setup, meaning it will have ~2x the bandwidth of 5870. That will be interesting.
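
          For what it’s worth, that 33% bandwidth figure is easy to sanity-check; a minimal sketch, assuming the commonly cited effective GDDR5 rates of 3.6 Gbps per pin for the stock 4870 and 4.8 Gbps per pin for the 5870, both on a 256-bit bus:

          // Rough bandwidth comparison: HD 4870 vs. HD 5870.
          // Assumptions: 3.6 Gbps/pin effective GDDR5 on the 4870, 4.8 Gbps/pin on the 5870,
          // both on a 256-bit bus.
          function bandwidthGBps(gbpsPerPin: number, busWidthBits = 256): number {
            return (gbpsPerPin * busWidthBits) / 8;
          }

          const hd4870 = bandwidthGBps(3.6); // 115.2 GB/s
          const hd5870 = bandwidthGBps(4.8); // 153.6 GB/s

          console.log(hd5870 / hd4870 - 1);  // ~0.33, i.e. roughly 33% more bandwidth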

            • LawrenceofArabia
            • 10 years ago

            That’s why I personally have a hunch that a hypothetical 5890 may not only be a higher-clocked part, but one with a larger bus for the GDDR5, probably 384 or even 512 bits. It just seems like the natural progression.

            And to be fair, expecting a jump akin to the 4870’s from this card was a bit naive, given that RV770 had more than double the shaders of its predecessor as well as GDDR5 to further its advantage. We probably won’t see anything like that again for a while.

            • shank15217
            • 10 years ago

            They will not increase the bus width for this chip. If they have a 5890, it will be higher clocked with faster memory.

    • gtoulouzas
    • 10 years ago

    As far as exploitation of newer GPU capabilities goes, I find Nvidia’s approach, with 3D Vision and PhysX, much more compelling than ATI’s “utilize three 24-inch monitors at the same time” and “higher antialiasing”.

    MUCH more compelling.

    It’s a shame, really, because imho ATI offers by far the best value for money with its cards.

    Yet, if all they’re good for is accommodating lazy ports or multi-monitor configurations, I find them useless. True 3D and hardware-accelerated physics, on the other hand, are game changers. Get your act together, ATI!

      • armistitiu
      • 10 years ago

      Name one game that actually benefits from PhysX. Also, would you rather buy a 120Hz monitor and some expensive glasses, or buy two more monitors and game like you’ve never gamed before? Seriously, 3D Vision is something ATI can do through the drivers. It’s probably not a priority right now. Although I find Eyefinity a cool thing, I doubt that everyone will buy more monitors right away. Still, it’s a nice thing I would want to have in the future.

        • gtoulouzas
        • 10 years ago

        On your first point, Mirror’s Edge and Batman: Arkham Asylum come to mind. You’re right that there aren’t many games at the moment, but there’s still much potential in hardware-accelerated physics…

        On the second point, I would have to go, 100%, unequivocally, with 3D Vision and a 200Hz monitor. I don’t think there’s any comparison between the effect a game (especially one optimised for 3D) will give you with 3D Vision, compared to two monitors. I’d go for 3D any time of the day.

        Perhaps it’s a matter of personal preference…

          • KilgoreTrout
          • 10 years ago

          Gaming on three monitors is great, especially for simulators of all kinds, but regular FPSes also benefit a lot from the extreme FOV. You get closer to the peripheral vision that you have in real life.

          Currently it is a hassle to do three-monitor gaming, though. I use the Matrox TH2Go, but setting up 1680×1050 x 3 (the maximum resolution possible) can be finicky and makes very specific demands on your monitors. I truly hope AMD follows through with Eyefinity.

          Being able to game on three 30-inch monitors would be sweet. Although it would take a more powerful card than the 5870 to push all those pixels. :o)

            • ssidbroadcast
            • 10 years ago

            Wow, I love your account name. Ideas, or a lack of them, can cause disease.

            • KilgoreTrout
            • 10 years ago

            “Life is no way to treat an animal.”

            Always nice to meet literate people on the interwebs. :o)

          • armistitiu
          • 10 years ago

          Mirror’s Edge, maybe, but I didn’t like that game. And Batman? Please, I don’t find his cape any more appealing if it’s rendered with PhysX, and that certainly won’t change the gameplay. All games that use PhysX use it for small eye-candy improvements that certainly don’t change the game in any big way. While I agree with you that GPUs should do more than rendering, and a good example would be a physics engine running on one, I don’t think that a closed, proprietary solution (PhysX) is the answer.

          About the 3D Vision thing…yes, it is a matter of personal preference. But you have to think about the price too. How much is a 200Hz monitor plus 3D Vision glasses? Now, how much would two normal monitors cost? I think the “normal” monitors win here.
          There is also one more thing I’d like to point out. If you buy three monitors, you also get to use them for things other than games. I for one would certainly like to play on one monitor, have my VS debugging on another, and use the last one for IM, music, and browsing. Or I can game on all three monitors at once.

      • bittermann
      • 10 years ago

      Did you see the hit NV took when PhysX was enabled? Not worth it, no matter how you look at it. And surely not compelling!

      • shank15217
      • 10 years ago

      Do you also find it compelling that Nvidia doesn’t improve PhysX performance on CPUs past two threads, even though modern CPUs can push four or eight threads? Being able to connect multiple monitors to a computer increases its usefulness for just about any application.

    • MadManOriginal
    • 10 years ago

    I don’t know…it’s impressive, but not mindblowingly so, at least purely in terms of gaming graphics, maybe just because current upper-end cards are more than sufficient. At best it’s what was expected, but somehow I’m just kind of *meh*. The other improvements like AA, AF, and power draw are very nice, but they just aren’t as sexy to hype. I think it’s probably the pricing that makes it underwhelming; we’ve obviously been spoiled since the HD4000 launch, but this fits in right about where one would expect for price/performance rather than changing the price/performance curve. The HD5850 may be the one to get after all in the high-performance segment, if one can’t wait for NV competition to drive down prices. Or maybe the ‘Juniper’ cut-down chip, which with 14 SIMDs (1120 ‘shaders’ or whatever) according to sources should still be faster than an HD4870…if that’s ~$125 it will be a great deal.

    Oh, the length is also a major bummer; it was one place where AMD had a distinct advantage over NV in the upper-end cards. It actually looks like there is quite a bit of empty PCB space in the naked card shot; is the length mainly to accommodate a larger cooler?

    • redpriest
    • 10 years ago

    PhysX also uses x87 in software mode, unfortunately. Since physics calculations can be made in parallel, this too is a dismal optimization.

    • El_MUERkO
    • 10 years ago

    I have a pair of 4870s, so I’m going to let the dust settle on the launch and maybe jump for a 5870 X2. Any idea when the X2 is due?

      • stmok
      • 10 years ago

      A month after the 5870…Or so I hear.

    • asdsa
    • 10 years ago

    Not even PRIME1 can do bitchin’ this time.

    • FubbHead
    • 10 years ago

    It’s not “will it play Crysis?” anymore. It’s “will it play Stalker?” 🙂

      • ironoutsider
      • 10 years ago

      I don’t see why; Stalker looks like crap. I tried to play Stalker: Clear Sky on dual 4830s and the game was very uninteresting and looked really crappy, even with DirectX 10.1, highest settings, and anti-aliasing. The game doesn’t look good at all and runs as slow as crap. Even Crysis didn’t run so slowly, and it looks 10x better! Don’t bring that Stalker crap in here!

        • yogibbear
        • 10 years ago

        FAIL GAMER ALERT!

        Stalker Clear Sky = one of the best games in the last 3-4 years (Only just beaten by SoC itself!)

        Portal, The Witcher, Stalker:CS…. um….. Plants vs. Zombies…

          • Meadows
          • 10 years ago

          STALKER never was a good game.

            • swaaye
            • 10 years ago

            disagree big time!!! Haven’t played Clear Sky though.

            • Kaleid
            • 10 years ago

            One of the best games of the last 5 years, especially if modded. Only the Thief series matches it in mood, atmosphere, and use of sound, light, and shadows.

            • Meadows
            • 10 years ago


            • GrimDanfango
            • 10 years ago

            Stalker is a very polarising game… I’ve come across a roughly equal number of people who love it and who loathe it.

            I’m definitely in the love contingent, but I can understand that it’s not everyone’s cup of tea. (And before anyone reiterates that I should play more games: I’ve tried a vast number of games over the years.)

            As far as I can tell, the reason most people dislike it appears to come down to their personal preferences in visual design and immersiveness.
            For me, graphics do not have to be pretty to be good. The graphics in Stalker were ugly by design; they were depicting a twisted, unsettling wasteland. The visual design was actually fantastic; it just isn’t a game world anyone would want to visit on holiday.
            Similarly for the game mechanics: I rather liked the fact that most guns felt clunky and awkward, and that you had to work through the game to cobble together more effective equipment. For a lot of people, the same things turned them right off the game.

            Everyone will just have to agree to disagree on this game… some people value flashy cinematic visuals over gritty atmospheric ones, some people value quick sharp action with plenty of reward feedback, some prefer a game you have to work at. Neither side is wrong.

            • kamikaziechameleon
            • 10 years ago

            What I didn’t like about Stalker includes:

            Launch glitches: the original unpatched game wasn’t compatible with several Nvidia chipsets, my former 6800 included.

            Bad, boring, predictable human AI.

            Did I mention glitches?

            Boring: the pacing and presentation were slow and dry, not building in an interesting way.

            What I liked:

            The original setting.

            The game’s concept.

            • Kaleid
            • 10 years ago

            I think I’ve played my share…more than enough.

            • derFunkenstein
            • 10 years ago

            Hint: if you have to mod the game to make it good, it’s not a good game.

            Also, it’s an FPS, right? If so, then Meadows is right.

            • swaaye
            • 10 years ago

            Uhg………..

            • Kaleid
            • 10 years ago

            Vanilla might be good, but a few changes here and there can create a masterpiece.

            Opinions, opinions and all… no truth claims should be made.

            • swaaye
            • 10 years ago

            Oh this site and its commentators. I really enjoyed STALKER SoC due to the atmosphere and gameplay style, but obviously it instills seriously strong feelings in the other direction for some people here.

            • Kaleid
            • 10 years ago

            Which is perfectly fine with me.

      • ClickClick5
      • 10 years ago

      No…will it run two instances of Crysis?!

        • ironoutsider
        • 10 years ago

        I totally agree with Plants vs. Zombies, but you, sir, are a moron and probably need to play more FPS games (I’m sure you’re not a moron, I just felt I needed to say that). I’m sure Stalker probably becomes a good game, but I just couldn’t get into it. It wasn’t exciting or interactive or… anything. Played it for 3 hours and found it extremely boring/painful because:

        1. Terrain detail looks like crap. Grass, buildings, dirt, water, everything about the landscape is gross. Can’t even see into the distance!
        2. I have to read text again?! WTF? I thought I quit doing that back in Elder Scrolls III: Morrowind! It wouldn’t kill them to hire some voice actors, dammit.
        3. Why is an invisible force field killing me over and over? Where is the autosave, dammit! I have to start over? WTF?!!!
        4. OH yeah! Eat headshots! WTF?! It’s not working!!! AHH, GRENADES!

        Get my point? Do yourself a favor and pick up Fallout 3 if you haven’t already, man. Now THAT is a game worth wasting your life on.

          • Kaleid
          • 10 years ago

          Fallout 3 could use some of STALKER’s atmosphere 😉

            • swaaye
            • 10 years ago

            Fallout 3 has atmosphere, but I don’t like it as much as I liked STALKER’s. I just don’t like the Fallout world’s style much.

            • Kaleid
            • 10 years ago

            Sure, but it’s nowhere near the STALKER atmosphere, IMO. STALKER feels a lot more dark, twisted, and dangerous, and the lighting engine can be absolutely gorgeous.

            There was also too much familiarity with the F3 engine because of Oblivion…

          • yogibbear
          • 10 years ago

          HahaahahahAHAAH Fallout 3….

          Okay, I did “enjoy” Fallout 3. But I would NEVER EVER EVER put Fallout 3 in any top games of the last 5 years…

          There are sooooo many things I disliked about Fallout 3 (but the gameplay etc. is good and interesting enough that I played through it twice): animations, story, lots of little RPG nitpicks, Fallout 1/2 comparisons (Bethesda being lame and not producing anything better than the awesomeness that was Daggerfall… Morrowind, Oblivion, and FO3 are all downhill from there in terms of game scope, accomplishments, and awesomeness!).

          I think you’re more deserving of the “hasn’t played enough FPSes” badge when you yourself admit to not getting too far into Stalker.

      • swaaye
      • 10 years ago

      Yeah I’ve heard that Clear Sky’s D3D10 mode is ridiculously demanding. Though I’m not convinced that the visuals justify it at all. I haven’t played that sequel yet.

      The original STALKER runs quite fast on my 8800GTX, including with the new Complete 2009 mod that adds quite a few of Clear Sky’s features, among other things. It’s too bad, though, that the game’s nifty deferred shading renderer screws up MSAA and that NV’s way of forcing AA causes a huge performance drop.

        • clone
        • 10 years ago

        Clear Sky was a decent improvement over Stalker: Shadow of Chernobyl………my only complaint with Stalker has always been that it should have had a co-operative campaign mode so friends could work together; the game was more like an RPG than an FPS.

        The original Stalker could also have used some more polish, but I loved the game regardless. They never got the bloodsuckers working quite right; I watched them get stuck often. They should have given them the ability to jump very far over almost any obstacle and up ramps the player couldn’t, and it also would have been fantastic if they had enabled them to hide in trees to pounce on unsuspecting players…… that would have been incredible.

        Anyway, regardless of the glitches, Stalker and Stalker: Clear Sky were by far the best first-person shooters I’ve played since Half-Life 2, EP1, and EP2.

        As for the ATI card, if and when it’s priced around $350 I’ll buy one; if that’s not anytime soon, I’ll buy a 4890 for $200 CDN.

        P.S. Nvidia has been downplaying DX11, which is usually an admission that they are having problems.

    • ub3r
    • 10 years ago

    Lol, still struggling on Crysis.

      • Krogoth
      • 10 years ago

      It is because Crysis is built on a worthless pile of junk code. It looks marginally better than the UT3 engine. The mediocre gameplay just adds insult to injury.

        • FubbHead
        • 10 years ago

        Yeah, sure it is.

        • no51
        • 10 years ago

        coolface.jpg

        • Meadows
        • 10 years ago

        Gameplay is fine and it looks better than UT3, or any UE3 game.

      • Meadows
      • 10 years ago

      Antialiasing always kills that engine.

        • ironoutsider
        • 10 years ago

        Multiplayer is a lot of fun, though. They just need a better ranking system and a way to keep track of stats. It’s too bad that people think the game is crap! I really liked Crysis.

          • shaq_mobile
          • 10 years ago

          Crysis was pretty bad. I’d really like to see companies stop making games that only excel in one department. I’d rather have a game that jacked an older engine and used the rest of the budget on sweet gameplay.

        • Rakhmaninov3
        • 10 years ago

        There are so damn many leaves and other edges constantly moving in the game that AA will be a burden for any GPU.

    • Krogoth
    • 10 years ago

    Performance-wise, the 5870 is slightly faster than the 4870 X2, but consumes 60% less juice when loaded. It also has better image fidelity.

    I think the power consumption is what gives the 5870 the decisive edge in this line-up.

      • ub3r
      • 10 years ago

      Wrong. The performance gives it the edge. The power consumption is a bonus.

    • bogbox
    • 10 years ago

    This is a good graphics card. They need to adjust the price when (or before) Nvidia launches their cards to really make a king out of this GPU.

    The 4800 series was so good because it had a (much) smaller price than the expensive 200 series, and not only good performance (the 200 was faster than the 4800).

    • methodmadness
    • 10 years ago

    Man, I just bought my 4890 three months ago…

      • OneArmedScissor
      • 10 years ago

      I wouldn’t worry about that. You also didn’t pay $400+, and still got one of the most capable cards made to date. :p

      • ssidbroadcast
      • 10 years ago

      Then you have a very-fast video card. It’s okay.

      • ClickClick5
      • 10 years ago

      My launch 4870 is still holding its ground. Your 4890 will not be hurting any time soon, so don’t worry. I’m going to do my best and wait for the 6xxx series.

      Lol, I can feel that there will be a few “green” users switching to the “red” side.

      My boys are back again! ATI!

    • rUmX
    • 10 years ago

    This is awesome. I too can’t wait for the 2GB model. Let’s see how that one performs at 2560×1600 before I make my decision.

      • Krogoth
      • 10 years ago

      I doubt 2GiB will help it.

        • armistitiu
        • 10 years ago

        I’m thinking it will help a bit, but the performance increase will only be observed at super-maxed-out settings at 2560x1600 with all the AA it can get.

          • khands
          • 10 years ago

          This right here: it runs out of memory for textures, etc., which is what kills its >4x AA performance.

    • Vasilyfav
    • 10 years ago

    What a beast of a card. Hopefully it will drive the final stake into Nvidia’s venerable G92 chip and make them hurry up with a competing card of their own to drive prices down.

      • The Dark One
      • 10 years ago

      Whether or not games companies embrace DX11, it’s a lot easier for AMD to trumpet their support for it than it was for 10.1 and their previous generation.

      I’m sure that’s why they’re pushing the entire line out so quickly.

        • JustAnEngineer
        • 10 years ago

        TSMC’s 40nm chip manufacturing process undoubtedly has a lot to do with why AMD is rushing the new GPUs to market. Other than Radeon HD4770, AMD’s current GPUs are fabricated on TSMC’s older 55nm process.

    • FireGryphon
    • 10 years ago

    Whoever took the shot of the card visible on the front page is awesome, because it looks like a diesel train engine.

    • rodidas
    • 10 years ago

    I can’t wait to see benchies of the 2GB version, as in some instances it seemed as though it was being held back by its lower VRAM at higher resolutions.

    • Prodeous
    • 10 years ago

    Indeed a nice review.

    I was hoping to see a Folding@Home performance overview, either here or anywhere. So many reviews, but none showing whether there were any improvements in the non-graphics computational power. 2.7 TFLOPS…

    The price to performance ratio I’ve seen across all the reviews identifies its true weakness, which I hope that the 5850 will address.

    Again nice review. Can’t wait for more.

      • ChronoReverse
      • 10 years ago

      We should hope for an OCL or DXC version of FAH to compare with Nvidia more directly. I’ve heard that the 5870 does better than the Nvidia cards using the Nvidia demos =O

        • Krogoth
        • 10 years ago

        While it consumes less juice. 😉

    • lycium
    • 10 years ago

    Thanks for the excellent review, Scott 🙂

    I was pretty surprised to see the power, heat, and noise results, however (very important to me when choosing a GPU), since they were basically the exact opposite of what AnandTech observed!

      • rodidas
      • 10 years ago

      As far as I can see, the heat they measured was under a stress-test program and did not reflect in-game load temps.

    • ssidbroadcast
    • 10 years ago

    “Nvidia has milked that G92 GPU as if it were a cow mainlining an experimental drug cocktail from Monsanto.”

      • xzelence
      • 10 years ago

      I concur whole-heartedly.

      • Tamale
      • 10 years ago

      lol, I was thinking the exact same thing. Great review of a great product, Scott!

      I’m particularly impressed with the idle power consumption. Nothing gets me more excited about high-power cards like this than knowing that I can put them in a machine I don’t always game on and feel somewhat responsible about my power draw.

        • ssidbroadcast
        • 10 years ago

        Same. Card is hella long, tho.

          • derFunkenstein
          • 10 years ago

          Yeah I don’t think it’d actually FIT in my machine.

          Not that my lame-o 1440×900 display can do anything with all that power anyway. LOL

            • tfp
            • 10 years ago

            OMG, maybe YOU could play Crysis.

            • derFunkenstein
            • 10 years ago

            yeah, I could actually play it at “native resolution” I imagine.

    • Ashbringer
    • 10 years ago

    ATI put in two exhaust ports instead of one big one, because one big one isn’t marketable…

    • Firestarter
    • 10 years ago

    I’d like to add that I read this article during TR’s downtime. Which is sad.

    • axeman
    • 10 years ago

    This thing is a monster.

    • Proxicon
    • 10 years ago

    Should we wait 3 months for the DirectX 11.1 card?

      • xzelence
      • 10 years ago

      likes this.

    • Anonymous Coward
    • 10 years ago

    I have a hard time seeing how Intel’s “software GPU” can match something like this. Certainly seems like an amazingly engineered, targeted, resource efficient device. Perhaps they hope AMD goes bankrupt. 🙂

    Looking forward to what this generation can bring to the laptop and shuttle cube sized market.

      • kvndoom
      • 10 years ago

      You’re wasting precious minutes of your life if you believe anything Intel promises about graphics. They’ve been feeding the public BS for over a decade. Wake me up when they actually have a working product for sale.

    • Hyperneko
    • 10 years ago

    simply.amazing. I wonder if it’ll fit in my CM Storm Scout….*grabs measuring tape*

      • ub3r
      • 10 years ago

      Woah!! Simply amazing, I wonder if it’ll fit in my Shuttle SP45…

      • DieGo316
      • 10 years ago

      I’m wondering just the same….maybe time for a new case XD

    • -_-
    • 10 years ago

    I love Sacred 2: Fallen Angel!
    It’s just a shame the developer went out of business and there won’t be any new patches. However, I hear of an expansion that may fix issues, but if that has issues too, I’m not sure where that leaves people.

    • Fighterpilot
    • 10 years ago

    Nice in-depth review, TR.
    Kudos to ATI on the power and heat numbers…..with twice the performance of the 48** series chip available.
    I’ll be adding one to my upcoming new build.

    • StuG
    • 10 years ago

    Gentlemen, looks like I’ve found my next card.
