AMD’s Radeon HD 6850 and 6870 graphics processors

For a journalist, there’s nothing better than having a good story to tell. At least, that’s always been my way of thinking, and we’ve had no shortage of intrigue, one-upsmanship, and swings of momentum in the GPU arena over the past year or so.

AMD grabbed the lead with the debut of its DirectX 11-class Radeon HD 5000-series graphics processors last September, well ahead of long-time rival Nvidia’s competing chips. These new Radeons were quite good products, with a few strokes of brilliance like the Eyefinity multi-monitor gaming feature, but those highlights were counterbalanced by a frustrating series of supply problems stretching into 2010 caused by TSMC’s troubled 40-nm chip manufacturing process. That same chipmaking process was a major contributor to uncharacteristically long delays in Nvidia’s DX11 GPUs, which left a frustrated AMD with a market largely all to itself—a market it couldn’t fully supply. Consumers groaned as a nearly unprecedented thing happened: prices on Radeon HD 5800-series cards rose above their introductory levels—and held there.

At the very end of the first quarter of the year, the first Fermi-based GeForces finally arrived. They ran hotter and louder but not much faster than the Radeon HD 5870, not exactly a winning combination. The outlook for Nvidia looked rather dim at that point, but a funny thing happened on the way to AMD’s coronation as the kings of the DX11 generation. The new GeForces’ performance quietly crept upward as Nvidia tuned its drivers for this novel, unfamiliar architecture, and then, in the middle of July, the GF104 debuted. This GPU, derived from the Fermi architecture, was smaller and more tightly focused on achieving strong performance in today’s games. Onboard the GeForce GTX 460, it gave the incumbent Radeons much stiffer competition. Soon, we were declaring the GeForce GTX 400 series the new kings of value and hinting strongly that AMD needed to cut Radeon prices to win our recommendation.

Oddly enough, AMD didn’t budge for a while, likely because supply constraints meant the firm was selling all of the graphics chips it could secure from TSMC. But AMD had, well, another card or two up its sleeve that would allow it to challenge the GTX 460 much more directly. Those cards, we now know, are called the Radeon HD 6850 and 6870, a pair of new offerings that come as part of AMD’s annual fall refresh of its GPU lineup. They are both based on a leaner, meaner new graphics chip code-named Barts, a part of AMD’s “Northern Islands” series of GPUs.

Barts? Where’s Homer?

The funny thing about Barts is that it’s made using the exact same 40-nm fabrication process that has caused both AMD and Nvidia no end of trouble. AMD had little choice in the matter: TSMC outright canceled its plans for a 32-nm fabrication process, and both of the major GPU makers had to adjust their plans rather abruptly at that point, focusing on improvements to their chip designs to deliver additional goodness in this next generation of products.

Yet in the midst of some real frustrations, there’s good news on several fronts. AMD Graphics CTO Eric Demers told us last week that TSMC had finally gotten a handle on the problems with its 40-nm process technology over the summer. If so, the latest chips from both AMD and Nvidia should be cheaper, faster, and more plentiful. That trend should be reinforced by some choices AMD has made along the way, especially the fact that Barts is actually smaller—and thus cheaper to produce—than the Cypress chip it replaces. Barts’ mission is to address the value and performance sweet spot in the middle of the market, obviously opposing the GeForce GTX 460. Although the cards based on Barts are dubbed 6850 and 6870 and promise performance fairly similar to the products they replace, they should be less expensive, draw less power, and produce less heat than their predecessors.

A block diagram of the Barts GPU. Source: AMD.

The image above maps out the major components of the Barts chip in a familiar fashion. For the most part, this is the same core GPU architecture we know from the Cypress chip behind the Radeon HD 5800 series, only scaled down slightly and tweaked in several ways. Cypress has 20 SIMD arrays, each with 16 five-ALU-wide execution units, giving it a total of 1600 arithmetic logic units, or ALUs, with which to process the various types of shaders involved in the DX11 graphics pipeline. Barts dials back the SIMD array count slightly to 14, giving it a grand total of 1120 shader ALUs. With this GPU architecture, that change has some natural implications. The texture units, for instance, are aligned with the chip’s SIMD arrays, so those drop in number proportionally, as well. Here are the vitals on Barts and some of its closest friends, to give you a sense of things.

          ROP      Textures   Shader  Rasterized  Memory        Estimated    Approximate  Fabrication
          pixels/  filtered/  ALUs    triangles/  interface     transistor   die size     process
          clock    clock              clock       width (bits)  count (M)    (mm²)        node

GF100     48       64         512     4           384           3000         529*         40 nm
GF104     32       64         384     2           256           1950         331*         40 nm
RV770     16       40         800     1           256           956          256          55 nm
Cypress   32       80         1600    1           256           2150         334          40 nm
Barts     32       56         1120    1           256           1700         255          40 nm

*Best published estimate; Nvidia doesn’t divulge die sizes
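
Those ALU and texture figures fall straight out of the SIMD math. In this VLIW5 design, each SIMD array carries 80 ALUs (16 execution units of five ALUs each) and is paired with a quartet of texture units. Here is a quick sketch of the arithmetic in Python; the four-texture-units-per-SIMD figure is inferred from the table above rather than quoted from AMD.

```python
# Back-of-the-envelope check of the VLIW5 scaling in the table above.
# Assumed organization: 16 execution units x 5 ALUs, plus 4 texture
# units, per SIMD array.
def vliw5_resources(simd_arrays):
    shader_alus = simd_arrays * 16 * 5
    texels_per_clock = simd_arrays * 4
    return shader_alus, texels_per_clock

for chip, simds in (("RV770", 10), ("Cypress", 20), ("Barts", 14)):
    alus, tex = vliw5_resources(simds)
    print(f"{chip}: {alus} ALUs, {tex} texels filtered/clock")
# RV770: 800 ALUs, 40 texels -- Cypress: 1600, 80 -- Barts: 1120, 56
```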

With the GF104, Nvidia held texturing capacity steady at the GF100’s rate while reducing nearly everything else—ROP rate, rasterization rate, memory interface width, and ALU count. The result was a GPU probably better tuned to the needs of current games.

With Barts, AMD has made a different set of choices, reducing shader processing and texturing capacity versus Cypress while retaining the same ROP rate and memory interface size. Oddly enough, these very different choices may also produce a GPU better tuned for the usage patterns of today’s game engines, given the present state of AMD’s GPU architecture. After all, Cypress doubled up on RV770’s resources in nearly every way but memory bandwidth. If that left it, at times, with an excess of shader and texturing power, then Barts may well be a more optimal balance of resources overall. That may especially be the case when high levels of antialiasing are in use, since Barts has the same ROP blending power, clock for clock, as Cypress—and as a smaller, newer chip, Barts may have a little more clock speed headroom.

Cypress (left) versus Barts (right)

By the way, you may have noticed the presence of two “ultra-threaded dispatch processor” blocks in the diagram above, and if you’re into these things, you may have recalled that the diagrams of Cypress only showed one of these blocks. Truth is, though, that this diagram of Barts is simply more detailed than the earlier one of Cypress. AMD’s David Nalasco tells us both chips have dual “macro sequencers,” as AMD calls them internally, to “dispatch instructions to the SIMDs.” (There’s also a “micro sequencer” in each SIMD.) As the diagram shows, each macro sequencer has instruction and constant caches. One bit of detail missing above is a crossbar between the two “rasterizer” blocks and the macro sequencers, so either sequencer can be fed by either rasterizer.

To take you further down the rabbit hole, the presence of two rasterizers in the diagram above may be a little bit misleading. As with Cypress, Barts has dual scan converters, but it lacks the setup and primitive interpolation rates to process more than one triangle per clock cycle. That’s in contrast to the GF104, which can process two polygons per clock tick, or the GF100, whose max is four.

Although the setup rate hasn’t changed in Barts, the chip’s internal geometry processing throughput should be higher thanks to some selective tweaks. One of DirectX 11’s key features is tessellation, in which a relatively low-polygon model is sent to the GPU, and the chip then adds additional detail by using a mathematical description of the surface’s curves and, sometimes, a texture map of its bumps. Adding detail once the model is on the chip can reduce host-to-GPU communications overhead, oftentimes dramatically; it also makes much higher degrees of geometric complexity feasible. One of the challenges tessellation presents is the management of data flow. As essentially a very effective form of compression, tessellation involves a relatively small amount of input data and a much larger, sometimes daunting amount of output data. To better deal with this data flow in Barts, AMD “re-sized some queues and buffers,” according to Nalasco, “to achieve significantly higher peak throughput” in certain cases. At the same time, thread management for domain shaders, which handle post-expansion geometry processing, has been improved.
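
To put a rough number on that amplification, consider uniform tessellation of a single triangular patch: a factor of F subdivides each edge into F segments and yields roughly F² output triangles from one input primitive. The sketch below assumes uniform, integer factors, whereas real DX11 tessellators also handle fractional factors and per-edge control, but it shows why those post-expansion buffers and queues matter.

```python
# Uniform tessellation of a triangular patch: splitting each edge
# into F segments yields F*F sub-triangles, so a tiny input stream
# can expand into a torrent of post-tessellation geometry.
def tessellated_triangles(factor):
    return factor * factor

for f in (1, 4, 16, 64):  # DX11 caps the tessellation factor at 64
    print(f"factor {f:2}: 1 patch -> {tessellated_triangles(f):5} triangles")
# factor 64: 1 patch -> 4096 triangles
```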

AMD claims these changes had “negligible impact” on Barts’ transistor budget and power draw, yet the firm has measured tessellation throughput for Barts at up to twice that of Cypress in directed tests. The biggest gains come at lower tessellation levels, as shown in the image below. At higher levels, the chips’ common setup rate likely becomes a limiting factor, and the two are separated only by Barts’ slightly higher clock speed.

Barts vs. Cypress tessellation throughput. Source: AMD.

Interestingly enough, we were able to measure a substantial difference between Cypress and Barts ourselves using the hyper-tessellated Unigine Heaven demo.

Barts hasn’t quite matched the GF104 and friends, with their truly parallel geometry processing capabilities, but it has narrowed the gap quite a bit.

Barts also has some image quality improvements, one in hardware and one in software, that we’ll discuss shortly, but that’s about it in terms of changes to the core graphics hardware. We were a little bit surprised to see Demers claiming rather large gains in performance per chip area for Barts versus Cypress, on the order of 25%, given that the two chips share the same underlying architecture and are made on the same fabrication process, but that’s precisely what happened during the press event for this product. Strangely, the comparison being made was between the Radeon HD 6870—a fully enabled Barts chip running at peak clock speeds—and the Radeon HD 5850—a partially disabled Cypress variant with lower clocks. I also run faster than Usain Bolt if you cut off one of his legs below the knee, but that’s not something I like to advertise.

Texture filtering quality improvements

We were pleased when Cypress and the Radeon HD 5000 series introduced revised texture filtering with some nifty properties, including angle-invariant anisotropic filtering. Although that’s just as geeky as it sounds, the real-world impact is noteworthy, because texture filtering has a huge influence on image quality. If objects shimmer, sparkle, or crawl as you move around in a game, yeah, that’s probably poor texture filtering.

In other words, “bad filter make Thog’s Xbox suck.”

We know how to filter textures to eliminate such artifacts, but doing so requires lots of sampling and is, in performance terms, rather expensive. As a result, GPU makers have devised shortcuts, attempting to produce the best compromise between image quality and performance. Some of those filtering algorithms have been pretty complex, and although they haven’t all been great in every way, they’ve allowed us to cope. That’s often the name of the game in real-time graphics.

Over time, as transistor budgets have grown, the trade-offs between performance and quality have become less stark. Cypress represented a high-water mark of sorts because it promised to eliminate one of the worst compromises in older filtering algorithms, namely that surfaces at some angles of inclination weren’t filtered as well as others, while improving filtering quality overall.

Trouble is, after the Radeon HD 5000 series had been in the market for a while, folks started noticing some problems with Cypress’ texture filtering, especially in textures with lots of fine, high-contrast detail. This problem wasn’t evident in every case—heck, I never noticed it myself while gaming on a Cypress card—but it turned out to be quite real. At the press event for Barts, AMD Graphics CTO Eric Demers admitted that it was an issue with Cypress-era hardware.

We’ve replicated an example he showed from the D3D AF Tester application using a high-frequency checkerboard texture. In the image below, you’re looking down a 3D-rendered cylinder with that texture mapped to the interior walls. As the squares in the checkerboard become much too small to represent with a single pixel, the goal of good filtering is to produce a smooth, visually comprehensible representation of what’s happening at a sub-pixel level. On Cypress, the image produced looks like so:

Cypress

There are several very obvious transition points that form rings within the image above. Those obvious transitions represent a failure of blending, and in a game with, say, a very detailed texture of a road stretching out ahead, they have the potential to translate into visible lines that travel ahead of you, looking cheesy. (For those of us old enough to remember the bad old days of bilinear-only filtering on early 3D chips, this effect might induce flashbacks.)
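
For reference, the blending in question is the trilinear part of texture filtering: the hardware derives a level-of-detail (LOD) value from each pixel’s texel footprint and uses its fractional part to mix two adjacent mip levels. The toy calculation below is the textbook version, not AMD’s hardware logic. The point is that the blend weight must sweep smoothly from 0 to 1; any abrupt step in it, or an abrupt kernel change partway through a mip level as in Cypress’ case, shows up as a ring.

```python
import math

# Textbook trilinear filtering: pick two mip levels from the pixel's
# texel footprint and blend between them with the fractional LOD.
def trilinear_blend(texels_per_pixel):
    lod = math.log2(max(texels_per_pixel, 1.0))
    fine = int(lod)          # finer mip level (0 = base texture)
    weight = lod - fine      # must sweep smoothly from 0 to 1
    return fine, fine + 1, weight

# Walking "into" the cylinder grows the footprint; the blend weight
# should ramp continuously, or a visible ring appears.
for footprint in (1.0, 1.5, 2.0, 3.0, 4.0):
    lo, hi, w = trilinear_blend(footprint)
    print(f"footprint {footprint}: mip {lo} + mip {hi}, weight {w:.2f}")
```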

Demers relayed to us the dismay that he and his team had when they realized this problem made it into Cypress hardware. They thought they’d created a very elegant solution for a long-standing challenge, but in certain cases, it wasn’t quite perfect. The problem, he said, is not blending between mip levels but a filter transition within a mip level. The transition between two kernels doesn’t quite happen as it should. Demers was adamant that Cypress does not cheat on its level-of-detail calculations (a common performance optimization) and that the issue is simply the result of a mistake. Fortunately, the error has been corrected in the Barts filtering hardware, and the result is much smoother transitions.

Barts

Nvidia’s texture filtering algorithm strikes a somewhat different balance, as you can see below. (All of these tests were produced using the default filtering quality in the video drivers.)

GF104

On the Radeons, the checkerboard pattern melts into a gray circle well before the end of the cylinder, whereas the GF104 shows detail all the way to the end, with some intriguing and intricate moire patterns. Those patterns are smoother and more regular than the ones on the Radeons, which translates into less visual noise. The odd thing about the Nvidia result here is that puffy, uh, donut shape in there. (Mmmmm… donuts.) Heck, the donut isn’t perfectly round, either; it’s more octagonal. Switching on colored mip levels will give us a better sense of what’s happening.

Cypress

Barts

GF104

Now the donut on the GeForce looks like a big, red stop sign, which highlights the fact that Nvidia applies a little less filtering to objects at certain angles of inclination. Coloring the mip levels also reveals clearly that Nvidia does less accurate blending between mip levels than AMD, which is what causes the donut effect. The color gradients are much finer on the Radeons, and those smoother transitions produce no visible rings in our high-frequency checkerboard sample.

Which is better overall? I’m not sure I can say, and this is a single, static example that’s very tough to handle. In games, the differences between the GPUs are much less readily evident. The reality is that AMD and Nvidia appear to be very closely matched—even more so now that Barts fixes that filter transition problem.

Morphological AA: sounds cool, right?

The other new image quality enhancement arriving with Barts is a software-based antialiasing filter that AMD has dubbed morphological AA. AMD has been playing around with various custom, post-process AA filters for some time now, and this new one is the next step. Unlike some of AMD’s past custom filters, morphological AA is based on a compute shader and, as I understand it, simply looks at a finished image, detects edges with rough transitions, and attempts to smooth them.

The advantages of this approach are several. The morphological filter has relatively low performance overhead, and because it’s simply looking at a finished scene and doing its work, it will smooth out rough transitions even if they don’t occur along polygon boundaries. The most widely used AA method, multisampling, simply won’t address jagged edges within textures or the like. Also, because it’s a post-process effect, morphological AA should be compatible with a wide range of games based on DX9 and up. Since this feature is implemented in DirectCompute, it will be available via a driver update for owners of existing 5000-series Radeons, as well as the 6850 and 6870.
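
AMD hasn’t published the algorithm, but the general shape of a morphological AA pass is well known from Intel’s MLAA paper: find color discontinuities in the finished frame, classify the edge shapes, and blend neighboring pixels to approximate coverage. The deliberately crude sketch below captures only the post-process idea, using luma-threshold edge detection and a naive blend, where the real thing classifies edge patterns to estimate slopes.

```python
import numpy as np

def naive_morphological_aa(img, threshold=0.1):
    """Post-process AA sketch for an RGB float image (H x W x 3, 0..1).
    Finds luma discontinuities and averages across them. Real MLAA
    classifies edge patterns (L/U/Z shapes) to estimate coverage;
    this only illustrates the 'filter the finished frame' idea."""
    luma = img @ np.array([0.299, 0.587, 0.114])
    out = img.copy()
    h_edge = np.abs(np.diff(luma, axis=0)) > threshold   # (H-1, W)
    v_edge = np.abs(np.diff(luma, axis=1)) > threshold   # (H, W-1)
    hy, hx = np.nonzero(h_edge)      # blend vertically across edges
    out[hy, hx] = out[hy + 1, hx] = (img[hy, hx] + img[hy + 1, hx]) / 2
    vy, vx = np.nonzero(v_edge)      # blend horizontally across edges
    out[vy, vx] = out[vy, vx + 1] = (img[vy, vx] + img[vy, vx + 1]) / 2
    return out
```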

AMD only got a driver with morphological AA enabled to us a couple of (very busy) days ago, so we haven’t had much time to play with it and form many impressions. We have produced some sample images, shown below, comparing morphological AA with multisampling. You should know that the morphological AA sample image was produced by a tool from AMD that applies the filter to a screenshot. We had to use this tool because the post-process filter’s effects don’t show up in screen captures.

No antialiasing

4X multisampling

Morphological AA (without MSAA)

8X multisampling

These images from Bad Company 2 are a pretty good example of the problems with multisampled AA. Quite a few of the object edges in this shot aren’t touched by MSAA, regardless of its strength. That seems to be a quirk of a lot of modern game engines, including this one. The result is that the sight and the top of the player’s gun are smoothed nicely by MSAA, but very little else in the image is—not the foliage, nor the edge of the cliff curving through the top right portion of the image. By contrast, with morphological AA, a great many of the object edges in the scene are softened.

One disadvantage of morphological AA is that, since the filter operates on a completed scene, it lacks any sort of sub-pixel precision. The edge-detection algorithm must simply guess about the angles of the slopes it’s smoothing. Look, for instance, at the right side of the middle tine of the sight on the player’s gun above. Without AA, that tine is three pixels wide most of the way down and then fans out into a fourth pixel on the right. The morphological filter turns that into a fairly pronounced angled edge, while multisampling (especially at 8X) reveals that the line runs very nearly straight up and down.

Ok, if you don’t see that, I don’t blame you. It’s a bit subtle, but the limitation is a real one, and it may mean that object edges tend to crawl or warp while in motion. We need to spend more time with this feature to get a fuller impression of its worth.

The Catalyst drivers for the new Radeons have a couple of other changes, as well. In addition to morphological AA, the older edge-detect CFAA filter remains an option, but the narrow and wide tent filters are not available. Nalasco tells us the decision to remove the tent filters was a consequence of some performance improvements AMD recently made to its edge-detect filter that rendered the tent filters superfluous.

AMD has added another checkbox in the Catalyst Control Center titled “Disable surface format optimization.” This language refers to the fact that AMD’s drivers have been, for certain games, converting HDR textures into lower-precision formats in order to raise performance. Nvidia has been banging the drum about this issue for a while now, saying that it would never do such a thing. Only a handful of games seem to be affected, none of which we’ve used recently for performance testing, so we haven’t spent much time worrying about it. (The list includes Dawn of War II, Empire Total War, Need for Speed: Shift, Oblivion, Serious Sam II, and the original Far Cry.) AMD claims image quality is not visibly reduced by this change, but it has decided to make the concession of letting the user disable this optimization if he wishes. Given the choice, we would prefer to let the game developers choose the appropriate texture formats, so we conducted all of our testing with this checkbox ticked.
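
For the curious, the demotion in question reportedly swaps 64-bit FP16 render targets, with a 10-bit mantissa per channel, for the 32-bit R11G11B10 float format, with 6-bit mantissas for red and green and a 5-bit one for blue. That reading is assembled from the public back-and-forth rather than from AMD’s documentation, but the precision trade-off is easy to illustrate:

```python
# FP16 vs. R11G11B10 float: similar dynamic range, different precision
# and bandwidth. Mantissa bits per channel are from the format specs.
formats = {
    "FP16 RGBA (64 bpp)":  (10, 10, 10),
    "R11G11B10F (32 bpp)": (6, 6, 5),
}
for name, mantissas in formats.items():
    # Relative quantization step is roughly 2^-mantissa_bits.
    steps = ", ".join(f"{2.0 ** -m:.4f}" for m in mantissas)
    print(f"{name}: relative step per channel ~ {steps}")
# FP16: ~0.0010 everywhere; R11G11B10: ~0.0156 (R, G) and ~0.0312 (B)
```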

Some display and multimedia changes

Although Barts hasn’t changed much in the 3D department, it has seen some revisions in other places, including its display and video playback hardware. AMD has made quite a bit of hay out of being able to support three or more monitors with a single GPU over the past year, particularly in relation to multi-screen gaming with Eyefinity. We’re not surprised, then, to see the firm pushing ahead with support for newer display output capabilities.

The most noteworthy change here is probably support for version 1.2 of the DisplayPort standard. This version has twice the bandwidth of 1.1 and enables some novel capabilities. One of those is transport for high-bitrate audio, including the Dolby TrueHD and DTS-HD Master Audio formats, over DisplayPort. Another is the ability to drive multiple displays off of a single output, either via daisy chaining or the use of a break-out DisplayPort hub with multiple outputs. In one example AMD shared with us, a hub could sport four DVI outputs and drive all of those displays via one DP input. What’s more, Barts-based Radeons can support multiple display timings and resolutions over a single connector, so there’s tremendous flexibility involved. In fact, for this reason, AMD has no need to offer a special six-output Eyefinity edition of the Radeon HD 6870.
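
The doubling comes from DisplayPort 1.2’s faster HBR2 signaling: 5.4 Gbps per lane versus 2.7 Gbps in DisplayPort 1.1, across four lanes, with 8b/10b encoding claiming 20% of the raw bits. A quick back-of-the-envelope check, ignoring blanking and timing overhead, shows why a single DP 1.2 link can feed a four-monitor hub:

```python
# DisplayPort link budget: 4 lanes, 8b/10b encoding (80% efficient).
def dp_payload_gbps(gbps_per_lane, lanes=4):
    return gbps_per_lane * lanes * 0.8

dp11 = dp_payload_gbps(2.7)    # DisplayPort 1.1:  8.64 Gbps
dp12 = dp_payload_gbps(5.4)    # DisplayPort 1.2: 17.28 Gbps

# One 1920x1200, 60Hz, 24-bit stream, before blanking overhead:
stream = 1920 * 1200 * 60 * 24 / 1e9   # ~3.32 Gbps
print(f"DP 1.1 payload: {dp11:.2f} Gbps, DP 1.2: {dp12:.2f} Gbps")
print(f"DP 1.2 fits ~{dp12 / stream:.1f} such streams")  # four is easy
```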

The output array on XFX’s Radeon HD 6870

Barts’ standard port array will sport two mini-DisplayPort 1.2 outputs, a single HDMI output, one dual-link DVI port, and one single-link DVI output. Yes, AMD has chosen to drop dual-link support on that second DVI output in favor of more DisplayPort connectivity, and yes, that move seems to be a bit premature to us. A Barts-based card can drive a second dual-link DVI monitor using a DisplayPort to DVI converter, but the dual-link versions of those dongles tend to be relatively pricey. Then again, so are monitors that require dual-link inputs.

Speaking of expensive things, that other port on the card above supports HDMI 1.4a, so it’s compatible with stereoscopic televisions and can be used for the playback of Blu-ray discs in stereoscopic 3D. One other modification of note to the Barts display output hardware is an update to its color gamut correction that should allow for more accurate color representation on wide-gamut panels.

If you do plan to slip on a pair of funny glasses in your living room in order to watch a movie, you should be happy to hear that the Barts UVD block can now decode the MVC codec (used for Blu-ray 3D discs) in hardware, along with the MPEG4 (DivX/Xvid) codec. AMD has also extended its support for MPEG2 acceleration to the entropy decode stage of the video pipeline, further unburdening the CPU. No UVD update would be complete without some improvements to the post-processing routines used to address problems with low-quality source video, and AMD hasn’t left us hanging there, either. What’s more, at its 6800-series press event, AMD had representatives from CyberLink, ArcSoft, and DivX on hand to pledge support for the new UVD hardware in their respective media player programs.

Beyond those changes, the folks in AMD marketing have been working overtime on some crucial marketing name modifications, playing off of the success of “Eyefinity” as a semi-clever play on the word “eye.” We now have EyeSpeed and EyeDef, although I kind of get fuzzy when it comes to tying those things to GPU attributes. I do know that AMD’s Stream Computing initiative has been renamed as AMD Accelerated Parallel Processing Technology, which has a lot more vowels and consonants.

On the initiative front, AMD has decided to counter Nvidia’s 3D Vision push by partnering with third-party makers of shutter glasses, stereoscopic displays, and middleware that adds stereoscopic 3D support to current games. These activities will take place under the “HD3D” banner. We’re a little bit unsure what to make of this effort, for a host of reasons, including the simple fact that we’re dubious on the long-term prospects for glasses-based stereoscopic display schemes. We also have a pretty strong impression that the GPU makers will need to support stereo 3D actively and work directly with game developers in order to get really good results. Middleware vendors like DDD, one of AMD’s new partners, don’t help their case when they claim, for instance, that their TriDef Media Player “automatically” converts 2D source DVD, photos, and videos to stereoscopic 3D. Cardboard cutouts, ahoy! On the flip side, we expect AMD isn’t investing too much in stereoscopic support by cobbling together an initiative like this one. If stereo 3D schemes prove unpopular with consumers, they’ll have less to lose. In other words, Nvidia is grabbing failure—or success—by its bare hands, while AMD is using robotic arms.

Introducing the Radeon HD 6850 and 6870

Now that we’re several thousand words into our review, let’s talk about the new Radeon graphics cards. Here are the most relevant specs:

                  GPU     Shader  Textures   ROP      Memory    Memory        Idle/peak  Suggested
                  clock   ALUs    filtered/  pixels/  transfer  interface     power      e-tail
                  (MHz)           clock      clock    rate      width (bits)  draw       price

Radeon HD 6870    900     1120    56         32       4.2 Gbps  256           19W/151W   $239
Radeon HD 6850    775     960     48         32       4.0 Gbps  256           19W/127W   $179

Both cards have 1GB of GDDR5 memory onboard, and AMD has chosen to de-tune the 6850 only by reducing the number of active SIMD engine/texture unit pairs from 14 to 12 and lowering clock speeds.

Notice the very nice prices in the table above. These cards amount to a major overhaul of the value proposition in the middle of AMD’s lineup.

                  Peak pixel   Peak bilinear     Peak memory  Peak shader
                  fill rate    INT8 texel        bandwidth    arithmetic
                  (Gpixels/s)  filtering rate*   (GB/s)       (GFLOPS)
                               (Gtexels/s)

Radeon HD 5830    12.8         44.8              128.0        1792
Radeon HD 6850    24.8         37.2              128.0        1488
Radeon HD 5850    23.2         52.2              128.0        2088
Radeon HD 6870    28.8         50.4              134.4        2016

*FP16 is half rate

At $179, the Radeon HD 6850 essentially replaces the weak-sister Radeon HD 5830, yet it has twice the ROP rate of that rather unfortunately crippled Cypress derivative, is based on a smaller board with more modest power consumption, and, well, we’ll show you performance shortly. At $239, the Radeon HD 6870 supplants the Radeon HD 5850, yet the newer card costs less, has a higher ROP rate and slightly more memory bandwidth, with comparable specs otherwise.

Above are a couple of Radeon HD 6870 cards from XFX and Sapphire, both of which are available now, and both of which look to be based on AMD’s reference design. 6870 cards require dual 6-pin auxiliary power connectors, and at 9.75″ long, they should be considerably easier to cram into a case than the over-11″ Radeon HD 5870. They’re practically the same size as the Radeon HD 5850, though.

Pictured above is the reference version of the Radeon HD 6850, which has a single 6-pin power input and measures 9″ long.

Right out of the gate, XFX is offering a version of the 6850 based on its own custom board design that’s a quarter-inch shorter than AMD’s and rocks a Zalman-esque dual-heatpipe cooler. XFX says it’s using higher-quality components to give its cards longer life and better overclocking headroom.

Unfortunately, both of these XFX cards are currently selling for 20 bucks above AMD’s suggested price at Newegg, while other versions, like this Sapphire 6850, are priced in line with AMD’s guidance. XFX may be able to command something of a premium thanks to its lifetime warranty and solid reputation for support, but $20 seems like a lot to ask.

From left to right: Radeon HD 5870, 6870, and XFX’s custom 6850

The competition sharpens its swords

Of course, Nvidia wasn’t about to let AMD introduce new graphics cards without a welcoming committee. The greeting ceremony for Barts arguably started several weeks ago, when Nvidia dropped the price of GeForce GTX 460 768MB cards to $169. Then things got weird, as AMD held out on revealing Radeon HD 6800-series pricing to reviewers until earlier this week. Nvidia then slashed its prices quicker than a fireworks tent on the fifth of July, taking the GeForce GTX 460 1GB down to $199 and dropping the GTX 470 to $259.

That maneuver prompted an amusing back-and-forth in which AMD sent out a retaliatory e-mail claiming Nvidia’s price cuts were only temporary—complete with a promotional directive from Nvidia written in French as ostensible proof. Nvidia responded by saying, essentially, “Nuh uh!” and insisting its price cuts are permanent. We’ll take them at their word for now, but hold them to it later.

In fact, Nvidia tells us to expect the cards available at $199 to be clocked somewhat higher than the GTX 460’s original 675MHz base clock, as its partners de-emphasize the lower-clocked models over time. The MSI Cyclone card pictured above is right at $199.99 at Newegg at present, and we’ve included it in our tests over the following pages.

Not only that, but Nvidia and its board partners have equipped us with a handful of intriguing new GeForces in the past week.

This imposing fellow is a GeForce GTX 460 1GB card from MSI with an 810MHz core clock and 3.9 Gbps memory. That’s positively stratospheric compared to the initial 675MHz core and 3.6 Gbps memory of the GTX 460, and the higher frequencies should translate pretty directly into stronger performance. Better still, this version of the card, known rather comically as the Hawk Talon Attack, features 0.4-ns GDDR5 memory that promises additional overclocking headroom. We haven’t yet had time to test its limits, but between the RAM, the dual-fan cooler, and the fact that MSI’s software offers overvolting of the GPU core and memory, yeah, we’d like to try soon. This puppy is going for $215 at Newegg right now, not far above the GTX 460 1GB’s base price. We have a full set of performance results for this card.

If you can’t be bothered to overclock a graphics card yourself and 810MHz just isn’t enough, there’s this unassuming little number from EVGA, the GeForce GTX 460 1GB FTW edition. Currently selling for $229, this GTX 460 1GB is clocked at a nosebleed-inducing 850MHz with 4 Gbps memory.

There is precedent for this sort of clock speed and performance creep in Nvidia graphics card models over time. Heck, the GeForce GTX 260 actually transitioned from 192 to 216 ALUs and saw prevailing speeds rise from a 576MHz base to 650MHz and more during its run. Still, this is quite the promotion for the humble GTX 460 in a pretty short span.

Time limits prevented us from testing the EVGA FTW edition in our full suite, but we only left it out of a couple of the games we tested manually with Fraps.

The final member of the Barts welcoming committee is this GeForce GTX 470 from Galaxy. This “GC Edition” card boasts a modest clock speed bump from Nvidia’s stock 607MHz to 625MHz, but its real appeal is a blue PCB and a slick plastic cooling shroud that looks like it ought to have little green army men hiding inside of it. In fact…

There’s a hatch for the army men to hide under! Sweeeeet.

Actually, the product packaging makes no mention of why the fan flips up, but I’ve heard it’s purported to be for easy cleaning of dust and lint. Whatever the case, this cooler is certainly distinctive. This beast isn’t currently listed at Newegg, but Asus and others have stock-clocked GTX 470s at Nvidia’s new $259 suggested price. EVGA also has a 625MHz variant for $269.

Our testing methods

Many of our performance tests are scripted and repeatable, but for some of the games, including Battlefield: Bad Company 2 and Mafia II, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in those cases, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor         Core i7-965 Extreme 3.2GHz
Motherboard       Gigabyte EX58-UD5
North bridge      X58 IOH
South bridge      ICH10R
Memory size       12GB (6 DIMMs)
Memory type       Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings    8-8-8-24 2T
Chipset drivers   INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio             Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics          Gigabyte Radeon HD 4850 OC 1GB
                  Radeon HD 4870 1GB
                  XFX Radeon HD 5830 1GB
                  Radeon HD 5850 1GB
                  Asus Radeon HD 5870 1GB
                  XFX Radeon HD 6850 1GB
                  XFX Radeon HD 6870 1GB
                  (all with Catalyst 8.782-100930m drivers & 10.9a application profiles)
                  Asus GeForce GTX 260 TOP SP216 1GB
                  Gigabyte GeForce GTX 460 768MB OC
                  MSI Cyclone GeForce GTX 460 1GB 725MHz
                  MSI Hawk Talon Attack GeForce GTX 460 1GB 810MHz
                  EVGA GeForce GTX 460 1GB FTW 850MHz
                  Galaxy GeForce GTX 470 1280MB GC
                  GeForce GTX 480 1536MB
                  (all with ForceWare 260.89 drivers)
Hard drive        WD RE3 WD1002FBYS 1TB SATA
Power supply      PC Power & Cooling Silencer 750 Watt
OS                Windows 7 Ultimate x64 Edition with the June 2010 DirectX runtime update

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and the makers of the various products supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

                            Peak pixel   Peak bilinear    Peak memory  Peak shader  Peak
                            fill rate    INT8 texel       bandwidth    arithmetic   rasterization
                            (Gpixels/s)  filtering rate*  (GB/s)       (GFLOPS)     rate (Mtris/s)
                                         (Gtexels/s)

GeForce GTX 260 TOP SP216   18.2         46.8             128.8        605          650
GeForce GTX 460 768MB       16.2         37.8             86.4         907          1350
GeForce GTX 460 1GB         21.6         37.8             115.2        907          1350
GeForce GTX 460 1GB 725MHz  23.2         40.6             115.2        974          1450
GeForce GTX 460 1GB 810MHz  25.9         47.6             124.8        1089         1620
GeForce GTX 465             19.4         26.7             102.6        855          1821
GeForce GTX 470             24.3         34.0             133.9        1089         2428
GeForce GTX 470 GC          25.0         35.0             133.9        1120         2500
GeForce GTX 480             33.6         42.0             177.4        1345         2800
Radeon HD 4850 OC           11.2         28.0             63.6         1120         700
Radeon HD 4870              12.0         30.0             115.2        1200         750
Radeon HD 5770              13.6         34.0             76.8         1360         850
Radeon HD 5830              12.8         44.8             128.0        1792         800
Radeon HD 5850              23.2         52.2             128.0        2088         725
Radeon HD 5870              27.2         68.0             153.6        2720         850
Radeon HD 6850              24.8         37.2             128.0        1488         775
Radeon HD 6870              28.8         50.4             134.4        2016         900
Radeon HD 5970              46.4         116.0            256.0        4640         1450

*FP16 is half rate

We’ve already looked at how the 6850 and 6870 compare to the 5850 and 5870, but here’s a broader comparison of specs. As always, these are just theoretical peaks and don’t necessarily predict delivered performance.

Nvidia and AMD do largely seem to be converging on a common target in terms of resource balance. Take the 6850 and the GTX 460 725MHz, for instance. They have very similar peak ROP/pixel fill and texture filtering rates, although the 6850 has a little more memory bandwidth. As usual, the Radeon has a substantially higher theoretical peak arithmetic rate, but Nvidia’s GPUs seem to be efficient enough at executing actual shader code to overcome that gap. Also, relatively speaking, that shader rate gap is shrinking. The 5830’s peak is about 1.8 teraflops, but the 6850 peaks at just under 1.5 teraflops.
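
Reproducing the arithmetic peaks in the table is simple multiplication: each ALU contributes two FLOPS per cycle via a multiply-add, with AMD’s ALUs running at the core clock and Nvidia’s at the doubled shader clock, using the enabled unit counts for each card. Fill rates work the same way, ROPs times clock:

```python
# Peak shader arithmetic = ALUs x 2 FLOPS (one multiply-add) x clock.
def peak_gflops(alus, clock_mhz):
    return alus * 2 * clock_mhz / 1000.0

print(peak_gflops(1120, 900))    # Radeon HD 6870: 2016.0 GFLOPS
print(peak_gflops(960, 775))     # Radeon HD 6850: 1488.0 GFLOPS
print(peak_gflops(336, 1350))    # GTX 460 1GB: 336 enabled ALUs at
                                 # the 1350MHz hot clock -> 907.2
# Pixel fill works the same way: 32 ROPs x 775MHz = 24.8 Gpixels/s
print(32 * 775 / 1000.0)
```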

We like to see how well these GPUs can approach their theoretical best performance with directed tests when possible. We’ve already looked at tessellation performance earlier, so here’s a quick look at texture sampling and filtering capabilities.

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so we’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance. Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well.

In theory, the newer Radeons have slightly lower texturing capacity than the products they replace, and that holds up in our measurements, as the 6850 just trails the 5830 and the 6870 does the same versus the 5850. By contrast, the full-on Cypress aboard the Radeon HD 5870 is a titan of texture filtering prowess, even faster than the mighty GTX 480. Still, the cards based on the trimmed-back Barts GPU retain rates that are quite competitive with those of their GeForce counterparts.

Battlefield: Bad Company 2

BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Our very first game benchmark gives us a feel for the added efficiency of the Barts GPU. Despite the trimming to Barts’ shader and texturing power, the Radeon HD 6870 not only keeps up with the 5850, but matches the 5870 frame for frame. The 6850, meanwhile, relegates the 5830 to its proper status as a bad memory. This thing is a much more competitive product.

Speaking of competition, the new Radeons acquit themselves nicely versus those pesky GeForces. The 6850 slots in right between the GTX 460 768MB and the 1GB 725MHz version, as does its pricing, and the 6870 essentially ties with the very fastest 850MHz variant of the GTX 460 1GB—and with the pricier GTX 470.

Owners of older Radeon HD 4800-series cards will want to consider these fresh Radeons carefully. The 6850 offers a 50% increase in measured frame rates over ye olde 4850, and more importantly, it takes this game from sluggish to smooth at the very common 1080p resolution.

Starcraft II

Up next is a little game you may have heard of called Starcraft II. We tested SC2 by playing back a match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

The new Radeons perform admirably once more, ever-so-slightly outgunning the closest competition from the green team. Most of these cards deliver eminently acceptable frame rates at this extreme display resolution, but the two slowest cards obviously stumble—the 5830 likely due to its weak ROP rate, while the GTX 460 768MB is probably bumping up against a video memory limitation.

Aliens vs. Predator

The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

To keep frame rates playable on these cards, we had to compromise on image quality a little bit, mainly by dropping antialiasing. We also held texture quality at “High” and stuck to 4X anisotropic filtering. We did leave most of the DX11 options enabled, including “High” shadow quality with advanced shadow sampling, ambient occlusion, and tessellation. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

Another page of results reinforces our sense that the 6800-series Radeons have hit their marks. They also confirm that the competing GeForce cards are right in the mix. This is not going to be an easy one to call, is it?

Metro 2033

The developers of Metro 2033 have come up with a nifty scripted benchmark based on their game, and we decided to give it a shot. As the settings page below shows, we did not test at Metro 2033‘s highest image quality settings. Those wouldn’t be too friendly to the mostly mid-range graphics cards we’re testing here. Because we used DirectX 11, we had to exclude the older cards from this one.

Wow, these results are a little top-heavy with green. That’s true in part because we’ve included the single biggest DX11 GPU and the most expensive single-GPU graphics card you can buy, the $500 GeForce GTX 480. Even so, the GeForces tend to be quite a bit faster in this game—with the obvious exception of the GTX 460 768MB at 2560×1600. That lower-memory card looks to be a poor match for a high-resolution display.

Why are the Barts cards faster than the Cypress ones at 2560×1600? I dunno, but we’ll be entertaining guesses in the comments. Perhaps it has to do with tessellation overhead, which would also explain why the GF100-based cards, the GTX 470 and 480, so handily outrun their GF104-based brethren.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

The GeForces handle this game particularly well at lower resolutions and in DirectX 9. As the display resolution rises and we layer on DX11 features, though, the balance shifts back toward the Radeons. In the end, we’re back to near performance parity at the most demanding settings.

Like the narrator in a National Geographic special, we should pause with faux empathy to note a rather unfortunate act of cannibalization: the higher-clocked GTX 460 cards outperform the GTX 470 here. This same thing nearly happened in our Bad Company 2 tests, but now the old girl has finally succumbed. Thus the circle of life is complete. Or, wait, maybe I’m supposed to blame global warming? I forget how this goes.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test with all of the in-game quality options at their max. We didn’t enable antialiasing, because the game’s Unreal Engine doesn’t natively support it.

We’ve included Borderlands once again because it’s one of my favorite games ever, the test is easily scriptable, the game itself tends to be pretty demanding on fast video cards, and it’s based on the oh-so-popular Unreal engine. However, we’ve watched GeForces stomp on Radeons in this test so many times now, it may be time to move on. I just wish AMD would give this game a little bit of tuning attention in its drivers somehow. Heck, the ol’ GTX 260 is nearly as fast here as the Cypress cards.

One thing we can say is that AMD’s architectural compromises with Barts appear to have paid off for this game. The Barts-based 6870 matches the Cypress-fortified 5870 almost exactly.

Mafia II

The open-world Mafia II is another new addition to our test suite, and we also tested it with Fraps.

Turning on antialiasing in this game apparently does something unexpected: it enables a 2X supersampled antialiasing mode. Supersampling touches every single pixel on the screen and thus isn’t very efficient, but we still saw playable enough frame rates at the settings we used. In fact, we need to look into it further, but we think Mafia II may also be using some form of post-processing or custom AA filter to further soften up edges. Whatever it’s doing, though, it seems to work. The game looks pretty darned good to our eyes, with very little in the way of crawling or jaggies on edges.

Although this game includes special, GeForce-only PhysX-enhanced additional smithereens and flying objects, we decided to stick to a direct, head-to-head comparison, so we left those effects disabled.

You’ll notice that most of the GeForces are missing from the results above. That’s because we found a pretty serious problem with how we’d tested this game once we’d compiled the results. Have a look at the frame rate lines for the faster GeForce cards below:

For most of our gaming session, the GeForces look to be effectively capped at a 60 FPS frame rate. That’s not entirely the case, as the section at the end of each run—a quick scripted sequence in the game engine—demonstrates; the GeForces range well over 60 FPS then. For whatever reason, though, they’re stopping at 60 FPS otherwise. As a result, we’ve excluded the faster GeForce cards from the main results above, and we’re not entirely confident about the numbers for the slower cards, either. We’ve just left in these Mafia II results so you can compare the Radeons to one another.

On that front, the 6850 and 6870 perform admirably, with the 6870 once again topping even the 5870.

Power consumption

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

Oh, and our graph labels have changed on this page to more fully reflect the brands of the cards being used, since custom board designs and coolers will have a major impact on power, heat, and noise. We tested only the XFX version of the 6870 here because it, the Sapphire, and the reference card from AMD all share the same cooler and board layout.

AMD led us to expect some nice reductions in idle power use with the Barts-based cards, but we just didn’t see much on our power meter. Nvidia’s GTX 460 cards still draw appreciably less power when they’re not busy.

Barts does draw less power under load than Cypress, and Barts is also more power-efficient than the GF104 at comparable performance levels. Those GTX 460s clocked at over 800MHz cause our test rig to pull 12W more than it does with a 6870 in the PCIe slot.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 10″ from the test system at a height even with the top of the video card.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The differences in noise levels at idle for most of these cards aren’t sufficiently large for us to say with any confidence that you’d notice them. We’re simply sticking a sound level meter on a tripod next to a test system and recording a number; we don’t have Steve Jobs’ massive ana-whatever chamber buried six miles deep to ensure total isolation. Only the GTX 480 and better are likely to be appreciably noisier, with the one real stand-out here (in a bad way) being XFX’s take on the Radeon HD 6850.

Wow, things are all over the map here, and noise levels don’t seem to track with peak power consumption levels as one might expect. The Radeon HD 6850’s reference cooler has hit the same basic target as the two 5800-series cards did before it, and that’s not too bad. There are much quieter GeForces, though, including the Galaxy GTX 470 and EVGA’s 850MHz GTX 460 FTW—both of which are very impressively quiet for their power draw levels.

Several cards are unfortunately on the noisy side, including the MSI Hawk Talon Attack, XFX’s 6850, and the Radeon HD 6870’s common reference cooler. I have a good sense of what’s happening with the other two cards, as we’ll discuss below, but the 6870’s noise levels are a little disappointing for a card that draws so much less power than a Radeon HD 5870—or, jeez, a GTX 480.

GPU temperatures

We used GPU-Z to log temperatures during our load testing.

Notice the presence of the two MSI cards and the XFX Radeon HD 6850 at the top of the chart above, indicating they have the lowest operating temperatures. Board makers appear to be tuning their custom coolers these days to achieve especially low temperatures, even if it means they’ll be several decibels louder than necessary. When we’ve asked about these tuning decisions in the past, the motivations seem to be related to creating more overclocking headroom or extending the life of the GPU silicon. We can’t say we like it, though, when a snazzy and obviously effective cooler like the quad-heatpipe number on the MSI Talon Attack registers two decibels above a GTX 480 on our meter. We can’t help but get the sense that board makers don’t share our priorities when they build in such noisy default fan speed profiles. Meanwhile, the stock cooler on EVGA’s 850MHz GTX 460 is whisper quiet and keeps temperatures well within a reasonable range.

Fortunately, the folks at XFX tell me they are considering offering their users a choice by making an alternative BIOS with a quieter fan profile available on their website for this 6850. If that happens, we’ll try to grab it and see how it handles.

Conclusions

These two new Radeons are surgical strikes at the competition, not all-out assaults intended to alter the balance of power massively. AMD has simply taken a proven technology, refined it slightly, and targeted it at a couple of crucial price points in the middle of the market. Even the few tweaks AMD’s engineers made to the GPU’s 3D graphics hardware—the filtering fix and the tessellation optimizations—mainly just help even things up with Nvidia’s GF104.

Happily, though, both of these Radeons have hit their marks, bringing enough performance and capability to their respective targets to make not just the parts they replace but all of the Radeon HD 5800-series cards feel a bit pointless. Yes, the 5870 is still a bit faster than the 6870, but not by much. The balance of graphics hardware resources in Barts looks to be golden for today’s games; the cuts to shader and texturing power barely sting.

The Radeon HD 6850 and 6870 are also good enough to have forced Nvidia’s hand on a frantic round of clock speed increases and price cuts, and we’re naturally pleased to see it happen. Nvidia’s adjustments do currently seem to be sufficient to keep the various versions of the GTX 460 we tested competitive, price-performance wise, with the 6850 and 6870. I’m not sure I could choose between the two GPU brands right now in this category, it’s so even. If there is a mismatch here, it’s between the GTX 460 768MB at $169 and the 6850 at $179. That 768MB card seems to run out of video memory at higher display resolutions, so unless you’re absolutely married to a single, lower-resolution monitor, we would recommend paying the extra 10 bucks for the 6850.

Then again, we’re bickering over $10 here, and I’ve sworn that off in the past. Video card prices do tend to drop around the introduction of a new product, and the landscape may change dramatically in the coming days and weeks. What you should do if you want a new video card for gaming is use the 6850 and the GTX 460 1GB as your new baselines. Don’t bother with cheaper cards like the Radeon HD 5770 or the GeForce GTS 450 unless you’re truly strapped for cash. The deals are too good to ignore at around $180.

The only thing that might keep you from snapping up one of these cards now is, well, even higher aspirations. If you’re hoping for a baby Barts to push the envelope at under $150, don’t hold your breath. AMD tells us the Juniper GPU in the Radeon HD 5700 series will be sticking around for a while. If you’re looking for a true and proper replacement for the Radeon HD 5870 that ups the ante on performance, however, you may want to hold out for a bit. Barts is just part of AMD’s multi-pronged approach to refreshing its GPU lineup, and a larger, more potent, and rather intriguing chip code-named Cayman is just over the horizon. We’ve also heard whispers of a dual-Cayman product code-named Antilles that could be stupendously quick. These Northern Islands are full of surprises.

Comments closed
    • kamikaziechameleon
    • 9 years ago

    Cool cards but I haven’t bought a card since the 4870 1gb was 160 dollars and I plan to wait till I can find such a deal again. This isn’t that.

    These cards aren’t fast enough for the price. AMD needs to shave 50 dollars off each or I’m sitting on my hands. Atleast Nvidia isn’t shy about cutting prices.

    • michael_d
    • 9 years ago

    I ran Metro 2033 benchmark on my single 5870 system. I got average of 22 frames under the following settings:
    2560×1600
    Very High
    DX11
    AAA
    AF16

    However the game is very playable.

    • anotherengineer
    • 9 years ago

    Whatever happened to Prime1??????????

    I really thought this vs the 460 would have brought him back, I guess I was wrong.

      • flip-mode
      • 9 years ago

      He could have created a new account. He could have had a fanboy epiphany and realized that fanboyism is repulsive and he’s had enough of it. He could be off at some other website. He could have been in a horrible accident and we don’t even know about it (I sincerely hope not).

        • BoBzeBuilder
        • 9 years ago

        Or maybe banned?

          • Googoo24
          • 9 years ago

          Oh, no….He’s still a die hard Nvidia fanboy. Just go to [H]ardocp if you want proof. Why he hasn’t been banned there is beyond my realm of reasoning. Seriously, Nvidia could put poop in poorly constructed cardboard box; paint it green and call it GTX 4658790000111100-1999, and he’d still find someway to defend it.

          Besides, he wouldn’t respond to this article, because, out of a few, Nvidia biased games, his beloved brand is trailing the new 68xx series.

    • ThorAxe
    • 9 years ago

    #248, I agree that 4xAA is not worth losing better image quality.

    For example without AA my 6870 Crossfire set up scores:

    Resolution: 1920 x 1200; DirectX: DirectX 11; Quality: *[

    • tim.hawkinson
    • 9 years ago

    Hey Scott, I just wanted to say thanks for putting together such an entertaining and informative article about these new cards. I really enjoyed the first few pages of history and technical overview as well as the massive amount of test data.

    The Tech Report has become my go-to site for tech news over the past year due to the excellent writing and genuine excitement that comes across in your articles. Thanks for the content and keep up the good work!

      • pdjblum
      • 9 years ago

      I want to second that sentiment. I also will say that I was very appreciative of your decision to include multiple flavors of the 460 gtx cards, as they all are readily available at various price points, all close to the new amd cards. They are clearly relevant and including them only helps us to make a more informed buying decision, which, after all, is the purpose of the review.

    • Fighterpilot
    • 9 years ago

    You want Crossfire results?
    Here…take a look at these new cards totally destroying GTX480 and company.
    Excellent scaling and all round results.
    http://techgage.com/article/amd_radeon_hd_6870_hd_6850_in_crossfirex/

    • l33t-g4m3r
    • 9 years ago

    Has …
    http://www.anandtech.com/show/3987/amds-radeon-6870-6850-renewing-competition-in-the-midrange-market/11
    https://techreport.com/discussions.x/19216

    “At the ‘very high’ setting in the AT review, the 6850 only hits 25 FPS at 1920x1200. Scott got 32 FPS out of the same card at 1920x1080 using the ‘medium’ setting. In a first-person shooter like Metro, I would take 32 FPS over 25 FPS any day of the week, even if it meant sacrificing a little bit of eye candy. I think Scott did a good job of picking reasonable graphical settings for the class of product in this review.”

    Lol, no. If you are actually playing the game and you want to enjoy it, you wouldn’t touch the medium setting, as it looks completely horrid. I even dropped my resolution to 1680 to play at the “very high” settings, since the advanced lighting is what creates the atmosphere.

    Almost forgot to mention this: the Nvidia cards REALLY take a nosedive using the VH/high settings, and I think that is far too important to overlook. You say the 6850 hits 25? Well, the 1GB 460 gets 24. Omitting that fact skews the results. Perhaps that little mishap was accidental, or maybe it wasn’t, I don’t know, but it shouldn’t have been left out. Sure, Nvidia getting higher FPS under medium details is important information, but don’t ignore that they also have an extreme drop-off and lose the lead when you enable the VH/high settings.

    I know that this is your site and all, and you guys can benchmark however you want, but when some of the benchmarks start looking to be consistently skewed in Nvidia’s favor due to minor discrepancies, I won’t be trusting anything that anyone writes here anymore, since this’ll have become the nvidia-report, and not tech-report. :-(

    On the positive side, I do like the rest of the article. I just brought up what I didn’t like, because this is a discussion forum, and I’d like to discuss it. Constructive criticism, basically.

    • LoneWolf15
    • 9 years ago

    Thanks Scott – this is currently the best review on the web of the 68xx-class cards, and it was well worth the wait.

    I’m glad to see someone who does a good job comparing the 5xxx series cards, rather than just placing the current nVidia 4xx as the only competition. You also do the best job of making sure we see results from multiple resolutions, with in-game settings that most of us would use.

    I’ll be happy with my 5870-XXX Edition for awhile longer, though it’s nice to see that AMD did a good job improving the efficiency with Barts. It will be interesting to see what Cayman brings.

    • NarwhaleAu
    • 9 years ago

    Given the whining going on, you would think the 68xx isn’t able to run almost every game at 1920 x 1200/1080 with settings on max. The performance is adequate except for that very small percentage of gamers who currently have 30″ monitors. This card has close to twice the performance per dollar of the current 58xx, but it is targeted at the sweet spot for price – the spot the x8xx should be.

    ATI’s core product has always been focused on the sweet spot between $150 and $250. The fact that the 5870 was close to $400 only a few months ago and the 5850 was somewhere near $300 is an anomaly. The graphics equivalent of bracket creep.

    Yes, they aren’t faster than the 58xx (they should have stuck with 1120 SPs just so that marketing could say they are), and yes, someone upgrading from that series will be misled (considering they will have paid $300 for the 5850 and are purchasing a card for $170, you would think a light bulb would go off anyway). However, ATI had two options – continue with the bracket creep, or realign the mainstream performance card with the x8xx name. I’m glad they went with the renaming – now I get to buy a 69xx.

      • OneArmedScissor
      • 9 years ago

      Watch out, someone is going to bite your head off because X flawed in-game benchmark only has a 25 FPS minimum with a 6870 at 1920×1200 and all the settings maxed out.

      • sweatshopking
      • 9 years ago

      YOU’RE WRONG. ACCORDING TO SOME BENCHMARK, THE 6870’S (PARTICULARLY IN CROSSFIRE) SCORE POORLY IN DEAD RISING 2.

        • Googoo24
        • 9 years ago

        So….Because the 68xx does poorly in a Nvidia biased game, that makes his overall point wrong? Rigghhttt……

        • flip-mode
        • 9 years ago

        UPPER CASE IS FUN. I LIKE UPPER CASE. IT MAKES ME FEEL IMPORTANT. NOT THAT I’M NOT IMPORTANT, I JUST DON’T USUALLY FEEL AS IMPORTANT AS I REALLY AM. BUT UPPER CASE LETS ME FEEL THE FULLNESS OF MY OWN IMPORTANCE. PRETTY SWEET.

          • indeego
          • 9 years ago

          flip-mode is on shift <.<

          • Disco
          • 9 years ago

          This reply made me laugh. But I think that SSK is only mocking those who may actually and truly be OUTRAGED by such controversial statements!

            • flip-mode
            • 9 years ago

            SSK and I are BFF, so I tease him long time.

            • sweatshopking
            • 9 years ago

            yah man. it was a joke. the 6800 series are better cards. and the scaling has been fixed. I was just horsing around.

      • Ardrid
      • 9 years ago

      I disagree with your last point about someone being misled into upgrading from a 5850, because that’s exactly what I’m doing. The key is that I have no intention of keeping my 5850; I’m selling it for at least $200 and, given the ASPs on eBay currently, I should have no problem getting at least that much. I don’t think there’s a problem with dropping effectively $40 (the $300 is a sunk cost at this point) to get a card that performs better, uses less power, and runs cooler.

    • NIKOLAS
    • 9 years ago

    As someone who has been critical of your GPU reviews in the recent past with your graphs being filled with SLI & CrossFire results, let me say a big THANK YOU for doing this review without SLI & CF.

    It made it a lot easier to read and work out what’s what.

      • Voldenuit
      • 9 years ago

      I, for one, think that SLI and CF benches would have been very interesting, so that a current 460 or 5770 owner could weigh up the benefits of getting a second card over a new 6850/6870.

        • paulWTAMU
        • 9 years ago

        CF/SLI needs to be in its own graph, I think. Otherwise it’s just too cluttered.

        • NIKOLAS
        • 9 years ago

        I have always argued that the graphs should show single cards only and then have another set of graphs that mixes in SLI & CF for the minority of people who are interested in seeing this.

        When all that is given is SLI & CF mixed in, it makes the task of reading a video card review such a chore for those who will only ever be interested in a single card that I had given The Tech Report away for GPU reviews.

        I don’t even know if this review is a one-off or a change to the policy of how the graphs are displayed, because it has been a while since I checked out a Tech Report GPU review.

          • ThorAxe
          • 9 years ago

          I agree with you that separating the SLI and CF graphs from the single cards is a good idea except when only a few cards are being tested.

          Personally the only graphs I care about are those showing SLI and CF as I have been doing one or the other since the 8800GTX.

      • Firestarter
      • 9 years ago

      If you check out Anand’s article, you’ll see that they saw better scaling from the 6800 series cards than the 5800 series. Check out the HAWX and Bad Company 2 numbers for example. In HAWX, 2 6870’s beat 2 5870’s, even though 1 6870 is slower than 1 5870. In Bad Company 2, the 6870’s almost beat the 5870’s. Not bad for cards that consume a lot less power under load.

      So there, SLI and CF testing is not totally useless. It would have been nice if TR had been able to reproduce that characteristic or show some holes in Anand’s testing.

    • ThorAxe
    • 9 years ago

    Just out of interest I ran the CCC Auto-tune utility on my 6870s. The results varied dramatically.

    The first card reached 945 Core and 1135 Memory.

    The second card reached 990 Core and 1240 Memory. (though this was not stable in benchmarks)

    A quick test in Crossfire mode had them both stable at 950/1150.

    • Googoo24
    • 9 years ago

    Hmmm….Why are you guys using the 10.9a drivers? Why not the 10.10, which are specifically designed for these cards?

    Why aren’t you using the 10.10 beta drivers provided by AMD to most reviewers, specifically for these cards? Can someone explain?

      • Googoo24
      • 9 years ago

      Does anyone have an answer?

        • Voldenuit
        • 9 years ago

        They’re running Cat 10.10 drivers (8.782). 10.9a is for the application profiles.

          • Googoo24
          • 9 years ago

          Ahh….But am I seriously supposed to believe that a 5870 is marginally better than a 460 768MB in Borderlands?

            • derFunkenstein
            • 9 years ago

            Yes. It’s a TWIMTBP game, and it runs significantly better on nVidia hardware than AMD hardware at a given price point.

      • Googoo24
      • 9 years ago

      Well……Isn’t that kind of a low blow…? Or could it be argued that that’s the performance you’d expect from a game biased in Nvidia’s favor?

    • Palek
    • 9 years ago


      • paulWTAMU
      • 9 years ago

      best paragraph in a tech article ever.

        • Chrispy_
        • 9 years ago

        You need to have been reading techreport for longer, then 🙂

          • paulWTAMU
          • 9 years ago

          2004 I think was when I started :p

          • Firestarter
          • 9 years ago

          Something something creamy smoothness 😀

    • clhensle
    • 9 years ago

    “Speaking of competition, the new Radeons acquit themselves nicely versus those pesky GeForces. The 6850 slots in right between the GTX 470 768MB and the 1GB 725MHz version, as does its pricing, and the 6870 essentially ties with the very fastest 850MHz variant of the GTX 460 1GB—and with the pricier GTX 470. ”

    GTX 470 768MB and the 1GB 725MHz
    to
    GTX 460 768MB and the 1GB 725MHz

      • Damage
      • 9 years ago

      Fixed, thanks!

    • emorgoch
    • 9 years ago

    This may seem like a silly question, but I haven’t read a single review that answers it: how many displays and what combination of ports can be used with the 6850/6870? The 5800s were great and all to have four different ports, but you could only use three of them at once. What are the restrictions with these new cards?

    As for the OC’d 460s, I think it’s perfectly legit to have them. These are factory overclocked, with full warranty support, available from retailers. Reference designs are all well and good, but I want to know the performance of the product that I can buy so that I can make a value comparison. If these were hand-created cards given to TR by Nvidia, that would be a different scenario, but anyone can get these.

    • Scrotos
    • 9 years ago

    Correction, page 3:

    Since this feature is implemented in DirectCompute, it will be available via a driver update for owners often existing 5000-series Radeons, as well as the 6850 and 6870.

    …”update for owners OF existing 5000-series…”

      • Damage
      • 9 years ago

      Fixed, thanks.

    • Disco
    • 9 years ago

    As a bit of a long-time ATI fan (very happy with my 5850), I am disappointed with this ‘launch’. I would think that the requirement for a launch to be considered successful is a new product that in some way makes it the new obvious choice if you are upgrading. Both of these cards are good, but the fact that Damage ends the review with no specific recommendation speaks volumes. These cards should either beat the 460s with a significant performance boost for the same price, or they should beat them with a substantial price savings and deliver the same performance. To do neither is just wishy-washy.

    Just because some high-performance 69xxs are ‘on the way’ doesn’t really make these two any better. And I hate the naming scheme; the 5850 should be replaced with a 6850… etc.

      • Manabu
      • 9 years ago

      Nvidia’s pre-emptive price-drop strategy really worked, judging by the comment above…

      • Googoo24
      • 9 years ago

      Interesting, since, out of several reviews, this is one of the select few showing the 460 beating a 6870, let alone a 6850. In fact, they don’t even present a stock 460 1GB in any of the gaming tests; no, they rely on overclocked variants to compete with a $180 card. In fact, in several of the tests, a severely OC’d 460 is only marginally slower than a 480.

    • SomeOtherGeek
    • 9 years ago

    Anyway, Damage, nice review. It is nice to see where the cards sit in their price ranges. Keep up the good work.

    • Ihmemies
    • 9 years ago

    I’d buy the EVGA FTW 460 with its 850MHz speed. Pricing is equal to the 6870’s, but it offers better performance while doing it relatively quietly.

    In Germany, the EVGA card goes for about 200 euros.

    • codedivine
    • 9 years ago

    No double precision on Barts. I will pass.

      • UberGerbil
      • 9 years ago

      If GPGPU is your thing then, no, these aren’t the droids you’re looking for.

      • Stranger
      • 9 years ago

        Just curious, what are you using the double-precision performance for?

        • sweatshopking
        • 9 years ago

        PRON!!!!

    • ClickClick5
    • 9 years ago

    Dang…my 4870 is getting lower and lower on the list. 🙁
    One more gen is all I ask.

      • SomeOtherGeek
      • 9 years ago

      Oh, I really do think they will be good for a couple more years. They still work well for web browsing and office work…

      But I’m in the same boat: the 4870 just gets moved to the slowest PC whenever I get a newer GPU.

    • Firestarter
    • 9 years ago

    Any word on the subjective quality of the MLAA? As in, does it do a satisfactory job of removing the jaggies when you’re gaming instead of pixel peeping? Does it have an overall positive impact on the image quality or does it blur more than just the jaggies?

    I’m just curious, as this sounds like a ‘good enough’ kludge for games in which MSAA doesn’t work as desired.
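
    For anyone wondering what a morphological filter actually does under the hood, here is a toy sketch of the general idea in Python/NumPy. To be clear, this is not AMD’s implementation (the real thing classifies edge shapes and runs as a DirectCompute shader on the GPU); the luma threshold and the 3x3 blur stand-in below are illustrative assumptions only:

        import numpy as np

        def toy_post_process_aa(img, threshold=0.1):
            # img: HxWx3 float array in [0, 1], the finished frame.
            # Post-process AA filters typically find edges in luminance.
            luma = img @ np.array([0.299, 0.587, 0.114])
            # Contrast against the right/bottom neighbors; an "edge" is
            # any pixel whose luma step exceeds the threshold.
            dx = np.abs(np.diff(luma, axis=1, append=luma[:, -1:]))
            dy = np.abs(np.diff(luma, axis=0, append=luma[-1:, :]))
            edge = (np.maximum(dx, dy) > threshold)[..., None]
            # Stand-in for MLAA's shape-aware blending: average each
            # edge pixel with its 3x3 neighborhood. Real MLAA derives
            # blend weights from the length and shape of the edge.
            blur = sum(np.roll(np.roll(img, i, axis=0), j, axis=1)
                       for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
            return np.where(edge, blur, img)

    Because a filter like this runs on the finished frame, it catches edges MSAA misses (alpha-tested foliage, shader aliasing) in any game, but it can also soften texture detail and HUD text that it mistakes for geometry, which is where the blurring worry comes from.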

      • Meadows
      • 9 years ago

      For what it’s worth, it’s much like the cheap thing that modern consoles often do. And, for what it’s worth, it’s just as useless (if fast).

        • Firestarter
        • 9 years ago

        Have you seen it in action?

        • khands
        • 9 years ago

          I thought it was the exact same thing, no?

          • Meadows
          • 9 years ago

            Yes, except it’s an option and it’s on the PC.

    • swaaye
    • 9 years ago

    I think these new ATI cards are pretty nice. Looks like they replace the previous high end, which is what I expect from the new mid range. One of the top grade GTX 260s would do the job too though.

    If I didn’t play almost everything at 1360×768 on a TV these days, which allows ye ancient 8800GTX and even 3870 to run things quite well, maybe I’d upgrade. 😀 Even DX11 games like the semi-awful AVP3 and Dirt 2. The DX11 features certainly aren’t worth paying for at this time.

    Actually, the biggest issue is that I am not playing any recent games. I don’t want to upgrade just to play games that I’ve already played. Maybe when Deus Ex 3 comes out, or something like that, if it proves too much for my 3-year-old video cards!

    • michael_d
    • 9 years ago

    Interesting results: the 6870 is on par with the 5870. However, the 6870 is a replacement for the 5770, while the 5870 is twice as fast as the 5770. Could the 6970 be twice as fast as the 6870?

    P.S. Metro 2033 results look suspect unless the benchmark tool is no different from real gameplay. I can play it at 2560×1600 on Very High.

      • Raskaran
      • 9 years ago

      Just look at the transistor count, power envelope, die size, price, performance, hell, even AMD-supplied slides.
      The 6850 & 6870 use a modified 58xx arch; they aren’t targeted as direct replacements for the 5850 & 5870. They have a better tessellator and fewer shaders, so a 6870 will be better or worse than a 5870 depending on how much the game engine stresses the tessellator unit and how much the shaders. Same with the 6850 in relation to the 5850.
      The 5750 & 5770 are another league.
      With the rumors in the air, the 5970 could very well be twice as fast as the 6870.
      edited for grammar

    • Freon
    • 9 years ago

    Good stuff. Pricing seems very appropriate and dices up the 5770, GTX 460, and 5850 midrange of the market.

    I wish they had named them differently. Maybe 6840 and 6860 would make more sense given the performance. Even 6830 and 6850 would seem to make more sense, and restart the trend of new generations providing significant gains over previous-generation cards with the same naming convention, like the 38xx-to-48xx transition.

    I’m very curious where AMD will go now for the 6900 series. I am assuming 6870 just with more pipelines across the board, maybe x1.5 including memory bus width? Naming doesn’t seem to leave much room for a dual-chip model in the 6900 series, but who knows. Maybe the 6950 and 6970 will be dual/CF cards, but I would assume not.

    The 6850 at $179 looks like an awfully tempting CrossFire config for a total price close to or below a 5870. $239 x 2, not so much. Overall, the 6850 looks like a great value. It’s above the belt line of performance drop-off in most benchmarks, especially with a lot of flat performance in the midrange in some of these benchmarks (save the 460 768MB and GTX 480 at the bottom and top). Some of the other sites included some CF, and it appears the 68xx cards are possibly more efficient in CF.

    I’d be interested to see some more benches of SC2 at 1920×1200, but I guess I’ll assume it would be equally boring. Strange that the cards perform so closely outside the 460 768MB (definitely out of memory; the settings screen even says you need 1024MB) and the GTX 480 (no idea why this manages to separate itself).

    Oh, and here’s hoping they’ll patch in the …

    • Flickshots
    • 9 years ago

    I am neither an ATI nor an Nvidia fan and always go for the better card in its generation. Previously I had the HD 5770. It was pretty hard to make a decision between the 6850 and the GTX 460 1GB. Based on the scores, it is still not known which is the better card, which will be the better card in the future, or which will sell for more later. When performance is so close, you need to think about other factors. The one I thought of was drivers, and I decided on the Nvidia card this time around. For some reason I favor Nvidia’s drivers over ATI’s.

    Regarding overclocking headroom for the 6850 or 6870, it seems like these GPUs get pretty hot at stock. Maybe cooling is the problem, so it’s unknown how well they will overclock. The GTX 460 overclocks very well, and the HAWK cards are claimed to easily reach 900MHz on the core. The best one reviewed here was the EVGA FTW at 850MHz.

    I also think AMD needs to lower the price on the 6870.

    Thanks for the review. And thanks to AMD for releasing these cards on time. It made it possible to get a GTX 460 1GB for $30 cheaper.

    Now we wait for a future GTX and Cayman.

    • obarthelemy
    • 9 years ago

    BTW, same remark about the OCed 460s: it’s unfair, because
    1- this is not an official product, and specs may vary (specs even vary for official products)
    2- this is a PR maneuver by nVidia to get journos to write about their stuff in an ATI product launch.
    3- where are the ATI OC cards during nVidia articles?

    no kudos to you. you’re either naive or… not naive.

      • Damage
      • 9 years ago

      For what it’s worth, we decided to test the cards we did after careful consideration, and we quite simply disagree with you on this issue.

      To explain our thinking, these *are* “official” products and quite real. I’ve linked to their Newegg listings in the review. You can buy them at prices competitive with the Barts-based Radeons, as we’ve noted.

      I sometimes think the use of the phrase “overclocked” confuses people. These are not overclocked in any traditional sense of the word. The products simply sell at higher clock rates than the GPU maker’s lowest specified base speed, fully tested and validated with full warranties. Even so, Nvidia works with its board partners to make these higher-speed models possible. I believe they even help with sorting/binning the GPUs. “Overclocked” simply sounds sexy, so they use it for marketing purposes. Folks should see through that.

      What’s more, there is a tremendous and unusual range under the “GTX 460” banner now, from the slowest 768MB to the fastest 1GB version. We would have been remiss not to include a range of offerings representative of what’s available on the market at different prices. We took quite a bit of time explaining this dynamic in the review.

      It is possible Nvidia and its partners won’t keep its prices this low or clock this high (or that prices on the AMD products will change, too), and we allowed for that possibility. But we’ve relayed to you what Nvidia has told us and the world about its pricing and likely clock speeds on GTX 460 products.

      We will try to keep an eye on these things over time, but for now, we’ve decided they are credible enough in their stated intentions to be taken seriously. We treated AMD’s words with the same respect in making our choices here. That’s really the best we could do in trying to sort out a tough set of issues, IMO. Had we only tested a GTX 460 1GB at its original base clock, we’d have taken even more criticism from folks who want to know which, of the products they can buy, is best.

      Finally, we do include higher-clocked versions of Radeons in our reviews from time to time, but several factors prevent that from being common. AMD and its partners aren’t as aggressive about offering such products or pushing for higher clock speeds, so the choices are often limited. Also, last time around with the 5800 series, the 40nm supply problems prevented board makers from pushing the limits. Here’s hoping it’s not so bad this time. Finally, as with, say, the various chips in the GTX 400 series debuts over the past several months, AMD simply didn’t budge on prices or clock speeds in response to Nvidia, so we had nothing different to test or report. We did, for the record, include the fastest 5750 we could find in our GTS 430 review, but the pickings are sometimes rather slim.

        • flip-mode
        • 9 years ago


        • MadManOriginal
        • 9 years ago

        I think including factory ‘overclocked’ cards isn’t a bad idea, especially because of the way you did it in this review: there are reference-speed cards included (handy for doing cross-comparisons to other reviews, both at other sites and at TR) AND you included a range of factory overclocks. Both of those things together as additional data points make the information very worthwhile, and I thank you for going to such lengths. I may have objected in the past when the …

        • rang0046
        • 9 years ago

        i think tr was more genuine than this seems nediot cuts the checks around here u guys are all using over clock cards in all ur reviews why dont u guys wait until amd aib release some oc cards u guys are just so bias

        • Lans
        • 9 years ago

        I think TR did the right thing by checking out the availability of the factory-OC’ed GTX 460s and then deciding to include them. I do agree that Nvidia deserves the benefit of the doubt that the prices will stick and there will be more supply of OC’ed cards as the stock supply dwindles. Well, if they don’t, then I do agree TR and others should call them out on that. I am personally suspicious of Nvidia’s motives and whether they can live up to their promises, but despite that, for the time being, if consumers can get those cards at those clocks and prices, then more power to us!

        I am a little disappointed there was no manual OC’ing of the factory-OC’ed GTX 460s and HD 68×0. I know what TR would have achieved may not be what we can expect, but at the same time, it would certainly help me decide if I should get one of the GTX 4×0 cards or hold off for a factory-OC’ed HD 68×0 if it showed promising OC’ing room/results.

        EDIT: I really like TR including a page on the rationale and keeping the charts clear about which model was being tested.

        • rhema83
        • 9 years ago

        Agreed.

        Instead of letting its partners do the magic, NV could have binned the 800MHz-capable GF104 chips and sold them as a GTX 460 v2. Then TR would be comparing the GTX 460 and GTX 460 v2 with the HD 6850 and HD 6870.

        Would that make ANY difference besides the different name (and another chance to diss NV for rebranding)? NO.

      • kc77
      • 9 years ago

      It’s not bad that the cards were included, generally speaking. More information is more information. That’s not bad in and of itself, and kudos to TR for including additional reference points for us to ponder. However, the problem comes in moving forward, in terms of what the standards are.

      -Are we going to see various speed binned cards in all reviews going forward?

      -Are we going to see them when the 580 launches?

      The first question is more important than the second, but if the answer is “no” to either of those, then it becomes the case that it’s important to know what’s out there for one manufacturer but not for the other. Kind of odd, no?

      You can bet dollars to donuts that moving forward we are going to see various speed bins for both Nvidia and ATI that appear and disappear, but you can be assured that on the next GPU review TR will get questions like, “There are various speed bins of card XYZ, where are they?” If we are going with the standard that more information is provided in order to give “a more accurate picture of the marketplace,” then that’s a much harder act to follow consistently, which is probably why few reviewers do it for GPU launches. No?

    • obarthelemy
    • 9 years ago

    Thanks for including older cards. I have a 4850; with the 6870 about twice as fast, I’ll probably be upgrading.

    It’s very useful to include older cards: upgraders in the market for a new card want to know IF it’s worth it, before choosing WHICH card to buy. And upgraders have older cards, of course.

    • anotherengineer
    • 9 years ago

    Great review; however, NO Source engine benchmark ;(

    Fire up that test rig for 1 more day Scott 🙂

    • phez
    • 9 years ago

    I understand the inclusion of the OC’d 460s.

    But why no OC’d numbers for the Radeons?

      • flip-mode
      • 9 years ago

      TR didn’t do …

        • phez
        • 9 years ago

        Considering most “regular” 460s can already overclock nicely, I assumed this was the reason for the inclusion of the cards, rather than the simple explanation of the pre-OC’d ones being available at retail.

        Which is why the inclusion of OC’d performance numbers for the new Radeons would have been nice. It would have avoided all this stupid “OC unfair” talk.

          • Meadows
          • 9 years ago

          There …

    • Suspenders
    • 9 years ago

    That anisotropic filtering bit was very interesting, especially the differing methodologies used by ATI and Nvidia. It’s interesting also that video cards have come along so far over the past few years that image quality differences between the two camps are so tiny nowadays.

    • ryko
    • 9 years ago

    Hmmm… I don’t like the SL DVI port. So if I have 2 DL DVI monitors and I wanted to upgrade, I would need to get a $100 active adapter! There goes all of my cost savings… I hate adapters.

    I really liked the idea of Eyefinity on the 5×00 series, but the main reason I stayed away was their stupid active DP-to-DVI adapters. Will I actually have to go Nvidia this time around?

      • Kurotetsu
      • 9 years ago

      Ummm, did you happen to notice there is an HDMI port sitting right on top of the DVI ports? HDMI to DVI adapters aren’t expensive at all. That should handle your two DL-DVI monitors. The third monitor has to use DP no matter what.


    • spigzone
    • 9 years ago

    Nvidia – ‘all your reviews are belong to us’.

    *FOUR* GTX 460 cards in the review???

    I do believe that sets a new Nvidia stooge record.

    Congratulations.

    • BoBzeBuilder
    • 9 years ago

    Damage, you damaged my lack of understanding regarding the new AMD 6xxx series. Thank you.

    • grantmeaname
    • 9 years ago

    A well-written, enjoyable, and helpful review. Thanks, Damage.

    • sweatshopking
    • 9 years ago

    yesterday you were hating on these chips. now you’re liking them. lol, you guys. this is the internet. you have to hate on something even if all evidence counters your point. You can’t EVER change your mind. You should know that.

    Reply to Voldenuit fail.

      • Voldenuit
      • 9 years ago

      No, I don’t like them. I don’t hate them. In fact, I’ve made posts pointing out that the “efficiency gains” are overstated and that the value proposition is pretty much on par with everything else (which is a good thing, since you can spend $220-250 on any card on the market right now and get a good deal).

      If anything, I’m indifferent to them as a product but mystified at the attention they draw, since they perform at the same perf/$ as the 460 1GB and 470.

      If they come down to $200-220, I know what I’ll be putting in my next build, though.

        • khands
        • 9 years ago

        I do think the 6870 needs a $20 price reduction, and then an OC’d version to run against the 470+ or whatever Nvidia is putting in the $300 bracket. Although the lower-end Cayman may fit the bill.

          • Freon
          • 9 years ago

          It does seem there is an awfully large gap in price between the 6850 and 6870 given the performance difference, but the 6870 is still faster than a 5850, which goes for $260-300. Maybe still a rough choice between that and a ~800mhz GTX 460 1GB depending on how prices look at the moment you are buying.

            • StuG
            • 9 years ago

            Honestly, I feel these are all launch prices. I would expect that when the big daddy comes around (6970), these will fall into a better price bracket. Right now, though, the 6870 is cannibalizing some of the 5870’s sales, and they want a bit more money due to that (I feel).

            • khands
            • 9 years ago

            They’re trying to drop 5000 inventory now. I expect a final push after Cayman, and then the entire line will probably readjust again after Antilles and Nvidia’s answer drop.

    • Chrispy_
    • 9 years ago

    I’m still reading the article, but I just had to post this now:


      • yogibbear
      • 9 years ago

      Very wrong. It’s a two colour geometric background with a bomber aircraft. Blues and greys.

    • ThorAxe
    • 9 years ago

    Scott is an excellent journalist; anyone who has read even a few articles by him can see this.

    The new features and architecture were explained succinctly and gave me a clear understanding of their functions. Anyone can run benchmarks, but only a few can explain the intricacies of GPU rendering. That is the real challenge.

    • Ryu Connor
    • 9 years ago

    Getting hard to gauge how my GTX295@2560×1600 is faring in this new wild world.

      • Chrispy_
      • 9 years ago

      Just fine, because there’s no game currently out that really pushes your old card very hard.

      The only real drawback to a 295 is the effective 896MB, which might hit some internal limitation like the 768MB 460 does at some high-AA, 2560×1600 tests.

      In reality, reducing the AA a little or the resolution to 1080p will mean your 295 is still faring very well.

        • khands
        • 9 years ago

        IMO there’s no reason to have a top end card if you’re not doing 2560×1600 or higher, I’d sooner decrease AA than drop resolution. That being said, the 295 is still fine.

    • shank15217
    • 9 years ago

    tejas84 and pogsnet seem to be the same person, aka paid viral FUD.

    • R2P2
    • 9 years ago

    With a 44W drop in idle power consumption, I wonder how long it would take for an upgrade from my 4870 to a 6850 or 6870 to pay for itself…
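
    Back-of-the-envelope, the arithmetic looks something like this. Only the 44W figure comes from the review; the electricity rate, idle hours, and the 6850’s $179 launch price are assumptions you would swap for your own:

        # Rough payback estimate for upgrading on idle power savings alone.
        IDLE_DELTA_W = 44        # idle power drop cited in the review
        RATE_PER_KWH = 0.12      # assumed electricity price, $/kWh
        IDLE_HOURS_PER_DAY = 8   # assumed time the machine sits idle
        CARD_PRICE = 179.0       # Radeon HD 6850 launch price

        kwh_per_year = IDLE_DELTA_W / 1000 * IDLE_HOURS_PER_DAY * 365
        savings = kwh_per_year * RATE_PER_KWH
        print(f"~{kwh_per_year:.0f} kWh/yr, ~${savings:.2f}/yr saved")
        print(f"payback: ~{CARD_PRICE / savings:.0f} years")

    At those assumed numbers it works out to roughly $15 a year, or a payback of about 12 years, so the upgrade never pays for itself on idle power alone.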

    • can-a-tuna
    • 9 years ago

    Really disappointing that you went the Anandtech route, with not just a single overclocked GTX 460 but two massively overclocked versions of the card. Adding those highly clocked GTX 460 versions makes the HD 6800 look quite bad in the graphs, which is unfair and plays into Nvidia’s hand. The HD 6800s can overclock almost as well as the 460 with the current non-reference cards.

      • sweatshopking
      • 9 years ago

      The fact that the overclocked cards come in at $220ish shows you that at the $169 level, they really aren’t that competitive: reference boards are cheap, and performance is low on the 460s. Stock to stock, and dollar for dollar, the AMD cards are better. I’d like to see an FPS-value chart. Actually, there might even be one, but I might be the only guy who doesn’t look at the benchmarks. I read the other pages and the conclusion, and skip the benchmarks.

      • Xaser04
      • 9 years ago

      Given that the 6850 and 6870 are going head to head with the GTX 460, it makes perfect sense to show multiple entries for the 460 at varying clock speeds.

      Given that most 460s can hit at least 800MHz on the core (whilst many can be bought already running this speed for not much more than the reference-clocked models), it is useful to know just how much extra performance you can expect to see, especially against these new 68xx cards.

      EDIT – I should point out I live in the UK, where 1GB 460s cost from …

      • d0g_p00p
      • 9 years ago

      It was explained why they used overclocked cards. Also, please show me a 200+MHz overclocked 6870 or 6850.

    • flip-mode
    • 9 years ago


      • StuG
      • 9 years ago

      I believe they are better looking too.

    • tejas84
    • 9 years ago

    LOL @ Barts welcoming committee!

    Great review! Seems Barts is OK, but nothing special!

    Good for competition though! GF110 is going to be a nasty surprise for AMD.

    • Moomanpoo
    • 9 years ago

    I would just like to know why you didn’t use the 10.10 drivers that AMD supplied you with.

    I would also wonder why you are benchmarking overclocked video cards against non-overclocked video cards.

    It seems very biased to me. I have been reading this website for a long time, and I even just made an account to ask you guys these questions.

    This was a very biased review of the video cards.

      • can-a-tuna
      • 9 years ago

      Oh, no wonder the HD 6800 results seemed a lot weaker than they are in other reviews. Cat 10.9 doesn’t even support the HD 6800. This review is the most biased one I’ve seen so far.


        • Fighterpilot
        • 9 years ago

        He’s using version 8.782, which corresponds to Catalyst 10.10.
        The application profiles are 10.9a (for CrossFire?).

      • eitje
      • 9 years ago

      Welcome to the Tech Report! I notice that you registered about an hour before posting this comment, and I wanted to make sure you felt welcome here! Enjoy your stay!

    • Krogoth
    • 9 years ago

    It is the 4850 and 4870 all over again.

    Affordable DX11 performance. I am sure that Cayman will be faster, but at a cost. Only the e-penis crowd cares about that. The 5870 and 480 are officially overpriced jokes.

    Overclocked GF104s don’t cut it, but the aggressive price cuts on the entire 460 and 470 line do.

    Competition is good for us all. 😀

      • StuG
      • 9 years ago

      Attempt to enjoy a huge battle in Empire: Total War, and neither of them seems like a joke. Both of my 5870’s max out, and I get around 40-50 FPS. ><

        • Krogoth
        • 9 years ago

        I suspect the CPU is the culprit, not the GPU.

      • flip-mode
      • 9 years ago

      The 480’s massive price is pretty well justified by its massive performance, in my opinion. It’s way outta my league, but that doesn’t mean I have to resent it.

        • Krogoth
        • 9 years ago

        Massive performance? Not really. At best, it is only 30% faster than the 5870. The 5870 itself is getting tailed by the cheaper 6870. The 470 sits between being a little faster than the 5870 in some applications and a little slower in others, rivaling the 6870. The 460 and 6850 are not far away from the 6870. And I am not factoring in the power consumption/noise, which puts the 480 in a worse light.

        480 is a halo product that you should only get for bragging rights/epenis provided that you have a generous budget. Value was never part of its game. 😉

      • michael_d
      • 9 years ago

      You are an ignorant joke. 5870 rocks!

        • Krogoth
        • 9 years ago

        The 5870 wasn’t that great of a value after the price jumps due to demand/supply issues with the 40nm process.

        The 6870 is almost as fast while being cooler, quieter, and cheaper. On top of that, it fixes known issues with the Cypress architecture (filtering issues and sub-par tessellation performance).

        The 5870 had better get some price cuts, or there is very little reason to buy it at this point.

    • Fighterpilot
    • 9 years ago

    Thanks for the review.
    AMD now has a damn good midrange.
    Kudos to them for making it smaller, faster, smarter.
    There’s apparently some new circuitry we haven’t seen yet in the Cayman chip, and a lot more compute power. I guess a lot of people are waiting to see what sort of power that thing will have.
    Looks like a solid launch to me for Barts.

      • flip-mode
      • 9 years ago

      Wow, that’s unfortunate.

      • axeman
      • 9 years ago

      This must explain why TR’s numbers seem a little “off” compared to most of the other reviews. Most sites have the 6870 10%-20% slower than the 5870 most of the time, whereas in TR’s review the numbers put the two cards closer than that.

    • vvas
    • 9 years ago

    As I read elsewhere on the tubes, these should really have been named 6770 and 6750. I mean, come on! The 6870 is to the 5870 what the 5770 was to the 4850: a tweaked new design with more or less the same performance at a much more competitive price point. Giving the prestigious x8x0 model numbers to Barts is quite the misnomer; they should have reserved them for Cayman.

      • GrimDanfango
      • 9 years ago

      The logic behind this is that the x8x0 range has traditionally been at a much lower price point. The 3870 and 4870 were much closer in market segment to the new 6870. If anything, it’s the 5850/5870 that are out of sync… so they already made their mistake by not bumping up the numbers on those cards.
      Now they’ve got the unfortunate problem of attempting to shuffle their range back down the scale without pissing off too many fans.

      At the end of the day, it doesn’t actually affect anything in the slightest. They’re still highly competitive products, whatever they may be called, and the BIG one is just around the corner under the guise of the 6950/6970.

    • Bauxite
    • 9 years ago

    I really like the look of the post-process AA compared to traditional MSAA. The pixels might be “off” a bit from what we are used to, but the entire picture looks more realistic and natural to me.

    It doesn’t look blurry at all; in fact, it looks more detailed than the 8x and especially the 4x, which seem to lose their realism the closer you look at the foliage.

    The real test is what it looks like in motion, whether or not there is any visible distortion or pattern. The way it works makes the usual methods of recording it a bit hard, though.

    The fact that it’s not application-based is pure gravy; too many games have been inconsistent about implementing it or not having it at all. (SC2 was a big “WTF?!” when it shipped.)

    More info and testing, please! Even if it’s mostly subjective without some kind of lossless external capture.

    • HisDivineShadow
    • 9 years ago

    Cayman and Antilles intrigue me, given the solid shot in the arm these Barts cards give the $200 market.

    Here’s hoping they give us some exciting reviews to read. Perhaps a card that truly brings Crysis to its knees, years after its release? Hoping that new AA turns out to be… better than it is now.

    I’m somewhat surprised they don’t have a blended mode of some SS plus a reduced form of whatever they’re doing through shaders. Then use the old-fashioned AA pass as a chance to scan the image and boost the shader AA when it gets to the end.

    • potatochobit
    • 9 years ago

    I came to the conclusion that these new AMD cards suck, and I might be moving to the green team even though I really don’t want to.

    I like the new DisplayPort but am not sure if I’ll ever use it.
    So now I need to consider how much it will cost to go 3D with each and then weigh that against having PhysX.

    I am also really considering a $200 5850 over the $240-260 6870.

      • StuG
      • 9 years ago

      Nice troll attempt.

        • sweatshopking
        • 9 years ago

        friggin guys trying to muscle into my territory….

      • TheEmrys
      • 9 years ago

      You’ll be happier with these cards if you think of them as a refresh. It’s rather more accurate than thinking of them as new chips.

        • khands
        • 9 years ago

        Even then, it seems these are really supposed to supplant the 5700 series, as this is about what happened from the 4800s to the 5700s: the 5770 was slightly slower than the 4870, and the 5750 was slightly faster than the 4850, although they were priced wrong. The 6870 is slightly slower than the 5870, and the 6850 seems to be slightly slower than the 5850 this time, but at least they’re priced right.

        All that being said, I can’t wait for Cayman.

          • AlexTheGreat
          • 9 years ago

          Well, yes, of course that’s the case; that’s why they don’t cost as much as the 5850/5870.

      • SomeOtherGeek
      • 9 years ago

      I don’t care what the others say, but I just think the green cards are better than the red. I have always been more wowed by the nVidia cards than the ATi ones. I feel like I have more control over the GTXs than anything else. But then, that is just me.

    • Bensam123
    • 9 years ago

    Is this AMD’s take on the tick-tock approach Intel is using? Massive leaps in performance one generation, then refining it down for efficiency and pricing in the next generation, before making a giant performance leap again.

    Also, it would be a good idea to include the 4870 and 4850 in some of the newer tests. I know they’re two generations old, but the 5870 wasn’t much faster than them to begin with, and the 6870 is hovering around the price point of the 5850, which is once again around the price point of the 4870. Depending on where you buy a 4870 (say, eBay for as little as $60), that can mean a lot.

    If the 6870 were straight-up twice as fast as the 5870 or something, then I could see why you’d totally drop the last generation, but they’re attempting to muddy the waters between old and new cards by rebadging and respinning them, even though something many people might already own will produce very similar results.

    • Fighterpilot
    • 9 years ago

    lol… is there a single GTX 460 that ISN’T overclocked in that review?

      • Voldenuit
      • 9 years ago

      I just did a check on Newegg: only 4 out of the 23 GTX 460 1GBs listed are actually running at stock speeds.

      I’m guessing it’s only a matter of time until nvidia raises its base speed, but until then, the AIB makers seem to have beaten them to it.

      It’s good to see TR test the 460 at various frequencies, so that users can know what to expect from a FOC or DIY OC.

    • Jambe
    • 9 years ago


      • Voldenuit
      • 9 years ago


    • PrincipalSkinner
    • 9 years ago

    Good review. What I’d like to see is CrossFire performance from these new Radeons. There has been speculation that scaling in multi-GPU setups has improved.

    • Raskaran
    • 9 years ago

    Breakdown (overall not the best, nor the worst, but late for sure):
    + FPS charts (not for all, unfortunately)
    + lowest FPS shown (again, not for all)

    – Not shown the wires / not said if the custom coolers have solid or fluid bearings, or whether the fan profile can be changed through software or we need hardware RPM controllers (ref. the Gigabyte 4850 with Zalman cooler)
    – For the XFX you could lower the RPM to match the temp of the normal 6850 and then test dBA – this would tell us if it’s a crap cooler or just too fast an RPM profile
    – No default idle/load voltage readings
    – No OC at auto or at a matched-dBA manual fan profile
    – No OC at default or matched load voltage
    – No video playback %CPU charts
    – No Folding results or other OpenCL stuff
    – No CPU GHz or core-count scaling: is the new arch less dependent on a fast multicore CPU?
    – Brands should always be displayed next to the model in FPS charts
    – No Eyefinity (DiRT 2 should pull it)
    – Not shown/said what’s in each card’s bundle; this can mean a lot when you have X same-priced brands with the same warranty period
    – Only 1 MorphAA sample, and a low-res zoomed one at that
    – No Watt/FPS, $/FPS, £/FPS (or enable user input of the price)

    – We still don’t know if it can run Crysis

    ~Not clearly stated if the 6850 with the old 5850 PCB supports the new versions of DP and HDMI.

      • Voldenuit
      • 9 years ago

      Seeing as prices on all the cards here are still fluctuating, it would be premature to do FPS/$.

      The take home message though is that outside of the 6850, which outperforms its equivalently priced rival, all the $200-260 cards are quite close in performance and value. At least, that’s how I see it.
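
      For what it’s worth, a value chart is trivial to roll yourself once prices settle. A sketch of the FPS-per-dollar idea, with purely made-up prices and frame rates (none of these numbers are from the review):

          # Hypothetical FPS-per-dollar table; every figure below is a
          # placeholder, not a measured result or a current price.
          cards = [
              ("Radeon HD 6850",      179, 60.0),
              ("Radeon HD 6870",      239, 70.0),
              ("GeForce GTX 460 1GB", 199, 62.0),
          ]
          for name, price, fps in sorted(cards, key=lambda c: c[2] / c[1],
                                         reverse=True):
              print(f"{name:22s} {fps / price * 100:5.1f} FPS per $100")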

      • TheEmrys
      • 9 years ago

      Are you seriously making a list of what you like and don’t like about the review?

        • OneArmedScissor
        • 9 years ago

        I give your review of his review of a review a C-.

      • flip-mode
      • 9 years ago

      I think a lot of the “negatives” you list are a bit strange, and some of them are just clearly outside of Tech Report’s methodologies. Some of the things that you ask for would be nice to know, but I see nothing that you listed that constitutes a glaring omission.

      “– Not shown the wires / not said if the custom coolers have solid or fluid bearings, or whether the fan profile can be changed through software or we need hardware RPM controllers”

      The wires? What are “the wires”? Why does it matter if the fan is FDB or BB? The sound levels are measured, and that is what is of real importance. If you want to know about the fan bearings, check the product specifications before you buy.

      “– For the XFX you could lower the RPM to match the temp of the normal 6850 and then test dBA – this would tell us if it’s a crap cooler or just too fast an RPM profile”

      Alternatively, test the product as it is packaged and report the results (as was done). Why should the consumer have to make adjustments to the product? Shouldn’t the product come off the shelf tuned for optimal fan performance?

      “– No default idle/load voltage readings – No OC at auto or at a matched-dBA manual fan profile – No OC at default or matched load voltage”

      Reporting voltages is a nice extra, but I wouldn’t call the lack of it much of a negative. As for your very specific requests for OC, I don’t understand most of what you’re asking. Matched dBA? You want Scott to tune all the fans to the same dBA and then test overclocking as a function of fan noise? That sounds pretty unreasonable to me. Then you want him to try to do the same for voltage? Again, unreasonable. I can understand asking for OC results at default voltages; that would certainly be nice.

      “– No video playback %CPU charts”

      That could be nice.

      “– No Folding results or other OpenCL stuff”

      I question the value of that. It would be interesting, but folding is quite a niche, and Nvidia is really the only choice if you’re a heavy folder, so there’s really no point in testing it on AMD cards at all.

      “– No CPU GHz or core-count scaling: is the new arch less dependent on a fast multicore CPU?”

      That could be interesting. Another way to phrase the question is how fast of a CPU you need to remove the CPU as a bottleneck. It is taken for granted that CPUs are rarely the bottleneck for gaming at high settings, but I think it is a very valid question to ask when that stops being true. Is an Athlon II X4 640 a bottleneck at all when running games at high settings? I have no idea, but it could well be… HardOCP’s CPU articles might answer the question better, but HardOCP usually only tests the top-end CPUs, which is rather unfortunate.

      “– Brands should always be displayed next to the model in FPS charts”

      Clock speeds are more important, IMO.

      “– No Eyefinity (DiRT 2 should pull it)”

      Meh, can’t say I care at all.

      “– Not shown/said what’s in each card’s bundle; this can mean a lot when you have X same-priced brands with the same warranty period”

      LOL, what bundle? Bundles seem to be mostly a thing of the past, but whenever a specific card gets a game bundled in, TR mentions it.

      “– Only 1 MorphAA sample, and a low-res zoomed one at that”

      And the article explained why.

      “– No Watt/FPS, $/FPS, £/FPS (or enable user input of the price)”

      You mean no value charts? Yeah, those are great. Granted, they are usually done in separate articles.

      “– We still don’t know if it can run Crysis”

      LOL, I hope that was a joke.

      “~Not clearly stated if the 6850 with the old 5850 PCB supports the new versions of DP and HDMI.”

      That’s probably up to each individual product. Check the product packaging and the reported specifications before you buy.

      There are two things your post lacks. The first is an overall impression of TR’s review.
      The second is better writing.

        • Raskaran
        • 9 years ago

          I felt cheated to receive so little information after waiting 3+ days for the review. It’s the first one that did so; I always liked the previous reviews.

          I treated ‘ – ’ as items lacking/removed (see any changelog), not as negatives; the review judging is done at the forums.

          “The wires? What are ‘the wires’?” – You can get 2-, 3-, or 4-wire fans, and it’s nice to know if you have software control over them or need a separate RPM controller.

          “shouldn’t the product come off the shelf tuned for optimal fan performance?” – Define optimum. Knowing the cooling capacity of the radiator+fan, I can judge for myself whether it’s better than stock/competition or not.

          “I don’t understand most of what you’re asking. Matched dBA?” – If we compare things that differ in X variables, we either cut each comparison down to 1 variable or we are at best guessing at the meaning of the results.
          I want to know what the OC capacity of the cards is (and the temps) at defaults and when the voltages and the fans’ measured dBA are the same. Yes, it can be done; yes, I want to see it being done properly.

          Folding, OpenCL, Eyefinity, 1 MorphAA shot: I don’t mind the lack of them, I just found it makes the review incomplete.

          Yes, the Crysis one was a joke; after all, we know Metro 2033 is the new king, and it was shown to work pretty well at medium.

          Getting 2 items in the bundle is better than one; getting a mDP->DP cable for ‘free’ with the card is nice. It’s even nicer if I know that it’s there with the card before I buy it. You see the picture?

          “That’s probably up to each individual product. Check the product packaging and the reported specifications before you buy.” – It’s not always possible to see the box before you order it. And they could say how it is on the cards they’ve got. Proven > promised.

          I also read somewhere that due to the old PCB being used, the power phase is 3+1 instead of the 4+1 of AMD reference cards. Would be nice if that had been disclosed with the review.

          Anything else I missed?

          • flip-mode
          • 9 years ago


            • Raskaran
            • 9 years ago

            Uff, that’s comforting; for a second there I thought I had started making sense.

            About the missing part: I’m not going to tell Damage how to do a review on his own site if he is not asking for an opinion.
            He may or may not notice and use the praise/critical voices, and answer questions in future.

            And just because I’m spoiled with quality comparisons, it doesn’t mean this one has to be one, or that I feel any worse because it’s not.

      • Freon
      • 9 years ago

      I think much of that will be covered in the IHV roundups later; it isn’t typically investigated too specifically in initial hardware reviews like this.

    • Voldenuit
    • 9 years ago


    • ThorAxe
    • 9 years ago

    Bought two 6870s to replace my 4870×2 + 4870 Tri-fire setup. So far they are faster and quieter.

      • sweatshopking
      • 9 years ago

      omg 3 4870’s? how was the driver support? I imagine you and the other guy running 3 4870’s didn’t garner that much support.

    • allston232
    • 9 years ago

    Declaring yourself a “journalist” in the first sentence of an article is a bit much, don’t you think? I would call you a “first-rate blogger.” But thanks for writing a captivating, fun-to-read review. I did enjoy reading it.

      • wezaleff
      • 9 years ago

      If it is inappropriate for Scott to “declare” himself a journalist (which he only really implies), it is surely much more inappropriate to rub it in his face. Besides, I think it fits perfectly; the Internet is kind of a big deal.

      • Jambe
      • 9 years ago

      Either you are genuinely ignorant of the term’s meaning or you are a cretinous elitist.

      • Bauxite
      • 9 years ago

      In the last decade I’ve read top-notch articles on here that put a lot of “school of journalism” hacks in major news outlets to shame. Double shame, considering a lot of them are on regular salary with their paper/magazine/TV spot or whatever.

      The best stuff is usually outside the mass media anyway: attention-grabbing, misquoted, out-of-context, misleading, and outright false regurgitated crap might drive the most eyeballs and/or ears in media, but it’s still trash.

      • djgandy
      • 9 years ago

      So what defines a journalist for you? Major news channels / sites that employ people who are barely able to spell and use correct grammar?

      • flip-mode
      • 9 years ago

      Does he get invited to press events that the general public is not admitted to? Yes, he does, and he probably has the signed NDAs to show for it. Sounds like some kind of journalist to me.

        • Firestarter
        • 9 years ago

        Sounds more like a pawn of the industry to me 😛

        Journalists investigate and report, and good ones investigate further and report better than your run of the mill journalist. By that definition, The Tech Report employs some fine journalists 🙂

      • SiliconSlick
      • 9 years ago

      Best review on the entire net. See anyone else list ALL the drivers for the individual cards?
      See anyone else use their own desired tests out of the “old box” to check what’s really going on? (in that case showcasing the 5870’s powers)
      I can’t complain; I feel I’ve finally seen ONE, and ONLY ONE, review that isn’t a scathing wash of hatred and personal bias.
      (of course it’s usually a raging red rooster fanboy review, and the little twits that rant and rave support and screeds for ati have nearly destroyed every review site on the net)

      That being said, I did note that this site decided to leave the HDR “crap look, give us fake higher framerates” AMD/ati box CHECKED, so it could be fairer, and I will of course take that into consideration on every game bench w/HDR present.
      I know, it’s sad being happy about a “fair site” that still gives AMD/ati the unfair advantage because they are dead broke and ready to hit bankruptcy court 24/7/365 for years on end now.
      Oh well, at least I didn’t have to read 20 biased-for-ati lies in text and put up with a drooling, lying red rooster of a reviewer.
      Some sites are so bad it turns one’s stomach to read the crap the reviewer spews.
      Thanks for having a decent review that doesn’t wind up pissing me off because it’s so blatantly doused in red rooster garbage.

    • pogsnet
    • 9 years ago

      • OneArmedScissor
      • 9 years ago

      Saving the environment, one bazillion watt PSU build at a time. :p

      But seriously, the 6670 is supposed to be 8 W at idle with a 63 W TDP. That’s a pretty drastic improvement compared to the 5700s, and it bodes well for new laptop models.

    • Voldenuit
    • 9 years ago

    Woot! I’ll be sure to read this one from cover to cover (metaphorically speaking).

    • paulWTAMU
    • 9 years ago

    aaah, no 8800 or 9800GTs? 🙂
    I’m glad to see an increase in efficiency as much as an increase in power. I’d love to be able to get decent cards that are 8600GT-sized or smaller; it just makes things easier when building or installing, you know?

    I feel better about settling on a 460 now, too 😉 Yeah, these are better, but the 6850 was out of stock when I ordered, except for one factory-OC’d variant at $199 versus like $150 or $160 for the 460.

    edit: No Civilization, Batman: AA, or Fallout in the benchmarks, though? I feel bummed, ’cause those are what I’m playing or thinking of buying.

      • Voldenuit
      • 9 years ago

      Seeing as Batman:AA is an Unreal-powered console port, any modern card over $170 will probably be able to max it out.

      It runs very smoothly on my 4870 1 GB at 1920×1080+4xAA on ‘Very High’, so performance on a modern card ought to be academic.

      Turning on ‘Batvision’ (OK, ‘Detective Vision’, but ‘Batvision’ or ‘Batgoggles’ sounds cooler, and there are species of bats with very good eyesight, um, never mind) makes my GPU fan ramp up, though. :p

        • potatochobit
        • 9 years ago

        An AMD card will never max out Batman, as it cannot run PhysX.

        You can argue that you are getting high frame rates, but so what?
        Are you getting the full experience? No one with a single AMD card can argue that.

          • odizzido
          • 9 years ago

          One of the nice things about my 5850 is that I don’t need to worry about making sure PhysX is off in my games before playing them, like I did with my 8800.

          • GrimDanfango
          • 9 years ago

          I actually added my old 8800 GTX into my system’s second PCIe slot to try out PhysX on Batman: AA and see what all the fuss was about.
          Turns out it was fuss about nothing… the only noticeable addition was random sprays of rubble bursting out of things, and some occasional police-cordon tape with cloth sim on it.

          It really didn’t add anything to the experience, and even with a hulking great dedicated card, it still dragged a little with PhysX set to “full”. I’d hate to see how miserably slow it would run if it were used to do anything besides some incidental cosmetic effects.

          I’m still vastly more impressed with HL2’s implementation of Havok – highly refined, highly gameplay-integrated, incredibly quick. And this was running entirely on CPU, 6 years ago! Strikes me PhysX is a waste of time if nobody uses it properly.

          They will never use it for anything gameplay-integrated the way HL2 does, as doing so would exclude Radeon users entirely. So long as PhysX is proprietary, it’s never going to be of any real use.

            • sweatshopking
            • 9 years ago

            I’m finding that, for all intents and purposes, Havok does everything I want. As long as chunks of dead things roll downhill, or chains move when I walk into them, I’m happy. Oblivion was good, SC2 is fine; I haven’t seen anything in PhysX that has made me want to buy it. CPUs do it fine.

            • no51
            • 9 years ago

            I think of it this way: PhysX offers Michael Bay-type explosions. Since most of the games out there have Bay-type plots already, why not? I’ve read comparisons of PhysX effects in Batman: AA, and yes, it’s mostly cosmetic. Some of them have immersion value, though; for example, in some levels there are disappearing floors, whereas if PhysX were turned off there’d just be a gap. Like I said, it’s just something that separates a shitty action movie from a shitty Michael Bay movie (not that Batman: AA is shitty; it’s fantastic).

          • Waco
          • 9 years ago

          I see you’ve transitioned from trolling on OCC to trolling on here. Great. :/

            • poulpy
            • 9 years ago

            Makes one wonder where Proesterchen, Shintai, and HelloKitty ended up…

            As a side note (not related to fanboyism in any way, and yes, I can see you laughing at the back): Silus, hope you’re all right, mate; we haven’t heard from you since the reviews hit the fan 🙂

      • derFunkenstein
      • 9 years ago

      The 9800GT should be roughly equal to the included OC’d 4850, IIRC. And that 4850, in the tests where it’s listed, lost out by at least 50% (the 6850 is a minimum of 50% faster than the 4850). If you have a display of 1680×1050 or higher, I think you’d see a significant improvement with either side’s $170-180 cards.

        • Fighterpilot
        • 9 years ago

        A standard HD 4850 beats the 9800 GTX.

          • sweatshopking
          • 9 years ago

          I’d agree here as well.

          • derFunkenstein
          • 9 years ago

          Upon further review, that appears to be correct. Which would make the jump to a 6850 or a GTX 460 (even the 768MB version) a HUGE leap.

            • paulWTAMU
            • 9 years ago

            Good, ’cause mine should be here Thursday 🙂

    • bdwilcox
    • 9 years ago

    Oh, review, where have you been? We’ve waited so long for you!
