Radeon HD 4890 vs. GeForce GTX 275

AMD and Nvidia, those guys. Always with the competing, the one-upsmanship, the PowerPoint slides, the late-night phone calls before a product launch, the stake-outs, the restraining orders. Both sides want desperately to win the next round of the competition, to be ready to capture your business when the time comes for a video card upgrade.

And heck, both have very good products these days. In fact, the competition between them has been perhaps tighter than ever in the past little while. Since the advent of DirectX 10-class GPUs, we’ve seen image quality and feature sets converge substantially. With little to separate them, the two sides have resorted to an astounding bit of price competition. What you can get in a video card for under $200 these days is flabbergasting.

Since the introduction of its excellent Radeon HD 4000 series, AMD has been doing most of the driving in the market, setting standards for price and performance and forcing Nvidia to follow suit. For its part, Nvidia has acted very aggressively to remain competitive, slashing prices and rejiggering products at will. So it is now, with the introduction of a brand-new GPU from AMD, the Radeon HD 4890, and the rapid-response unveiling of a similarly priced competitor from Nvidia, the GeForce GTX 275. Both cards are slated to sell for around 250 bucks, and they are among the fastest single-GPU graphics cards on the planet.

Which is better? Tough to say. We have only had a short time with each card, but we’ve put them head to head for a bit of a comparo, and perhaps we can begin to answer that question with this quick first look.

In the red corner: Radeon HD 4890

The GPU that powers the Radeon HD 4890 is something of a curiosity. This chip, code-named RV790, shares the same architecture with the RV770 GPU you’ll find in the Radeon HD 4850 and 4870, and it’s made on the same 55nm manufacturing process. Yet the RV790 is very much a new chip, in spite of the similarities. Why the new design? AMD says it was reaching “an odd plateau” with RV770 clock speeds, and the modifications to the RV790 are intended to resolve that problem, enabling higher clock frequencies and thus better performance.

The Radeon HD 4890

To that end, AMD’s engineers endowed the RV790 with a new row of decoupling capacitors around the perimeter of the chip, as apparent in the overlay image on the right. (That red ring around the chip signifies the capacitor placement, not death, Xbox 360 fans.) The caps ought to lower noise and improve signal quality, allowing the chip to tolerate higher voltages. In addition, AMD has reworked the chip’s timing and power distribution with an eye toward higher clock speeds.

The tweaks make for a slightly larger piece of silicon: the RV790 measures out to about 17 mm per side, by my little green ruler, or roughly 290 mm². The RV770, by way of comparison, is 260 mm². The transistor count is up, as well, from an estimated 956 million in the RV770 to 959 million in the RV790.

Happily, the changes appear to have worked. On the Radeon HD 4890, the RV790 is good for at least another hundred megahertz. AMD has set stock clock speeds on the Radeon HD 4890 at 850MHz (versus 750MHz for the 4870), and this time around, the firm seems to have left some additional headroom for board vendors to offer higher clocked variants—or for overclockers to exploit, perhaps. GDDR5 clock speeds are up, as well, from 900MHz on the stock Radeon HD 4870 to 975MHz on the 4890. That may sound like a modest increase—and to some extent, it is—but keep in mind that GDDR5 memory transfers data four times per clock, so even smaller increments can add up. Also, 4890 cards come with a gigabyte of memory onboard, double the standard payload of most 4870s.

AMD estimates the peak power draw of a Radeon HD 4890 card at about 190W, but power consumption at idle should be lower than the 4870’s, at around 60W, thanks in part to board design alterations and in part to chip-level modifications.

A closer look at the RV790 GPU

As I’ve said, the 4890 is priced at around 250 bucks, but these days, nothing’s quite that simple. You’ll find a number of 4890 cards listed at online vendors for $249.99. This XFX model is a good example. But most of ’em, like the XFX, have a $20 mail-in rebate attached. So hey, if you can write small enough to fill out one of those forms, and if the postal service doesn’t lose it, you have a chance at getting a $20 check two or three months from now. That’s not your only option, either. The card we’ve tested, for example, is a Sapphire offering with a 900MHz GPU clock that’s selling for $264.99 and also comes with a $20 mail-in rebate. Oddly enough, AMD classifies any Radeon HD 4890 card clocked at 900MHz or better as a separate product, dubbed the “Radeon HD 4890 OC,” although clock speeds on those cards will vary. We’ve thus labeled the product we’ve tested as a “4890 OC” in our benchmark results. We’ll get into the exact amounts of GPU capacity and bandwidth involved shortly, but whichever 4890 card you choose, that’s a heck of a lot of power for the money.

The 4890’s debut raises a couple of questions. One is whether the RV790 will be migrating to the lower rungs of the Radeon HD 4800 series product lineup, as sometimes happens in cases like these. AMD says the answer is no, that the Radeon HD 4850 and 4870 will always be based on the RV770, because the RV790 is a larger chip (and thus almost assuredly more expensive to produce). We should see 1GB versions of the 4870 and 4850 become more prominent going forward, though.

Another obvious question: will we see a Radeon HD 4890 X2 card soon? AMD told us it doesn’t have plans for such a beast at this time, in part because putting two RV790 GPUs on a board, with each at 850MHz, would result in total power consumption north of 300W. That’s great news for power supply makers but an inconvenience for everyone else. A 4890 X2 could still happen, though, if AMD deems it viable, so we’ll have to wait and see.

In the green corner: GeForce GTX 275

You didn’t really expect Nvidia to sit back and watch passively while AMD unveiled its hot new graphics card, did you? The green team already has the fastest single-GPU card in the form of the GeForce GTX 285, and meeting the 4890 head on was merely a matter of spinning out a new variant. That product, we have learned, will be called the GeForce GTX 275, and although cards won’t be available in all parts of the world until April 14, we have an early sample of the GTX 275 in Damage Labs for comparison against the new Radeon.

Happily, in order to compete with the Radeon HD 4890, the GeForce GTX 275 had to be a pretty potent product, so Nvidia has left a large fraction of the GTX 285’s computing and graphics power intact. Like its elder sibling, the GTX 275 is based on the 55nm version of the GT200 GPU, and it has all 240 of its stream processors enabled. The only major concession Nvidia has made to product segmentation is the disabling of one of the chip’s ROP partitions, which reduces its fill rate from 32 pixels per clock to 28, with an attendant drop in antialiasing throughput.

The 55nm GT200 package flanked by GDDR3 memory chips

With that change, the GPU’s total memory interface width is also reduced from 512 to 448 bits, and the total RAM available drops from 1GB to 896MB. (You’ll see 14 rather than 16 chips in the picture above for this same reason.) 448 bits is still nearly twice the width of the 4890’s 256-bit path to memory, but Nvidia uses GDDR3 RAM, which only transfers data twice in each clock cycle.
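
For the curious, here’s a quick back-of-the-envelope sketch of how those peak memory bandwidth figures shake out, using the clock speeds quoted in this article. This is plain Python, nothing from either vendor’s toolkit; the transfers-per-clock values are simply the GDDR3 and GDDR5 data rates.

```python
# Peak memory bandwidth = (bus width in bytes) x (memory clock) x (transfers per clock)

def peak_bandwidth_gbs(bus_width_bits, mem_clock_mhz, transfers_per_clock):
    bytes_per_transfer = bus_width_bits / 8          # bits -> bytes
    return bytes_per_transfer * mem_clock_mhz * 1e6 * transfers_per_clock / 1e9

# GeForce GTX 275: 448-bit GDDR3 at 1134MHz, two transfers per clock
print(peak_bandwidth_gbs(448, 1134, 2))   # ~127.0 GB/s

# Radeon HD 4890: 256-bit GDDR5 at 975MHz, four transfers per clock
print(peak_bandwidth_gbs(256, 975, 4))    # ~124.8 GB/s
```

Run the numbers and the two cards land within a few GB/s of one another, despite the very different memory types and bus widths.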

The GT200 architecture is different in other ways, too, including the fact that its shader processors run at higher frequencies than much of the rest of the chip. For the GTX 275, Nvidia has settled on a core clock of 633MHz, SPs at 1404MHz, and GDDR3 memory at 1134MHz. Those clock speeds and the total theoretical GPU power involved are both down somewhat from the GTX 285, but they’re still considerable—and very much in the same class as the Radeon HD 4890, as we’ll soon see.

The de-tuning of the 55nm GT200 GPU has power consumption benefits: Nvidia rates peak GTX 275 board power at 219W, a bit below the 65nm GeForce GTX 280’s 236W rating. That’s a little more power-hungry than the Radeon HD 4890, but the 55nm GT200 is a larger chip—the metal cap covering the package in the picture above obscures its exact size, but it has a heart-stopping 1.4 billion transistors. And the card itself has nearly twice as many memory chips onboard. Fortunately, recent GeForces have had admirably low power draw at idle, and the GTX 275 ought to continue that tradition.

GTX 275 cards aren’t yet selling on these shores, but Nvidia claims they should go for $249, by which it surely must mean $250 minus a penny. That puts ’em solidly into Radeon HD 4890 territory, but we’ll have to watch and see how mail-in rebates, game bundles, and price premiums for higher-speed models pan out. I should note that we’re testing a reference board with a stock clock speed in this review, so some GTX 275 models may offer slightly higher performance.

Side by side

Dude, look: pictures!

At 10.5″, the GeForce GTX 275 is an inch longer than the Radeon HD 4890

Both cards come with dual multi-GPU connectors for two- and three-way CrossFire/SLI action

Fortunately, neither card requires an 8-pin PCIe aux power plug—two 6-pins will do

The GT200 chip is obscured by the package cap, but the package itself is quite a bit larger than the RV790’s

Stuff that’s not hardware

So let’s see if I can sum this up and perhaps make it a little game, too.

1. ______ says that it has a range of advantages over the other guys. In addition to graphics, users can expect much faster video encoding than they’d get with a CPU alone by using the 2. ______ software package from 3. ______. The speed-ups are amazing. Similar performance gains are possible with GPU acceleration in the fields of high-performance computing, image processing, and distributed computing—including Folding@Home.

Additionally, games can reach new levels of visual fidelity and performance via the 4. ______ API, which is exclusively supported by its GPUs. Many shipping titles already use it, with even more planned over the next six months to a year. The firm says it has the best approach to GPU-accelerated physics, because 5. ______ is the most widely used dev toolkit of its kind. One should note that the Havok demo shown running on Radeon hardware at GDC recently will also run on GeForces, because it uses OpenCL. This is a key advantage of the company’s standards-based approach to GPU computing.

Speaking of which, Windows 7 is just around the corner, and 1. ______ has worked closely with Microsoft to ensure that the quality and stability of its drivers for this exciting new OS are second to none.

Have it figured out yet? The answers are, alternately, either option one:

1. Nvidia

2. Badaboom

3. Elemental

4. PhysX

5. PhysX

…or option two:

1. AMD

2. Espresso

3. CyberLink

4. DirectX 10.1

5. Havok

Take your pick. I’m not convinced I’d buy one video card over another on the basis of either set of choices, but I will leave that up to you. Just thought you should know.

In addition to the above, Nvidia does have one new trick in its Release 185 driver rev that’s worthy of note: an option to enable ambient occlusion via a simple on/off control panel setting. Ambient occlusion is pretty much just what it sounds like: the lighting model will take into account how objects in the world might occlude ambient light. That may sound impossible, but the basic idea is to do less illumination in places where ambient light isn’t likely to reach as well, such as in corners where two walls meet.
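
If you’re wondering what that looks like in practice, here’s a deliberately tiny, illustrative sketch of the general idea: sample a point’s surroundings and dim it according to how much nearby geometry looms over it. To be clear, this is not Nvidia’s driver-level implementation (which works on the rendered scene in screen space); it’s just the principle, applied to a toy heightfield.

```python
# Toy ambient occlusion: darken a point based on how much nearby geometry
# (a simple 2D heightfield) rises above it. Purely illustrative.
import math

def ambient_occlusion(heightfield, x, y, radius=2, samples=8):
    h0 = heightfield[y][x]
    occluded = 0
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        sx = min(max(int(round(x + radius * math.cos(angle))), 0), len(heightfield[0]) - 1)
        sy = min(max(int(round(y + radius * math.sin(angle))), 0), len(heightfield) - 1)
        if heightfield[sy][sx] > h0:          # neighboring geometry rises above this point
            occluded += 1
    return 1.0 - occluded / samples           # 1.0 = fully lit, lower = darker

# A flat floor meeting a wall: points near the wall's base receive less ambient light.
floor = [[0, 0, 0, 0, 5, 5] for _ in range(6)]
print(ambient_occlusion(floor, 1, 3))  # out in the open -> close to 1.0
print(ambient_occlusion(floor, 3, 3))  # at the base of the wall -> noticeably lower
```

The point near the wall comes back darker, which is exactly the “less light in the corners” effect described above, just computed on triangles and depth buffers rather than a toy grid.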

With the exception of a few titles like Crysis and Stalker: Clear Sky, most current games don’t support ambient occlusion natively. Nvidia has elected to support AO in its control panel much like it does SLI, by using game-specific profiles. Among the 22 games supported out of the gate are some big names, including Mirror’s Edge, Valve’s Source engine games, Call of Duty: World at War, and the biggest PC game ever, World of Warcraft.

Here’s a quick example of how ambient occlusion affects Left 4 Dead.

Without ambient occlusion

With ambient occlusion

This isn’t a stark, bright scene with lots of ambient light, so the effect is very subtle. (Yeah, my example kinda stinks.) You can probably see the difference, though, if you concentrate on the area beneath the nearest railing, to the right of the left-most water tank (or whatever that is), where the floor meets the top of the brick wall that rises just above the stairwell. Along that intersection, the floor is a little darker.

No, really. Look closer.

Anyhow, I need to play with this feature more to see what I think of it. I’ve tried it briefly in L4D and Fallout 3, and the visual difference is hard to notice. I expect different games or different levels could yield clearer results (and better example pictures).

You may notice the performance hit more easily, which Nvidia estimates is on the order of 20-40%. Fallout 3 felt sluggish to me with AO enabled, but Left 4 Dead runs so quickly on a GTX 275 that I couldn’t perceive any slowdown. I kind of like the idea of using today’s apparent surplus of GPU power to deliver higher-quality lighting, so kudos to Nvidia for giving this a shot. I wonder whether AMD will follow suit.

Test notes

We’re sticking with our most recent suite of game tests here, so we have some basis for comparison. We’ve tested the Radeon HD 4890 OC and the GeForce GTX 275 with their very latest driver revisions, and we’ve included older results from a range of cards for comparison. In some cases, of course, the newer drivers could give our two stars today a bit of a performance advantage. Concentrate on the main comparison if you find that possible difference distracting.

I had hoped to test with a non-OC version of the Radeon HD 4890, as well, but we don’t have such a card in our possession, and apparently the Overdrive feature in AMD’s driver control panel doesn’t support underclocking. I believe we could have accomplished our goals with a BIOS editor, but since this is a brand-new card and we were facing some time constraints, we decided to forgo testing at 850MHz for now.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 4.8 GT/s (2.4GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F3
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.0.1007, Matrix Storage Manager 8.6.0.1007
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5745 drivers

Graphics:
  Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Dual Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Gigabyte Radeon HD 4850 1GB PCIe with Catalyst 9.2 drivers
  Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Dual Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe + Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Radeon HD 4890 OC 1GB PCIe with Catalyst 8.592.1 drivers
  Sapphire Radeon HD 4850 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Palit Revolution R700 Radeon HD 4870 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
  Dual GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
  Palit GeForce 9800 GX2 1GB PCIe with ForceWare 180.84 drivers
  EVGA GeForce GTS 250 Superclocked 1GB PCIe with ForceWare 182.06 drivers
  EVGA GeForce GTX 260 Core 216 896MB PCIe with ForceWare 180.84 drivers
  EVGA GeForce GTX 260 Core 216 896MB PCIe + Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe with ForceWare 180.84 drivers
  GeForce GTX 275 896MB PCIe with ForceWare 185.63 drivers
  XFX GeForce GTX 280 1GB PCIe with ForceWare 180.84 drivers
  GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
  Dual GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
  GeForce GTX 295 1.792GB PCIe with ForceWare 181.20 drivers

Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX November 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics

We’ll start with our customary look at the theoretical throughput of the various cards in some key categories. Keep in mind that, where applicable, the numbers in the table below are derived from the observed clock speeds of the cards we’re testing, not the manufacturer’s reference clocks or stated specifications.

                               Peak pixel   Peak bilinear    Peak bilinear    Peak memory   Peak shader arithmetic (GFLOPS)
                               fill rate    texel filtering  FP16 filtering   bandwidth
                               (Gpixels/s)  (Gtexels/s)      (Gtexels/s)      (GB/s)        Single-issue  Dual-issue
GeForce 9500 GT                4.4          8.8              4.4              25.6          90            134
GeForce 9600 GT                11.6         23.2             11.6             62.2          237           355
GeForce 9800 GT                9.6          33.6             16.8             57.6          339           508
GeForce 9800 GTX+              11.8         47.2             23.6             70.4          470           705
GeForce GTS 250                12.3         49.3             24.6             71.9          484           726
GeForce 9800 GX2               19.2         76.8             38.4             128.0         768           1152
GeForce GTX 260 (192 SPs)      16.1         36.9             18.4             111.9         477           715
GeForce GTX 260 (216 SPs)      17.5         45.1             22.5             117.9         583           875
GeForce GTX 275                17.7         50.6             25.4             127.0         674           1011
GeForce GTX 280                19.3         48.2             24.1             141.7         622           933
GeForce GTX 285                21.4         53.6             26.8             166.4         744           1116
GeForce GTX 295                32.3         92.2             46.1             223.9         1192          1788
Radeon HD 4650                 4.8          19.2             9.6              16.0          384
Radeon HD 4670                 6.0          24.0             12.0             32.0          480
Radeon HD 4830                 9.2          18.4             9.2              57.6          736
Radeon HD 4850                 10.9         27.2             13.6             67.2          1088
Radeon HD 4850 1GB             11.2         28.0             14.0             63.6          1120
Radeon HD 4870                 12.0         30.0             15.0             115.2         1200
Radeon HD 4890                 13.6         34.0             17.0             124.8         1360
Radeon HD 4890 OC              14.4         36.0             18.0             124.8         1440
Radeon HD 4850 X2              20.0         50.0             25.0             127.1         2000
Radeon HD 4870 X2              24.0         60.0             30.0             230.4         2400

Although the Radeon HD 4890 OC and GeForce GTX 275 are ostensibly direct competitors, they diverge from each other quite a bit on paper: the GeForce easily leads in fill rate and texture filtering capacity, while the Radeon has a clear advantage in shader FLOPS. Despite different memory types and interface widths, though, memory bandwidth is roughly equal, with a slight edge to the GTX 275.
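
As a rough sanity check, the table’s peak numbers can be reproduced from unit counts and clocks. The little sketch below does so for today’s two stars, using unit counts I believe are right for these GPUs (the article’s own figures where given, commonly published specs otherwise): 240 SPs, 80 texture units, and 28 active ROPs for the GTX 275’s GT200, and 800 SPs, 40 texture units, and 16 ROPs for the RV790.

```python
# Rough reconstruction of a couple of rows from the table above.
# Clocks in MHz, unit counts as assumed in the lead-in.

def peak_rates(core_mhz, shader_mhz, rops, texture_units, sps, flops_per_sp):
    return {
        "pixel fill (Gpixels/s)": rops * core_mhz / 1000,
        "bilinear filtering (Gtexels/s)": texture_units * core_mhz / 1000,
        "shader arithmetic (GFLOPS)": sps * shader_mhz / 1000 * flops_per_sp,
    }

# GeForce GTX 275: dual-issue MAD+MUL counts as 3 flops per SP per clock
print(peak_rates(633, 1404, rops=28, texture_units=80, sps=240, flops_per_sp=3))
# -> roughly 17.7 Gpixels/s, 50.6 Gtexels/s, 1011 GFLOPS

# Radeon HD 4890 OC: shaders run at the core clock, 2 flops (MAD) per SP per clock
print(peak_rates(900, 900, rops=16, texture_units=40, sps=800, flops_per_sp=2))
# -> roughly 14.4 Gpixels/s, 36.0 Gtexels/s, 1440 GFLOPS
```

Nothing magical here, but it shows why the two cards look so different on paper: Nvidia spends its clocks on ROPs and texture units, while AMD piles on shader ALUs.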

Incidentally, the GTX 275’s place in Nvidia’s lineup is probably worth calling out. Notice that the older GeForce GTX 280, based on the 65nm GT200 chip, trails the 275 in texture filtering and shader arithmetic capacity. So although 280 is a higher number than 275, the newer card may prove to be superior in many cases.

This would be an unexpected result, were it not for the fact that we’ve seen it many times before. In spite of the theoretical numbers, the Radeon measures out with more real-world pixel and texture fill rate.

And despite the 4890 OC’s pronounced FLOPS advantage on paper, the GTX 275 leads the Radeon in two of the four shader tests.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

Our first real game test shows us just how close a contest this is shaping up to be. The 4890 OC leads at the two more common resolutions, but the GTX 275 steps into the lead at our four-megapixel peak res. We see this same pattern in the match-up between the GeForce GTX 260 and the Radeon HD 4870 1GB, so it’s not an unexpected result.

Left 4 Dead

We tested Valve’s zombie shooter using a custom-recorded timedemo from the game’s first campaign. We maxed out all of the game’s quality options and used 4X multisampled antialiasing in combination with 16X anisotropic texture filtering.

You just can’t stress either of these cards terribly much with Left 4 Dead and these (pretty darned high) quality settings. They just offer more GPU than one needs for this game. Once again, though, the 4890 OC is faster in the two lower resolutions before falling behind at 2560×1600.

Call of Duty: World at War

We tested the latest Call of Duty title by playing through the first 60 seconds of the game’s third mission and recording frame rates via FRAPS. Although testing in this manner isn’t precisely repeatable from run to run, we believe averaging the results from five runs is sufficient to get reasonably reliable comparative numbers. With FRAPS, we can also report the lowest frame rate we encountered. Rather than average those, we’ve reported the median of the low scores from the five test runs, to reduce the impact of outliers. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
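
For clarity, here’s a minimal sketch of that number-crunching: averages are averaged across the five runs, while the per-run minimums are reduced to their median so one hitchy run can’t drag the low score down. The per-run figures below are made up purely for illustration.

```python
from statistics import mean, median

runs = [
    {"avg": 62.1, "low": 48},
    {"avg": 60.8, "low": 51},
    {"avg": 63.4, "low": 23},   # one run hit a hitch; the median keeps it from dominating
    {"avg": 61.9, "low": 49},
    {"avg": 62.6, "low": 50},
]

print(mean(run["avg"] for run in runs))     # reported average FPS across the five runs
print(median(run["low"] for run in runs))   # reported low FPS (median of the five minimums)
```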

The GTX 275 pulls out a victory here, although the match-up is very close indeed. The 4890 OC’s median low frame rate is only two FPS lower than the GTX 275’s.

Fallout 3

This is another game we tested with FRAPS, this time simply by walking down a road outside of the former Washington, D.C. We used Fallout 3‘s “Ultra high” quality presets, which basically means every slider maxed, along with 4X antialiasing and what the game calls 15X anisotropic filtering.

Chalk up one for the new Radeon, which produces an ever-so-slightly higher average frame rate coupled with a more substantial gap in median low numbers. Still, with the median lows near 50 FPS, neither card runs this game with perceptible choppiness.

Crysis Warhead

This game is sufficient to tax even the fastest GPUs without using the highest possible resolution or quality setting—or any form of antialiasing. So we tested at 1920×1200 using the “Gamer” quality setting. Of course, the fact that Warhead apparently tends to run out of memory and crash (with most cards) at higher resolutions is a bit of a deterrent, as is the fact that MSAA doesn’t always produce the best results in this game. Regardless, Warhead looks great on a fast video card, with the best explosions in any game yet.

If you thought this game would tip the scales one way or another, think again. Two frames per second separate the cards’ averages, and both of our contenders bottom out at 25 FPS.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

This power thing isn’t making my job any easier. The Radeon HD 4890 OC draws quite a bit less power when running a game, but the situation is almost exactly reversed at idle, continuing the dizzying asymmetrical parity of our test results.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

At full tilt, the Radeon HD 4890 OC’s cooler makes more racket than the GTX 275’s. That’s pretty clear, and it matches with my subjective impressions.

GPU temperatures

I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.

Here’s one possible reason why the 4890 OC’s cooler is louder: it’s working to keep GPU temperatures relatively low. This is something of a reversal for AMD’s stock coolers in the Radeon HD 4800 series. The 4870 512MB above, at 84°C, is more typical. Meanwhile, the GTX 275 is one hot chip. Only the two Asus Radeons in CrossFire, with their plainly broken coolers, produce higher temperatures—just before they crash. The GTX 275 isn’t in the same boat, but it does appear Nvidia has chosen to favor good acoustics over lower GPU temperatures, while AMD has taken a step in the opposite direction.

Overclocking

I don’t usually test overclocking with video cards because, well, it’s typically a real pain in the rear. Video cards crash in some bad ways, and finding the limits of your GPU can be incredibly tedious. However, I decided to give it a try this time around, since AMD is touting the clock speed headroom built into the 4890, and I came away surprised by a couple of things.

First, I hadn’t realized that Nvidia never restored the GPU auto-overclocking function to its downloadable system tools suite after it disappeared a while back. Back in the nTune days, that feature was remarkably good at finding the practical limits for your GPU. Apparently, it has evaporated into the wind, never to return. (At least not in Vista x64, anyway.)

I was even more shocked to find that the overclocking utility built into AMD’s Catalyst drivers proved truly useful. My past experiences with this utility were not, shall we say, good. Lots of system lock-ups, very little progress toward higher clock frequencies. But in this case, with the 4890 OC, AMD’s utility didn’t lock up once and methodically found its way up to a 990MHz core clock with 1190MHz memory, a config that proved wholly stable in subsequent testing.

Since I wasn’t willing to endure hours of trial-and-error with the GTX 275, I simply tried setting it to the stock clocks for the GeForce GTX 285: a 648MHz core, 1476MHz shaders, and 1242MHz memory. The card was generally OK at those speeds, with no visual anomalies, but it wound up locking up right at the end of our benchmark tests. You’d probably want to back it down a notch for everyday use.

Overclocking allows for some nice performance improvements from both cards, although neither one has a pronounced advantage on this front.

Conclusions

Jeez. We plow through all of those numbers, and I still can’t declare any clear winner. What we do know is that, with the Radeon HD 4890 and its OC variant, AMD has succeeded in establishing a new standard for performance at the $249 price point. This is a most excellent development, and it has forced Nvidia’s hand. Fortunately, Nvidia’s response is quite respectable, as well. The GeForce GTX 275 is a worthy competitor to the 4890, and which card is a better value may depend on how prices shake out once both options are out in the wild. Remember, the 4890 OC card we tested costs $265, and if anything, it was an almost exact match for our stock-clocked GeForce GTX 275. Then again, the first GTX 275 has apparently popped up at Newegg as I write, and it’s priced at $259.99. So perhaps the 4890 OC is the most apt comparison—and the parity remains intact.

That’s not to say the cards are exactly interchangeable. Nvidia has biased its cooling solution in favor of higher GPU temperatures and lower noise levels under load, while AMD has done the opposite. Also, the 4890 OC draws less power under load than the GTX 275, but more when idling at the Windows desktop. You can choose which set of attributes you prefer there. The 4890 is physically shorter, too, which could be important inside of an especially cramped case.

If you think I’m gonna pick for you when things are this tight, though, forget it. This one is too close to call. My only advice is that you might want to consider carefully whether you need to pay for the extra performance in these cards versus the next rung down the ladder, like this Radeon HD 4870 1GB for $190 or this GeForce GTX 260 for $179. Those might be even better deals, and they may be all the GPU you need for current games, too.

Comments closed
    • TurtlePerson2
    • 10 years ago

    Two months later and this thing can be had for $150 AR. Prices are falling fast it seems.

    • Deanjo
    • 11 years ago

    Meanwhile I see no need to upgrade my 8800GT SLI system. At least I can Cuda stuff without interrupting desktop rendering, and those only cost me $100 a piece.

    • Umbragen
    • 11 years ago

    My only complaint about the 4890 is the noise. I hadn’t bought a new/current video card since 2004 and the solutions I’ve been getting by with were practically silent. When this thing cranks up it’s not /[

    • PerfectCr
    • 11 years ago

    These articles are what TR is all about! Awesome and I am not even in the market to build another PC for about another 3 or 4 years. :)

    • sergeant_skyes
    • 11 years ago

    well i would like to tell that amd’s architecture is much more efficient than nvidia’s as radeon 4890 ( with 0.96 bn transistors and 16 rops 256 bit) edges out gtx 275 (with 1.4 bn transistors and 24 rops 448 bit) you guys have to appriciate that. and well you guys might think that 4890 loses to gtx 275 in resolutions like 2560 * 1200 but hey if a guy can afford a big HD monitor having this resolution then he would sure not to choose either of these cards but would have chose something like 4870 x 2 or gtx 295.as for me radeon is a clear winner but im not buying these cards when DX 11 cards are in the corner.

      • Meadows
      • 11 years ago

      You may wonder how it’s “more efficient” or “edges out” a card that it loses to during /[

        • sergeant_skyes
        • 11 years ago

        well the fact is that there are a very few people out there who are rich enough to own a full HD monitor with the highest res so nvidias card may be good at such scenarios but for the rest two most commonly used resolutions ATI’s is much better..

      • cobalt
      • 11 years ago

      You’re picking your facts to support your point. For example, while you point out the HD4890 is only 256 bits, you could have just as easily pointed out the clock speed of the GTX275’s memory is only half of the 4890’s. Similarly, you could have pointed out that the NVIDIA card, with only 240 shaders, is able to perform similarly to the 800-shader AMD card, making the NVIDIA card more efficient.

      In terms of number of transistors, and maybe power usage at load, I’ll agree.

        • sergeant_skyes
        • 11 years ago

        well yes your right that nvidia is still using GDDR3 but hey they could adopt GDDR5 actually more easily than AMD after the ground breaking success of their 8800 series but they really didn’t care as they were confident (to say overconfident) as amd was struggling from both CPU and GPU sides (go and refer the interview in Anandtech.com about the The RV770 Story: Documenting ATI’s Road to Success ; they clearly stated that nvidia was confident that ATI couldn’t beat them twice on the same manufacturing process dosen’t it show that they were overconfident??) and hey no techie really considers their cards have 800 stream processors but in fact as you might know they are ALU’s . many consider ATI’s having 160 stream processors.(160/5)

        • SangiYaro
        • 11 years ago

        Actually, GTX275’s memory speed (1134Mhz) is higher than HD4890’s (975Mhz). Its just that 4890 uses memory which transfers more data per clock cycle. Also NVIDIA’s and AMD’s shaders are completely different so amount of them cant be directly compared.

        IMO efficiency of architecture is nothing more than performance/transistor.

    • willyolio
    • 11 years ago

    i find that little “capacitor ring” pretty interesting. i wonder if we can expect to see it in the R8xx’s, along with higher clock speeds across the board for those chips.

    • indeego
    • 11 years ago

    I’m totally not getting either of these.

    I want a 30″ screen next up, then I’ll worry about the card. Right now nothing hurts at 1920×1200+1600×1200 on 24″+20″ <.<

      • flip-mode
      • 11 years ago

      Yay? For those that are in dire need of a card (not me either) these are wonderful and the launch prices are wonderful too.

      A 4850 runs 1600×1200 really well for me. I sure hope I won’t need to upgrade this card for a long time.

      The good news is that I have a serious backlog of games to play:

      Oblivion
      Far Cry 2
      Fallout 3
      Rainbow 6 Vegas

      All of those should be extremely playable for me.

        • BoBzeBuilder
        • 11 years ago

        Flip-mode is on point.

    • d0g_p00p
    • 11 years ago

    You can get the overclocking feature back if you use the older version of nTune 5.0 something. This works on Vista x64. Using the brand new version of the utility does remove it though.

    • jodiuh
    • 11 years ago

    I skipped the whole damn thing after reading 181’s on 285, 185’s on 275. L4D got much smoother and performance got quite a kick going from 181.xx to 182.xx on my 285 @ 8x MSAA.

    Fail.

      • Meadows
      • 11 years ago

      Your loss.

      • indeego
      • 11 years ago

      You should start a techreportII/[

    • lycium
    • 11 years ago

    excellent review as always, the explicit isomorphism between both vendors’ offerings was really well executed!

    it’s definitely an extremely difficult decision, and i’m looking to replace my 8800gts 640mb. don’t want to buy the titanic -[

    • Fighterpilot
    • 11 years ago

    Thanks for the great review Damage.
    Just about every serious video card fan around the Net has been waiting for the /[

    • NotEmul8d
    • 11 years ago

    Not too happy to see the 4890 review was with a factory OC’d board, sigh.

      • khands
      • 11 years ago

      The stock (850Mhz) boards seem to let the 275 win a couple more, but they still flip-flop a lot, the difference isn’t that great until you bump it up to more than 950, it seems to just take off.

    • glacius555
    • 11 years ago

    Great review, thanks Damage! I wonder about one thing though, sure it takes time to test all of them, but don’t you think that using some of the HD4850 and HD4870’s with older drivers might be misleading? Thanks!

    BTW, isn’t it kinda weird that HD4850 is using more power than HD4870 1GB when stressed?

    • honoradept
    • 11 years ago

    i still use my heritage ASUS Radeon 7000 128MB

      • Thresher
      • 11 years ago

      What kind of framerates you getting with the Pong FRAPS demo?

    • matnath1
    • 11 years ago

    Folding @ Home on Nvid Cards tip the scale in their favor for me. On Average Nvid cards are almost twice as fast at folding. I’d love to see TR add a folding ppd section for this very reason. See this Tech Power Up coverage which corroborates every google search comparison I’ve done comparing Nvid Vs AMD/ATI:

    http://www.techpowerup.com/reviews/Powercolor/HD_4890/23.html

    I am running three folding rigs 24/7. Power consumption and points per day are absolutely critical. That is why I use all 216 GTX 260's (two are 55nm). I'm getting around 20,000 points per day on the three.

    • Damage
    • 11 years ago

    Konstantine’s post nuked for profanity.

    • Damage
    • 11 years ago

    phez’s post nuked for profanity.

    • phez
    • 11 years ago

    Holy shit. The 275 is as fast as two 260s in SLI. That means either two things: indeed, SLI is shit, or this one board is as fast as what can be pieced together physically by two?

    And at essentially half the price of what G80 did. /giddy

    • Anomymous Gerbil
    • 11 years ago

    g{

      • eitje
      • 11 years ago

      most chip manufacturers don’t give exact numbers of transistors on their products, so they must be estimated.

    • Vasilyfav
    • 11 years ago

    You know ambient occlusion is a garbage post-processing effect when you have to DRAW AN ARROW to it, just to be able to notice it.

    • thebeastie
    • 11 years ago

    This was a fantastic review.
    When are we going to see 275 in SLI? :)

      • Xaser04
      • 11 years ago

      You can – its called the GTX295 :)

    • paulWTAMU
    • 11 years ago

    Holy crap. 250 bucks buys you a GPU that can play games at 2560X1600? That’s freaking sweet.

      • Krogoth
      • 11 years ago

      Yes, it is very amazing. You just need two of them to get the same resolution at a playable framerate with some AA and AF thrown in.

    • ClickClick5
    • 11 years ago

    Mmmm, I’m really going to just wait for the 6xxx series. My 4870 is still doing it’s job. :)

      • DrDillyBar
      • 11 years ago

      Indeed.

        • rhema83
        • 11 years ago

        Not trying to state the obvious, but the “casual gamer” will be more than satisfied with a $130 HD4850. (Heck, it can run top-of-the-line games from past years – except you guessed it, Crysis – at the highest detail levels.)

    • MadManOriginal
    • 11 years ago

    Nice brief review of what are ultimately refresh tweaks…no need for a long article here. I also think it’s good that you put ‘OC’ next to the 4890, I hope that wasn’t just because it’s the official product name but to let people know it’s not a base speed 4890 as well. I’ve read that the base 4890 has a different PCB though (on xbitlabs I think) with fewer voltage phases, I do hope you guys get one and try oc’ing…I suspect many people will follow the old adage ‘Factory oc’d cards are a waste’ and get base 4890s expecting these kind of clocks but they might not realize the cards are not the same design.

    • LiamC
    • 11 years ago

    D’oh double post

    • albundy
    • 11 years ago

    Great review! but wow, 90 degees at load? thats insane in the membrane! i’m guessing that passive cooling is off limits. lol!

    my mind is however made up. will wait for dx11 cards in a few months.

      • Anomymous Gerbil
      • 11 years ago

      Haha. What, you think there will ever be high-end cards that won’t be pushed to their limits,and therefore run hot?

    • Palek
    • 11 years ago

    Finally!!! What took you so long? ;)

    Every other website posted their reviews last week, but nobody provides as much entertainment in their articles as you guys!

    *reads article*

      • Tamale
      • 11 years ago

      I noticed Scott has a lot more comparative data than most of the other sites.. I think it’s crucial to see where the 4890 lies in comparison to dual-card setups like the 4850 X2 and I’m happy to wait a little more for a more thorough review.

        • Palek
        • 11 years ago

        I think you know, but I wasn’t really complaining about TR being late, merely expressing my delight in seeing the article posted at last! I’m happy to wait a week or more for the invariably better reviews Scott and the gang produce. Plus TR are one of those unfortunately rare review sites where the writers understand how to make their writing *[

          • Damage
          • 11 years ago

          No, that last paragraph is as intended. Both firms have pointed to Havok’s OpenCL compatibility as a plus for their approach.

            • Palek
            • 11 years ago

            Ah, I see.

            By the way, have you guys looked at the performance of the AVIVO encoder recently? I happened to download and try the latest release (9.3) last night. I transcoded a DVD into iPod-compatible video and the output was excellent, with no observable artifacts during a quick test screening. The machine I used has a Radeon HD 4670 and an Athlon 64 X2 4850e and it transcoded a 29-minute video into a 1.5Mbps MP4 file (420×270 I think) in roughly 5 minutes. On the second try I changed the bitrate to 2.5Mbps (640×360) and transcoded a 28-minute video in roughly 6 minutes.

            Maybe it’s time for another head-to-head between AVIVO and Badaboom? :)

    • Tamale
    • 11 years ago

    I just liked the CUDA/Stream madlibs. That was pure Damage :)

      • grantmeaname
      • 11 years ago

      “continuing the dizzying asymmetrical parity of our test results” was my favorite part

      • ssidbroadcast
      • 11 years ago

      Yeah and just when I thought I’ve seen it all in Scott’s writing style, he pulls out /[

    • Faceless Clock
    • 11 years ago

    Something that is becoming clear in these reviews is that Nvidia’s SLI is considerably out-performing CrossfireX solutions. For example, just look at those GTX 260s against the Radeon 4870s and Radeon 4870X2. The 260s are considerably better.

    Right now, that only matters if you want to buy very high-end hardware. But if ATI is going to keep going with this idea of using multiple GPUs as their high-end solution, it is something they need to address.

    • KikassAssassin
    • 11 years ago

    From the article:

    http://enthusiast.hardocp.com/article.html?art=MTYzNiw4LCxoZW50aHVzaWFzdA==

    They used the same driver version you did, too, so it can't be explained by a difference in the driver.

      • Damage
      • 11 years ago

      Oh, it let me set the clock speed down, but once I ran a 3D app, the speed shot right back up to stock. My first clue: performance was unchanged. Then I traced it back and saw what was happening.

        • KikassAssassin
        • 11 years ago

        huh, strange. I just used Overdrive to underclock my 4870, and used Rivatuner’s monitoring app to check my clocks. I ran a couple games, and it stayed at the lower clock speeds I specified. I’m using Catalyst 9.2 drivers, I wonder if it’s the beta driver you’re using that’s causing it to do that.

          • HurgyMcGurgyGurg
          • 11 years ago

          Your right the HD 4870 will stay at its underclocked speeds even during 3D.

          I had a HD 4870 with its VRMs failing and had to underclock it to get it to run 3D games. I would have known if it switched back to original speeds because it would have locked up the system and or crashed.

          And this was back with 8.12…

          I guess its a bug with the beta drivers like Assassin mentioned.

        • Meadows
        • 11 years ago

        Tried RivaTuner? Ought to work just that much.

    • deathBOB
    • 11 years ago

    My 8800GT is weeping softly in its slot.

      • BoBzeBuilder
      • 11 years ago

      That’s still a great card. No need for an upgrade unless you game at resolutions higher than 1680*1050.
      But still, for $250, you can have a powerhouse. Gone are the days of $599 high-end cards.

        • lycium
        • 11 years ago

        only as long as the days of not having $600 to spend stay.

        • rhema83
        • 11 years ago

        Agreed. The 8800GT is a great card. I am running a 4850 512MB at stock clocks and I am satisfied… Although I’m constantly tempted by new offerings such as 4870 1GB, and now the 4890 1GB. And yeah, I’m a red supporter.

      • grantmeaname
      • 11 years ago

      my Mobility Radeon x200M is too embarassed to even start to show any emotion…

        • Palek
        • 11 years ago

        My 1950 Pro is hiding in the closet and my HD 4670 took exception to the phrase “lower mid-range”.

          • KikassAssassin
          • 11 years ago

          My 512MB HD 4870 is standing proud (though underneath its strong, gruff outward appearance, it’s secretly embarrassed by its small memory penis).

            • Kurkotain
            • 11 years ago

            man my fx5500 decided to suicide… :(
            THAT’S OLD! lol

            • Waco
            • 11 years ago

            An FX5500 is old? Time to bring out the Voodoo2 and S3 Trio cards. :D

            • Krogoth
            • 11 years ago

            Nope, nothing is faster then a Hercules Monochrome ISA! XD

        • Buzzard44
        • 11 years ago

        On the contrary, my Mobility Radeon x200M is standing tall….playing Diablo 2 on max settings.

        But seriously, although I don’t play many new games and don’t turn up the settings when I do, I haven’t encountered much trouble with frame rate, except while playing Titan’s Quest (~15 fps on average, dropping to 7 or 8 sometimes…esh).

      • Meadows
      • 11 years ago

      Mine is happy. Performs relatively well too.

      The main reason why it’s happy is that it knows I can’t afford to replace it anyway.

      • Jigar
      • 11 years ago

      Dude, i got 2 of them, and they don’t let me upgrade them ;)

    • End User
    • 11 years ago

    Can you make your L4D timedemo available for download?

    Thanks for another great review.

    • Meadows
    • 11 years ago

    How uninteresting. Once again they’re trading blows to no end, and there’s no /[

    • ManAtVista
    • 11 years ago

    Very close. [testing my posting abilities, so apologies for pointless comment…]

    • Sargent Duck
    • 11 years ago

    Damage for the win.

    I’m going to go read the review now…

    • homerdog
    • 11 years ago

    What’s the deal with idle power consumption of the 4890? Some reviews show it being a lot better than the 4870, some about the same, and some worse. Weirdness.

    • Skrying
    • 11 years ago

    Interesting. Cards so close in performance that other metrics will finally be a real deciding factor. I see an increased race to the bottom in a price war with this one.

      • shank15217
      • 11 years ago

      I don’t see a price war at all, the cards fit a gap in the pricing structure and it will stay like that until the next gen cards come out in 6-8 months.

        • khands
        • 11 years ago

        You may see the 285 dropping like a rock though, it just doesn’t perform well enough to justify the price difference anymore.

        • Skrying
        • 11 years ago

        All cards fit a place in the pricing structure of the market, that makes no sense as some kind of point. These cards are so close that a consumer is basically flipping a coin when picking it. Nvidia or AMD want the choice to be more clear than that, they’ll lower the prices if they can produce the stock.

          • shank15217
          • 11 years ago

          Flipping a coin? Are you serious?.. What if you didn’t have a SLI board and wanted to use dual cards? What if you wanted PhysX support? What if you have games that take advantage of DX10.1? They are sufficiently different enough. Prices aren’t gonna drop because these cards came out, not until either company releases a faster card at the same price point on a new architecture.

            • Freon
            • 11 years ago

            It really seems like all those points are moot when it comes down to the core competencies of a video card.

            When two competing companies offer unique features, a good portion of the time they are not utilized or underutilized by developers. It’s not worth the development time to only reach a small portion of your audience. Remember, it’s not just the 50/50 (to pick a split) coin flip between NV or AMD, even then only the newest cards have these features, so reduce that further. Even years later when the tech trickles down do you top out at 50% of your audience if each side is keeping things to themselves.

            Good game benchmarks for the price, decent noise level, good at industry-wide standard functions… That’s what is important to me at least, and I don’t think I’m alone. I’m laughing at all this Cuda and PhysX and whatever crap right now and the article makes the same point.

    • jonybiskit
    • 11 years ago

    COD4 WORLD AT WAR!!!!!!!11

    • adisor19
    • 11 years ago

    Awesome review Damage !

    Title should be 4890 instead of 4980 though..

    Adi

      • grantmeaname
      • 11 years ago

      here you go with your reality distortion field again… it looks right to me!

      ;)
