
Radeon HD 4890 vs. GeForce GTX 275

Scott Wasson

AMD and Nvidia, those guys. Always with the competing, the one-upsmanship, the PowerPoint slides, the late-night phone calls before a product launch, the stake-outs, the restraining orders. Both sides want desperately to win the next round of the competition, to be ready to capture your business when the time comes for a video card upgrade.

And heck, both have very good products these days. In fact, the competition between them has been perhaps tighter than ever in the past little while. Since the advent of DirectX 10-class GPUs, we’ve seen image quality and feature sets converge substantially. With little to separate them, the two sides have resorted to an astounding bit of price competition. What you can get in a video card for under $200 these days is flabbergasting.

Since the introduction of its excellent Radeon HD 4000 series, AMD has been doing most of the driving in the market, setting standards for price and performance and forcing Nvidia to follow suit. For its part, Nvidia has acted very aggressively to remain competitive, slashing prices and rejiggering products at will. So it is now, with the introduction of a brand-new GPU from AMD, the Radeon HD 4890, and the rapid-response unveiling of a similarly priced competitor from Nvidia, the GeForce GTX 275. Both cards are slated to sell for around 250 bucks, and they are among the fastest single-GPU graphics cards on the planet.

Which is better? Tough to say. We have only had a short time with each card, but we’ve put them head to head for a bit of a comparo, and perhaps we can begin to answer that question with this quick first look.

In the red corner: Radeon HD 4890
The GPU that powers the Radeon HD 4890 is something of a curiosity. This chip, code-named RV790, shares the same architecture with the RV770 GPU you’ll find in the Radeon HD 4850 and 4870, and it’s made on the same 55nm manufacturing process. Yet the RV790 is very much a new chip, in spite of the similarities. Why the new design? AMD says it was reaching “an odd plateau” with RV770 clock speeds, and the modifications to the RV790 are intended to resolve that problem, enabling higher clock frequencies and thus better performance.

The Radeon HD 4890

To that end, AMD’s engineers endowed the RV790 with a new row of decoupling capacitors around the perimeter of the chip, as apparent in the overlay image on the right. (That red ring around the chip signifies the capacitor placement, not death, Xbox 360 fans.) The caps ought to lower noise and improve signal quality, allowing the chip to better tolerate higher voltages. In addition, AMD has reworked the chip’s timing and power distribution with an eye toward higher clock speeds.

The tweaks make for a slightly larger piece of silicon: the RV790 measures out to about 17 mm per side, by my little green ruler, or roughly 290 mm². The RV770, by way of comparison, is 260 mm². The transistor count is up, as well, from an estimated 956 million in the RV770 to 959 million in the RV790.

Happily, the changes appear to have worked. On the Radeon HD 4890, the RV790 is good for at least another hundred megahertz. AMD has set stock clock speeds on the Radeon HD 4890 at 850MHz (versus 750MHz for the 4870), and this time around, the firm seems to have left some additional headroom for board vendors to offer higher clocked variants—or for overclockers to exploit, perhaps. GDDR5 clock speeds are up, as well, from 900MHz on the stock Radeon HD 4870 to 975MHz on the 4890. That may sound like a modest increase—and to some extent, it is—but keep in mind that GDDR5 memory transfers data four times per clock, so even smaller increments can add up. Also, 4890 cards come with a gigabyte of memory onboard, double the standard payload of most 4870s.
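The bandwidth math behind that claim is easy to check. Here's a quick sketch of the arithmetic, assuming GDDR5's four transfers per clock and the 4890's 256-bit memory interface:

```python
def gddr_bandwidth_gbps(clock_mhz, bus_width_bits, transfers_per_clock):
    """Theoretical peak memory bandwidth in GB/s."""
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# Radeon HD 4870: 900MHz GDDR5 on a 256-bit bus
print(gddr_bandwidth_gbps(900, 256, 4))   # 115.2 GB/s
# Radeon HD 4890: 975MHz GDDR5 on the same bus
print(gddr_bandwidth_gbps(975, 256, 4))   # 124.8 GB/s
```

So the 75MHz memory bump is worth nearly 10GB/s of peak bandwidth, which is why small GDDR5 clock increments matter.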

AMD estimates the peak power draw of a Radeon HD 4890 card at about 190W, but power consumption at idle should be lower than the 4870’s, at around 60W, thanks in part to board design alterations and in part to chip-level modifications.

A closer look at the RV790 GPU

As I’ve said, the 4890 is priced at around 250 bucks, but these days, nothing’s quite that simple. You’ll find a number of 4890 cards listed at online vendors for $249.99. This XFX model is a good example. But most of ’em, like the XFX, have a $20 mail-in rebate attached. So hey, if you can write small enough to fill out one of those forms, and if the postal service doesn’t lose it, you have a chance at getting a $20 check two or three months from now. That’s not your only option, either. The card we’ve tested, for example, is a Sapphire offering with a 900MHz GPU clock that’s selling for $264.99 and also comes with a $20 mail-in rebate. Oddly enough, AMD classifies any Radeon HD 4890 card clocked at 900MHz or better as a separate product, dubbed the “Radeon HD 4890 OC,” although clock speeds on those cards will vary. We’ve thus labeled the product we’ve tested as a “4890 OC” in our benchmark results. We’ll get into the exact amounts of GPU capacity and bandwidth involved shortly, but whichever 4890 card you choose, that’s a heck of a lot of power for the money.

The 4890’s debut raises a couple of questions. One is whether the RV790 will be migrating to the lower rungs of the Radeon HD 4800 series product lineup, as sometimes happens in cases like these. AMD says the answer is no, that the Radeon HD 4850 and 4870 will always be based on the RV770, because the RV790 is a larger chip (and thus almost assuredly more expensive to produce). We should see 1GB versions of the 4870 and 4850 become more prominent going forward, though.

Another obvious question: will we see a Radeon HD 4890 X2 card soon? AMD told us it doesn’t have plans for such a beast at this time, in part because putting two RV790 GPUs on a board, with each at 850MHz, would result in total power consumption north of 300W. That’s great news for power supply makers but an inconvenience for everyone else. A 4890 X2 could still happen, though, if AMD deems it viable, so we’ll have to wait and see.

In the green corner: GeForce GTX 275
You didn’t really expect Nvidia to sit back and watch passively while AMD unveiled its hot new graphics card, did you? The green team already has the fastest single-GPU card in the form of the GeForce GTX 285, and meeting the 4890 head on was merely a matter of spinning out a new variant. That product, we have learned, will be called the GeForce GTX 275, and although cards won’t be available in all parts of the world until April 14, we have an early sample of the GTX 275 in Damage Labs for comparison against the new Radeon.

Happily, in order to compete with the Radeon HD 4890, the GeForce GTX 275 had to be a pretty potent product, so Nvidia has left a large fraction of the GTX 285’s computing and graphics power intact. Like its elder sibling, the GTX 275 is based on the 55nm version of the GT200 GPU, and it has all 240 of its stream processors enabled. The only major concession Nvidia has made to product segmentation is the disabling of one of the chip’s ROP partitions, which reduces its fill rate from 32 pixels per clock to 28, with an attendant drop in antialiasing throughput.

The 55nm GT200 package flanked by GDDR3 memory chips

With that change, the GPU’s total memory interface width is also reduced from 512 to 448 bits, and the total RAM available drops from 1GB to 896MB. (You’ll see 14 rather than 16 chips in the picture above for this same reason.) 448 bits is still nearly twice the width of the 4890’s 256-bit path to memory, but Nvidia uses GDDR3 RAM, which only transfers data twice in each clock cycle.

The GT200 architecture is different in other ways, too, including the fact that its shader processors run at higher frequencies than much of the rest of the chip. For the GTX 275, Nvidia has settled on a core clock of 633MHz, SPs at 1404MHz, and GDDR3 memory at 1134MHz. Those clock speeds and the total theoretical GPU power involved are both down somewhat from the GTX 285, but they’re still considerable—and very much in the same class as the Radeon HD 4890, as we’ll soon see.
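Those clocks translate directly into the theoretical peaks you'll see in our specs table. A rough sketch of the arithmetic, assuming GT200's 28 active ROPs, 240 SPs at three flops per clock (dual-issue MAD+MUL), and GDDR3's two transfers per clock on a 448-bit bus:

```python
# GeForce GTX 275 theoretical peaks, derived from its clock speeds
core_mhz, sp_mhz, mem_mhz = 633, 1404, 1134

pixel_fill = core_mhz * 28 / 1000               # 28 ROPs -> Gpixels/s
gflops_dual = 240 * sp_mhz * 3 / 1000           # 240 SPs, dual-issue MAD+MUL
mem_bw = mem_mhz * 1e6 * 2 * (448 / 8) / 1e9    # GDDR3: 2 transfers/clock, 448-bit

print(round(pixel_fill, 1), round(gflops_dual), round(mem_bw, 1))
# 17.7 1011 127.0
```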

The de-tuning of the 55nm GT200 GPU has power consumption benefits: Nvidia rates peak GTX 275 board power at 219W, down over 60W from the GTX 285. That’s a little more power-hungry than the Radeon HD 4890, but the 55nm GT200 is a larger chip—the metal cap covering the package in the picture above obscures its exact size, but it has a heart-stopping 1.4 billion transistors. And the card itself has nearly twice as many memory chips onboard. Fortunately, recent GeForces have had admirably low power draw at idle, and the GTX 275 ought to continue that tradition.

GTX 275 cards aren’t yet selling on these shores, but Nvidia claims they should go for $249, by which it surely must mean $250 minus a penny. That puts ’em solidly into Radeon HD 4890 territory, but we’ll have to watch and see how mail-in rebates, game bundles, and price premiums for higher-speed models pan out. I should note that we’re testing a reference board with a stock clock speed in this review, so some GTX 275 models may offer slightly higher performance.

Side by side
Dude, look: pictures!

At 10.5″, the GeForce GTX 275 is an inch longer than the Radeon HD 4890

Both cards come with dual multi-GPU connectors for two- and three-way CrossFire/SLI action

Fortunately, neither card requires an 8-pin PCIe aux power plug—two 6-pins will do

The GT200 chip is obscured by the package cap, but the package itself is quite a bit larger than the RV790’s

Stuff that’s not hardware
So let’s see if I can sum this up and perhaps make it a little game, too.

1. ______ says that it has a range of advantages over the other guys. In addition to graphics, users can expect much faster video encoding than they’d get with a CPU alone by using the 2. ______ software package from 3. ______. The speed-ups are amazing. Similar performance gains are possible with GPU acceleration in the fields of high-performance computing, image processing, and distributed computing—including Folding@Home.

Additionally, games can reach new levels of visual fidelity and performance via the 4. ______ API, which is exclusively supported by its GPUs. It’s already in use in a number of shipping titles, with even more planned over the next six months to a year. The firm says it has the best approach to GPU-accelerated physics, because 5. ______ is the most widely used dev toolkit of its kind. One should note that the Havok demo that was shown running on Radeon hardware at GDC recently will also run on GeForces, because it uses OpenCL. This is a key advantage of the company’s standards-based approach to GPU computing.

Speaking of which, Windows 7 is just around the corner, and 1. ______ has worked closely with Microsoft to ensure that the quality and stability of its drivers for this exciting new OS are second to none.

Have it figured out yet? The answers are, alternately, either option one:

1. Nvidia
2. Badaboom
3. Elemental
4. PhysX
5. PhysX

…or option two:

1. AMD
2. Espresso
3. CyberLink
4. DirectX 10.1
5. Havok

Take your pick. I’m not convinced I’d buy one video card over another on the basis of either set of choices, but I will leave that up to you. Just thought you should know.

In addition to the above, Nvidia does have one new trick in its Release 185 driver rev that’s worthy of note: an option to enable ambient occlusion via a simple on/off control panel setting. Ambient occlusion is pretty much just what it sounds like: the lighting model will take into account how objects in the world might occlude ambient light. That may sound like a tall order, but the basic idea is to do less illumination in places where ambient light isn’t likely to reach as well, such as in corners where two walls meet.
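Conceptually, the effect boils down to sampling for nearby geometry around each surface point and dimming the ambient term where it's crowded. Here's a minimal sketch of that idea; real implementations like Nvidia's work in screen space against the depth buffer, and every name below is made up for illustration:

```python
import random

def ambient_occlusion(sample_occluded, num_samples=16):
    """Estimate how much ambient light reaches a surface point.

    sample_occluded() should return True when a randomly chosen sample
    direction from the point hits nearby geometry (a corner, a crevice).
    Returns a factor in [0, 1] used to scale the ambient lighting term.
    """
    hits = sum(1 for _ in range(num_samples) if sample_occluded())
    return 1.0 - hits / num_samples

# An open floor: nothing nearby blocks the samples, full ambient light
print(ambient_occlusion(lambda: False))                  # 1.0
# A tight corner: most samples hit geometry, so the ambient term darkens
print(ambient_occlusion(lambda: random.random() < 0.75))
```

The per-pixel sampling is what makes the effect expensive, which squares with the 20-40% performance hit discussed below.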

With the exception of a few titles like Crysis and Stalker: Clear Sky, most current games don’t support ambient occlusion natively. Nvidia has elected to support AO in its control panel much like it does SLI, by using game-specific profiles. Among the 22 games supported out of the gate are some big names, including Mirror’s Edge, Valve’s Source engine games, Call of Duty: World at War, and the biggest PC game ever, World of Warcraft.

Here’s a quick example of how ambient occlusion affects Left 4 Dead.

Without ambient occlusion

With ambient occlusion

This isn’t a stark, bright scene with lots of ambient light, so the effect is very subtle. (Yeah, my example kinda stinks.) You can probably see the difference, though, if you concentrate on the area beneath the nearest railing, to the right of the left-most water tank (or whatever that is), where the floor meets the top of the brick wall that rises just above the stairwell. Along that intersection, the floor is a little darker.

No, really. Look closer.

Anyhow, I need to play with this feature more to see what I think of it. I’ve tried it briefly in L4D and Fallout 3, and the visual difference is hard to notice. I expect different games or different levels could yield clearer results (and better example pictures).

You may notice the performance hit more easily, which Nvidia estimates is on the order of 20-40%. Fallout 3 felt sluggish to me with AO enabled, but Left 4 Dead runs so quickly on a GTX 275 that I couldn’t perceive any slowdown. I kind of like the idea of using today’s apparent surplus of GPU power to deliver higher-quality lighting, so kudos to Nvidia for giving this a shot. I wonder whether AMD will follow suit.

Test notes
We’re sticking with our most recent suite of game tests here, so we have some basis for comparison. We’ve tested the Radeon HD 4890 OC and the GeForce GTX 275 with their very latest driver revisions, and we’ve included older results from a range of cards for comparison. In some cases, of course, the newer drivers could give our two stars today a bit of a performance advantage. Concentrate on the main comparison if you find that possible difference distracting.

I had hoped to test with a non-OC version of the Radeon HD 4890, as well, but we don’t have such a card in our possession, and apparently the Overdrive feature in AMD’s driver control panel doesn’t support underclocking. I believe we could have accomplished our goals with a BIOS editor, but since this is a brand-new card and we were facing some time constraints, we decided to forgo testing at 850MHz for now.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965
System bus: QPI 4.8 GT/s
Motherboard: Gigabyte
BIOS revision: F3
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update, Matrix Storage Manager
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated, with Realtek drivers
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX November 2008 update

Graphics cards tested:

Asus EAH4850 TOP Radeon HD 4850 512MB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
Dual Asus EAH4850 TOP Radeon HD 4850 512MB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
Gigabyte Radeon HD 4850 1GB PCIe, with Catalyst 9.2 drivers
Visiontek Radeon HD 4870 512MB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
Dual Visiontek Radeon HD 4870 512MB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
EAH4870 DK 1G Radeon HD 4870 1GB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
EAH4870 DK 1G Radeon HD 4870 1GB PCIe + Radeon HD 4870 1GB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
Radeon HD 4890 OC 1GB PCIe, with Catalyst 8.592.1 drivers
Radeon HD 4850 X2 2GB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
Revolution R700 Radeon HD 4870 X2 2GB PCIe, with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
GeForce 9800 GTX+ 512MB PCIe, with ForceWare 180.84 drivers
Dual GeForce 9800 GTX+ 512MB PCIe, with ForceWare 180.84 drivers
Palit GeForce 9800 GX2 1GB PCIe, with ForceWare 180.84 drivers
EVGA GeForce GTS 250 Superclocked 1GB PCIe, with ForceWare 182.06 drivers
GeForce GTX 260 Core 216 896MB PCIe, with ForceWare 180.84 drivers
GeForce GTX 260 Core 216 896MB PCIe + Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe, with ForceWare 180.84 drivers
GeForce GTX 275 896MB PCIe, with ForceWare 185.63 drivers
GeForce GTX 280 1GB PCIe, with ForceWare 180.84 drivers
GeForce GTX 285 1GB PCIe, with ForceWare 181.20 drivers
Dual GeForce GTX 285 1GB PCIe, with ForceWare 181.20 drivers
GeForce GTX 295 1.792GB PCIe, with ForceWare 181.20 drivers

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics
We’ll start with our customary look at the theoretical throughput of the various cards in some key categories. Keep in mind that, where applicable, the numbers in the table below are derived from the observed clock speeds of the cards we’re testing, not the manufacturer’s reference clocks or stated specifications.

Card                       Pixel fill  Bilinear   FP16 filter  Memory BW  GFLOPS    GFLOPS
                           (Gpix/s)    (Gtex/s)   (Gtex/s)     (GB/s)     (single)  (dual)
GeForce 9500 GT             4.4          8.8        4.4          25.6       90        134
GeForce 9600 GT            11.6         23.2       11.6          62.2      237        355
GeForce 9800 GT             9.6         33.6       16.8          57.6      339        508
GeForce 9800 GTX+          11.8         47.2       23.6          70.4      470        705
GeForce GTS 250            12.3         49.3       24.6          71.9      484        726
GeForce 9800 GX2           19.2         76.8       38.4         128.0      768       1152
GeForce GTX 260 (192 SPs)  16.1         36.9       18.4         111.9      477        715
GeForce GTX 260 (216 SPs)  17.5         45.1       22.5         117.9      583        875
GeForce GTX 275            17.7         50.6       25.4         127.0      674       1011
GeForce GTX 280            19.3         48.2       24.1         141.7      622        933
GeForce GTX 285            21.4         53.6       26.8         166.4      744       1116
GeForce GTX 295            32.3         92.2       46.1         223.9     1192       1788
Radeon HD 4650              4.8         19.2        9.6          16.0      384        n/a
Radeon HD 4670              6.0         24.0       12.0          32.0      480        n/a
Radeon HD 4830              9.2         18.4        9.2          57.6      736        n/a
Radeon HD 4850             10.9         27.2       13.6          67.2     1088        n/a
Radeon HD 4850 1GB         11.2         28.0       14.0          63.6     1120        n/a
Radeon HD 4870             12.0         30.0       15.0         115.2     1200        n/a
Radeon HD 4890             13.6         34.0       17.0         124.8     1360        n/a
Radeon HD 4890 OC          14.4         36.0       18.0         124.8     1440        n/a
Radeon HD 4850 X2          20.0         50.0       25.0         127.1     2000        n/a
Radeon HD 4870 X2          24.0         60.0       30.0         230.4     2400        n/a

The Radeons have a single peak shader arithmetic rate, listed under the single-issue column; the dual-issue figures for the GeForces count GT200's and G9x's extra co-issued MUL.

Although the Radeon HD 4890 OC and GeForce GTX 275 are ostensibly direct competitors, they diverge from each other quite a bit on paper: the GeForce easily leads in fill rate and texture filtering capacity, while the Radeon has a clear advantage in shader FLOPS. Despite different memory types and interface widths, though, memory bandwidth is roughly equal, with a slight edge to the GTX 275.

Incidentally, the GTX 275’s place in Nvidia’s lineup is probably worth calling out. Notice that the older GeForce GTX 280, based on the 65nm GT200 chip, trails the 275 in texture filtering and shader arithmetic capacity. So although 280 is a higher number than 275, the newer card may prove to be superior in many cases.

This would be an unexpected result, were it not for the fact that we’ve seen it many times before. In spite of the theoretical numbers, the Radeon measures out with more real-world pixel and texture fill rate.

And despite the 4890 OC’s pronounced FLOPS advantage on paper, the GTX 275 leads the Radeon in two of the four shader tests.

Far Cry 2
We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

Our first real game test shows us just how close of a contest this is shaping up to be. The 4890 OC leads at the two more common resolutions, but the GTX 275 steps into the lead at our four-megapixel peak res. We see this same pattern in the match-up between the GeForce GTX 260 and the Radeon HD 4870 1GB, so it’s not an unexpected result.

Left 4 Dead
We tested Valve’s zombie shooter using a custom-recorded timedemo from the game’s first campaign. We maxed out all of the game’s quality options and used 4X multisampled antialiasing in combination with 16X anisotropic texture filtering.

You just can’t stress either of these cards terribly much with Left 4 Dead and these (pretty darned high) quality settings. They just offer more GPU than one needs for this game. Once again, though, the 4890 OC is faster in the two lower resolutions before falling behind at 2560×1600.

Call of Duty: World at War
We tested the latest Call of Duty title by playing through the first 60 seconds of the game’s third mission and recording frame rates via FRAPS. Although testing in this manner isn’t precisely repeatable from run to run, we believe averaging the results from five runs is sufficient to get reasonably reliable comparative numbers. With FRAPS, we can also report the lowest frame rate we encountered. Rather than average those, we’ve reported the median of the low scores from the five test runs, to reduce the impact of outliers. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
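The aggregation described above is simple enough to sketch. The numbers here are hypothetical, purely to show why the median resists an outlier run better than the average does:

```python
from statistics import median

# Hypothetical minimum frame rates from five FRAPS runs of the same test
low_fps_per_run = [31, 29, 33, 12, 30]   # one run hit a bad hitch

# Averaging the lows lets the single outlier drag the result down...
print(sum(low_fps_per_run) / len(low_fps_per_run))   # 27.0
# ...while the median of the five lows shrugs it off
print(median(low_fps_per_run))                        # 30
```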

The GTX 275 pulls out a victory here, although the match-up is very close indeed. The 4890 OC’s median low frame rate is only two FPS lower than the GTX 275’s.

Fallout 3
This is another game we tested with FRAPS, this time simply by walking down a road outside of the former Washington, D.C. We used Fallout 3‘s “Ultra high” quality presets, which basically means every slider maxed, along with 4X antialiasing and what the game calls 15X anisotropic filtering.

Chalk up one for the new Radeon, which produces an ever-so-slightly higher average frame rate coupled with a more substantial gap in median low numbers. Still, with the median lows near 50 FPS, neither card runs this game with perceptible choppiness.

Crysis Warhead
This game is sufficient to tax even the fastest GPUs without using the highest possible resolution or quality setting—or any form of antialiasing. So we tested at 1920×1200 using the “Gamer” quality setting. Of course, the fact that Warhead apparently tends to run out of memory and crash (with most cards) at higher resolutions is a bit of a deterrent, as is the fact that MSAA doesn’t always produce the best results in this game. Regardless, Warhead looks great on a fast video card, with the best explosions in any game yet.

If you thought this game would tip the scales one way or another, think again. Two frames per second separate the cards’ averages, and both of our contenders bottom out at 25 FPS.

Power consumption
We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

This power thing isn’t making my job any easier. The Radeon HD 4890 OC draws quite a bit less power when running a game, but the situation is almost exactly reversed at idle, continuing the dizzying asymmetrical parity of our test results.

Noise levels
We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

At full tilt, the Radeon HD 4890 OC’s cooler makes more racket than the GTX 275’s. That’s pretty clear, and it matches with my subjective impressions.

GPU temperatures
I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.

Here’s one possible reason why the 4890 OC’s cooler is louder: it’s working to keep GPU temperatures relatively low. This is something of a reversal for AMD’s stock coolers in the Radeon HD 4800 series. The 4870 512MB above, at 84°C, is more typical. Meanwhile, the GTX 275 is one hot chip. Only the two Asus Radeons in CrossFire, with their plainly broken coolers, produce higher temperatures—just before they crash. The GTX 275 isn’t in the same boat, but it does appear Nvidia has chosen to favor good acoustics over lower GPU temperatures, while AMD has taken a step in the opposite direction.

I don’t usually test overclocking with video cards because, well, it’s typically a real pain in the rear. Video cards crash in some bad ways, and finding the limits of your GPU can be incredibly tedious. However, I decided to give it a try this time around, since AMD is touting the clock speed headroom built into the 4890, and I came away surprised by a couple of things.

First, I hadn’t realized that Nvidia never restored the GPU auto-overclocking function to its downloadable system tools suite after it disappeared a while back. Back in the nTune days, that feature was remarkably good at finding the practical limits for your GPU. Apparently, it has evaporated into the wind, never to return. (At least not in Vista x64, anyway.)

I was even more shocked to find that the overclocking utility built into AMD’s Catalyst drivers proved truly useful. My past experiences with this utility were not, shall we say, good. Lots of system lock-ups, very little progress toward higher clock frequencies. But in this case, with the 4890 OC, AMD’s utility didn’t lock up once and methodically found its way up to a 990MHz core clock with 1190MHz memory, a config that proved wholly stable in subsequent testing.

Since I wasn’t willing to endure hours of trial-and-error with the GTX 275, I simply tried setting it to the stock clocks for the GeForce GTX 285: 670MHz core, 1550MHz shaders, and 1300MHz memory. The card was generally OK at those speeds, with no visual anomalies, but it wound up locking up right at the end of our benchmark tests. You’d probably want to back it down a notch for everyday use.

Overclocking allows for some nice performance improvements from both cards, although neither one has a pronounced advantage on this front.

Jeez. We plow through all of those numbers, and I still can’t declare any clear winner. What we do know is that, with the Radeon HD 4890 and its OC variant, AMD has succeeded in establishing a new standard for performance at the $249 price point. This is a most excellent development, and it has forced Nvidia’s hand. Fortunately, Nvidia’s response is quite respectable, as well. The GeForce GTX 275 is a worthy competitor to the 4890, and which card is a better value may depend on how prices shake out once both options are out in the wild. Remember, the 4890 OC card we tested costs $265, and if anything, it was an almost exact match for our stock-clocked GeForce GTX 275. Then again, the first GTX 275 has apparently popped up at Newegg as I write, and it’s priced at $259.99. So perhaps the 4890 OC is the most apt comparison—and the parity remains intact.

That’s not to say the cards are exactly interchangeable. Nvidia has biased its cooling solution in favor of higher GPU temperatures and lower noise levels under load, while AMD has done the opposite. Also, the 4890 OC draws less power under load than the GTX 275, but more when idling at the Windows desktop. You can choose which set of attributes you prefer there. The 4890 is physically shorter, too, which could be important inside of an especially cramped case.

If you think I’m gonna pick for you when things are this tight, though, forget it. This one is too close to call. My only advice is that you might want to consider carefully whether you need to pay for the extra performance in these cards versus the next rung down the ladder, like this Radeon HD 4870 1GB for $190 or this GeForce GTX 260 for $179. Those might be even better deals, and they may be all the GPU you need for current games, too.
