AMD’s Radeon HD 4770 graphics processor

The Radeon HD 4850, I think we can agree, is a pretty good graphics card. Since its arrival, the 4850 has set the tempo for much of the graphics card market by delivering strong performance for the price, along with a pretty good total package of image quality, features, and power efficiency.

So here’s an interesting question. As you know, chip-making technology regularly advances. What would you do, if you were in the shoes of the folks at AMD, as a follow-up to the 4850?

Their answer, it turns out, is a pretty good one. They’ve created the RV740, a new graphics chip that’s just over half the size of the GPU in the Radeon HD 4850 but packs nearly as much punch. And they’ve packaged up this new GPU into the Radeon HD 4770, a graphics card that sells for just north of a hundred bucks. Remarkably, this welterweight graphics card may be all you need to play the latest games at buttery smooth frame rates. Sounds pretty good, no? Let’s have a closer look.

The RV740 graphics processor

The RV740 is the first graphics chip to hit the market that’s manufactured using TSMC’s 40-nanometer fabrication process. Most current GPUs are made by chip foundry TSMC, and almost all use that firm’s 55nm process. The benefits of a smaller, more advanced fabrication method are fairly straightforward: smaller chips are cheaper to make and tend to consume less power. A new fab process may also enable higher clock speeds, either by allowing transistors to switch more quickly or, as is often the case these days, simply by freeing up additional thermal headroom.

A block diagram of the RV740 architecture. Source: AMD.

AMD’s first vehicle for this 40nm process shares its architectural roots with the rest of the RV700-series chips, which power the Radeon HD 4000 series of graphics processors. In fact, the RV740 architecture looks to be only a slightly scaled-down version of the RV770 GPU from the Radeon HD 4850.

As you, erm, might be able to see if you squint really carefully at the block diagram above, the RV740 has eight SIMD shader arrays, each of which contains 16 superscalar execution units. Those units have five ALUs each, so the GPU has a grand total of 640 ALUs, or stream processors, as AMD likes to call them. Each SIMD array has a texture unit associated with it, and each of those can address and filter up to four texels per clock, so the RV740 as a whole can process 32 texels per clock. In terms of both shading and texturing, then, the RV740 has 80% of the RV770’s capacity, clock for clock.
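To make that arithmetic concrete, here's a quick sanity check of those totals (numbers taken from the paragraph above; the RV770 figures used for the 80% comparison are the commonly cited ones, included here as an assumption):

```python
# Per-clock throughput of the RV740, per the block diagram described above.
simd_arrays = 8               # SIMD shader arrays
units_per_simd = 16           # superscalar execution units per array
alus_per_unit = 5             # ALUs ("stream processors") per unit
texels_per_texture_unit = 4   # texels addressed/filtered per clock, per unit

stream_processors = simd_arrays * units_per_simd * alus_per_unit
texels_per_clock = simd_arrays * texels_per_texture_unit
print(stream_processors, texels_per_clock)  # 640 32

# The RV770 has ten such SIMD arrays (800 SPs, 40 texels per clock),
# which is where the 80% clock-for-clock figure comes from.
print(stream_processors / 800, texels_per_clock / 40)  # 0.8 0.8
```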

However, just like its older brother, the RV740 has four render back-ends. That gives it the ability to produce up to 16 pixels per clock, which it does with the same amount of antialiasing resolve power as the RV770, as well. Perhaps the biggest concession to its lower weight class is the RV740’s dual 64-bit memory interfaces—only half as many as its elder sibling. The first implementation of the RV740 makes up for it by using 800MHz GDDR5 memory, which transfers data four times per clock cycle rather than twice, like GDDR3. As a result, the Radeon HD 4770’s rated memory bandwidth isn’t too far behind that of the Radeon HD 4850.

At 750MHz, the Radeon HD 4770’s GPU clock frequency is a little higher than a stock Radeon HD 4850’s, allowing it to make up much of the ground lost by the omission of those shader arrays. All told, the 4770 lands roughly in between the Radeon HD 4830, which it replaces, and the 4850 in terms of key specifications. Here’s how the three compare.

                   Peak pixel    Peak bilinear      Peak bilinear    Peak memory   Peak shader
                   fill rate     texel filtering    FP16 texel       bandwidth     arithmetic
                   (Gpixels/s)   rate (Gtexels/s)   filtering rate   (GB/s)        (GFLOPS)
                                                    (Gtexels/s)
Radeon HD 4830     9.2           18.4               9.2              57.6          736
Radeon HD 4770     12.0          24.0               12.0             51.2          960
Radeon HD 4850     10.9          27.2               13.6             67.2          1088

Memory bandwidth is the only major category in which the Radeon HD 4830 is superior; the 4770 is otherwise faster across the board. Meanwhile, the 4850 trails the 4770 solely in terms of pixel fill rate, due to the 4770’s higher clock speed.
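These peak figures fall straight out of the clock speeds and per-clock widths. Here's a sketch for the 4770, using the clocks quoted above (the factor of two FLOPS per ALU assumes one multiply-add per cycle):

```python
# Back-of-envelope peak rates for the Radeon HD 4770.
core_clock_ghz = 0.75        # 750MHz GPU clock
mem_clock_ghz = 0.80         # 800MHz GDDR5

pixels_per_clock = 16        # four render back-ends
texels_per_clock = 32        # eight texture units, four texels each
stream_processors = 640
bus_width_bits = 128         # dual 64-bit memory interfaces
gddr5_transfers = 4          # GDDR5 moves data four times per clock

fill_rate = pixels_per_clock * core_clock_ghz                        # Gpixels/s
texel_rate = texels_per_clock * core_clock_ghz                       # Gtexels/s
bandwidth = bus_width_bits / 8 * mem_clock_ghz * gddr5_transfers     # GB/s
gflops = stream_processors * 2 * core_clock_ghz                      # multiply-add = 2 FLOPS

print(fill_rate, texel_rate, bandwidth, gflops)  # 12.0 24.0 51.2 960.0
```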

But here’s the kicker. At 40nm, the RV740 crams 826 million transistors into a 137 mm² die. Elder brother RV770 has 926 million transistors but occupies approximately 260 mm², nearly twice the size. Nvidia offerings in this price range are based on the G92 GPU, which we’ve measured at roughly 324 mm² in its 65nm form and 256 mm² at 55nm. (Both incarnations are still out in the wild, so you don’t always know which version you’ll get.)
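For a rough sense of what the process shrink buys, we can compare transistor density using the figures above. (The G92's 754-million-transistor count is Nvidia's published number, not something we measured, and all die areas are approximate.)

```python
# Approximate transistor density, in millions of transistors per mm^2.
chips = {
    "RV740 (40nm)": (826, 137),   # (millions of transistors, die area in mm^2)
    "RV770 (55nm)": (926, 260),
    "G92b (55nm)":  (754, 256),
    "G92 (65nm)":   (754, 324),
}
for name, (mtrans, area) in chips.items():
    print(f"{name}: {mtrans / area:.1f}M transistors/mm^2")
# The RV740 packs roughly 1.7x the transistors per unit area of the 55nm RV770.
```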

To give you a sense of the scale involved, I’ve included pictures of each of these chips below, next to a quarter for reference. My image sizing isn’t exact, but hopefully it’s close enough, in combination with the size reference, not to mislead.

RV770

RV740

G92 (the “b” rev or 55nm version)

G92 (the 65nm original)

Teeny, innit? Once AMD and TSMC work out the almost inevitable kinks in this brand-new fab process, the RV740 ought to bring considerable graphics power to the market at very affordable prices. That trend begins, obviously, with the introduction of the Radeon HD 4770.

The cards

AMD intends for the first Radeon HD 4770 cards to sell for about $110 straight up, but like almost everybody involved in PC hardware these days, AMD and its partners are hooked on mail-in rebates. The initial rebates for 4770 cards should promise $10 on Tuesday (six to eight weeks from now) for a hamburger today, taking the net price down to $100, also known in sales parlance as $99.99.

As complicated as that sounds, the 4770’s value proposition should be fairly simple to see. The cards pack 512MB of GDDR5 memory, and the two examples we’ve seen sport dual-slot coolers that should be relatively quiet.

Here’s the Asus card we used in testing, which comes with a very fancy-looking cooler.

This cooler appears quite similar to the one we recently tested on an Asus Radeon HD 4850 that had a major problem: with a card installed in the adjacent slot—either a video card for CrossFire or simply something large, like a sound card—the fan became starved for air and the GPU overheated. Naturally, we were worried about this cooler having the same problem, so we tested the Asus 4770 in CrossFire for any signs of overheating. Happily, it passed with flying colors, keeping GPU temperatures well in check during extended use without even making too much noise. We’ve noticed that this particular cooler is affixed to several brands of Radeon HD 4770 cards selling on Newegg, so it must not be an Asus exclusive.

Believe it or not, the card pictured above is our sample of the Radeon HD 4770 from AMD, and it comes complete with a dual-slot cooler that blows hot air out of the expansion port covers. This cooler is quite the improvement over the single-slot reference cooler on the Radeon HD 4850. Compared to the Asus card, this reference one is very perceptibly heavier, likely due to the presence of more copper in the heatsink under that plastic shroud. Don’t write the Asus off yet, though. The reference cooler is potentially louder, as we’ll show.

We needed a foil for the Radeon HD 4770, and the natural choice would seem to be the GeForce 9800 GT. These sell in the same basic price range as the Radeon HD 4770, although some after-rebate deals can take net prices as low as about 90 bucks.

Asus was kind enough to provide us with a sample of its GeForce 9800 GT Matrix for use in this review, and it’s a good representative of the breed, with a tricked-out cooler and a built-in HDMI port. In spite of this extroverted exterior, the Matrix doesn’t stray too far from formula. The GPU core clock is up 12MHz from stock, at 612MHz, but its 1500MHz shader and 900MHz memory speeds are bone stock for a 9800 GT.

Test notes

Speaking of straying from formula, we decided to try out some new games in different genres for this review, some with support for Nvidia’s exclusive PhysX API and some with support for the Radeon-exclusive DirectX 10.1. As you may know, we’ve generally found this latest generation of GPUs from AMD and Nvidia to be very evenly matched in terms of features and image quality, leaving price and performance as the primary deciding factors between the two brands. Perhaps looking at a broad scope of games and putting PhysX and DX10.1 into the mix will give us some new insights. Where possible, we’ve tested both with and without PhysX and DX10.1, to demonstrate the impact of these technologies.

Branching out into a bunch of new games tends to be time-consuming, so we decided to focus on the Radeon HD 4770 vs. GeForce 9800 GT match-up for the bulk of our testing. In order to also give you some broader context, we’ve included scores from 3DMark and Far Cry 2, along with power and noise testing, for a range of video cards. Those numbers were drawn from our Radeon HD 4890 vs. GeForce GTX 275 review and include results from older driver revisions, as noted in that article’s testing methods section. These scores should be decent enough as a point of reference, but be aware that the comparison to the newer cards and drivers may not be perfectly exact.

For the rest of our testing, we used FRAPS to capture frame rates while playing the games. We typically recorded frame rates over five gameplay sessions of 60 seconds each. We averaged the mean frame rate from each of the five sessions, and we also reported the median of the minimum frame rates from those sessions. This method of testing will inevitably introduce some variability, but we believe averaging five sessions ought to be sufficient to give us reliable results. However, for two games, Sacred 2 and Dawn of War II, we were not confident the 60-second window was long enough given the variability of the games’ performance over time, so we extended the window to five minutes and still recorded five sessions.
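The aggregation we just described can be sketched in a few lines: report the mean of the five per-session average frame rates, plus the median of the five per-session minimums. The numbers below are made up purely for illustration:

```python
from statistics import mean, median

# Five 60-second FRAPS sessions (illustrative numbers, not real results).
session_averages = [61.2, 58.9, 63.4, 60.1, 59.8]  # avg FPS per session
session_minimums = [42, 39, 45, 41, 40]            # min FPS per session

reported_average = mean(session_averages)   # what we chart as the average
reported_minimum = median(session_minimums) # what we chart as the minimum
print(round(reported_average, 1), reported_minimum)  # 60.7 41
```

Using the median of the minimums, rather than the lowest single value, keeps one anomalous dip from dominating the reported low.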

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor                Core i7-965 Extreme 3.2GHz
System bus               QPI 6.4 GT/s (3.2GHz)
Motherboard              Gigabyte EX58-UD5
BIOS revision            F4
North bridge             X58 IOH
South bridge             ICH10R
Chipset drivers          INF update 9.1.0.1007
                         Matrix Storage Manager 8.6.0.1007
Memory size              6GB (3 DIMMs)
Memory type              Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL)         8
RAS to CAS delay (tRCD)  8
RAS precharge (tRP)      8
Cycle time (tRAS)        24
Command rate             2T
Audio                    Integrated ICH10R/ALC889A with Realtek 6.0.1.5745 drivers
Graphics                 Asus Radeon HD 4770 512MB PCIe
                         with Catalyst 8.60-090316a-078299C drivers
                         Asus GeForce 9800 GT 512MB PCIe
                         with ForceWare 185.68 drivers
Hard drive               WD Caviar SE16 320GB SATA
OS                       Windows Vista Ultimate x64 Edition
OS updates               Service Pack 1, DirectX March 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics

                              Peak pixel    Peak bilinear      Peak bilinear    Peak memory   Peak shader arithmetic
                              fill rate     texel filtering    FP16 texel       bandwidth     (GFLOPS)
                              (Gpixels/s)   rate (Gtexels/s)   filtering rate   (GB/s)        Single-issue  Dual-issue
                                                               (Gtexels/s)
GeForce 9500 GT               4.4           8.8                4.4              25.6          90            134
GeForce 9600 GT               11.6          23.2               11.6             62.2          237           355
GeForce 9800 GT               9.8           34.3               17.1             57.6          339           508
GeForce 9800 GTX+             11.8          47.2               23.6             70.4          470           705
GeForce GTS 250               12.3          49.3               24.6             71.9          484           726
GeForce 9800 GX2              19.2          76.8               38.4             128.0         768           1152
GeForce GTX 260 (192 SPs)     16.1          36.9               18.4             111.9         477           715
GeForce GTX 260 (216 SPs)     17.5          45.1               22.5             117.9         583           875
GeForce GTX 275               17.7          50.6               25.4             127.0         674           1011
GeForce GTX 280               19.3          48.2               24.1             141.7         622           933
GeForce GTX 285               21.4          53.6               26.8             166.4         744           1116
GeForce GTX 295               32.3          92.2               46.1             223.9         1192          1788
Radeon HD 4650                4.8           19.2               9.6              16.0          384
Radeon HD 4670                6.0           24.0               12.0             32.0          480
Radeon HD 4770                12.0          24.0               12.0             51.2          960
Radeon HD 4830                9.2           18.4               9.2              57.6          736
Radeon HD 4850                10.9          27.2               13.6             67.2          1088
Radeon HD 4850 1GB            11.2          28.0               14.0             63.6          1120
Radeon HD 4870                12.0          30.0               15.0             115.2         1200
Radeon HD 4890                13.6          34.0               17.0             124.8         1360
Radeon HD 4890 OC             14.4          36.0               18.0             124.8         1440
Radeon HD 4850 X2             20.0          50.0               25.0             127.1         2000
Radeon HD 4870 X2             24.0          60.0               30.0             230.4         2400

We’ve already discussed how the 4770 compares to a couple of its siblings, but here’s a broader look at the specifications of recent video cards. Note that these numbers are, where applicable, derived from actual clock speeds of the cards we’ve tested; some of them stray from the baseline clocks established by the chipmakers.

That’s not the case with our Radeon HD 4770, though, which runs at its stock clocks. Even so, you can see that the 4770 has a higher pixel fill rate than the GeForce 9800 GT, and depending on whether you count the G92’s dual-issue capability or not, nearly two or three times the peak shader FLOPS. The 9800 GT leads by pretty clear margins in terms of memory bandwidth and texture filtering capacity, but the RV700-series GPUs tend to overachieve on this front compared to their specs.

In fact, the 9800 GT almost catches the 4770 in the pixel fill rate test, and the 4770 takes a lead in the texturing benchmark.

The shader benchmark results are closer than you might think given the disparity in theoretical capacities, but the 4770 only ties the 9800 GT once, in the GPU cloth test, where the GeForces do especially well. The 4770 runs the board otherwise, signaling that its shader superiority isn’t just on paper.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

This gives us some context in which to place the $100-ish graphics cards we’re featuring today, but with these image quality settings, Far Cry 2 is rather hostile turf for both of them. The 4770 performs about as expected compared to the Radeon HD 4850, though, just a few frames per second behind it. The 9800 GT can’t keep up with its ostensible competition here.

Tom Clancy’s HAWX

HAWX is one of several games we used in this review that probably delayed our publication date by causing me to conduct additional, uh, testing beyond the necessary amount. I haven’t seen too many good flying games lately, and the mix of arcade, sim, and RPG (really) elements in HAWX makes quite the cocktail.

At least, that’s my excuse.

HAWX will run in DirectX 9, but it looks best in DX10, where some additional lighting effects come into play, including ambient occlusion. This game also uses DirectX 10.1 to improve performance on Radeon graphics cards.

For the record, DirectX 10.1 is a set of extensions to Microsoft’s main graphics programming interface. DX10.1 gives developers more control over the way antialiasing hardware operates and enables a new form of data organization, a cube map array, that can help accelerate certain lighting algorithms, including an approximation of global illumination. Both of these things typically result in performance increases, if developers take advantage of them. Since DX10.1 compatibility is an all-or-nothing deal and today’s GeForces can’t support the full DX10.1 feature set, no current GeForce GPU can claim to be DX10.1 compliant. AMD has taken an active role in encouraging game developers to use DX10.1, and some recent games like this one make use of it, as a result.

We tested the 4770 with both DX10 and DX10.1 to see the difference, using 1680×1050 resolution with all of the in-game quality options at their best and 4X AA enabled.

The newest Radeon runs HAWX faster than the 9800 GT either way, but turning on DX10.1 gives it an additional boost. This one isn’t even close—the 9800 GT’s average frame rate is lower than the 4770’s minimum.

Sacred 2: Fallen Angel

Sacred 2 is billed as an “action RPG” in the vein of Diablo II, but it has much prettier graphics. The game’s developers have raised the ante even further by adding some effects courtesy of Nvidia’s PhysX technology, which enables GPU-accelerated physics on recent GeForce GPUs. PhysX uses the shader processors to handle mathematical simulations of things like collisions, fluid flow, and realistic cloth.

I don’t believe there’s anything inherent in the architecture of current Radeon GPUs that would prevent them from being similarly effective at processing physics routines. In fact, AMD has demonstrated GPU-accelerated physics using a combination of the OpenCL GPU-compute API and Havok middleware. But Nvidia owns PhysX and has elected not to extend support for GPU acceleration to third parties.

Like most games that use hardware-accelerated PhysX, the effects in Sacred 2 don’t involve actual physical interactions in a way that affects the outcome of the game. They just enhance its look—not a bad contribution from a graphics card, if you think about it.

With PhysX, leaves swirl about the landscape, blowing in the wind

No PhysX, no leaves

In Sacred 2, turning on PhysX effects (via an option in a game menu) is mostly about leaves and other bits flying around on the screen. Without PhysX, the game looks great and seems fairly normal. With PhysX, you get lots and lots of leaves swirling about everywhere, especially when you cast a spell. I’d say the additional smithereens kicking about are an improvement overall, but they are probably overdone. Developers: just because you can doesn’t mean you should. Sometimes, less is more.

We tested Sacred 2 at 1680×1050 resolution using its “high” quality presets along with 2X antialiasing. I chose 2X AA because going up to 4X seems to exact a big frame-rate hit. As a result, I found myself playing this game (for way too many hours) at 2X AA for the best combination of image quality and performance, even when I wasn’t testing.

Without PhysX, the 4770 runs Sacred 2 faster than the 9800 GT does. To me, the 4770 runs the game fluidly at these settings, while the 9800 GT feels a little sluggish from time to time.

With PhysX enabled, the 9800 GT’s frame rates drop quite a bit. You might have to compromise on antialiasing or dial back some other in-game quality settings to offset the hit. Then again, I tried turning on PhysX effects with the Radeon HD 4770, where the CPU has to do all of the work, and the performance hit there was brutal—we were into the 3-4 FPS range, too slow to play at all.

Mirror’s Edge

We’ve already covered the way that Mirror’s Edge uses PhysX, if you’re not familiar. For our testing, we chose to play through a portion of the game that includes the long slide down the side of a building shown in the screenshot above. This segment starts out in a hallway, which you run through while a helicopter outside pours bullets in through the glass, shattering the windows and causing the blinds to warp and wiggle. Without PhysX effects enabled, the blinds aren’t even there, and there are fewer shards of glass flying about. Once you’re out of the hallway, the spruced up PhysX effects continue with enhanced bullet impact effects and the like.

Again here, Mirror’s Edge looks great without the added eye candy, and sometimes the extra smithereens seem a little over the top. Generally, though, they’re a welcome visual improvement.

The 9800 GT and 4770 are very evenly matched without PhysX enhancements in the picture. With PhysX enabled, the 9800 GT maintains decent frame rates, while even our fast quad-core processor isn’t sufficient to keep the 4770 in playable territory. At least the 4770 was quick enough here to allow me to test. In some other areas of the game, leaving PhysX enabled with the Radeon card slowed performance to a crawl.

World of Warcraft

World of Warcraft is, not to put too fine a point on it, the single most popular PC game ever—and a likely application for a graphics card in this price range. To test WoW performance, we used the trial version of the game, cranked up all of the image quality settings to “ultra,” and enabled 4X antialiasing. Even that wasn’t too much of a challenge for these cards, so we bumped the display resolution up to 1920×1200. I then took it upon myself to do something about the terrible wolf menace. Over and over again. This may not be the most strenuous test of WoW performance, but playing solo against computer-run monsters should keep network traffic out of the equation, so that the video cards are our primary constraint.

Both of these cards run WoW perfectly well at this resolution, but the 9800 GT has a pronounced lead over the Radeon. That perhaps makes some sense, because the simpler shader effects in WoW aren’t likely to tax either GPU too much, so the 4770’s superior shader throughput probably isn’t much help to it. The 9800 GT’s higher memory bandwidth could be giving it the edge here. Just a guess, though.

Dawn of War II

The final game in the mix today is Dawn of War II, a real-time strategy game with a bit of an unconventional bent. I was shocked to be able to pick up and play almost immediately, with very little help from a tutorial. This one has fast action, too, with no time spent on building units. DoW2 doesn’t use DX10.1 or PhysX, but it does look very nice and turned out to be easier to benchmark than most RTS games I’ve tried.

Chalk up another one for the Radeon HD 4770, which takes this game in convincing fashion.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution.

True to form for a small, 40nm GPU, the Radeon HD 4770 draws less power than any other graphics card we measured under load. When idle at the Windows desktop, it draws more power than a couple of GeForce cards, though not the 9800 GT. Nvidia has made some nice strides in idle power use with its newer designs, but the older 9800 GT (which is just a renamed GeForce 8800 GT) doesn’t benefit from that work.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

I’ve tested the stock-AMD and Asus versions of the Radeon HD 4770 here, since they have different coolers. As you might expect from looking at the power consumption numbers, though, neither has to be very loud to keep the RV740 GPU cool. Surprisingly, the Asus 9800 GT Matrix is the noisiest card we tested under load. The Matrix cooler has aggressive fan speed settings and is made for overclocking. I know for a fact we’ve tested much quieter GeForce 9800 GTs in the past.

Speaking of noisy coolers, you should hear how the reference 4770 card sounds at power-on, when it briefly cranks up to full speed. The thing has a medium-pitched whine/roar, evoking the infamous Dustbuster on the GeForce FX 5800 Ultra, not the silky-smooth hiss of, say, a Radeon HD 4870 reference cooler. Obviously, the 4770 cooler had no need to reach top speed when we tested noise levels, but I found that it did briefly step to a higher gear from time to time as I used it in other games. To give you some idea how this cooler’s peak speed compares to its usual noise levels, I manually set its speed to 100% and measured a noise level of 63 dB. By contrast, the Asus card’s cooler topped out at just 58.4 dB, worlds apart to my ears.
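"Worlds apart" may sound strong for 4.6 dB, but decibels are logarithmic: every 10 dB represents a tenfold increase in sound power. A quick conversion of the two full-speed readings (a sketch, using the standard dB formulas):

```python
# How much louder is 63 dB than 58.4 dB?
delta_db = 63.0 - 58.4
power_ratio = 10 ** (delta_db / 10)      # ratio of sound power
pressure_ratio = 10 ** (delta_db / 20)   # ratio of sound pressure
print(f"{power_ratio:.2f}x the sound power, {pressure_ratio:.2f}x the pressure")
```

That works out to nearly three times the sound power at the reference cooler's peak, which squares with the subjective impression.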

GPU temperatures

I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.

The 4770 matches the 9800 GT at a relatively cool 65°C. That’s a far cry—well, 20°, really—from the temperatures we’ve observed under load with a stock Radeon HD 4850. Among the cards tested, only two Asus Radeon HD 4800-series cards maintain lower temperatures, although those same coolers essentially fail in CrossFire mode, as the numbers above reflect. (They were recorded just before the system locked up.)

Overclocking

Overclocking the Radeon HD 4770 using AMD’s new Overdrive auto-tuning tool was as simple as pressing a button and being patient. Finding the card’s top GPU and memory speeds took some time, but when the dust had settled, we had a GPU clock of 820MHz and memory speeds of 840MHz. That’s a 70MHz increase in GPU clocks, but only 40MHz in memory.
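In percentage terms (relative to the 750MHz core and 800MHz memory stock clocks quoted earlier; a quick sketch):

```python
# Overclocking headroom found by AMD's Overdrive auto-tuner.
stock_gpu, oc_gpu = 750, 820   # MHz
stock_mem, oc_mem = 800, 840   # MHz

gpu_gain = (oc_gpu - stock_gpu) / stock_gpu * 100
mem_gain = (oc_mem - stock_mem) / stock_mem * 100
print(f"GPU: +{gpu_gain:.1f}%, memory: +{mem_gain:.1f}%")  # GPU: +9.3%, memory: +5.0%
```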

Looks to me like the 4770’s memory speed is the limiting factor in its performance here. The GPU headroom we found is great, but isn’t enough to take the 4770 up to 4850 levels.

Conclusions

The Radeon HD 4770 looks pretty darned good to me. The performance contest between the 4770 and the GeForce 9800 GT isn’t a clean sweep for either card, likely for the reasons we saw illustrated in our look at the cards’ specifications and synthetic performance results. The 4770’s peak shader arithmetic power is unmatched by the 9800 GT, but the GeForce has notably more memory bandwidth. Which card is faster will depend largely on the requirements of the game in question.

Still, the 4770’s shader power gave it a bit of an edge overall in the games we tested, and I’d be more confident about its ability to crunch through advanced visual effects in future games, too. The 4770 uses a little less power than the 9800 GT, which also enables some nice, low noise levels—without the high GPU temperatures we eyed with suspicion on the reference Radeon HD 4850s.

I suspect Nvidia will respond by dropping prices, perhaps sufficiently to make the 512MB version of the GeForce GTS 250 into a direct competitor for the 4770. If they do that, hey, it’s game on again. However, there have to be some limits to Nvidia’s price flexibility with a larger 55nm chip and a 256-bit card. We’ll have to watch and see how this one shapes up.

Then again, I know times are hard, but one has to wonder about the value proposition of any of the cards in this price range when you can sometimes snag a Radeon HD 4870 for $150. Dropping another 40 bucks on the 4870 would take you into another performance league entirely and guarantee more longevity from your graphics card. Also, with these newer games, we did see some marginal frame rates from the 4770, even at the relatively low display resolution of 1680×1050. In many cases, moving up to a more capable GPU will mean fewer image quality compromises with current and near-future games.

As for our look at PhysX and DirectX 10.1, I had hoped to test at least one more DX10.1 game to keep everything even-Steven, but I just ran out of time. There are some good candidates out there, like Stalker: Clear Sky and Battle Forge. I’m hoping to get to them before too long.

For now, I suppose I should modify what I’ve been saying about PhysX and DX10.1 for some time now. They are indeed in many ways equal and opposite features that make their respective GPU types more compelling. Yes, PhysX and DX10.1 are as much about software differences as they are about hardware capabilities, but they do offer their advantages. I’ve said repeatedly that I wouldn’t pick, say, a GeForce over a Radeon in order to get PhysX, or vice-versa for DX10.1. But we are, after all, dealing with graphics cards here that cost about the same as two top-flight games. If you’re upgrading in order to play a specific game or two, heck, follow your fancy—within reason. If you’re all about Mirror’s Edge, go for the GeForce. If HAWX is your thing, definitely get the Radeon. Just be careful: as we saw, the 4770 is probably the better card for Sacred 2, even though it lacks PhysX compatibility. There’s still much to be said for picking the best card overall.

Comments closed
    • Marius-Mark
    • 10 years ago

    I’ve had nothing but problems from 2 XFX 4770’s I received over the last 4 weeks from Newegg. Both cards have refused to give me a screen AFTER booting into bios. I’m using a Gigabyte GA-EP45T-UD3LR MB, Intel Q9400 cpu, Noctua cooler, 4 gigs of Kingston PC3 10600 and a Soundblaster XFI card. The HDD is a WD 750. The OS is XP Pro/32bit (I think). The PS is an Antec True Power 550. Can anyone on this site look at this system profile and see any reason why both of these cards refuse to give me a Windows screen? I know that the slot is good because I bought an XFX 4350 to test it, and that card works fine. I also installed the drivers for that card off the XFX disk that came with it, so it seems to me that those drivers should allow the 4770 to function at some level prior to installing its drivers. The CD drive is a 48X Lite-on. The machine works fine with the 4350 installed, but will not show me the welcome screen with the 4770 installed. I’m using the provided DVI to VGA adapter to go to a Proview 19 inch monitor. The fan on the card is working. Can someone tell me what I’m missing? I can’t believe that Newegg would send me two bad cards in a row. The odds of that are almost unbelievable. Thanks in advance for any help I find here. Marius-Mark

    • Bensam123
    • 10 years ago

    I know WoW draws a lot of tards, but it’s hardly a test of GPU power. Perhaps if you’re in a cluster screw in alterac valley or sitting in the middle of a city and just watching tons of people go by, that might do something. Even still, it made me cringe when I saw it in the benchmarking suite. 🙁

    “I was shocked to be able to pick up and play almost immediately, with very little help from a tutorial.”

    And you put it down just as fast as you pick it up.

      • Meadows
      • 10 years ago

      It’s a fair GPU benchmark at high resolutions with *[

    • Matt v
    • 10 years ago

    What’s with the seeming lack of availability of the AMD version of this card, which has a nice enclosure around the fan that directs hot air out the rear?

    I’ve tried fruitlessly to find this particular AMD version, but can only locate several versions by other manufacturers all having the same crappy open air fan that does nothing to help actively direct heat out of the computer’s case.

    • sergeant_skyes
    • 10 years ago

    well nice review i finally made up my mind to grab this one and replace my 3650 running on my P4 3ghz CPU!!! I guess ill wait until 2010 and then upgrade my CPU to a 6 core one.

    • LiamC
    • 10 years ago

    /reply to #34…

    Thanks, that’s good to know, though I’m not holding my breath.
    4830 prices have plummeted, so I thought I’d have a look at those as well. Unfortunately, they all seem to be two-slot as well.

    <rant>Why can’t the makers put the cooler dimensions in an easy-to-find place? For most of them, I’ve had to email them to find out. And in six out of ten cases, I’ve emailed the (r)etailer as the makers don’t bother to respond…
    </rant>

    • Tarx
    • 10 years ago

    Folding performance cares about both shader speed and the number of shaders. But with way more shaders than can be used (as ATI has), it ends up being shader speed that counts. Nvidia could also have some extra optimizations.
    As the projects grow, ATI cards are starting to do better. E.g., the 4870 has almost doubled its PPD on some of the biggest GPU projects. But the gap seems a bit too big to disappear anytime soon.
    We’ve seen some low-shader-count Nvidia cards take a significant hit on some of the biggest GPU projects. But cards like the 250 seem to have enough shaders (for now).
    edit: reply to 35…

      • swaaye
      • 10 years ago

      Don’t forget that the way ATI counts “shaders” is kinda dumb. They add up each separate ALU in their 5-way VLIW shader units. You probably should divide that number by five to get a real count of the number of shader units they have.

      NVIDIA counts the same way but their architecture works quite differently so it’s more accurate in a way. Their architecture is based on separate scalar stream processors.

      ATI’s shader units potentially get a whole lot more done per clock (key word = potentially), but counting the individual ALUs of those units as separate “shaders” is rather off the mark I think. Their design unfortunately seems to give marketing a feeling of inadequacy in the numerical superiority dept. Big numbers obviously excite people though so maybe it is a good idea.
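
      The divide-by-five correction swaaye describes is simple arithmetic; as a quick sketch (the function name is mine, and the ALU figures are the ones from the review’s spec discussion, not an official AMD formula):

```python
# swaaye's point as arithmetic: ATI's advertised "stream processor"
# figure counts every ALU inside its 5-wide VLIW shader units, so
# dividing by five yields the number of actual shader units.

VLIW_WIDTH = 5  # ALUs per shader unit in ATI's R600/RV700-era designs

def vliw_units(advertised_alus: int, width: int = VLIW_WIDTH) -> int:
    """Convert the marketing ALU count into a count of VLIW shader units."""
    return advertised_alus // width

print(vliw_units(640))  # RV740 (HD 4770): 640 ALUs -> 128 VLIW units
print(vliw_units(800))  # RV770 (HD 4850/4870): 800 ALUs -> 160 VLIW units
```

      By that count, the gap between ATI’s and Nvidia’s “shader” numbers looks much smaller than the spec sheets suggest.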

        • travbrad
        • 10 years ago

        Yep, that’s also why they include large amounts of memory on very low-end cards that gain virtually no benefit from the additional memory. People see “512MB” on the video card box and think it’s automatically great. Obviously a 7600GS is no match for an HD4850 though.

        I can’t begin to count the number of people who have trouble running games and then say, “but it’s x number of MB”.

    • eitje
    • 10 years ago

    Can we get temp measurements on the two different 4770 coolers?
    Or at least a note letting us know if the temp was the same between the two coolers?

    • masaki
    • 10 years ago

    So, another good review, but you’ve been pairing the new HD 4770 up with the relatively old 9800GT.

    Talking about recent releases, maybe the HD 4770 should be paired up with the GTS 250. Of course, there are some GTS 250 numbers here, so they speak for themselves.

    But at the moment, a bad one economically, is there a chance that TR (Cyril especially) could compile a GPU value article like the one you did for CPUs? I would like to see those numbers and graphs, and it may help people choose well-priced, good-performing hardware. Too many to choose from today.

    • tay
    • 10 years ago

    I don’t like the fact that extremely high-end cards are included, especially in SLI and CrossFire. Do it like the Xbit Labs review and only include cards in a similar price and performance range. It clarifies matters a whole lot.

    No offense intended, TR is my fav tech hardware site, but that Xbit review was A+ while this one was an A-.

    • Fighterpilot
    • 10 years ago

    This new GPU scales really well with CrossFire.
    Two of these puppies run about 20% faster than a Radeon 4890.
    That’s really excellent graphics performance for $200.

    • 0g1
    • 10 years ago

    A year ago my 9800GTX was the best thing around. Now it gets owned by a $99 card. Amazing how fast things change.

    • Firestarter
    • 10 years ago

    This must make a killer laptop GPU

      • Jigar
      • 10 years ago

      Exactly my thoughts…

    • Fighterpilot
    • 10 years ago

    That sure is a nice new chip. So small, with good power draw and heat levels, yet it’s a strong performer.
    I notice DX 10.1 in HAWX was rockin’… it does look good, I have to say.
    Still, if I really want a few leaves to blow past my jet in a dogfight, I oughta get CUDA, I suppose :)

      • Meadows
      • 10 years ago

      While you’re mixing unrelated things from the review, that actually was funny.

      • asdsa
      • 10 years ago

      You get a halved frame rate with the PhysX leaves as a bonus. :)

        • Meadows
        • 10 years ago

        Which is ten times faster than the same with ATI ;)

    • esterhasz
    • 10 years ago

    I must say that this review of the 4770 – like most of them out there – does not convince me. Testing a mid-range/budget card on the fastest processor there is leaves a strange taste in the mouth. Somebody on a tight budget quite probably does not have a Core i7 rig, but rather something along the lines of an X2 or a 4300. So, yeah, the 4870 is faster than the 4770 (what a surprise!) on the i7, but how about on a slower system? I’d love to see a budget card tested in a budget context. Sure, a second testbed is a lot more work, but it would distinguish the review from the armada of identical propositions out there…

    • Chillectric
    • 10 years ago

    Hmm, 45 FPS in WoW at 1920×1200 with 4xAA on Ultra… I can barely get that with my HD 4850 at 1680×1050 with 4xAA on Ultra. Are shadows turned on?

      • eon_blue
      • 10 years ago

      HD 4890 OC’d to over 1GHz on the core
      E8400 @ 4GHz

      Lucky to be pulling 15 fps in Dalaran at peak hours @ 1920×1200.
      25-man raids: 25-30 fps.

      • Meadows
      • 10 years ago

      The zone Scott tested was hardly demanding, in terms of GPU utilisation of course.
      Yes, Ultra means everything is turned on or maxed out. I’m guessing you have 1-2 expansion packs installed, and they have progressively more demanding environments and encounters – some of the latest locales could shoot the kneecaps off these tested cards if antialiasing remained on.

      Then again, getting a character there is so much time and effort that TR would need support from Blizzard, including free top-level characters and items, if they wanted to test without wasting approximately 2,000 hours on it.

      • Bensam123
      • 10 years ago

      WoW is a coding pile-o-ass, like Supreme Commander. I’ve heard of people running 8800s in SLI seeing frame rates in the teens while sitting in some of the capitals. I don’t think results in WoW reflect real-world performance.

        • Meadows
        • 10 years ago

        I’ve never heard of that. They must’ve had terrible processors (or the bad CPU overhead from nVidia drivers showed up too).
        I could go to any capital, any time of day, and still get over 20 fps with a stock 8800 GT. At 2048×1536. With antialiasing, if you want.

        Wintergrasp battles inside the fortress are another thing; that’s where frame rates plunge to (or below!) 10-15 fps due to a lot of things converging at the wrong moment.

      • HurgyMcGurgyGurg
      • 10 years ago

      Slow down for a minute before you go off like that.

      I could type several pages here about how basing an F@H GPU purchase on current performance is a bad idea. (I already have, if you care to dig around in the TR Folding Forum.)

      To put it simply, the AMD side of the spectrum is moving rapidly, but there is not a single HD 4000-series optimization yet. Some reports suggest a 2.5-3x improvement is possible.

      Conversely, Nvidia PPD is already maxed out; they have come to the end of the road for optimizations. In fact, Nvidia PPD will actually decrease from here on out as they shift to larger WUs.

      And TechPowerUp’s folding benchmarks are wildly inaccurate, as they do not standardize which WUs are folded. PPD can vary by over 1k between WUs, and you can see this in their benchmarks. The HD 4890 performs worse than the HD 4870 – why? A different WU. And they don’t even try to run two clients for their X2 cards.

      I have an HD 4870; some WUs get only 3k PPD, some get near 5k PPD, and in November you were lucky to get 3k PPD at all on an HD 4870.

      However, I agree TR needs a GPU folding section, but a standardized test needs to be developed.

        • d2brothe
        • 10 years ago

        Also, almost nobody cares about F@H performance. These are graphics cards; 99.999% or more of purchasers only care about gaming performance.

          • Tarx
          • 10 years ago

          TR already includes folding results for CPUs (although not on the SMP client).
          TR is one of the top teams out of thousands worldwide, but it is slowly slipping.
          TR’s folding team has hundreds of members.
          The podcast already stated they were short on time.

          However, the problem with the results is that the folding client doesn’t fully use the GPU, especially in ATI’s case, where the cards have way more shaders than are needed. So for now Nvidia cards give much better results (although lower-end Nvidia cards’ PPD jumps around a lot depending on the project). But who knows – if there were a sudden switch to new, huge projects tomorrow, all the older test results would be invalid.

          Whether it’s worth including in a review is up to the site.

    • LiamC
    • 10 years ago

    Nice review.

    Any idea on when/if we’ll see a single-slot cooling solution? I’d just love to drop one of these into my HTPC to replace an HD 3650, but it has to be single-slot.

      • Palek
      • 10 years ago

      I recall a news article (sorry, don’t know where or when) in which AMD/ATI was quoted as saying that the reference cooler would be two-slot, but that a single-slot cooler configuration was most certainly possible and card vendors would be free to implement such a config.

      Personally, I’d like to see a Radeon 4670 with a DECENT single-slot cooler that has a variable-speed fan – I know of no such card in existence at this moment. I would exchange my current HTPC card in a heartbeat. The 4670 is a great little card but way too noisy…

    • jonybiskit
    • 10 years ago

    Help. A Radeon with PhysX? Is there something I missed in the article, or have I been wrong about something this whole time?

      • HurgyMcGurgyGurg
      • 10 years ago

      PhysX can run on the CPU as well.

    • BoBzeBuilder
    • 10 years ago

    Nice review, and good to see some new games in the benchmarks, but what about Crysis, CoD, and L4D? Will they be included in future reviews?

    • ssidbroadcast
    • 10 years ago

    Thanks for including Dawn of War II in the benchmarks. That game looks really neato.

    • Vasilyfav
    • 10 years ago

    Again, not very impressed with the card, since it draws 15W more than the GTS 250 at idle, negating most of the energy savings unless you are folding 24/7. And the card gets beaten by the 4850 in at least half the tests, so it brings nothing new from a performance or value standpoint, since the 4850 costs on average only $5-10 more.

    Come on ATI, bring us some high end 40nm parts, and then I’ll be sold.

      • Konstantine
      • 10 years ago

      You can edit your 2D profile in the BIOS and re-adjust the clocks and voltages…
      What matters is the load power consumption…

    • PRIME1
    • 10 years ago

    The GTS250 looks to be the best bang for the buck in this range.

      • FuturePastNow
      • 10 years ago

      I’m thinking so as well. On top of the very slightly better performance, CUDA, and PhysX, I’m willing to pay an extra $10-20 for a cooler that exhausts out the back.

      • MadManOriginal
      • 10 years ago

      I wouldn’t call it the same price range, at least not the GTS 250 that’s on TR’s chart. That one is a 1GB model and is nearer $150. The 512MB card is basically a 9800GTX+, right? In the few tests where it’s present here it doesn’t do so well, though the dual-slot exhaust cooling is nice.

        • continuum
        • 10 years ago

        True. I do think the GTS250 makes the most sense in this competition as far as performance… nVidia’s definitely outgunned here.

    • travbrad
    • 10 years ago

    Is it just me, or does this card seem to be competing with another of AMD’s cards (the 4830)? The 4830 wasn’t included in the charts, but I imagine the performance would be very similar, and the price is very similar as well. The 4830 is a bit cheaper and probably a bit slower, but the difference must be incredibly small (likely only a couple of FPS in most games).

    Having more choices is always nice, of course, but this card just seems to have such a small “window” of performance and price. You can get something almost as fast for $20 less, or something significantly faster for $30 more.

      • Bruce
      • 10 years ago

      Re-read page 1 – this replaces the 4830.

        • travbrad
        • 10 years ago

        Ahh, OK, I missed that bit. Thanks :)

        In that case I hope it comes down in price a bit. The 4830 was a better value for your money. New cards always take a little while for prices to settle, though, so I suppose it’ll get there. Right now it is $20-$25 more expensive for almost identical performance.

        • ssidbroadcast
        • 10 years ago

        Since this replaces the 4830… I kinda wanted to see the 4830 in the benchmarks.

        I know, I know…

          • travbrad
          • 10 years ago

          I don’t like to promote other sites, but AnandTech has the 4830 in their review if you’re interested. The 4770 is faster than the 4830 by maybe 5-10% on average, according to them.

          So once the prices settle down it should be a pretty good value. At the moment, though, it’s over 20% higher in cost for 5-10% more performance. If the idea is to reduce manufacturing costs, it would be nice if some of that got passed on to the consumer.

          • BobbinThreadbare
          • 10 years ago

          Count me as disappointed that the 4830 wasn’t included. Shouldn’t we be able to see for ourselves if the new product is actually better than the old one?

          Also, I would have liked to see crossfire performance.

            • LiamC
            • 10 years ago

            What I would like to have seen is how this mid-range card compares to similar cards from a generation or two ago – say, an HD 3650/2600 XT or Pro, a 7900 GS, or similar. Is it worthwhile to upgrade?

            And will there be a 4750?

            • Meadows
            • 10 years ago

            It’s worthwhile to upgrade; it should be 3-5 times faster than those, if memory and gut instinct serve me correctly.

    • Bruce
    • 10 years ago

    You have transposed two digits in the 4870, 4th paragraph on the last page.

    • shank15217
    • 10 years ago

    WoW classic has vastly different graphics requirements (because it’s four years old) than WotLK, the latest expansion, so the WoW benchmark is not very valid. You can get frame rates dipping into the mid-teens with the latest cards in heavy-traffic areas like Dalaran.

    • Lazier_Said
    • 10 years ago

    Outstanding results for what must be a very cheap card, with its small die and 128-bit PCB.

    The cheapest 4770 at Newegg is $117 shipped, which at just $3 less than a GTS 250 doesn’t really reflect the bargain this has the potential to be.

    • no51
    • 10 years ago

    Ooh.. new games for benchmarking, and a fine selection at that.

      • DrDillyBar
      • 10 years ago

      Agreed.

    • ish718
    • 10 years ago

    The performance gain going from DX10 to DX10.1 in HAWX is impressive. Optimization is always a good thing.

      • TurtlePerson2
      • 10 years ago

        Makes me wish there were more DX10.1 games out there. The AA benefits are great.

        • travbrad
        • 10 years ago

        Yeah, I remember Assassin’s Creed had similarly large jumps in AA performance with 10.1.

        Unfortunately, 10.1 support can really only be added as an “extra” or “add-on,” since Nvidia cards don’t support it, and a lot of game devs don’t feel the extra time and resources are worth it (they can’t even release games without major bugs a lot of the time).

    • SecretMaster
    • 10 years ago

    Two things, Damage.
    1. On the conclusions page you say “one has to wonder about the value proposition of any of the cards in this price range when you can sometimes snag a Radeon HD 4780 for $150” – it should be 4870. Some people will probably nitpick that.

    2. Load temperatures are dramatically different from the original 4870/4850 reviews. I remember these things used to be in the high 80s/90s degrees Celsius. Has AMD changed the fan-speed profile on newer models of the cards? I remember in the 4890 review you said AMD opted for cooler cards/noisier output on the 4890.

      • d2brothe
      • 10 years ago

      Some people: AKA you :P… </nitpick>

    • FuturePastNow
    • 10 years ago

    Excellent review. Thanks for comparing the coolers.

    • FireGryphon
    • 10 years ago

    I’ll gladly thank you Tuesday, for a review today.

    Awesome review, as always. This chip looks like a winner.

    • AlphaGeek
    • 10 years ago

    Why did you guys leave the 4830 out of the performance testing data graphs, after including it in the theoretical-performance table on page 1?

    I’ll freely admit that my immediate, selfish purpose for reading this review is to figure out if I made a mistake buying a 1GB R4830 card just two weeks ago. It sure would be nice if you could compare it to the card it’s presumably replacing at the $100 price point.

    • BooTs
    • 10 years ago

    Wow. I had no idea AMD had something like this coming out. The modest overclocking is a little disappointing, but hopefully that will be different when higher-end cards are released on the 40nm fab.

    Excellent review as always. I especially liked the comments on HAWX. I saw that game on Steam and have been curious, as my friends and I might need something to follow Left 4 Dead.

    • ludi
    • 10 years ago

    Good review.

    Maybe somebody who has been following the market more closely can contextualize this for me. I have an 8800GT/256MB right now. In loose percentage terms of both performance and audible noise, what are the advantages of upgrading from that, to this?

      • BooTs
      • 10 years ago

      If you’re playing a lot of newer games, you might see some benefit from the additional on-board memory and the modest performance increases.

      If you want a significant boost in performance, you should probably look at a higher-end card like a 4870 or GTX 260.

        • henfactor
        • 10 years ago

        I concur. You seem to like to hang on to your cards for a while, so just take the $160 plunge (probably less than you paid for your 8800 GT) and be game-happy for the next two to three years!

      • FuturePastNow
      • 10 years ago

      Depends in part on the resolution. If you’re at 1280×1024 or below, there’s little point in upgrading from an 8800 GT.

        • ludi
        • 10 years ago

        Thanks (everyone). I’m actually using a 22″ monitor so 1680×1050 is my native res. However, I don’t play a lot of newer games anymore, but I am developing a fondness for cool operation and reduced noise. A 4870 is definitely out and I can’t throw that kind of money into a graphics card right now anyway. However, for $100, I might bite at something in the next month or two…

          • Veerappan
          • 10 years ago

          If you’re not playing many newer games, I’d say wait until this fall when the new DX11 cards start coming out, if you can put off the purchase that long. It’ll give you a little more longevity in the new card, and the 40nm process should be pretty refined by then, which should give you lower power draw and noise than even the current cards.

          Just a thought, but it makes sense in my head :)

      • Meadows
      • 10 years ago

      A lot, because you’re handicapped VRAM-wise too. Then again, for silent operation, your best bet is simply looking for something passive in the approximate range between the 9600 GSO and the HD 4770.

        • MadManOriginal
        • 10 years ago

        For silence, an aftermarket cooler is the way to go; some of the stock passive coolers are OK, but many are barely sufficient and depend a lot on supplemental airflow. An HR-03 is the ‘best’ but expensive, and only a marginal improvement over the pretty low-cost Accelero S1. I had the latter on a passive HD 4850 in a case with only one low-speed intake and one low-speed exhaust plus the PSU exhaust. The temps were better than with the stock cooler.

      • Bombadil
      • 10 years ago

      The 256MB 8800GT is much slower than the 512MB 8800GT. The lesser model used slower memory (800MHz GDDR3 instead of 1000MHz) in addition to the reduced size. Even a 512MB HD 3850/4670 is faster. I’m glad I sold mine immediately after buying it. A Radeon HD 4770 would be a substantial upgrade in performance.
