AMD’s Radeon HD 5770 and 5750 graphics cards


Oh dear. Our review of the Radeon HD 5700 series is late. AMD keeps pumping out Radeons at such a ridiculous rate, they broke me. I just couldn’t keep up with it all—ran out of snark and iffy jokes, or something like that.

Just a few short weeks ago, the Radeon HD 5870 debuted, the first of a new generation of graphics processors that packs improved performance, new capabilities needed to support DirectX 11 and emerging GPU-computing standards, higher image quality, improved power efficiency, and multiple other marketing bullet points of varying worth. We liked it, to put things mildly. My wife keeps trying to convince me not to keep mine under my pillow at night, but I know better. Oh, yes.

New generations of GPU technology tend to make their way from higher end products into more affordable ones over time. This trend is often called the waterfall effect, because, well, I dunno. Water falls down, and stuff. Spray goes everywhere. It’s noisy, and sometimes there are rainbows.

Where was I?

Oh, right. Anyhow, the technology built into the Radeon HD 5870 is propagating to a second GPU in record time. AMD has just introduced a pair of graphics cards based on a new mid-range graphics processor code-named Juniper, bringing DirectX 11 and a whole slew of other buzzwords into price ranges heretofore reserved for older buzzwords followed by lower numbers. The bottom line for you and me, roughly speaking, is better graphics cards for less money. Let’s see how well the new Radeons deliver on that expectation.

Juniper takes a, erm, bough

Yep, the GPU behind the Radeon HD 5700 series is code-named Juniper, part of the “Evergreen” naming scheme AMD is using for these chips. The big daddy in the family, Cypress, is the chip inside the Radeon HD 5800 series. Juniper is a classic example of a chip company firing up its design tools, including the vaunted World’s Smallest Chainsaw, and essentially chopping its high-end part in half to serve a broader market.

In truth, it’s not quite that simple, because Cypress and Juniper were essentially co-developed. The Juniper silicon came back from the fab about a week after Cypress, and AMD had both chips in validation at the same time. If a bug was found on one chip, AMD would attempt to replicate it on the other and, if needed, apply a fix to both. That co-development strategy is what allowed the firm to crank out multiple new Radeons in an unprecedentedly tight time window.

A block diagram of Juniper. Source: AMD.

Still, if you look at the block diagram above, you can see what I’m saying about that tiny chainsaw, a tool Canadians are renowned for using well. Architecturally, in many key respects, Juniper has half the resources of Cypress. That reality is reflected first in the graphics engine at the top of the diagram, which includes only a single rasterizer, not two like Cypress.

In its shader core, Juniper has 10 SIMD cores, each of which has 16 superscalar execution units. Each of those has five ALUs. Multiply it all out, and Juniper is sporting a total of 800 ALUs, or “stream processors,” as AMD likes to call them, a bit immodestly. Each SIMD core has a texture unit associated with it, so Juniper includes 10 of those, giving it 40 texels per clock of texture filtering power.
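The multiplication above is easy to sanity-check. The snippet below just restates the unit counts from the paragraph; the little script itself is purely illustrative:

```python
# Juniper shader-core arithmetic, using the unit counts given above.
simd_cores = 10          # SIMD cores in Juniper
units_per_simd = 16      # superscalar execution units per SIMD core
alus_per_unit = 5        # ALUs per execution unit

stream_processors = simd_cores * units_per_simd * alus_per_unit
print(stream_processors)  # 800 "stream processors"

# One texture unit per SIMD core, each filtering four texels per clock.
texture_units = simd_cores
texels_per_clock = texture_units * 4
print(texels_per_clock)   # 40 texels per clock of filtering power
```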

Also halved on Juniper is the number of render back ends, which now stands at four. As a result, the chip can produce 16 color pixels per clock and 64 Z/stencil pixels per clock. Again, that’s half of Cypress’ capacities.

Interestingly enough, Juniper turns out to be quite similar to the RV770 GPU that powers the prior-gen high-end Radeon HD 4800 series. Both Juniper and RV770 have 800 SPs, 10 texture units, four render back-ends, and a single rasterizer. But Juniper is a smaller chip intended for less expensive graphics cards, so it has only two 64-bit memory controllers, for an aggregate 128-bit memory interface. Both Cypress and RV770 have 256-bit memory interfaces. Still, the use of higher-clocked GDDR5 memory in the Radeon HD 5700 series will at least help bridge the deficit versus the RV770.

Across the nascent Evergreen lineup, AMD has increased processing and graphics power disproportionately compared to memory bandwidth. That trend may largely be driven by cost considerations and the bandwidth per pin limitations of available memory types. Still, AMD claims the tradeoff makes sense, asserting that the RV770 actually had more memory bandwidth than it needed and that chips like Juniper are simply more balanced, not bandwidth-starved.

Of course, Juniper varies from the RV770 in many other respects. The chip’s hardware supports the capabilities exposed by new software APIs like DirectX 11, DirectCompute, and OpenCL 1.0. AMD has improved image quality via superior texture filtering methods, as well, and revamped its display output block to support up to six four-megapixel displays simultaneously. I suggest you read our Radeon HD 5870 review for a full rundown on these features. Taken together, the many incremental improvements are considerable.

Juniper has dropped one major feature from Cypress, though: the ability to process double-precision floating-point math. Both Cypress and RV770 can process double-precision datatypes at one-fifth the rate they do single-precision numbers, with varying degrees of capability and internal precision. Cypress is more or less fully IEEE 754-2008 compliant. But if that string of letters and numbers doesn’t mean anything to you, you probably won’t miss DP support in Juniper. The ability to handle DP math is crucial for certain GPU computing markets, but its value for a consumer product is shaky. DP math doesn’t matter a whit for real-time graphics, for one thing; its only real use is for GPU computing. Even for GPU computing, the first consumer applications aren’t likely to need double-precision datatypes. Things like image processing, video compression and effects, and even physics simulations for gaming get along just fine with integer or single-precision FP datatypes. Omitting DP support reduces the size of the chip and thus cuts AMD’s costs, which is why the company chose to leave it out of this very consumer-focused GPU.

Sizing up the chip

GPU       Estimated transistor     Approximate die    Fabrication
          count (millions)         size (mm²)         process node

G92b      754                      256                55-nm TSMC
GT200     1400                     576*               65-nm TSMC
GT200b    1400                     470*               55-nm TSMC
RV740     826                      137                40-nm TSMC
RV770     956                      256                55-nm TSMC
Juniper   1040                     166                40-nm TSMC
Cypress   2150                     334                40-nm TSMC

Despite the omission of double-precision math support, Juniper’s transistor count still exceeds the RV770’s somewhat, as the numbers in the table above attest. Because, like Cypress, the Juniper GPU is manufactured using a 40-nanometer fab process, it’s a much smaller chip, though—closer in size to the RV740 GPU in the Radeon HD 4770. As you might be gathering, that means Juniper packs a heckuva wallop for its size. We’ll get into specifics on that front shortly.

Please note that the numbers in the table are somewhat approximate, since they’re culled from various sources. Most notably, Nvidia has avoided divulging die sizes for its largest chips, so our GT200 and GT200b numbers are potentially off. I just Googled around for them and settled on the most widely accepted numbers. You may prefer the “spin the bottle method,” which could produce more accurate results. I’ve not pried the cap off of a GT200/b to check them. Hence the asterisks.

Below are pictures of the various GPUs sized up, again approximately, next to a quarter for reference. As you can see, Juniper is very much a welterweight.

Pictured: RV770, Cypress, Juniper, RV740, the 55-nm G92b, and the 65-nm GT200 under its metal cap.

What’s in the cards

AMD has unleashed a couple of graphics cards with Juniper chips onboard. The one that will most likely capture your interest is the Radeon HD 5770, which is the fastest Juniper-based card. Have a look.

Gigabyte’s Radeon HD 5770

That’s Gigabyte’s take on the 5770, and yes, it’s clothed in one of those Batmobile-inspired cooling shrouds. AMD calls this the “Phoenix” shroud, and all early versions of the 5770 should include it. Over time, though, the various brands will be free to come up with their own custom coolers, so the Phoenix’s availability may be somewhat limited.

Juniper cranks away at 850MHz in the 5770, paired with a gig of memory clocked at 1.2GHz. That memory clock translates into a 4.8 GT/s data rate, since this is GDDR5 memory. Happily, Juniper has inherited Cypress’s ability to conserve power at idle by dropping its GDDR5 memory into a low-power state. As a result, AMD rates the 5770’s idle power at only 18W. Even under load, the 5770 is relatively tame, with a 108W max power rating.
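GDDR5 transfers data four times per memory clock, which is where that 4.8 GT/s figure comes from; combine it with Juniper’s 128-bit interface and you get the card’s peak memory bandwidth. A back-of-the-envelope sketch, using only the numbers above:

```python
memory_clock_ghz = 1.2   # GDDR5 memory clock on the Radeon HD 5770
bus_width_bits = 128     # Juniper's aggregate memory interface

data_rate_gtps = memory_clock_ghz * 4                  # GDDR5: 4 transfers/clock
bandwidth_gbps = data_rate_gtps * bus_width_bits / 8   # bits -> bytes

print(round(data_rate_gtps, 1), "GT/s")   # 4.8 GT/s
print(round(bandwidth_gbps, 1), "GB/s")   # 76.8 GB/s peak memory bandwidth
```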

Thus, the 5770 gets by easily with just a single six-pin auxiliary power connector. The board’s relatively short, too, at just under 8.25″ in length—an easy fit for most any mid-sized desktop enclosure, though it will occupy two slots.

The Radeon HD 5750

And here’s the Radeon HD 5750, the 5770’s little brother. For this mission, Juniper has had one of its SIMD cores clipped, along with the corresponding texture unit. Clock speeds are de-tuned, too, with the GPU at 700MHz and memory at 1150MHz. Thanks to the changes, the 5750 tops out at 86W of power draw and is rated for just 16W at idle. The board’s shorter, too, at just 7 3/16″.

The 5770’s outputs

…and the 5750’s

Both 5700-series cards sport the same array of outputs as the Radeon HD 5870, including a pair of dual-link DVI ports, a DisplayPort connector, and an HDMI out. The cards can drive up to three monitors at up to 2560×1600 in several port combinations, although the third display must use DisplayPort. AMD’s Eyefinity capability is fully supported in all 5700-series cards, too, making it possible to play games across three displays with either card, provided the pixel-pushing demands of the games aren’t too strenuous.

Incidentally, AMD says board makers will be able to create multi-display cards based on Juniper that feature more than three DisplayPort connectors, but it will ask them to limit those cards to five outputs. You’ll have to step up to one of the upcoming Radeon HD 5870 Eyefinity6 Edition cards to drive six displays via a single GPU.

Another limitation of the 5700 series will be the number of cards supported in CrossFire multi-GPU configs. Although the cards have dual CrossFire connectors up top, AMD plans to limit them to dual-GPU configs only. Again, going beyond that will mean moving up to the 5800 series.

AMD told us it expects “nearly all” board vendors to include a coupon with their Radeon HD 5770 cards good for a Steam download of the upcoming racing game, DiRT 2. As I understand it, board vendors may even include the DiRT 2 coupon with the 5750, if they choose. They don’t appear to be doing so right now, though. At present, most 5700-series card listings at Newegg say nothing about a DiRT 2 coupon. Our retail boxed review sample of Gigabyte’s Radeon HD 5770 didn’t include a coupon, either, and Gigabyte says only its 5800-series cards will feature that perk.

The Radeon HD 5770 in the nude

The initial suggested e-tail price for the Radeon HD 5770 is $159. That’s an interesting opening bid from AMD, because the Radeon HD 4870 1GB can be had right now for a prevailing price of about $150 at online retailers, with rebate deals that can drop the price another 10 or 15 bucks below that. The 4870 has a lower core clock speed than the 5770, but higher memory bandwidth, so the performance contest between them is no sure thing (as we’ll soon see). AMD has apparently decided to charge a bit of a premium at the outset for its latest GPU and the fresh technology it offers.

That decision puts the Radeon HD 5770 in more or less direct competition with the GeForce GTX 260, too. Prevailing prices online for the GTX 260 are about $165, with rebate deals shaving up to 20 bucks off the final toll, provided that the rebate company actually pays.

One tricky thing about making comparisons to the GTX 260 is the fact that clock speeds on end products tend to vary. For this review, we tested with a GTX 260 card from Asus that’s clocked quite a bit higher than Nvidia’s baseline speeds, primarily because it was the only one we had on hand that uses the 55-nm “b” version of the GT200 GPU. Our Asus card has clock speeds of 650MHz (core), 1400MHz (shaders), and 1150MHz (memory, or 2300 MT/s with GDDR3). This particular Asus model is apparently not available at online retailers right now. A Gigabyte card with very similar clock speeds is selling for $179 at Newegg with a $30 mail-in rebate attached. Depending on whether the rebate works out, then, that GTX 260 will either cost more or less than the 5770. Be aware, though, that not all GTX 260s are created equal, and the one we’ve tested is at the upper end of the bell curve, like the word count in a TR review.

Meanwhile, the 1GB variant of the 5750 will list for $129, where it will have to contend with the Radeon HD 4850 1GB (~$120 prevailing price) and the GeForce GTS 250 1GB (~$140, with some rebates available). Again, clock speeds vary on the GeForces, and this Gigabyte offering for $150, plus a $20 rebate, is probably closest to our EVGA Superclocked card’s 770MHz core, 1890MHz shaders, and 1123MHz memory.

AMD says it has shipped “tens of thousands” of both 5750 and 5770 1GB cards for this initial launch, so availability hopefully won’t be as spotty as it has been for the Radeon HD 5800-series cards. Over time, Radeon board makers should begin shipping a 512MB version of the 5750, as well, for around $109.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
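There is nothing fancy behind the median reporting; for each card and test it amounts to the following (a trivial sketch, with invented frame-rate numbers):

```python
from statistics import median

# Three hypothetical runs of the same benchmark, in FPS (numbers invented).
runs_fps = [58.1, 61.4, 60.2]

# The middle result is what gets reported, so a single outlier run
# (driver hiccup, background task) can't skew the published number.
print(median(runs_fps))  # 60.2
```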

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 6.4 GT/s (3.2GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F7
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers

Graphics:
    Gigabyte Radeon HD 4850 1GB PCIe with Catalyst 8.66.6-090929a-089412E drivers
    Diamond Radeon HD 4870 1GB PCIe with Catalyst 8.66.6-090929a-089412E drivers
    Sapphire Radeon HD 4890 OC 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
    Radeon HD 4870 X2 2GB PCIe with Catalyst 8.66-090910a-088431E drivers
    Radeon HD 5850 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
    Radeon HD 5750 1GB PCIe with Catalyst 8.66.6-090929a-089412E drivers
    Gigabyte Radeon HD 5770 1GB PCIe with Catalyst 8.66.6-090929a-089412E drivers
    Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
    Dual Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
    EVGA GeForce GTS 250 Superclocked 1GB PCIe with ForceWare 191.07 drivers
    Asus ENGTX260 TOP SP216 GeForce GTX 260 896MB PCIe with ForceWare 191.07 drivers
    Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
    Dual Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
    GeForce GTX 295 2GB PCIe with ForceWare 190.62 drivers

Hard drive: WD Caviar SE16 320GB SATA
Power supply: PC Power & Cooling Silencer 750 Watt
OS: Windows 7 Ultimate x64 Edition RTM
OS updates: DirectX March 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to those of no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

                            Peak pixel    Peak bilinear   Peak bilinear   Peak memory   Peak shader arithmetic
                            fill rate     texel filtering FP16 filtering  bandwidth     (GFLOPS)
                            (Gpixels/s)   (Gtexels/s)     (Gtexels/s)     (GB/s)        Single-issue  Dual-issue

GeForce GTS 250 1GB         12.3          49.3            24.6            71.9          484           726
GeForce GTX 260 (216 SPs)   18.2          46.8            23.4            128.8         605           907
GeForce GTX 275             17.7          50.6            25.4            127.0         674           1011
GeForce GTX 285             21.4          53.6            26.8            166.4         744           1116
GeForce GTX 295             32.3          92.2            46.1            223.9         1192          1788
Radeon HD 4850 1GB          11.2          28.0            14.0            63.6          1120          -
Radeon HD 4870 1GB          12.0          30.0            15.0            115.2         1200          -
Radeon HD 4890              14.4          36.0            18.0            124.8         1440          -
Radeon HD 4870 X2           24.0          60.0            30.0            230.4         2400          -
Radeon HD 5750              11.2          25.2            12.6            73.6          1008          -
Radeon HD 5770              13.6          34.0            17.0            76.8          1360          -
Radeon HD 5850              23.2          52.2            26.1            128.0         2088          -
Radeon HD 5870              27.2          68.0            34.0            153.6         2720          -

We’ll begin by looking at some theoreticals, before moving on to in-game performance. Please note that the specifications in the table above are based, where applicable, on the actual clock speeds of the cards we tested, not on the baseline clocks established by the chipmakers.

There’s an interesting split going on here with the Radeons. The 5750 is slower than the Radeon HD 4850 in almost every respect, except for memory bandwidth. The 4850 has more peak texture filtering capacity and more FLOPS of shader power. Meanwhile, the 5770’s contest with the 4870 is the opposite; the 5770 leads in every respect except for memory bandwidth, where the 4870 has a decisive lead.
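Those peak figures fall straight out of unit counts times clock speed. Here’s a quick sketch of the 5770-versus-4870 split, using the 5770’s 850MHz core from this review and the 4870’s reference 750MHz core; the helper function is illustrative, not any official formula:

```python
# Peak theoretical rates: each unit does one operation per clock,
# with each ALU counted as a multiply-add (two FLOPs).
def radeon_peaks(stream_processors, texels_per_clock, pixels_per_clock, core_ghz):
    return {
        "fill_gpixels_s": pixels_per_clock * core_ghz,
        "filter_gtexels_s": texels_per_clock * core_ghz,
        "gflops": stream_processors * 2 * core_ghz,
    }

hd5770 = radeon_peaks(800, 40, 16, 0.85)  # 850MHz core
hd4870 = radeon_peaks(800, 40, 16, 0.75)  # 750MHz reference core

for name, card in [("HD 5770", hd5770), ("HD 4870", hd4870)]:
    print(name, {k: round(v, 1) for k, v in card.items()})
# HD 5770: ~13.6 Gpixels/s, ~34.0 Gtexels/s, ~1360 GFLOPS
# HD 4870: ~12.0 Gpixels/s, ~30.0 Gtexels/s, ~1200 GFLOPS
```

Identical unit counts, so the 5770’s edge everywhere except memory bandwidth is purely its higher clock.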

3DMark’s directed tests should give us some insight into how these theoreticals play out in the hardware.

This test of color fill rate is often limited by memory bandwidth, and these results track pretty well with the cards’ theoretical limits on that front. The Radeon HD 4870 and the GeForce GTX 260 score quite a bit higher than the 5770, but the 5750 just edges out its two closest rivals.

This test stresses FP16 texture filtering, and despite the GeForce cards’ impressive specs on paper in this department, the Radeons clearly have the upper hand. The 5770 is nearly 50% faster than the GeForce GTX 260 here, and it beats the Radeon HD 4870 pretty soundly, too.

The results here vary from test to test, but a couple of trends are obvious. For one, the GeForce cards perform better than their somewhat low theoretical peak FLOPS numbers would suggest. Also, the Radeon HD 5770 outperforms the 4870 consistently, by very nice margins in a couple of cases.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

With frame rates in the 30s at 1920×1200, this is a pretty tough test for this class of graphics card. The differences between the 5700-series cards and their closest rivals are pretty minor, too.

The 5750 outperforms the 4850, but just trails the GeForce GTS 250. The 5770, meanwhile, shadows the 4870 closely, but can’t keep pace with our hot-clocked GTX 260. Keep in mind that the GTX 260 is a little pricier before its rebate, so the value contest remains very tight.

Wolfenstein

We recorded a demo during a multiplayer game on the Hospital map and played it back using the “timeNetDemo” command. At all resolutions, the game’s quality options were at their peaks, with 4X multisampled AA and 8X anisotropic filtering enabled.

Chalk this one up as an unequivocal win for Nvidia, whose graphics cards have long excelled in id Software’s OpenGL-based game engines. The newer Radeons trail their similarly priced elder siblings here, too.

Left 4 Dead

We also used a custom-recorded timedemo with Valve’s excellent zombie shooter, Left 4 Dead. We tested with 4X multisampled AA and 16X anisotropic filtering enabled and all of the game’s quality options cranked.

This game runs quite well on relatively lightweight hardware. We’re seeing frame rates near 60 FPS at 2560×1600 on the slowest cards we tested, which raises the question of whether the differences in performance between these cards really matter. Nevertheless, the 5750 and 5770 are a slight bit slower than the 4850 and 4870.

Tom Clancy’s HAWX

We used the built-in benchmark tool in HAWX, which seems to do a good job of putting a video card through its paces. We tested this game in DirectX 10 mode with all of the image quality options either turned on or set to “High”, along with 4X multisampled antialiasing. Since this game supports DirectX 10.1 for enhanced performance, we enabled it on the Radeons. No current GeForce GPU supports DX10.1, though, so we couldn’t use it with them.

For some reason, perhaps immature drivers, the new Radeons in the 5000 series perform relatively weakly in this game. Even the 5850 and 5870 are affected.

Sacred 2: Fallen Angel

A little surprisingly for an RPG, this game is demanding enough to test even the fastest GPUs at its highest quality settings. And it puts all of that GPU power to good use by churning out some fantastic visuals.

We tested at 1920×1200 resolution with the game’s quality options at their “Very high” presets (typically the best possible quality setting) with 4X MSAA.

Given the way this game tends to play, we decided to test with fewer, longer sessions when capturing frame rates with FRAPS. We settled on three five-minute-long play sessions, all in the same area of the game. We then reported the median of the average and minimum frame rates from the three runs.

In spite of the iffy minimum frame rates, this RPG plays pretty well at these settings on all of the cards. The GeForces take their respective price classes, and the Radeons are left to fight for second place. The 5770’s performance is a little disappointing compared to the 4870’s.

Crysis Warhead

We tested Warhead using its “Gamer” quality presets at a display resolution of 1680×1050, with 4X antialiasing enabled.

For this game, we tested each GPU config in five 60-second sessions, covering the same portion of the game each time. We’ve then reported the median average and minimum frame rates from those five runs.

Wow, the ~$130 cards are incredibly closely matched. Among the higher-priced cards, the differences are still minor, but the 5770 is the slowest of the three.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution, using the same settings we did for performance testing.

Here’s where having a newer, smaller, 40-nm design pays off. The 5700-series cards draw substantially less power under load than their price competitors, and they idle down to some very nice lows.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The 5770 has a beefier cooler than the 5750, which allows it to create less fan noise at idle. When running a game, though, the 5770’s higher power draw demands a little more fan power—and thus noise. Both 5700 cards are among the quietest under load, regardless.

The third-party cooler on our Gigabyte 4850 doesn’t ramp down its fan speed at idle, by the way; that’s why it’s comparatively loud in that test.

GPU temperatures

For most of the cards, we used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card. However, GPU-Z didn’t yet know what to do with the 5700- and 5800-series cards, so we had to resort to running a 3D program in a window while reading the temperature from the Overdrive section of AMD’s Catalyst control panel.

AMD has tweaked its fan control algorithms to reduce GPU temperatures in its more recent products, and that leads to an 18° C drop in peak temperatures compared to the 4870.

Conclusions

The performance comparisons we made in this review aren’t entirely fair, in the sense that we’ve pitted older, larger chips with wider paths to memory against the 5700-series cards. The GT200b GPU in the GeForce GTX 260 has nearly three times the land mass of a Juniper chip, to put things in perspective. In reality, the Radeon HD 5770 is a replacement for the Radeon HD 4770, not the 4870.

The thing is, AMD knows it has captured lightning in a bottle with the first DirectX 11-capable graphics cards, and it has priced its products accordingly. Yes, the 5770 may not quite be as fast as the 4870 in current games, but those performance differences are rather minor, a few frames per second here or there at the most common resolutions. And some games run exceptionally well on any of these cards, regardless of resolution.

What’s much harder to quantify is the true difference between DirectX 10-class hardware and DirectX 11-class hardware. A great many things are encompassed under that one umbrella, DX11. For instance, Juniper has a hardware tessellation capability that games may exploit to produce much higher quality polygon meshes at fluid frame rates. Although prior-generation Radeons also had hardware tessellation, DX10 doesn’t make use of it. And current GeForces have no tessellation hardware at all. At some point down the road, games will begin to use DX11 tessellation, and one of two things will happen: either the game will simply look better on the DX11-class parts, or it will run much faster on the DX11 hardware. We’ll likely see similar tradeoffs with other DX11 features, including HDR texture compression and multi-threaded command processing.

Juniper offers value on other fronts, as well, including higher quality texture filtering and supersampled antialiasing, not to mention the Eyefinity feature that enables gaming across three monitors at once. Toss in much lower power draw, cooler GPU temperatures, shorter board lengths, and lighter power supply requirements, too, as 5700-series advantages over similarly priced competitors. Soon, the value meter begins to wobble in the direction of the new Radeons. If board vendors begin to include that DiRT 2 coupon more widely, the needle will move for sure.

The choice is yours, of course. Grabbing a GeForce GTX 260 still looks pretty rational from a raw price-performance perspective. The very fact that a Juniper-based graphics card could even begin to challenge the GTX 260 on performance is proof that AMD has built one heck of a potent little GPU. Once its DX11 price premium erodes, which could happen in a few short months, I expect the Radeon HD 5700 series to settle into the space between $99 and $129, where it will likely be the undisputed king for some time to come.

Comments closed
    • JoJoBoy
    • 10 years ago

    Still no Folding@home info. Is it possible to run the gpu2 core to get benchmarks like you do for the cpu cores?

    • esterhasz
    • 10 years ago

    I am very curious how fast ATI can translate the Juniper into an MXM package. If you compare the power draw of the 5770 and 4870, then project that onto the mobile 4870, where they had to greatly reduce clock speed to get into a 65W power envelope, there seems to be a very fast Juniper mobile version possible without going over 40W. That should run even upcoming games OK in 1650 resolution. Given that laptops now sell better than desktops, this could be the lifeline AMD needs to become profitable again and work off some debt…

    Anybody seen an announcement of mobile parts?

    • Convert
    • 10 years ago

    Great review, meh products.

    The additional display ports are cool though and I like how AMD made them available on the lower end cards like this.

    If I ever need display port or HDMI I would pick one of these but otherwise I think I will pass with current pricing.

    • Rza79
    • 10 years ago

    Well this shows pretty nicely how much performance can be gained from DX11:
    http://www.computerbase.de/artikel/hardware/grafikkarten/2009/test_ati_radeon_hd_5770_crossfire/11/#abschnitt_battleforge_unter_directx_11

    • Cyco-Dude
    • 10 years ago

    meh, the 5800 / 5700 series is not so great compared to the 4800 series in terms of price / performance. i guess now’s a good time to buy up all the old stock before it’s gone.

      • khands
      • 10 years ago

      That’s because it really doesn’t have much competition, they’re getting the development costs back on early adopters and the longer it takes Nvidia to come up with a competitor the slower the prices will drop.

      • Krogoth
      • 10 years ago

      [facepalm]
      Replacement = Successor.

        • Meadows
        • 10 years ago

        No.

      • indeego
      • 10 years ago

      You do that. I shall take a wild guess and imagine you’ll see quite a big stock on ebay for the next 2-3 years.

    • Freon
    • 10 years ago

    Pretty disappointing results for the price. The 5770 needs an immediate $20-30 price drop to be competitive. This review just made the GTX 260 look that much better.

    At least power consumption is exceptional.

      • khands
      • 10 years ago

      As a replacement for the 4770 it’s great but yeah, it’s in the wrong price bracket.

      • Mystic-G
      • 10 years ago

      DX11 is a lot of the price point here.

        • Freon
        • 10 years ago

        With how PC gaming is going vs. consoles and cross-platform games I’m not holding my breath at a huge release of DX11 games. It’d be a nice bonus if the card was otherwise equal in value than an older card. I’ve never seen a great value in making sure to buy the latest DX version card so soon in the cycle.

        More specifically, I always worry that the first wave of mid-low end cards for a given DX revision never end up being fast enough to run those games anyway. I.e. This is the first batch of midrange cards for DX11, but I think it is likely you’ll need a 58xx to enjoy the DX11. Later we might get a 5790 or 5830 that will fare ok. And of course good luck with a 5730 or 5670, 5650 later. Fancy features don’t mean much if the card is slow to begin with.

        I’m more interested in the lower power consumption than DX11. It is great, and maybe after a year or two it will pay off to spend a few more bucks, but even then it is behind on performance. Just not impressed…

          • swaaye
          • 10 years ago

          That’s a smart stance. All one needs to do is look back in time at hardware progression within DX versions.

          NV3 vs NV5
          NV10 vs NV17
          NV20 vs NV25
          R300 vs. R580
          NV30 vs G71 (lolz)
          R600 vs RV770
          G80 vs GT200

          Within the midrange things are maybe more interesting
          RV350 vs. RV560
          NV31 vs. G73 (!!!!!)
          RV630 vs. RV670
          G84 vs. G96/94

          You can of course still play some games pretty well on R420 or even R300 perhaps, but you’re always better off waiting. Buying based on future speculation = dumb. Video cards are “live in the now” products. Buy them for what you want to play in the short term.

            • flip-mode
            • 10 years ago

            Meh, these things are hard to define a rule for, other than buy it because you need it, not just because it is newer; but if you’re buying anyway, I would not consider it unwise to buy a 5770 over a 4870 just for “future proofing”. Nor would I consider it foolish to get the 4870 or GTX 260 over the 5770 because, as things stand, they are faster. I don’t think any of these cards are “bad deals” per se, but I do think the launch price of the 5770 somewhat blunts its attractiveness.

            Perhaps more remarkable are the prices that the other cards have fallen to – a 1GB 4870 for $130? c-r-a-z-y.

            • swaaye
            • 10 years ago

            Yup, the prices on the former high-end stuff sure are crazy.

            I just think that people get too caught up on speculation. I remember when reviews were saying X1600XT would become “better” later on as shader ops became more prevalent in games. Yeah right. There will always be a better card. If it doesn’t perform now, it’s not going to get better, especially when there will be a better chip next year.

            There have been some particularly painful examples of this in the midrange.

            • flip-mode
            • 10 years ago

            Amen to that. That has long been a refrain for ATI cards, but it never plays out. Even if it ever did, by that time it would be too slow to matter – can you imagine: ‘man, I sure am glad I got the x1600xt instead of the other card, or my frame rate would be 12 instead of 16!’

            • JustAnEngineer
            • 10 years ago

            Radeon 9700 Pro and Radeon 9500 Pro (first DX9 GPUs) performed well in games for quite a long time after their introduction.

            GPU advances in features and performance are step-wise, not a smooth progression. Sometimes, a good value comes along on one of those steps.

            • flip-mode
            • 10 years ago

            Yes, but the difference there is that the 9700 and 9500 performed excellently from the start, and there was no need to imagine some hypothetical future in which some paper advantage actually started showing practical benefits.

            • Lazier_Said
            • 10 years ago

            I bought a 9500 the week they were released and soft-9700d it.

            My favorite game around that time was BF Vietnam. AA and AF didn’t work, ATI took months to fix this.

            C&C Generals gave a blackscreen whenever one of the tank ripple effects came on screen, unless I turned off or down some acceleration or other in the control panel – which made other games stutter.

            (Dealing with ‘R9700 stutter’ was an entire subforum in those days)

            The performance was there from day one but the usability sure wasn’t.

            • MadManOriginal
            • 10 years ago

            Yup, the pricing on former high-end or near-high-end cards at this time is a weird situation that really hasn’t happened before, as far as I can recall. It used to be that high-end cards would fade out of stock at or near their price points rather than go down and down, aside from when they were completely superseded and simply being written off. I wrote once that the HD4000-series price-war era may be a fluke that we won’t see repeated, but it’s something I hadn’t taken into account when considering the current price/performance situation. That doesn’t change the current price/performance situation, but it puts it in perspective.

      • Kaleid
      • 10 years ago

      Ati/AMD probably wants to sell out those older cards first and then lower the prices for these new cards.

        • MadManOriginal
        • 10 years ago

        Meh not quite, the prices have been around there for a good long while.

      • Krogoth
      • 10 years ago

      Remember that the 260 in this review is a factory-overclocked flavor. They tend to cost as much as 4890s ($159 versus $199). The regular 260s are slightly cheaper and closer to the 4870 in performance. The 4770 is biting at the 4870’s neck.

        • khands
        • 10 years ago

        Which I believe is where the issues come in: the 5770 is meant to replace the 4770, but really doesn’t offer any change in performance (except for maybe DX11 titles when they become available), save maybe some additional AA from having more RAM, and costs ~50% more.

    • puppetworx
    • 10 years ago

    The power consumption on these 5-series really perks up my nipples.

    I would have really liked a CrossFire analysis included; as it is, I’ll have to look elsewhere. In a future article, maybe?

      • khands
      • 10 years ago

      Guru3D has one that goes to triple Xfire

      http://www.guru3d.com/article/radeon-hd-5770-in-3way-crossfirex-review-test/

      Basically, when it works, it’s awesome (nearly 100% scaling in some games with 3 GPUs), and the 2-GPU solutions seem to work all the time and beat/meet the 5870 in nearly all the tests. 3-GPU setups get CPU-bound much earlier than 2-GPU ones, so if you’re going to CrossFire 5770s, a good CPU is pretty much absolutely necessary.

        • puppetworx
        • 10 years ago

        Wow, thanks for the link!

          • khands
          • 10 years ago

          No problem 😀

    • sbarash
    • 10 years ago

    I love my 4770 – perfect for my Home Theater / Gaming system. But now the 5770 comes along…oh man I want one!

    Disappointed a 4770 wasn’t included for comparison. Does anyone know of a site that compares both?

    slosuenos

      • bwcbiz
      • 10 years ago

      Yeah, that’s what I was thinking. The 5750 is in the 4770’s price range, too, not just the other products.

      I guess the bottom line is that the 4770 was a 40 nm prototype for the 5xxx series that was so good they did a limited run for market research. I still haven’t decided whether to go for the lower power consumption of the 5750 or the oomph of the 5770 in my HTPC, but I’m damn glad to have the choice.

        • khands
        • 10 years ago

        Old reviews put the 4770 pretty close to the 4850, so it would be about the same as the 5750; the 5770 should beat it in pretty much everything.

    • deruberhanyok
    • 10 years ago

    l[

    • derFunkenstein
    • 10 years ago

    Well these cards are…OK I guess. I suppose I should not have expected a paradigm shift in the mid-performance price range – after all, if you could get a card faster than the 4870 or GTX 260 for $160-$175, why bother with the very high-end?

    I would expect the 4870 and 4850 to disappear very quickly once AMD can get the 5770 and 5750 out in adequate numbers, because there’s no reason to buy a Juniper card today, and not for another year. By then there will be another refresh.

      • Jigar
      • 10 years ago

      Exactly what I thought: let the DirectX 11 games jump in and then decide which company serves the better card, Nvidia or AMD.

      I think by then we might see the HD 6870 around; till then my 8800GT SLI setup is doing good in my PC…

        • MadManOriginal
        • 10 years ago

        I don’t think things look too good for NV on that front in the near to mid term. I imagine we’d be hearing rumors by now of a cut-down Fermi; otherwise NV is just rolling out DX10.1 parts at the low end, has nothing new in the performance midrange like the HD5700s, and the rumors of a GT200b phase-out seem true. I guess it depends upon when you expect DX11 games to ‘get jumpin.’

      • Krogoth
      • 10 years ago

      Smaller, lower power consumption.

      4870 has always been a power hog and a loud SOB with its reference cooler. 5770 fares far better, while delivering nearly the same performance.

      There is no reason to get a 4870 over a 5770. That is, unless you can get a nice “discontinued” discount on a 4870. 😉

        • derFunkenstein
        • 10 years ago

        But you lose performance and gain additional cost compared to the 4870. It is not a winner in that respect. The only way these things win is if DX11 really takes off down the road, or if they can get some Evergreen-specific driver optimizations.

          • Krogoth
          • 10 years ago

          Less than a 5% performance shortfall in the worst case? That is hardly a loss. I forgot to mention that the 5770 offers superior IQ to the 4870.

          You may want to check prices again. The 4870 and 5770 are much closer in price than you think. The 5770 only costs a little more at the moment due to being “new”.

            • derFunkenstein
            • 10 years ago

            so with one hand you type “you better check cost” and then with your other you admit that the 5770 costs more. Your left hand really doesn’t know what the right is doing.

            • Krogoth
            • 10 years ago

            1GiB versions of the 4870 are only $5-15 cheaper than the 5770, which only comes in 1GiB. IMO, that is hardly a difference unless you are on a tight budget.

            The 1GiB 4870s with non-reference coolers are at the same price as the 1GiB 5770!

            • derFunkenstein
            • 10 years ago

            $15 is a bit more than 10% over the price of a $145 4870. Add that to a 5-10% performance loss from this “upgrade”, and buying a 5770 right now is a bad proposition.

            • Krogoth
            • 10 years ago

            How exactly is it a poor choice? The 5770 has superior IQ, lower power consumption, and lower cooling requirements. Some gamers may value these far more than raw performance.

            If raw performance per dollar is the only criterion for the gamer in question, the 260 Reload trumps both the 4870 and the 5770.

            • flip-mode
            • 10 years ago

            Regardless of what angle you look at it, the 5770 is just not that exciting. Price is too high or performance is too low – take your pick (personally, I wish performance was a little higher). I imagine that most would grudgingly take the 5770 over the 4870 1GB, but try as you might to paint this as some sort of decisive triumph of the 5770 over the 4870, you and the other “red team” fanboys are not going to find your excitement broadly shared.

            • Krogoth
            • 10 years ago

            Yeah, it is not that exciting for people who are already on 4xxx/GT2xx parts. Again, it is rare for a next-generation mid-range GPU to decisively outperform the high-end GPU of the previous generation.

            For those who are running older hardware, the choice is less clear. This is the market that the 5770/5750 are targeting.

            • Lazier_Said
            • 10 years ago

            TR’s review showed effectively zero difference between 512MB and 1GB 4870s.

            Marking up the 4870 an additional $20 for “feature parity” on a useless feature is cooking your numbers.

            The bottom line is the $125 4870 outperforms the $160-170 5770.

            • Krogoth
            • 10 years ago

            Not when they are pushed at extreme resolutions with AF and AA thrown in.

            512MiB of video memory doesn’t cut it. Granted, neither card is going to obtain a playable framerate under such conditions unless you are using a CF setup.

            • Lazier_Said
            • 10 years ago

            Great, all you people shopping $125 value cards to drive your $1200 WQXGA displays listen up.

            For the rest of us the 5770 remains a ripoff.

    • sweatshopking
    • 10 years ago

    geez scott! with reviews like this and that smooth voice of yours, you can come by my place anytime. ill light some candles, wear something sexy, you’d be on the road, all lonely and needing some love. nobody has to know… next time you’re in nova scotia, give me a call.

      • khands
      • 10 years ago

      You’ve been on something lately; too bad I’m nowhere near Nova Scotia or I’d ask you for some.

    • Krogoth
    • 10 years ago

    Great 4850 and 4870 successors.

      • flip-mode
      • 10 years ago

      That doesn’t even make sense.

        • Meadows
        • 10 years ago

        It’s Krogoth for you.

        • poulpy
        • 10 years ago

        I’m a bit at a loss why everybody seems confused by his post when TFA is about “AMD’s Radeon HD 5770 and 5750” and his post says “Great 4850 and 4870 successors.”?!

          • Meadows
          • 10 years ago

          That’s exactly why.

          • derFunkenstein
          • 10 years ago

          Mostly because they’re not great 4850/4870 successors. They’re at (more or less) the same price point but with a performance penalty. How’s that a great replacement?

      • derFunkenstein
      • 10 years ago

      You mean the 5850 and 5870? Yes, I agree.

      • DrDillyBar
      • 10 years ago

      Maybe for the 4850, but not really the 4870. Unless power consumption is your only gauge.

      • Krogoth
      • 10 years ago

      Epic facepalm………

      I guess the nitpickers are so blinded by their personal “vendetta” that their critical-thinking skills go south.

      Let me explain, for all intents and purposes, why the 5770 and 5750 are successors to the current 4870 and 4850.

      The new 5770 and 5750 deliver practically the same performance as the 4870 and 4850, at almost the same price point. The 5770 and 5750 do cost a little more at the moment because they are “new”; that will change in a few months, though. The 5770 and 5750 have the benefit of supporting newer standards and features while running cooler and quieter than their predecessors.

      I know that nitpickers will claim, “But the 4870 and 4850 were the high end of last generation! The 5770 and 5750 are the mid-range of this generation.” However, in the eyes of the market, the 4850 and 4870 were already demoted from that position by the 5850 and 5870. They were acting as filler for the mid-range until the 5770 and 5750 came around.

        • Fighterpilot
        • 10 years ago

        Basically, a sub-$150 card like the new HD5770 performs on a par with the last-gen card that was released at $300.
        As K noted above, it adds new features like DX11, runs cooler, and uses less power under load.
        What’s not to like?

        • flip-mode
        • 10 years ago

        Whatev. If that’s how you want to look at it, go for it. You’ve got Fighterpilot agreeing with you…..

          • Tamale
          • 10 years ago

          If he just would have said “Replacements” instead of “successors” I would’ve had an easier time agreeing with him.

            • flip-mode
            • 10 years ago

            Yes, me too.

        • Meadows
        • 10 years ago

        You’re doing it wrong

        • playboysmoov
        • 10 years ago

        I totally agree with you, K. I believe that for the price, nVidia has some tough competition. Considering that the HD 5800 series cards are the fastest single high-end cards, beating out the HD 4870X2 and GTX 285 in most tests, it knocks the previous generation down to mid-range.

        This card is basically the replacement for the HD 4700 series, which last year was low end, in my opinion. Plus, the odds of obtaining an HD 4770 at the time of its release were like finding a unicorn; that these cards are widely available is a huge win for AMD! This card beats the crap out of the GeForce 8800 series, which is what most people are still running, and it’s DX11 out of the box.

        http://store.steampowered.com/hwsurvey/

        Considering how most manufacturers are going to want to get the most bang for their buck and price against the product with the lowest margins, high-end cards get reduced to the $170-$109 range (HD 4800 series) or low-end cards get pushed to the $150 range (HD 5700 series) due to demand.

    • flip-mode
    • 10 years ago

    Nice review, Mr. Wasson. “unprecendently”? I might have to call you out on that one!

    The cards look nice but feel a little overpriced. The 5770 may just be the successor to the 4770, but as priced, it is fighting with the 4870.

    If forced to pick I’d probably take the 5770 due to the new features, and the fact that at 1600×1200, where I roll, the 5770 actually wins a few more than at the higher resolution.

    Again, I think there are some strange things to answer for. The 5770 GPU has all the same functional units as the 4870 GPU and even runs 150MHz faster, but still often loses. The difference between the two is memory bandwidth, but AMD says bandwidth is not a factor. Well, either it is a factor, or AMD has produced a less efficient GPU, IPC-wise, than the previous generation.

      • Hattig
      • 10 years ago

      Interpolation is done on the shader cores in the 5000 series, not in fixed-function hardware. Angle-invariant filtering presumably also has an effect on performance. Taken together, these will use up some resources, effectively leaving Juniper with fewer shader resources than RV770.

      Maybe AMD should have tried to get 12 shader units in the chip instead of 10, but that might have required a lot more work on the supporting logic, delaying the chip. I imagine that the lower-end chip isn’t out yet (5600 and below) because the supporting logic is overkill and has had more work and time spent on economising it.

      When the 4000-series products run out of chips, the entire lineup will switch to the 5000 series anyway.

        • flip-mode
        • 10 years ago

        Ah ha. Thanks for pointing that out. I guess that explains things.

          • ssidbroadcast
          • 10 years ago

          That explains everything.

            • Damage
            • 10 years ago

            I may have to disagree. I think memory bandwidth is probably the main factor. AMD said the 4870 was bandwidth rich and shader poor; that makes some sense, since the gap between 4850 and 4870 isn’t too huge. Yet that doesn’t mean the 5770, with more shader power, isn’t constrained by substantially less memory bandwidth than the 4870. Although the 5770 might be more “balanced,” it’s probably bandwidth-limited.

            These things are all relative and depend on what the GPU is doing, anyhow.

            Angle-invariant aniso means more texture sampling, so it could exacerbate the issue. I really doubt shader-based texture coordinate interpolation is anything other than an overall win, though, given the likely balance between shader power and bandwidth–and how fast the shader array has got to be at LERPs.

            • MadManOriginal
            • 10 years ago

            Test, test, test! No other site is doing tests to investigate the theory of bandwidth limitation, Scott. It would make a great unique article 🙂 more so than a gaming FPS review (no offense, I like your reviews).

            • ssidbroadcast
            • 10 years ago

            You /[

            • MadManOriginal
            • 10 years ago

            Yeah, and this website isn’t a hobby either; it’s his profession. If he doesn’t want to investigate it, fine; maybe someone else will and get all the hits, links, and credit for doing so.

            • Damage
            • 10 years ago

            Both of them!

            • MadManOriginal
            • 10 years ago

            Ookay, not very clear there but whatever works for you!

            • flip-mode
            • 10 years ago

            both hobby and profession

            • Damage
            • 10 years ago

            Nah, both hits, both links. 🙂

            • flip-mode
            • 10 years ago

            Thanks much, Mr. Wasson. Academically speaking, it would be nice to know exactly what is going on. In practical terms, it does not much matter, I suppose. Excellent coverage of the 5xxx cards, Scott. Thank you.

        • shank15217
        • 10 years ago

        Your reasoning sounds technical, but I don’t think it’s accurate; otherwise the 5750 would be losing to the 4850 as well. It is memory bandwidth, and maybe some driver optimizations.

          • Hattig
          • 10 years ago

          Huh? The 5850 has 80% more shaders than the 4850, so there’s plenty of room to accommodate these extra functions and still increase performance.

            • Xaser04
            • 10 years ago

            He said 5750.

    • Hattig
    • 10 years ago

    “revamped its display output block to support up to six four-megapixel displays simultaneously”

    But the block diagram only has five blocks, not six like the 5870.

    Otherwise good review. A good replacement for the previous 4700 series at a good price (game coupon included). Hopefully ATI/AMD will get some nice profits off these cards.

      • shank15217
      • 10 years ago

      They said “up to”… only five will be allowed so it doesn’t step on the 5870.

    • Fighterpilot
    • 10 years ago

    They look to be pretty good cards for the price.
    They have most of the new stuff, run cool, and don’t use much power.
    They’ll probably sell heaps of them.

    • Proxicon
    • 10 years ago

    No overclocking results? I heard these were not that great for overclocking..

    • ub3r
    • 10 years ago

    Why do I jump straight to the Crysis and Far Cry results?

    • uksnapper
    • 10 years ago

    I don’t play games on a PC; I only use it for work, and that’s mainly Photoshop.
    I’m aware that there is a move toward the graphics card doing more of the work rather than the motherboard CPU, so my interest lies only with its capabilities in that area.
    The review does not help me.
    Not everyone who uses a PC is a game player. In fact, most people I know don’t actually play games; perhaps it’s something to do with being a pensioner, an age thing :-))

    • asdsa
    • 10 years ago

    You could easily have down-clocked the factory-overclocked GTX 260, but you chose not to. This reminds me of Tom’s Hardware tactics, which are unusual here at TR.

      • pmonti80
      • 10 years ago

      Sorry to burst your bubble, but TR has been known to use factory-overclocked cards against non-factory-overclocked ones for quite some time. They don’t always do it, only when the price is supposed to be roughly the same. Some people may claim that if the price is the same it doesn’t matter; some may claim otherwise (I fall into the latter camp, because in Europe there are no mail-in rebates). I won’t preach about whether it is good or bad, but it happens. The best thing you can do is visit different sites to get a better overall understanding.

      • Tamale
      • 10 years ago

      why gimp a product that’s sold that way?

    • maroon1
    • 10 years ago

    The HD5750 performed worse than the HD4850 and GTS 250 in the majority of benchmarks.

    The HD5770 performed only slightly better than the GTS 250 overall.

      • Meadows
      • 10 years ago

      Well done, you can read graphs.

      • Johnny5
      • 10 years ago

      What a maroon!

    • alwayssts
    • 10 years ago

    (In response to #1)

    Considering the 5770 has the exact same configuration as the 4890, sans half the bandwidth, isn’t it obvious? The 5750 is almost where it should be; the 5770 is massively hampered. The difference? A huge bandwidth-ratio discrepancy.

    Obviously there are scenarios where different bottlenecks come into play. Games optimized for G80/GT200’s superior pixel (RBE/ROP) and texture (TMU) fillrates are not going to come close with these parts, just like they didn’t with the 4800 series, and GPU clock speed will help there. A clock speed of just over 1GHz would bring you close to parity with the GTX 260 in those situations. But to put them back on parity with the former generation, and to capitalize where they do excel or nearly reach parity, their sheer FP/integer power requires more bandwidth.

    I’m not saying it needs twice the bandwidth, as the 4870 and 4890 show diminishing returns, and the 4850 very much shows this not to be the case. There is, obviously, a baseline where the s hits the f, and the 5770 has crossed it.

    Even if you compared to the 4850 (64GB/s per 1TFLOP), relatively speaking with the clock-speed increase, you would need memory bandwidth of 87GB/s to retain parity. The 4750 does this, and actually exceeds it while doing quite well, narrowing the gap between it and the 4870 to an almost reasonable level. Instead, the 5770 can’t even keep parity with the 4770 (51.2GB/s per 960GFLOPS… normalized to 60GB/s per 1TFLOP), sitting instead at 56.7GB/s per TFLOP. Obviously it’s bandwidth-limited. That is, assuming bandwidth needs scale linearly with increased GPU power, which I think it’s safe to say they do not. The need is likely exponentially greater. How much, I do not know.

    The 4850 and 5850 both SEEM to be a nice mix, though, and they are both roughly 64GB/s per 1TFLOP.

    I think it’s safe to say the whole 5000 series is bandwidth-limited.

    If you were to test all these variables, which I’m all for… I’d ask that it be a wide variety of tests at each setting.
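    The bandwidth-per-TFLOP normalization above can be sanity-checked with a quick script. The figures are the ones quoted in the comment, plus the 5770's published 76.8GB/s and 1.36 TFLOPS; treat them all as approximate:

```python
# Bandwidth-to-shader-throughput ratios for the cards discussed above.
# Figures: (memory bandwidth in GB/s, shader throughput in TFLOPS).
cards = {
    "HD 4850": (64.0, 1.00),  # quoted in the comment
    "HD 4770": (51.2, 0.96),  # quoted in the comment
    "HD 5770": (76.8, 1.36),  # published specs
}

for name, (bw, tflops) in cards.items():
    print(f"{name}: {bw / tflops:5.1f} GB/s per TFLOP")
```

    Either way, both midrange parts come out below the 4850's 64GB/s-per-TFLOP ratio that the comment treats as a healthy baseline.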

      • MadManOriginal
      • 10 years ago

      All I see is a lot of words that are b[

    • Meadows
    • 10 years ago

    An HD 5770 might just be the average gamer’s thing for upcoming DX11 titles, where it’ll perform better anyway. Performance seems sub-par now, but I think it should satisfy (unlike nVidia’s old “8600 GT” attempt at bringing DX10 support down in price, at a catastrophic performance cost).

    • Ryhadar
    • 10 years ago

    I’m not much of an overclocker, but I keep thinking about the headroom the 5-series has. Reviewers were able to overclock the 5850 to clock speeds close to the 5870’s, and others were able to take their 5770s pretty high as well (much closer in performance to the 4870).

    At stock, the 57xx line isn’t earth-shattering for the price. They’re actually pretty average, but OC them and they start to shine.

    • Bluekkis
    • 10 years ago

    For me, price doesn’t matter that much, and the performance difference at the same price point doesn’t matter that much either. But those beautiful thermals do. An AC S1 on a 5770 will make a very fast passively cooled graphics card that should stay cool even in a small, low-airflow case. A 5770 will most likely find its way into my next gaming system.

      • slaimus
      • 10 years ago

      The power usage seems just right for me as well; I’m looking to upgrade an 8800GT. The 4870/260 both need two PCIe plugs and would be too long for my case.

      • FuturePastNow
      • 10 years ago

      Better make sure that S1 will fit first. You’ll have to bend some of the fins at least because of that big metal RF shield around the DVI connectors.

    • jstern
    • 10 years ago

    I’m starting to get into computer gaming, and I’m hoping to get rid of my 360. I’m still finding it hard to follow, I just lack so much knowledge, but I’m slowly understanding. I’m seriously curious to know how the 360 and the PS3 GPUs would have done in these tests. Time to do some research.

      • BoBzeBuilder
      • 10 years ago

      The 360 and PS3 GPUs would be getting 1/4 the performance of the slowest card in these benchmarks, if even that.

        • coldpower27
        • 10 years ago

        Not that it matters much, considering they are running optimized platforms with static configurations.

        They also don’t do much AA, if any; the 360 GPU does 2xAA, I think… they are made to run at 720p or 1080p.

        But you’ve got to understand they are from the 2005/2006 era, and it’s now 2009; PC GPUs have continued evolving, using newer manufacturing processes to increase performance within the same amount of surface area.

        I

      • lethal
      • 10 years ago

      If you were to put any of those GPUs in these tests, they would probably be small bumps at the bottom. Keep in mind that consoles have much less overhead than PCs when it comes to gaming, so comparing either GPU in a PC scenario would yield lower performance than what’s possible on their closed boxes. On the other hand, the difference between the PS3’s or the 360’s GPU and what’s currently available on the PC is massive.

      The evolution of the GPU going by Nvidia

      RSX(PS3) -> 7900 series -> 8800 series -> 9800 series -> GT200 series

      Ati:

      R500(X360) -> X1000 series -> HD2000 series -> HD3000 series -> HD4000 series -> HD5000 series

      Most console games run below 720p (1280×720) due to memory/performance constraints, and the image is then scaled to the actual resolution of the TV. If you were to run games at a higher resolution, it’s likely you’d end up with a slideshow rather than a game.

        • shank15217
        • 10 years ago

        You got the facts wrong for the Xbox GPU 🙂 it’s X1900, then Xbox; there was a big difference between the X1800 and X1900 as well.

          • lethal
          • 10 years ago

          As far as “evolution” yes, the R500 was the basis of their unified architecture, but chronologically it was R500 -> X1900 -> R600.

      • Bauxite
      • 10 years ago

      Enthusiast 3D hardware is way ahead of software right now… even these mid-range cards could render the 3D of two console games at once without breaking a sweat.

      Of course, there’s always spaghetti code out there to bring them to their knees, or tacked-on eye candy with mediocre gameplay.

        • Meadows
        • 10 years ago

        Not a bad thing; more and more people can afford antialiasing, for example.

      • HurgyMcGurgyGurg
      • 10 years ago

      It’s tempting to look directly at raw performance, and it is true that something like the 5870 could run laps around the RSX in the PS3, but what you do get with console GPUs is intense optimization, to the point where developers can squeeze a lot more out of something like the RSX than they could out of the 7900 it is based on.

      In general, the gap is starting to become more noticeable (especially with resolution), and once DX11 gains traction, the gap will definitely widen. I know, as someone used to an HD 4870, that something like Halo 3 already looks dated, but for now the difference isn’t eye-popping to the average person.

      With the next gen not likely to arrive until 2011-12, it’s probably safe to say that PC gaming won’t disappoint graphics-wise over the next few years while consoles stall even more.

    • toxent
    • 10 years ago

    Another great review! I know you guys are probably overwhelmed right now, but keep up the good work!

    • MadManOriginal
    • 10 years ago

    Finally! 😉 Nice review as usual; thanks.

    Scott, an interesting investigation to put to rest the debate over these cards’ limitations, in particular the 5770 vs. 4870 with regard to memory bandwidth, would be great. It shouldn’t be *too* hard: just run stock, core OC, memory OC, and core+memory OC, then compare the results to see where the limitation is.
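    The four-run sweep suggested here is a simple 2×2 matrix; a sketch of enumerating it (the clock values are hypothetical placeholders, not the cards' real limits):

```python
from itertools import product

# Hypothetical clock settings in MHz: stock vs. overclocked core/memory.
core_clocks = {"stock": 850, "oc": 960}
mem_clocks = {"stock": 1200, "oc": 1400}

# Enumerate the four runs described above: stock, core OC,
# memory OC, and core+memory OC.
for (cname, core), (mname, mem) in product(core_clocks.items(), mem_clocks.items()):
    print(f"core={cname} ({core} MHz), mem={mname} ({mem} MHz)")
```

    If FPS rises with the memory overclock but not the core overclock, that points at a bandwidth limit.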
