AMD’s Radeon HD 5850 arrives

At this point, we know AMD’s new Radeon HD 5870 graphics processor is very fast—roughly as quick as two previous-gen GPUs put together. We know the 5870 introduces DirectX 11 support, improved anisotropic filtering algorithms, and reborn supersampled antialiasing. We know the card actually draws less power than the older Radeon HD 4890, both at idle and under load.

We shared all of those discoveries last week in our review of the Radeon HD 5870.

If you’ve read that review, you’ll also know about the 5870’s formidably long circuit board and its relatively onerous $379 price tag. Those aren’t surprising attributes for a top-of-the-line graphics card, but they definitely call for a more mainstream derivative. That’s where the Radeon HD 5850 comes in. With a $259 suggested retail price and a shorter board, the 5850 serves up a slightly diluted version of the 5870’s potent cocktail. Let’s see if it goes down any easier.

Cypress sheds a couple of branches

Lift the 5850’s cooler, and you’ll find the exact same 40-nm Cypress graphics processor that powers the 5870. AMD has made a few adjustments to go along with the 5850’s lower price tag and smaller footprint, however. First, it’s disabled two of Cypress’ 20 SIMD arrays, along with the texture units that accompany them. Since every SIMD array includes 80 ALUs, or stream processors, and each texture unit can churn out four texels per clock, this change takes us down from 1600 SPs and 80 texels/clock on the Radeon HD 5870 to 1440 SPs and 72 texels/clock on its little brother.
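
If you’d like to check that arithmetic, here’s a minimal sketch in Python. It’s purely illustrative; the per-unit figures come straight from the paragraph above.

```python
# Per-clock throughput math for Cypress-based cards, using the unit
# counts described above. Purely illustrative.
ALUS_PER_SIMD = 80    # stream processors in each SIMD array
TEXELS_PER_UNIT = 4   # texels per clock from each array's texture unit

def cypress_throughput(active_simds):
    """Return (stream processors, texels per clock) for a given SIMD count."""
    return active_simds * ALUS_PER_SIMD, active_simds * TEXELS_PER_UNIT

print("Radeon HD 5870:", cypress_throughput(20))  # (1600, 80)
print("Radeon HD 5850:", cypress_throughput(18))  # (1440, 72)
```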

AMD has also reduced clock speeds to keep the 5850 from flying too close to the sun. Where the 5870’s GPU ticks away at 850 MHz with its memory at 1200 MHz (for an effective 4.8 Gbps data rate), the 5850 runs at 725 MHz with 1000 MHz (or 4 Gbps) RAM. Both cards have the same amount and type of memory, though: 1GB of GDDR5. Considering the resolutions and quality options available in the latest PC games, 1GB seems like the bare minimum for an enthusiast graphics solution. Even the aptly named Radeon HD 4870 1GB comes with that amount by default, and it’s selling for less than $150 these days.
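
Where do those “effective” memory numbers come from? GDDR5 transfers four bits per pin per memory clock, and peak bandwidth then scales with the width of the memory bus. Here’s a quick sketch, assuming Cypress’ 256-bit memory interface (an AMD spec, not a figure quoted above):

```python
# GDDR5 moves four bits per pin per memory clock, hence the "effective"
# data rates quoted above. The 256-bit bus width is Cypress' published
# spec, assumed here rather than taken from this article.
def gddr5_bandwidth(mem_clock_mhz, bus_width_bits=256):
    data_rate_gbps = mem_clock_mhz * 4 / 1000             # per pin, Gbps
    bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8  # whole bus, GB/s
    return data_rate_gbps, bandwidth_gb_s

print(gddr5_bandwidth(1200))  # 5870: (4.8, 153.6) -> 4.8 Gbps, 153.6 GB/s
print(gddr5_bandwidth(1000))  # 5850: (4.0, 128.0) -> 4.0 Gbps, 128.0 GB/s
```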

Viewed from the outside, the Radeon HD 5850 looks pretty much like a shorter 5870. Makes sense, right? The new card has a similar Batmobile-style cooler dressed in black and red hues, but its PCB is a more manageable 9.5″—an inch shorter than the 5870 and the same length as the old 4870. No need to take a hacksaw to your hard-drive cage with this one. You’ll still need a pair of six-pin PCI Express power connectors, though.

The reference 5850 also has the same port configuration as its big brother: a pair of vertically stacked DVI outputs share the ride with DisplayPort and HDMI 1.3a connections. These ports should allow you to use up to three displays simultaneously, just like on the 5870. As we explained last week, AMD’s Eyefinity technology presents such triple-monitor configs to the operating system as one ultra-wide display, so many existing games will happily stretch across it.

What kind of competition is this newcomer facing? Technically speaking, the 5850 has no direct rivals, since it’s one of only two DirectX 11 graphics cards on the market today (the other being the 5870). However, Nvidia offers two DX10 cards in the same neighborhood. There’s the GeForce GTX 285, which has recently dropped to $295.99 after rebate at Newegg, and the slower GeForce GTX 275, which can be nabbed for as little as $209.99. We’ll be contrasting the 5850’s performance with the faster of those two GeForces today.

Before we move on, readers unfamiliar with AMD’s new DirectX 11 graphics processor would do well to peruse our initial look at the Radeon HD 5870. That piece includes all the details and diagrams you’ll need to help wrap your head around the Cypress GPU. If you’re already well-versed in the particulars of AMD’s latest graphics processor, read on for our look at how it performs in the Radeon HD 5850.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 6.4 GT/s (3.2GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F7
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.1.1015, Matrix Storage Manager 8.9.0.1023
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5919 drivers
Graphics:
  Sapphire Radeon HD 4890 OC 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
  Radeon HD 4870 X2 2GB PCIe with Catalyst 8.66-090910a-088431E drivers
  Radeon HD 5850 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
  Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
  Dual Radeon HD 5870 1GB PCIe with Catalyst 8.66-090910a-088431E drivers
  Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
  Dual Asus GeForce GTX 285 1GB PCIe with ForceWare 190.62 drivers
  GeForce GTX 295 2GB PCIe with ForceWare 190.62 drivers
Hard drive: WD Caviar SE16 320GB SATA
Power supply: PC Power & Cooling Silencer 750W
OS: Windows 7 Ultimate x64 Edition RTM
OS updates: DirectX March 2009 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to what you get with no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

The Radeon HD 5850 is off to a promising start here, edging out Nvidia’s GeForce GTX 285, its most direct competitor, at all resolutions. We’re also getting our first glimpse at how Cypress performs after a little pruning. At resolutions of 1680×1050 and up, the 5850 lands almost exactly halfway between the 4890 and 5870. Product segmentation at its finest, folks.

Wolfenstein

We recorded a demo during a multiplayer game on the Hospital map and played it back using the “timeNetDemo” command. At all resolutions, the game’s quality options were at their peaks, with 4X multisampled AA and 8X anisotropic filtering enabled.

The tables turn here, with the GeForce GTX 285 scoring a small win over the 5850. Since Wolfenstein uses a modified version of the id Tech 4 engine, we can probably chalk up Nvidia’s edge to its traditionally better OpenGL implementation.

Left 4 Dead

We also used a custom-recorded timedemo with Valve’s excellent zombie shooter, Left 4 Dead. We tested with 4X multisampled AA and 16X anisotropic filtering enabled and all of the game’s quality options cranked.

The Radeon HD 5850 is back in full force in Left 4 Dead, although this test is a bit like the Yankees crashing a little league game. At 2560×1600 with AA and AF cranked, even the slowest card here runs Left 4 Dead well beyond the 60 Hz refresh rate of most LCD monitors.

Tom Clancy’s HAWX

We used the built-in benchmark tool in HAWX, which seems to do a good job of putting a video card through its paces. We tested this game in DirectX 10 mode with all of the image quality options either turned on or set to “High”, along with 4X multisampled antialiasing. Since this game supports DirectX 10.1 for enhanced performance, we enabled it on the Radeons. No current GeForce GPU supports DX10.1, though, so we couldn’t use it with them.

We’re back in the major leagues here, and the Radeon HD 5850 continues to outpace its GeForce rival at higher resolutions.

In a surprising twist, the Radeon HD 4890 pulls ahead of those two cards at 1680×1050 and 1920×1200. If we were the betting sort, we’d pin the odd result on the 5850’s lower core speed (725 MHz vs. 900 MHz for our 4890), driver immaturity, or some obscure bottleneck in the new Cypress design.

Sacred 2: Fallen Angel

A little surprisingly for an RPG, this game is demanding enough to test even the fastest GPUs at its highest quality settings. And it puts all of that GPU power to good use by churning out some fantastic visuals.

We tested at 2560×1600 resolution with the game’s quality options at their “Very high” presets (typically the best possible quality setting) with 4X MSAA.

Given the way this game tends to play, we decided to test with fewer, longer sessions when capturing frame rates with FRAPS. We settled on three five-minute-long play sessions, all in the same area of the game. We then reported the median of the average and minimum frame rates from the three runs.
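
For the curious, the reduction we apply to those FRAPS captures boils down to something like this (the frame-rate figures below are made up solely to show the method):

```python
# Reducing several FRAPS play sessions to the reported numbers: the
# median across runs of each run's average and minimum frame rates.
# These sample values are hypothetical, purely for illustration.
from statistics import median

runs = [
    {"avg": 41.2, "min": 27.0},
    {"avg": 39.8, "min": 25.5},
    {"avg": 42.5, "min": 26.1},
]

reported_avg = median(r["avg"] for r in runs)  # 41.2
reported_min = median(r["min"] for r in runs)  # 26.1
print(f"reported average: {reported_avg} FPS, minimum: {reported_min} FPS")
```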

This game also supports Nvidia’s PhysX, with some nice GPU-accelerated add-on effects if you have a GeForce card. Processing those effects will put a strain on your GPU, and we’re already testing at some pretty strenuous settings. Still, we’ve included results for the GeForce GTX 295 in two additional configurations: with PhysX effects enabled in the card’s default multi-GPU SLI configuration, and with on-card SLI disabled, in which case the second GPU is dedicated solely to PhysX effects. It is possible to play Sacred 2 with the extra PhysX eye candy enabled on a Radeon, but in that case, the physical simulations are handled entirely on the CPU—and they’re unbearably slow, unfortunately.

The Radeon HD 5850 scores another win against the GeForce GTX 285 here, and it puts the 4890 back in its place, as well.

Crysis Warhead

Although we’ve had a bit of a tough time finding games that will really push the limits of the new Radeons, this game engine is certain to do it. In a true test of GPU power, we cranked all of Warhead’s quality options to their peaks using the cheesily named “Enthusiast” presets. The game looks absolutely gorgeous at these settings, but few video cards will run it smoothly. In fact, we chose to test at 1920×1200 rather than 2560×1600 because at least some of the cards appear to have serious trouble at the higher resolution, almost as if they were running out of video RAM. Anyhow, this is a pretty brutal test, tough enough to challenge even our fastest multi-GPU setups.

For this game, we tested each GPU config in five 60-second sessions, covering the same portion of the game each time. We’ve then reported the median average and minimum frame rates from those five runs.

The 5850’s average frame rate falls almost exactly halfway between those of the 5870 and 4890 yet again. Its minimum frame rate is only slightly higher than the 4890’s, though. A 23 FPS average isn’t really playable to begin with, of course, but this test gives us a rare glimpse at how these GPUs scale when pushed to their limits by a DirectX 10 game.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution, using the same settings we did for performance testing.

Even the Radeon HD 5870 consumes less power than the GeForce GTX 285, so there’s really no contest between the Nvidia card and the new 5850. AMD’s DirectX 11 offerings both have about the same idle power draw—a testament to Cypress’ power efficiency—but the 5850 draws a good 36W less than its big brother under load.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The 5850’s lower power consumption pays off, allowing for much lower noise levels under load. The Radeon HD 5850 is nevertheless a tad louder than the 5870 at idle, although it still does better than the GTX 285 across the board.

GPU temperatures

For most of the cards, we used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card. However, GPU-Z didn’t yet know what to do with the 5870, so we had to resort to running a 3D program in a window while reading the temperature from the Overdrive section of AMD’s Catalyst control panel.

The Radeon HD 5850 runs considerably cooler than either the 5870 or the GeForce GTX 285—an unexpectedly strong showing, seeing as the 5850 has a smaller cooler than the 5870 and lower noise levels under load. What a refreshing change from the blistering-hot temperatures of previous high-end cards.

Conclusions

Well, there you have it. The Radeon HD 5850 manages to outshine the fastest single-GPU GeForce card overall while costing less, drawing less power, and producing less noise. We wouldn’t be surprised to see Nvidia cut prices in the near future, but in any case, the 5850 is hands-down the second-fastest single-GPU graphics card on the market.

Performance isn’t the 5850’s only strength, either. AMD’s newcomer also brings higher-quality antialiasing and filtering algorithms, as well as next-generation DirectX 11 goodness, so it’ll let you enjoy extra eye candy in upcoming games while making old ones look even better. That functionality would be worth a price premium if the 5850 commanded one, but it doesn’t—at least not for now.

TR Editor’s Choice: AMD Radeon HD 5850 (September 2009)

The Radeon HD 5850 is also a compelling solution for quiet, low-power, or compact builds, since it delivers excellent performance with a more reasonable power and noise footprint than previous high-end cards. And it doesn’t have a freakishly long circuit board like the Radeon HD 5870, which is always a plus.

Speaking of the 5870, we’re now left wondering whether that behemoth is worth the $120 premium over its little brother. The 5850 may be slower, but for the most part, it’s still quick enough to generate smooth frame rates at 2560×1600 with AA and AF cranked up in current games (Crysis Warhead‘s “Enthusiast” preset excepted). We’ll have to see whether that changes once DirectX 11 games start hitting stores, but for now, the 5850 sure seems to be fast enough for today’s titles.

One could certainly make a case for the 5870 for users who wish to game on triple-monitor setups or with supersampled antialiasing, since those activities will benefit from greater horsepower. However, for the vast majority of even hardcore gamers, the Radeon HD 5850 looks to be plenty powerful.

Comments closed
    • Thanato
    • 10 years ago

I’ve got two 4850s in CrossFire (monitor: 1680×1050). My guess is that the 5850 wouldn’t really be an upgrade speed-wise, as they would be about the same speed. Two 4850s are about 20 to 30% faster than a 4890, which seems to be where the 5850 is…

    I have yet to come across a review that compares the two.

      • indeego
      • 10 years ago

Your resolution is limiting you. You should invest in a larger display before thinking about a new graphics card.

        • Thanato
        • 10 years ago

Yeah… that would be the way to go.

I like an Eyefinity setup… hmm, but it might take up too much space. I wonder if I could get creative with one large monitor, like 1920 x 1200, and two smaller side monitors (1280 x 1024) and turn ’em sideways.

    • Zorb
    • 10 years ago

I’m sorry but…. this card looks like it kicks it big time at the opening price point, even if DX11 is a non-factor…. good work AMD!

    • Convert
    • 10 years ago

    Almost missed the review, given the picture I thought it was the 5870 and just kept browsing.

    One question, do you plan on keeping that list of cards in future reviews or are we going to see a 5850 vs older cards? Was just kind of odd not to even see a 4850 in the mix at least.

      • indeego
      • 10 years ago

I believe because TR just changed their graphics test bench.
Compare those benches with the ones from this review:
https://techreport.com/articles.x/16504/1
The same system was [roughly] used. Different OS/drivers/BIOS though.

    • NIKOLAS
    • 10 years ago

    #54 – Lycium
    I’m not sure where you get your 50% figure from?
Looking at the review that these comments are attached to, I see the following performance advantages at 1920 x 1200 or greater resolution (in the case of Sacred 2 and Crysis Warhead):

    19%, 18%, 17%, Even, 25%, 15%

    Perhaps Direct X 11 will be worth it, but if one had a 1gig 4870 or 4890, I would be holding onto it till well into next year at least and then see what was on offer.

    #55 – Cegras
I am all for improved power efficiency, but as the 4890 isn’t a massive power hog, the 5850’s improved efficiency, whilst of course welcome, isn’t a massive attraction for me at least.

    #56 Khands
    You might be right, but as a betting man I would say that Nvidia will probably end up with the fastest single GPU card, simply because I don’t think the 58xx series is as big an advance as I thought they would be able to achieve. I would really like to see a 2gig version of the 5870, that would get me a bit excited. Unfortunately Nvidia is one of the worst price gougers in the computer industry, so if they end up fastest, they will exploit it ruthlessly.

    #60 asda

    You need to be less childish in your responses if you want me to take you seriously.

    #65 flip-mode

    Thanks. You got exactly the point I was making.

    #80 Walt C

    True, and I guess I am thinking Nvidia will be able to respond by Christmas time, but maybe they will make everyone wait till Q1 to see if their offerings are any better.

    But the thing is, the 5850 and 5870 just don’t blow me away, so I am more than happy to wait and see what Nvidia bring out. Maybe then ATI will release a 2gig version of the 5870.

    At this stage I am not planning to buy less than a 2gig card.

      • ssidbroadcast
      • 10 years ago

      @ #65 : That is why he is on point.

      • asdsa
      • 10 years ago

You need to be less like a green troll for me to take you seriously. The only thing you do here is spread negative comments about the HD 5850, which clearly beats the GTX 285 in performance, price, power consumption, and heat dissipation. That’s not even childish, that’s stupid.

      • glynor
      • 10 years ago

      All indications are that it will *[

        • NIKOLAS
        • 10 years ago

Yeah, it looks like Nvidia has no chance now of shipping before Christmas, so ATI will have the high end all to themselves.

I’ll still be holding off until I see what a 2gig 5870 can do, and/or Nvidia’s next offerings if they manage a release before I am ready to buy.

          • Krogoth
          • 10 years ago

IMHO, 2GiB isn’t worth the wait, unless you intend on running them in a CF configuration.

          A single 5870 just does not have the pixel-pushing power to handle situations where 1GiB of video memory does not cut it.

            • NIKOLAS
            • 10 years ago

            I might be getting a 30″ monitor, so I would be playing at 2560 x 1600.

            The reviews will tell the story I guess.

            As it is, I have the luxury of waiting till my current computer dies, or StarCraft 2 and RAGE come out.

            Either of those events will be my trigger to upgrade/replace my current old box.

            • Meadows
            • 10 years ago

Wrong. There are games that can allocate more than 1 GiB of video memory, and I know one that can do more than 1.5.
It can go up from here, and future-proofing is not a stupid idea.

            • Krogoth
            • 10 years ago

            Are you referring to games that are horribly coded pieces of junk like GTA4? Please.

IMO, 2GiB is a waste on the 5870 for gaming purposes. It is no different than budget GPUs that ship with 1GiB of memory. The 5870 won’t have the pixel-pushing power to matter by the time 1GiB no longer cuts it; there will be far faster GPUs by that point.

The current bench suite on TR yields little evidence that the 5870 is currently VRAM-bound at 1GiB. It is more likely memory-bandwidth bound. The 4870 X2 manages to squeeze ahead of the 5870 in areas where memory bandwidth starts to matter. FYI, the 4870 X2 has more bandwidth than the 5870 despite having to deal with CF.

Future-proofing is an oxymoron. You should always build your systems based on your needs of now, not on the needs of what-if.

            • Meadows
            • 10 years ago

            Your opinion is irrelevant, and your information outdated.


            • Krogoth
            • 10 years ago

            Huh, out of date information? HD 5xxx are only a week old and their reviews are out of date? Are you from the future now?

It is clear that 1GiB of VRAM only matters when you render the current round of game engines at 1920×1200 and beyond with AA + AF on top of that. I honestly doubt that 2GiB is going to make much of a difference for the 5870. It is better to just wait for a refresh or beyond to jump onto the 2GiB VRAM bandwagon.

      • Lazier_Said
      • 10 years ago

      “I am all for improved power efficiency, but as the 4890 isn’t a massive power hog …”

      The 4890’s power draw is very reasonable for its performance in games.

The other 95% of the time, 70 watts to draw the Windows desktop is far more than any other contemporary single GPU needs.

        • moriz
        • 10 years ago

that’s because early GDDR5 generations cannot put themselves into lower power states. that alone accounts for its high power usage while idle.

      • ish718
      • 10 years ago

      You’re thinking too much about hardware, think software for a second.

      When a new card comes out, it’s not like games automatically scale up and take advantage of the new hardware.

The HD 5870 has 1600 SPs while the HD 4890 only has 800 SPs, not to mention TMUs, ROPs, etc. have been doubled. Yet we rarely get even close to double the performance; this is where software comes into play.

      And oh yeah, TR needs to include more games in their reviews…

      EDIT: I was skimming through some hd5870 reviews from various sites.
      Dead Space gets almost double the fps going from a HD4870 to HD5870.

      The new batman game gets around a 70% increase in fps going from HD4870 to an HD5870.

    • boing
    • 10 years ago

    Any single-slot version of this coming out?

      • MadManOriginal
      • 10 years ago

      Definitely not with these chips but probably with some of the lower-end derivatives.

      • Kurotetsu
      • 10 years ago

      On the Wednesday shortbread I found out that Galaxy is releasing a single-slot GTX260-216 model:

      §[<http://vr-zone.com/articles/galaxy-gtx260-razor-edition-is-the-world-s-first-single-slot-gtx-260-/7770.html?doc=7770<]§ It uses the same vapor chamber technology that Sapphire implements in its Vapor-X cards. Now comparing the heat and power consumption numbers of the 5850 compared to the GTX260-216: §[<http://www.techreport.com/articles.x/17652/9<]§ <-- 5850 §[<http://www.techreport.com/articles.x/15651/11<]§ <-- GTX260-216 (stock/non-Reloaded) 114 (GTX) vs. 118 (5850) Idle 260 (GTX) vs. 254 (5850) Load 76 C (GTX) vs. 67 C (5850) Temp Under Load Their numbers are similar enough, with the 5850 doing better overall, that I can see a single-slot vapor chamber cooling system being quite reasonable for the 5850.

        • MadManOriginal
        • 10 years ago

Interesting, although I’d still much prefer an exhausting heatsink given the power dissipation of these cards. It’s nice to see card makers coming up with different ideas though… I wonder if they see the application for a card like that in HPC rather than consumer sales.

    • shank15217
    • 10 years ago

Right now, if AMD closed its platform to just AMD chips, boards, and graphics, they would have the fastest/cheapest overall platform.

      • flip-mode
      • 10 years ago

      Awesome. And then they’d crash like the stock market. Short term greed = long term FAIL.

        • shank15217
        • 10 years ago

Yea, I don’t think it’s a good idea, but they could potentially pull it off (with bad repercussions in the future).

          • derFunkenstein
          • 10 years ago

          then how is that “pulling it off”

            • flip-mode
            • 10 years ago

            LOL, exactly.

            • shank15217
            • 10 years ago

You first need the actual products to do what I suggested, and they have them. It may not make business sense, but it’s a business decision and not a technological one. No one else could do it even if they wanted to; that’s my point, get it?

            • flip-mode
            • 10 years ago

            How does your response answer his question? It doesn’t. It’s not exactly going to kill you to just say “yeah, you’re right, I hadn’t thought about it that way.”

            • shank15217
            • 10 years ago

Exactly what did he say that was worth responding to except a snide remark? I just said AMD has a set of technologies that could allow them to close their hardware platform and still be effective. It’s probably a bad idea, but I don’t see you or anybody else making that observation, unless of course it’s just such an obvious observation that it’s not even worth making… if so, forgive my novice remark.

            • flip-mode
            • 10 years ago

            -[

            • derFunkenstein
            • 10 years ago

            There are only two ways to “pull something off”

            1.) Grab it and yank
            2.) Successfully implement an idea

            I suppose your hypothetical might meet #1. LMAO

            • shank15217
            • 10 years ago

Ok, you can successfully implement the idea for a period of time and then have it fail; that happens all the freaking time. Do you call everyone who fails after some time a failure? If so, you have a very high standard; I hope you can back it up in life.

            • derFunkenstein
            • 10 years ago

            if it’s a money loser from day one (which this would be and we all know it – apparently that includes you since you’ve conceded that point) then it’s never been “pulled off”. You’re redefining phrases here to look like less of a jackass, an art form that I wish I could master, but you’re not really pulling that off, either. 😉

      • WaltC
      • 10 years ago

      Nah, it would cost them more to close it than to maintain the status quo.

      • Kurotetsu
      • 10 years ago

      They could, maybe, sell the idea to Apple. They actually have the marketing muscle to make a closed platform work. There has been some fairly wild speculation that new, cheaper Macbooks might use AMD hardware.

    • esterhasz
    • 10 years ago

    Nice review and great card! Two things I’d like to hear more about though:

    1) It’d be really interesting to put the 5850 in an older system (like an Athlon X2 with a 8800 or something in that ballpark) and see how big the real-world difference is.

2) In the 5870 review at hardware.fr there’s a fun little fact about the new Batman game locking AA functions when an ATI card is detected. Apparently a little driver swindling (vendor ID crossdressing) gets AA back to work, at least in the demo. Sounds like perfect podcast rant material.

      • DrDillyBar
      • 10 years ago

      TWIMTBP

        • Bauxite
        • 10 years ago

        TGVIPNGTBA

        The gpu vendor I’m probably never going to buy again.

        I’m not some brand-only fanboy (and have a long vidcard history from voodoo1 to back it up) I’ve just been burned enough on multiple fronts.

The laptop BS was the worst: the OEM gets some cash out of it and ‘extends’ my warranty for a useless year while turning the fan to full load 24/7, but I’ll never get a dime. Doing driver/app shenanigans is just icing on that cake.

          • Krogoth
          • 10 years ago

          TGVIPNGTBA?

            • MadManOriginal
            • 10 years ago

            He writes out the acronym right after saying it. n00b!

    • Lord.Blue
    • 10 years ago

    I was wondering if I could mount one of these along with the 4870 that I have.

      • cygnus1
      • 10 years ago

      Get a motherboard with that Lucid Hydra chip and you can!

    • indeego
    • 10 years ago

    As an Nvidia investor, this is the card I’ll probably get in Q2 2010. Well done *[

    • Chrispy_
    • 10 years ago

    I really want one of these but reasoning says there’s no point….

    I have a 1920×1200 display and every game I have runs at max or high settings at a smooth 40+ fps, more often than not it runs at the 60fps cap. Yes, I have a quad-SLI GTX295 setup.

Oh wait, no. I have a 2-year-old GeForce 8800GTS 512 and a relatively rubbish Q9550 on an ancient Intel 965 chipset.

I blame the consoles for forcing games to run on rubbish hardware. Developers aren’t prepared to put the effort into the PC versions, and the only games that really shine on PCs still look good on my obsolete card. Crysis struggles a little bit on max, but I can hit the vsync with a little tweaking, and let’s face it: “almost-max” settings at 1920×1200, 60fps is still a damn sight prettier than the Xbox 360’s 720p 30fps. Maybe it’s 1080p, but I think it’s upscaling on my 360 where other games don’t.

      • cygnus1
      • 10 years ago

      All 360 games are 720p or less, upscaled/downscaled to whatever resolution your TV is running. I say or less, because Halo 3 broke the 720p requirement, and runs at slightly less, 1152×640 resolution instead of the usual 1280×720.

http://en.wikipedia.org/wiki/Halo_3#Graphics

I wouldn’t be surprised if other games run at less, too.

        • Prion
        • 10 years ago

        PGR3 is another famous example, although of course it was allowed to break the requirements in order to make launch

      • Freon
      • 10 years ago

Yeah, I really do think things are slowing down on the game graphics front. Right now it is just at the point where you can run the same games, but at a higher resolution, with better or equal frame rates and extra FSAA or anisotropic filtering. Strict image quality, ignoring those factors, is pretty much equal, short of maybe some of the Crytek stuff with custom CFG files.

Moving forward, more expensive cards will have little to offer if all we’re playing is console ports or cross-platform games. I think this is almost the end of the line for rapid PC graphics advances, at least in a widespread consumer sense. I’m not sure how successful any of these next-generation PC cards will be, since the last generation or two are still relatively fast.

        • Chrispy_
        • 10 years ago

        The problem is the amount of work that has to go into next-gen graphics.

        Just because the hardware can handle fancy, realtime shading on high-poly models doesn’t mean that developers can be bothered to model and texture in more detail 😛

      • NeXus 6
      • 10 years ago

      Agreed. Unless you really need to upgrade for better performance, there’s little reason to buy a new video card. Don’t buy this card for DX11 unless you like waiting.

      • yogibbear
      • 10 years ago

      Haha I have an 8800gt, q9450 and p35 chipset…

      Absolutely no reason to upgrade. Yet.

      F^** @ consoles stifling game development.

    • tarantula1947
    • 10 years ago

    Look at it this way … the much lower power consumption/heat factor … plus the $100.00 or more price difference …. it would be the perfect setup to buy 1 now and add a second later without breaking the bank or over drawing your power supply requirements …and they’re smaller to fit in more cases . Plus if you’re … which everyone will be doing … going to run win 7 … which is DX 11 … it’s a NO BRAINER ! Once again nvidia has lost the battle … price drops ??? Get real …. nvidia does’nt know what they are …. LOL !
    And the prices of the next gen nvidia’s … just look at the prices of the last bunch … what a nightmare … overpriced obsolete for future and present win 7 capabilities …. as usual nvidia is 2 years behind on technology .

      • UberGerbil
      • 10 years ago

      Could you…use more…annoying…ellipses…please?

        • derFunkenstein
        • 10 years ago

        Don’t…mince words…Bones…what…do you really…think?

        /Shatner’d

          • alphaGulp
          • 10 years ago

          Lol

          It’s not every day I get to see someone get shatner’d – v. much appreciated 🙂

          • UberGerbil
          • 10 years ago

          But…Spock!…What…would…Intel do?

            • derFunkenstein
            • 10 years ago

            intel is illogical

            whats wrong with them?

            • ssidbroadcast
            • 10 years ago

            Scotty, two nerds to beam up.

      • Kaleid
      • 10 years ago

      GT300 seems to be a beast:
http://www.techpowerup.com/104942/NVIDIA_GT300__Fermi__Detailed.html

But it has way more power than I’ll need for quite some time. It’s not really worth it buying high-end cards.

        • StashTheVampede
        • 10 years ago

Of course it’ll be a beast … at $499-$599! The last rumors also have a low yield — that card will carry a price premium for sure! I’m sure the “lesser” $250-$300 card will be in demand, but it has definite competition that is already on the market.

    • Freon
    • 10 years ago

    Looks like a good buy for its price point, and that power consumption and noise level is just absolutely superb. Good job, AMD!

    Great review as always. I look forward to followups.

    • flip-mode
    • 10 years ago

    I have a hunch what the Deal of the Week will be, pending availability.

    • phez
    • 10 years ago

    Are we going to see a DX11 review anytime soon?

      • UberGerbil
      • 10 years ago

      That’s going to wait on the release of DX11 games. And the Windows 7 general release isn’t for another three weeks, so you’re getting way ahead of yourself.

      Yes, there are some “DX11” patches out now, and Win7 is obviously available, but that’s hardly enough to justify a whole new set of benchmark testing just yet.

    • Ryhadar
    • 10 years ago

I’m not much of an overclocker, though I’d certainly make an exception for the 5850. I saw a few sites get up to 775MHz core easily with CCC, and Guru3D got up to 820MHz core with RivaTuner. With an overclock like that, you would have a $260 card that closely rivals its $380 older brother. Exciting stuff and great review, guys!

      • spigzone
      • 10 years ago

Firing Squad claimed an 878/1400 MHz overclock on one of their two 5850 boards using AMD’s GPU clock tool, which bettered their stock 5870 board’s benchmarks.

Six months and a stepping or two down the road, the overclocking potential of the 5850 will be mind-boggling, and the performance potential at or below $200 staggering.

    • SubSeven
    • 10 years ago

    l[

      • flip-mode
      • 10 years ago

      I hadn’t caught that. Nice, Scott!

        • SubSeven
        • 10 years ago

        Shame on you! You of all people! Well, at least Silus is happy now 🙂

          • flip-mode
          • 10 years ago

          Sorry! Skim-reading FTL! Gawd! It’s even the very first sentence. flip-mode FAIL.

      • Meadows
      • 10 years ago

      What’s the matter when it’s true?

        • flip-mode
        • 10 years ago
          • Meadows
          • 10 years ago

          The bolded part of the green quote?

            • flip-mode
            • 10 years ago
            • WaltC
            • 10 years ago

            Well, it seems to me that saying “as fast as two previous products put together” is the same thing as “twice as fast.” Especially since the 5850 *is* literally “two previous gen gpus put together” plus some…;)

            • Meadows
            • 10 years ago

            Except this is about the HD 5870, but the point does stand regardless.

            • flip-mode
            • 10 years ago
            • WaltC
            • 10 years ago

            I really don’t know what you’re thinking about…;) I don’t know of any gpu that when doubled in Xfire or SLI always provides “twice the performance” across the board consistently. A lot depends on how the games are tested and how much of the hardware a given game actually stresses. OK, so if we don’t expect that in Xfire or SLI, then why expect it in a single gpu that is literally doubled (just like 2 cards xFire’d or SLI’ed is doubled over one) in terms of hardware resources?

            Besides, as I mentioned in the other thread, under the right testing conditions, and under the right game, it’s possible that we’d see more than doubling in terms of performance, when things like DX11 hardware tessellation are examined. Then there are other features to consider which aren’t even directly comparable in terms of performance, either, like SS FSAA.

    • puppetworx
    • 10 years ago

    *ZOMFG mis-post*

    • honoradept
    • 10 years ago

this damn card will be worth the double jump from a 3850 256MB

    • derFunkenstein
    • 10 years ago

    Sweet card, but you’ll need a 2 megapixel display before you can get anything out of it. My 1440×900 monitor can’t even make my Radeon 4870 sweat.

      • sigher
      • 10 years ago

      You need to try some new games and not only play quake3.

        • glynor
        • 10 years ago

        Um…. What?!? The 4870 can push pretty much anything you throw at it at 1440×900, except maybe Crysis on “Enthusiast”…

          • Meadows
          • 10 years ago

          It can push that too, just skip antialias.

            • swaaye
            • 10 years ago

            At 1440×900, AA should be quite usable.

          • derFunkenstein
          • 10 years ago

          you obviously didn’t get what he was saying – he’s just saying that I need newer games if I want to work the 4870 hard. He’s wrong, AFAICT, because I have everything I play set to 8x AA and the highest details I can get (running into CPU boundaries eventually, though).

            • glynor
            • 10 years ago

I got what he was saying. He was just wrong. There is very little benefit you would get from moving to a faster video card at that low a resolution.

            You are right… Any upgrade you do to that machine should first go towards a new monitor.

        • derFunkenstein
        • 10 years ago

        The NFS Shift demo and GTA 4 are all I’m playing right now, and both of them are running at some finely-tuned settings that generally leave me CPU-constrained (and refresh rate-constrained) in both games. Even Supreme Commander with everything turned up to 11 and 8x AA runs at 60fps at all times.

        I just need a new monitor.

          • DrDillyBar
          • 10 years ago

          2048×1152 is serving me well

            • derFunkenstein
            • 10 years ago

            well goodie for you. :p

            • DrDillyBar
            • 10 years ago

            touchy 🙂

          • glynor
          • 10 years ago

          My 4870 1GB in my HTPC runs both of those games perfectly fine at 1080p resolution as well, so even a small bump in resolution probably wouldn’t require a new card (GTA does require some slightly lower settings at 1080p on the original 512MB 4870 cards if you have one of those).

    • puppetworx
    • 10 years ago

    :0 Look at those load numbers!

    Seems like these new ATIs are scaling beautifully in CrossFire and to top it off they use a fraction of the power of previous gens/NVIDIA.

    If I was going for a mainstream card I’d go ATI and if I was going extreme I’d get a bunch of them. We’ll see in a year’s time when I actually have to update my 4850.

    • brsett
    • 10 years ago

    Why didn’t you test crossfired 5850s?

    • NIKOLAS
    • 10 years ago

    Sorry, I can’t see why this card is getting so much praise.

    It is not that much faster than the 4890.

    I suspect that Nvidia will do better against the 58xx series than they did with the 48xx series, once they release their next gen card(s).

      • lycium
      • 10 years ago

      usually 50% faster, cooler, better texturing, dx11

      • cegras
      • 10 years ago

      Efficiency?

      • khands
      • 10 years ago

      I would hope that they can pull some massive performance gains over their predecessors, but if AMD decides to release a 5890 at some point I really don’t see Nvidia being able to take /[

      • asdsa
      • 10 years ago

If you took the green glasses off, you could actually see something. You are comparing apples and oranges. The HD 5850 is not the top model, like the HD 4890 was for the previous generation. Also, DX11 codepaths will make things even faster, as seen e.g. here: http://www.fudzilla.com/content/view/15721/1/

        • flip-mode
        • 10 years ago

        Respectfully, I say that the “stop comparing apples and oranges” rhetoric is waaay overused. He’s comparing a video card to a video card, so at worst he’s comparing an Ambrosia to a Winesap – one kind of apple to another kind of apple. And I think you missed his point, which was not to compare performance from one generation to the next but was more to say that he does not understand all the excitement given that one can grab a 4890 for less and still get similar performance. Also, don’t bring out the “green glasses” polemic defense until the comment is truly appropo – he was pitting ATI against ATI, so I don’t see any green there.

          • BoBzeBuilder
          • 10 years ago

          On point, flip-mode is.

          • puppetworx
          • 10 years ago

          Appropo is my word of the day.

      • WaltC
      • 10 years ago

      I think a major point has to be that at the moment nVidia has no next-gen card to release to compare to the 58xx series.

    • darryl
    • 10 years ago

    I don’t get why you guys are drooling over this 5850 GPU. Just because it uses less power (and thus saves a bit of energy), and just because it beats the GTX285 in all those games (I was holding out for one of those), and just because it costs less than the -285 (and is quieter too?) doesn’t mean it’s a better…
    /[

    • yogibbear
    • 10 years ago

    So the TR systems guide is already out of date again…

    Oh the horror!

      • khands
      • 10 years ago

      Yeah, I was hoping they’d wait for these to come out and release the guide in October, but oh well.

        • UberGerbil
        • 10 years ago

I don’t know, you think maybe we’ll get another buying guide in about a month, after Windows 7 is out and the rest of the AMD line (including the Juniper designs) have been released? I know it’s hard to imagine, since they are…
https://techreport.com/system/

          • indeego
          • 10 years ago

Needs more SATA 6Gb/s, USB3, mini/micro-ATX… But yeah, end of November 2009 almost assuredly.

            • khands
            • 10 years ago

            Maybe a Christmas rendition this year?

          • wibeasley
          • 10 years ago

          You’re in a grouchy mood today.

            • UberGerbil
            • 10 years ago

            Hmmm, perhaps I am.

    • colinstu
    • 10 years ago

    Me-likey!!!

    • rostam
    • 10 years ago

    I would like to see the most demanding games tested with AA off in addition to on. Crysis Warhead, for example, can probably go from an unplayable framerate to a playable framerate on the 5850 just by turning off AA. I think it would represent a more real-world scenario for owners of the lower-end cards.

    In the past AA-enabled benchmarks have tended to favor Nvidia even though the visual quality increase is arguably very minor, especially at high resolutions.

      • ImSpartacus
      • 10 years ago

Unplayable with AA? I run two monitors (1080p & 1280×1024) with my 4890. I can push Crysis on max settings with like 2-4x AA (depending on the level) @1080p and get a playable framerate. I’m thinking my fps isn’t much north of 40, but I don’t notice it.

        • WaltC
        • 10 years ago

        Imo, generally 30 fps is perfectly playable and you usually won’t notice stalls unless the minimum dips below 15 fps. You always know when a game is playable when a player is too busy playing the game to worry about getting exact frame-rate counts…;)

          • Meadows
          • 10 years ago

          “Imo” your eyes are broken.

            • WaltC
            • 10 years ago

            IMO, not a chance…;) Yours must be addicted to frame-rate charts…;)

            • Meadows
            • 10 years ago

            No, but I know 30 fps when I see it, and it’s not well playable enough.
            Motion blur helps, but things only begin to turn smooth north of 40 (preferably 50) with motion blur, or north of 60-80 without.

            • WaltC
            • 10 years ago

            I’d be willing to bet that you wouldn’t know the difference between 30 fps and 130 fps, provided both frame-rates were consistent. I’ll bet that if you did a double blind test with two people that neither would be able to tell which was which–provided both frame rates were consistent.

            • khands
            • 10 years ago

            Dude, your eyes are borked. I’m 1/4 blind and I can see the difference between 30 and 60 fps.

            • WaltC
            • 10 years ago

OK, fine – please tell me what to look for so I can see the difference, too…;)

            • khands
            • 10 years ago

            specifically, more fluid movement on /[

            • WaltC
            • 10 years ago

I really did not intend to talk about this so much…;) But this is exactly the sort of response I thought I’d hear, which indicates to me that we actually aren’t talking about a consistent 30 fps, but about a drop from 30 fps sufficient to result in noticeably less smooth movement, camera action, etc.

            What happens is that when people think they “notice the difference” between 30 fps and 60 fps, or between 120 fps and 60 fps, what they are actually noticing is how often the game drops *below* 30 fps to 15-20 fps, which is where the frame-rate stalls momentarily to the degree that is noticeable. Obviously, the difference between 30 fps and 15-20 fps is much shorter than the distance between 60 fps and 15-20 fps, so a 3d-card with sufficient power to hit 60 fps is obviously going to hit 15-20 fps far less than a 3d-card that can only do 30 fps. But that wasn’t what I was talking about at all…;) I still don’t think that given a double-blind test between a consistent 30 fps and a consistent 60 fps–which means no dropping to 15-20 fps, ever–that the test subjects would be able to tell the difference between the frame-rates.

            This opens up a whole new can of worms such as the psychological impact of benchmark frame-rate bar charts on what people expect to see and on what they believe they perceive when playing various games with various pieces of hardware. We could talk about that all day long–but who wants to?…;)

            • Meadows
            • 10 years ago

            Seriously, comment #108 is too correct. Not only can I see a blatant difference between 30 and 40 or 40 and 60 fps, but I can easily spot the difference between 60 and 80+, and I believe that with a little effort, I could tell you 80 fps from 120+ in a really fast-paced game.

            And I’m far from alone making the above solid claims.

            Just because you never use your eyes doesn’t mean everyone else has to be clinically blind as well.

            Edit: do place your bet though, I need some money anyway.

            • MadManOriginal
            • 10 years ago

            I see shades of mp3 bitrate arguments… Meadows has goldeneyes! 😀

            • Meadows
            • 10 years ago

            This is a bit different, since vision is far less subjective than hearing.

            • MadManOriginal
            • 10 years ago

            If you say so, but if I say I can tell the difference between 256kbps+ VBR mp3’s and lossless and much prefer the latter don’t poopoo me :p

            • OneArmedScissor
            • 10 years ago

            How the hell can you stand watching movies, then? :p

            Do you realize that “real life graphics” are “only” 24 FPS?

            • moriz
            • 10 years ago

            movies get away with 24 fps because of motion blurring, and also you are not in control of it.

            if memory serves me correctly, i recall reading that the human eye can see at around 133 MILLION FPS in low light/black+white situations, and few hundred thousand FPS in colour. so no, “real-life” graphics are way more than 24 fps.

            EDIT: errr, maybe not that high, but certainly well beyond what a computer screen (and system) can deliver.

            • Meadows
            • 10 years ago

            Those are very high numbers, but north of 200-300 Hz (fps) is definitely plausible for many people. As far as I’m aware, this is only specifically tested on special people, such as some volunteering pilots or marines or NASA janitors or the like.

            • Meadows
            • 10 years ago

            I do, and it does annoy me. Movement is jerky, but luckily your eyes can interpret or interpolate natural motion at as low as 15 fps. The real problem is certain anime productions which fall very well short of even that puny standard, therefore being terrible to look at unless you’re really immersed in the presentation.

            Cinemas get away with it because it’s dark. In fact, /[

            • crazybus
            • 10 years ago

            Most cinemas actually project at 48Hz in order to decrease visible flicker.

            • Meadows
            • 10 years ago

            Still less than TV (which does the exact same in fact, just via interlacing), and TV is still a bit slow.

            • crazybus
            • 10 years ago

            You can’t really compare the flicker caused by the rotating shutter of a film projector with flicker caused by the limited scan rate from a CRT display (which is what I assume you meant by TV). Watching a 24Hz film in a cinema projected at 48Hz is definitely not worse (in terms of flicker) than watching the same film telecined to 60 Hz on a standard CRT.

            • Usacomp2k3
            • 10 years ago

I would too. Very openly. (Although I’d rather see it bump to 60fps, not 48, just for compatibility’s sake; you’d need to have a 240Hz set to avoid 3:2 pulldown.)

            • cegras
            • 10 years ago

There’s a difference: are you really getting 80 FPS, or are you getting 80 FPS on average? 120 FPS on average in a hectic game may mean the minimum is higher, but if the minimum fps is 60, maybe you won’t be able to tell.

          • Freon
          • 10 years ago

          Ug, 30 fps is not that great for a fast paced game, especially an FPS.

          • StashTheVampede
          • 10 years ago

          Says the non-heavy FPS player.

Go ahead and try to play an FPS at a locked 30fps and at a locked 60fps. Going for 30fps is utterly painful and frustrating — enough that you will quit in no time.

          Even back when I used CRTs for gaming, I’d have a resolution that allowed 120Hz — I easily could see the difference between 60 and 90fps!

          Graphics displayed on CRTs and LCDs aren’t the same as film and movies, you truly need more FPS!

            • ironoutsider
            • 10 years ago

            Aye!! Having north of 40 FPS makes a HUGE difference. 30fps is crap. No one should have to play at that rate.

            • swaaye
            • 10 years ago

            >30fps is a luxury. Maybe not for shooter fans this decade, but man how bout some of that 15fps N64 action. 🙂

            Crysis has some sort of framerate smoothing that allows 20-30fps to feel smoother than it does in most other shooters.

            I’ll happily argue that 30 fps is ok in a shooter unless you’re one of those competitive shooter fans. I’ve watched those people run games at ugly ass settings just to get the extra frames.

            • Meadows
            • 10 years ago

            It’s called “motion blur” and a lot of games have it.

            • indeego
            • 10 years ago

            I’ve been playing fps games for ~15 years and I notice the difference between 30 and 60, /[

            • spiritwalker2222
            • 10 years ago

30 fps is playable, but you’re at a distinct disadvantage. I play FPSes at ugly settings so I can get ~100 fps; that way, when I get a dip in frame rates, it should rarely go below 60 fps. And the dips always happen when the action gets hot.

            • Bauxite
            • 10 years ago

            You still play on a CRT?

Even I gave up a while ago, and I have one of the best CRTs ever made, a 24″ Sony widescreen. (Yes, they do exist.)

            LCDs are capped at 60hz inputs, yes even pretty much all the “120hz” marketing babble types out right now. While they help with some video, they don’t really do 120hz on inputs. (or outputs to be honest, lcd pixels aren’t instant)

            • spiritwalker2222
            • 10 years ago

            Yes, I still have my Sony trinitron. But I don’t play fps much anymore so my LCD works fine for everything else I do.

          • Bauxite
          • 10 years ago

          Your eyes are broken, and google “how many frames can the human eye see” for some enlightenment.

            • Krogoth
            • 10 years ago

            The entire FPS (frames per second) argument is filled with so much misinformation.

It is true that the human eye is capable of seeing 200+ FPS. The visual cortex (what is really responsible for your vision) is typically the bottleneck for the majority of individuals. It also becomes more complicated with the animation in question.

Rendering animation at such a high frame rate only yields benefits for super-fast motion (a race car zooming by at close range) that would otherwise look choppy. This is not found in fast-paced FPSs (first-person shooters).

            60-85FPS (Frames per Second) is found to be the sweet spot for the majority of users in a fast-paced FPS (First-Person Shooter). Anything more is a waste of resources for very little gain if any. For this reason, game programmers hard-code their stuff to render at this framerate at maximum for timing and syncing reasons.

This was not the case with older games, where systems were much slower and games were coded to go as fast as possible. The unfortunate consequence is that if you play those same older games on a modern rig, they run at “turbo” speeds that make them practically unplayable. You need to force the game to cap its rendering at a more sane framerate. Enabling vsync usually works, but it may require additional tweaks.

            The main benefit of having your hardware capable of rendering games at 100+ FPS is to give users overhead for “hot” scenes where there is a ton of effects and actors going on the screen without it becoming a slide show (happens a lot on busy servers).

            24-30FPS does work well enough for movies and games that lack any fast-pace action or motion.
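            The cap itself is trivial to implement: the engine just sleeps off whatever is left of the frame budget each iteration. A rough sketch of the idea in Python (the 60 FPS target and function names are illustrative, not any real engine’s code):

                import time

                TARGET_FPS = 60                # illustrative cap, in the 60-85 FPS sweet spot
                FRAME_TIME = 1.0 / TARGET_FPS  # frame budget: ~16.7 ms at 60 FPS, ~11.8 ms at 85

                def update_and_render():
                    """Stand-in for a game's per-frame simulation and draw calls."""
                    pass

                def run(frames=300):
                    for _ in range(frames):
                        start = time.perf_counter()
                        update_and_render()
                        # Sleep off the unused part of the frame budget so the game
                        # can't run at "turbo" speed on hardware faster than expected.
                        leftover = FRAME_TIME - (time.perf_counter() - start)
                        if leftover > 0:
                            time.sleep(leftover)

                run()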

            • Meadows
            • 10 years ago

            “60-85 FPS is found to be the sweet spot for the majority of users in a fast-paced FPS. Anything more is a waste of resources for very little gain, if any. For this reason, game programmers often cap their engines at such framerates for timing and syncing reasons.” That’s […]

            • Krogoth
            • 10 years ago

            No, it is because most people cannot perceive the subtle differences that happen beyond 85 FPS under fluid, fast-paced animation in a FPS.

            Why waste resources on something that a tiny minority may be able to make out under controlled circumstances?

            It does not help that the “placebo” effect plays a role in convincing people that an 85+ FPS framerate is outright superior to a 60-85 FPS one.

            • Meadows
            • 10 years ago

            It’s not just placebo, but these things are restricted to pro gamers only.

            • Krogoth
            • 10 years ago

            Funny, that is the crowd that typically falls for the “placebo” effect. What makes it funnier is that the built-in locks are enforced even more strictly in network games, for syncing reasons.

            They blame the system, rather than themselves, as the bottleneck for reaction time.

            There is one major “eSport” game where it made a difference, for a different reason. Quake 3’s built-in physics engine was infamous for quirky behavior when it operated at 100+ FPS: it allowed players to jump, run-jump, and bunny-hop farther than the crowd running at 60-85 FPS. You can obviously see that it gave the “eSport” professionals a competitive edge.
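            To see why framerate-sensitive physics behaves that way, consider a jump integrated in per-frame steps: the peak height comes out slightly different at different framerates. A generic Python illustration (not Quake 3’s actual movement code; the constants are made up):

                def jump_height(fps, v0=270.0, g=-800.0):
                    # Integrate a jump one frame at a time, Quake-style:
                    # the outcome depends on the timestep, i.e. on the framerate.
                    dt, v, y, peak = 1.0 / fps, v0, 0.0, 0.0
                    while True:
                        y += v * dt
                        v += g * dt
                        if y < 0:
                            return peak
                        peak = max(peak, y)

                for fps in (60, 85, 125):
                    print(fps, round(jump_height(fps), 2))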

            • Bauxite
            • 10 years ago

            […]

            • Krogoth
            • 10 years ago

            On mouse sensitivities that no human can handle without accuracy going down the tubes?

            It is easier to blame the system and interface rather than your own biological limitations. The eyes and a “trained” visual cortex might be able to see the subtle difference, but the body cannot catch up with it (>10 ms!) reactively. The body relies more on proactive pattern recognition (reading moves) and intuition (making a good prediction based on previous experience).

          • clone
          • 10 years ago

          Anything under 30 frames is noticeable. Once I get a steady 45fps, I don’t often see the need for more, save for large expanses with quick panning, and even then it’s more a critique than an “oh my goodness, it’s unplayable.”

          But I fully disagree with your comment that anything under 30 is even remotely fine.

    • BehemothJackal
    • 10 years ago

    That’s the first time in a very long while that Scott has given an “Editor’s Choice” award to a video card. Very impressive.

    • JustAnEngineer
    • 10 years ago

    It seems more than a bit disingenuous to compare this brand new card at $259 list price to the GeForce GTX285 that still averages $366.50 (minimum: $326) at Newegg. That’s 41.5% higher cost for the NVidia-based card.

    A comparison to the GeForce GTX275, which averages $244.23 (minimum: $210) at Newegg, would be much more appropriate, since its actual discount e-tailer pricing is only 6% less than the Radeon HD5850’s list price.

      • flip-mode
      • 10 years ago

      Personally, I like the comparison that was made. If it beats or meets the 285, then we know it would beat the 275. Think of it this way: comparing to the 285 shows you how much Nvidia needs to adjust the 285’s price to make it attractive again. When a game changer like this arrives, making comparisons based on current prices doesn’t really do you any favors. At this point, do you really care to see the 275 in there, knowing that it sits below the 285 regardless? I just don’t see the purpose.

        • JustAnEngineer
        • 10 years ago

        The Asus GeForce GTX285 that TR used for the review goes for $360 + 8.50 shipping:
        http://www.newegg.com/Product/Product.aspx?Item=N82E16814121335

          • flip-mode
          • 10 years ago

          Cool. I’m just wondering why you want to see comparisons based on price at this juncture.

            • Lans
            • 10 years ago

            I agree. The HD 5850 needs to beat the GTX 275, at a minimum.

      • Hattig
      • 10 years ago

      It shows that the 275 and 285 are now worthless considerations for anyone buying a graphics card in that ballpark. The 5850 wins anywhere from $220 to $350. The NVIDIA die size is so large that they will get hurt dropping the price down to the same level, yet they have no other option. GT300 looks really good, but it’s not here yet.

      (BTW, it looks like GT300 will have amazing GPGPU power, especially with double-precision FMA, where it could be 2-3x as fast as the 5870 (because the 5870 can only do one DP FMA per five SP shaders, whereas the GT300 looks like it can do one DP FMA per two SP shaders). However, SP will probably be slower, despite the historically higher shader clocks on the NVIDIA cards, because there are only 512 shaders.)
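      For what those per-shader rates mean in raw numbers: an FMA counts as two flops, so peak DP throughput works out to (shaders ÷ SPs per DP FMA) × 2 × clock. A quick back-of-the-envelope in Python, where the GT300 shader clock is pure guesswork:

          def dp_gflops(shaders, sps_per_dp_fma, clock_ghz):
              # Peak double-precision GFLOPS: DP FMA units x 2 flops/FMA x clock.
              return (shaders / sps_per_dp_fma) * 2 * clock_ghz

          print(dp_gflops(1600, 5, 0.850))  # HD 5870: 320 DP FMAs/clock -> 544 GFLOPS
          print(dp_gflops(512, 2, 1.5))     # GT300: 256 DP FMAs/clock at an assumed
                                            #   1.5 GHz shader clock -> 768 GFLOPS

      At that assumed clock the gap is closer to 1.4x than 2-3x, so the bigger multiple would require a much higher shader clock.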

    • flip-mode
    • 10 years ago

    I see some junk in the 5850’s Bat-trunk.

    Edit: done reading. Nice review, Scott. Your job is starting to get tougher, with fewer and fewer games able to stress these cards. To put any of the tested cards to good use, one needs a 30″, thousand-dollar monitor. Or, I guess, some multi-monitor action. Have you considered changing any of the benchmarks to dual-monitor? L4D and Wolfenstein put today’s GPUs to waste at standard, single-monitor resolutions.

    I’d still like to have a 5850 ‘in my trunk’; er, that didn’t sound good at all.

    Peace and love to all TR readers; may the wind always be at your backs and the sun always shine warm on your faces.

    Edit 2: I have to give a huge thumbs-up to AMD for the initial pricing of these cards. It’s a totally different flavor than Nvidia’s razor blade-dildo rapage launch prices, like $650 for the GTX280. Super-cool, AMD. Nvidia, please take some notes.

      • ssidbroadcast
      • 10 years ago

      LMAO @ “razor-blade dildo rapage launch prices” ! So true!

    • oldDummy
    • 10 years ago

    Nice article.

    This is a good card but not an upgrade vs the GF 285.
    At 1920 x 1200 the 285 does very well in almost all games.
    Hmm… might be able to skip a gfx generation…

    based on performance, anyway.

      • khands
      • 10 years ago

      If you bought a high-end card last gen, I would really suggest doing that, actually. The need just isn’t there yet, and in a year and a half there should be a few decent DX11 games worth having.

    • Wintermane
    • 10 years ago

    Well, it better be impressive; it’s up against the 8800 XTREME III omega XXX .2 :/

      • kvndoom
      • 10 years ago

      Aha! You mean the G92-based GT350? 😛

    • StuG
    • 10 years ago

    Impressive, though I will still be sticking with getting two 5870’s 😀

      • Chrispy_
      • 10 years ago

      To run what, exactly? Crysis on a 12-screen Eyefinity setup?

      • ironoutsider
      • 10 years ago

      🙂 !!! Oh yeah! That would be really fun!! I would totally just wander around the jungle in my own made-up escapades about how I had to return the turtle to Griffin Rock or something!!! LOL!! My wife and child might not want me to be gone from reality for a long time again, though…

    • thesaint
    • 10 years ago

    This is nothing.
    I just heard that Intel is overclocking and water-cooling the GMA 4500.
    It’s going to be the real 5870 killer.

      • kvndoom
      • 10 years ago

      More like 9700Pro killer, right? 😉

      GMA 4500WC XXXXXXXtreme!!!1!

    • Buzzard44
    • 10 years ago

    Hip hip hooray! Hip hip hooray! Hip hip hooray!

    This is just awesome. The 5850 is so so sweet. So so sexy.
    So….breathtaking.

    Like being touched by an angel.

      • Meadows
      • 10 years ago

      Get out more.

    • FuturePastNow
    • 10 years ago

    Very impressive card. The high prices of these cards (how short our memories are) probably have a lot of room for cuts, depending on what Nvidia does.

      • shank15217
      • 10 years ago

      $259 is high price now? You are delusional.

        • OneArmedScissor
        • 10 years ago

        Most people do not spend even $200 on a video card. Money doesn’t grow on trees.

          • willyolio
          • 10 years ago

          “Most people” buy integrated Intel graphics. To those who care, the mid-$200 range is a more than reasonable price to pay for high-end performance.

        • Meadows
        • 10 years ago

        It pretty much is.

    • gerryg
    • 10 years ago

    Hoping for a 5830 that matches the 4890’s performance numbers but runs even quieter and draws less power than the 5850, while getting below the $200 mark. It would make a perfect HTPC card! Cross your fingers…

      • Envy007
      • 10 years ago

      I’m going for the 5750/5770 cards: 4870 1GB performance with much lower power requirements, for under $150.

        • bimmerlovere39
        • 10 years ago

        If, at $150, I can get 4890-like performance, it’ll likely be a 5770 (or 5830) that I’ll get.

        If it’s any more than that, I’ll likely either wait for price drops or add a second 4770 to my rig.

          • khands
          • 10 years ago

          Likely, 4890-level performance will just get more power-efficient, with a couple of enhancements, in the form of the 5770 at ~$200, and the 5750 will replace the 4870 at its current price point.

    • Vaughn
    • 10 years ago

    I wasn’t expecting this to be so close in performance to the 5870.
    It will be hard to justify spending the extra money for that small a boost. Just wondering which card is the better overclocker now.

    • Meadows
    • 10 years ago

    Outstanding. The GeForces look like power-hungry inefficient pieces of turd next to this thing.
    Think about the supposedly large overclocking headroom, too.

    If I weren’t so much of a green lover, I’d consider this as an upgrade path. Going to wait for whatever the other side finally comes up with, I guess. Let’s hope they’re not late with introducing mid-range DX11 hardware.

      • MadManOriginal
      • 10 years ago

      I think you’re going to be disappointed. Unless they’ve got something up their sleeve, it looks as if the best NV will do in the low-end to mid-range (or a bit higher) is GT21x DX10.1 GPUs. Very disappointing, because that means a lot less direct competition and price movement in the sub-$150 area.

    • glynor
    • 10 years ago

    It looks to me like AMD is going to have a hit on their hands. Nvidia really can’t respond with anything other than price cuts and overclocked parts, probably until Q1 2010.

    Great review, as always. I’m very interested in seeing more about the new audio bitstreaming support in these cards. This could well be a perfect high-end HTPC card.

    • danny e.
    • 10 years ago

    this looks like the card to get.
    the power numbers / temps / noise levels … all good.

    • ApockofFork
    • 10 years ago

    I’m curious to see if anyone can mod this thing to unlock the missing cores. Of course, it’s highly likely that isn’t possible, considering that not only is this a 5870 with some cores disabled, they also seem to have removed a chunk of the card. I’m actually somewhat curious as to what they took out with that chunk, considering that in all respects except the disabled cores, this card appears identical to the 5870. That being said, is the last inch or so of the 5870 just blank PCB covered up by a slightly longer Batmobile cooler?

      • OneArmedScissor
      • 10 years ago

      They probably knocked off some power-related stuff.

    • SecretMaster
    • 10 years ago

    Jeez louise this thing is damn impressive. I thought the 5870 was good, but this is even better. For me, the even smaller power/noise/heat footprints seal the deal. They are lower than what I expected. I’m still tempted to hold out for the Juniper series (which should be coming soon as well), but the 5850 is close to being penciled in.

      • [+Duracell-]
      • 10 years ago

      I have a 4850, and I’m tempted to upgrade to the 5850. However, I’ll wait for Juniper and see how those parts perform compared to my 4850 before I make the jump.

      Although…I don’t play anything above 1680×1050 thanks to my monitor, and my 4850 can push 8xAA in most of my games with reasonable framerates, so I don’t really have too much of a reason to upgrade.

        • BlackStar
        • 10 years ago

        Bah, there’s little reason to upgrade from 48×0 to 58×0 right now, unless you wish to use 3 monitors. Better wait until DX11 games start trickling out.

        On the other hand, the performance delta is high enough to make an upgrade from older hardware worthwhile (8×00 or 2×00/3×00).

        My 2 cents at least.

    • charged3800z24
    • 10 years ago

    I think this will be my next card. It has nice power draw and plenty of power. Most likely some OCing head room too, if you choose to do so. Nice review, as always.

    • BoBzeBuilder
    • 10 years ago

    Nice card. It’s great to see AMD keeping power in check. Can’t wait till GT300 pops up and these things drop to ~$150.

    BTW, great review, Scott.

      • shank15217
      • 10 years ago

      GT300 is so fast AMD will drop 5870 to $12.99 at walmart.

        • Meadows
        • 10 years ago

        Prove it.

        • OneArmedScissor
        • 10 years ago

        It would be highly unfortunate if we were all forced into buying two 5870s for $26, rather than one, slower, $400+ GT300.

    • Kurotetsu
    • 10 years ago

    Thank goodness it’s a reasonable size, and not the ‘single-core GPU that thinks it’s a dual-core GPU’ monstrosity that is the 5870. It’s actually fairly impressive: it’s dead even with the GTX 285 while being smaller, quieter, and eating less power.

      • MadManOriginal
      • 10 years ago

      That’s a bit too harsh considering the 5870 is the same size as the larger single GPU NV cards.

    • Adaptive
    • 10 years ago

    This is going to be a tough decision over the 5870. I was hoping to actually get a better price-to-performance ratio with the 5850 rather than what looks like direct price scaling. Can’t really complain, but I suppose choosing between the cards really comes down to what resolution you plan to use.

      • marvelous
      • 10 years ago

      Well, you could potentially overclock the 5850. It uses the same memory, so it should reach the same memory overclock as the 5870. I don’t know about the core, but I’m guessing it will overclock to 850 MHz no problem. For $259, this is a good deal with all the bells and whistles of the 5870, except for those 8 TMUs and 160 SPs.

        • ImSpartacus
        • 10 years ago

        Yeah, unless you wanted to run 6 monitors, I’d get the 5850, no contest. You can overclock the 5850 to 5870 speeds and get close to 5870 performance for a hundred bucks less.
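        As a rough sanity check on that claim: at a matched 850 MHz core clock, the only deficit left is the disabled units (a back-of-the-envelope sketch in Python, ignoring memory speed and real-game scaling):

            # Theoretical throughput of a 5850 overclocked to the 5870's 850 MHz core.
            shader_ratio = (1440 * 850) / (1600 * 850)  # 0.90 -> 90% of the 5870's SP rate
            texel_ratio = (72 * 850) / (80 * 850)       # 0.90 -> 90% of its texel rate
            print(shader_ratio, texel_ratio)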

          • shank15217
          • 10 years ago

          You can’t run six monitors even with the 5870.

      • flip-mode
      • 10 years ago

      From Anand:

      When you take the Cypress based Radeon HD 5870 and cut out 2 SIMDs and 15% of the clock speed to make a Radeon HD 5850, on paper you have a card 23% slower. In practice, that difference is only between 10% and 15% depending on the resolution. What’s not a theory is AMD’s pricing: they may have cut off 15% of the performance to make the 5850, but they have also cut the price by well more than 15%; 31% to be precise.
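      Those on-paper numbers are easy to verify with a couple of lines of Python:

          paper_cut = 1 - (18 / 20) * (725 / 850)  # 2 of 20 SIMDs off, 725 vs. 850 MHz -> ~23%
          price_cut = 1 - 259 / 379                # $259 vs. $379 list -> ~31.7%
          print(f"{paper_cut:.1%} slower on paper, {price_cut:.1%} cheaper")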
