Nvidia’s GeForce GTX 480 and 470 graphics processors


Seems like we’ve been waiting for these new GeForces for a long time now. Nvidia gave us a first glimpse at its latest GPU architecture about half a year ago, right around the time that AMD was introducing its Radeon HD 5870. In the intervening six months, AMD has fleshed out its Radeon HD 5000 series with a full suite of DirectX 11-class GPUs and graphics cards. Meanwhile, Nvidia’s GF100 chip is later than a stoner to study hall.

Fortunately, our wait is coming to an end. The GeForce GTX 470 and 480 are expected to be available soon, and we’ve wrangled several of them for testing. We can say with confidence that the GF100 is nothing if not fascinating, regardless of whether it succeeds or fails.

A pair of GeForce GTX 480 cards

Fermi is GF100 is GTX 480

We’ve already covered the GF100 graphics chip and its architecture rather extensively here at TR, so we won’t cover the same ground again in any great detail here. There is much to know about this three-billion-transistor behemoth, though, so we’ll try to bring you up to speed in brief.

Our first look at the GF100 was focused solely on the GPU architecture, dubbed Fermi, and how that architecture has been adapted to serve the needs of the nascent market for GPU-based computing devices. Nvidia intends this chip to serve multiple markets, from consumer gaming cards to high-performance computing clusters, and the firm has committed an awful lot of time, treasure, and transistors toward making the GF100 well suited for GPU computing. Thus, the GF100 has a number of compute-centric capabilities that no other GPU can match. The highlights include improved scheduling with the ability to execute multiple, concurrent kernels; a real, fully coherent L2 cache; robust support for double-precision floating-point math; ECC-protected memories throughout the hierarchy; and a large, unified address space with support for C++-style pointers. Some of these provisions—better scheduling and caching, for instance—may have side benefits for consumers, whose GeForce cards have the potential to be especially good at GPU-based video transcoding or in-game physics simulations. Most of them, however, will be practically useless in a desktop PC, particularly since they have no utility in real-time graphics.

After we considered the compute-focused parts of the Fermi architecture, Rys reminded us all that the GF100 is still very much a graphics chip by offering his informed speculation about the specifics of its graphics hardware. Nvidia eventually confirmed many of his hunches when it revealed the details of the GF100’s graphics architecture to us just after CES. As expected, the move to a DX11 feature set means the GF100 adopts nearly every major graphics feature its competitor has, but we were thrown for a loop by how extensively Nvidia’s architects chose to overhaul the GF100’s geometry processing capabilities. Not only does Fermi support DirectX 11’s hardware tessellation—by means of which the GPU can amplify the polygon detail in a scene dramatically—but Nvidia believes it is the world’s first parallel architecture for geometry processing. With quad rasterizers and a host of geometry processing engines distributed across the chip, the GF100 has the potential nearly to quadruple the number of polygons possible in real-time graphics compared to even its fastest contemporaries (GT200 and AMD’s Cypress). In this way, the GF100 is just as audacious an attempt at advancing the state of the art in graphics as it is in computing.

The trouble is that ambitious architectures and major technological advances aren’t easy to achieve. New capabilities add time to the design cycle and complexity to the design itself. Nvidia may well have had both eyes on the potential competition from Intel and its vaunted Larrabee project when conceiving the GF100, with too little focus on the more immediate threat from AMD. Now that the first-generation Larrabee has failed to materialize as a consumer product, the GF100 must face its sole rival in the form of the lean, efficient, and much smaller Cypress chip in AMD’s new Radeons. With a 50% wider path to memory and roughly half again as many transistors as Cypress, the GF100 ought to have no trouble capturing the overall graphics performance title. Yet the GF100 project has been dogged by delays and the inevitable rumors about the problems that have caused them, among them the time-honored classics of chip yield, heat, and power issues.

In this context, we’ve made several attempts at handicapping the key throughput rates of GF100-based products, and we’ve constantly had to revise our expectations downward with each trickle of new information. Now that the flagship GeForce GTX 480 is set to ship soon, we can make one more downward revision that brings us to the final numbers.

Nvidia has elected to disable one of the GF100’s 16 shader multiprocessor groups even in the top-of-the-line GTX 480. That fact suggests some yield issues with this very large chip, and indeed, the company says the concession was needed in order to ensure sufficient initial supplies of the GTX 480. This change trims 32 ALUs or “CUDA cores” from the final product, along with one texture unit that would have been good for sampling and filtering four texels per clock. With this modification, a base GPU clock of 700MHz, shader ALUs running at roughly twice that speed, and a 924MHz memory clock, the GTX 480’s key rates shake out as shown below.

                                        GeForce GTX 285    GeForce GTX 480    Radeon HD 5870
Transistor count                        1.4B               3.0B               2.15B
Process node                            55 nm @ TSMC       40 nm @ TSMC       40 nm @ TSMC
Core clock                              648 MHz            700 MHz            850 MHz
"Hot" (shader) clock                    1476 MHz           1401 MHz           --
Memory clock                            1300 MHz           924 MHz            1200 MHz
Memory transfer rate                    2600 MT/s          3696 MT/s          4800 MT/s
Memory bus width                        512 bits           384 bits           256 bits
Memory bandwidth                        166.4 GB/s         177.4 GB/s         153.6 GB/s
ALUs                                    240                480                1600
Peak single-precision arithmetic rate   0.708 Tflops       1.35 Tflops        2.72 Tflops
Peak double-precision arithmetic rate   88.5 Gflops        168 Gflops         544 Gflops
ROPs                                    32                 48                 32
ROP rate                                21.4 Gpixels/s     33.6 Gpixels/s     27.2 Gpixels/s
INT8 bilinear texel rate
(half rate for FP16)                    51.8 Gtexels/s     42.0 Gtexels/s     68.0 Gtexels/s
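
For those who like to check the math, the GTX 480’s peak rates fall straight out of its unit counts and clock speeds. The short Python sketch below is our own back-of-the-envelope arithmetic, not anything supplied by Nvidia; it reproduces the GTX 480 column above, including the quarter-rate double-precision limit we’ll explain in a moment.

```python
# Back-of-the-envelope check of the GTX 480's peak rates from its unit
# counts and clocks. The formulas are generic GPU spec math; the inputs
# come from the table above.

sms        = 15                 # enabled shader multiprocessors (16 on the full chip)
alus       = sms * 32           # 480 "CUDA cores"
rops       = 48
core_clk   = 0.700              # GHz
shader_clk = 1.401              # GHz ("hot" clock)
mem_clk    = 924e6              # Hz
bus_width  = 384                # bits

rop_rate   = rops * core_clk                    # 33.6 Gpixels/s
texel_rate = sms * 4 * core_clk                 # 42.0 Gtexels/s (4 texels/clock per SM)
bandwidth  = mem_clk * 4 * bus_width / 8 / 1e9  # 177.4 GB/s (GDDR5 moves 4 bits/pin/clock)
sp_gflops  = alus * 2 * shader_clk              # 1345 GFLOPS (one FMA counts as 2 flops)
dp_gflops  = sp_gflops / 2 / 4                  # 168 GFLOPS (GeForce gets 1/4 of the half-rate DP)

print(round(rop_rate, 1), round(texel_rate, 1), round(bandwidth, 1),
      round(sp_gflops), round(dp_gflops))
```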

Versus the GeForce GTX 285, the GTX 480 nearly doubles peak shader arithmetic and raises ROP throughput by roughly half, but memory bandwidth is only marginally higher. In theory, the GTX 480 is, amazingly, slower at texturing than the GTX 285, but Nvidia expects the GF100 to deliver higher real-world texturing performance thanks to some texture cache optimizations that should reduce conflicts during sampling.

(Those of you familiar with the Fermi architecture may wonder why double-precision math performance is only doubled versus the GTX 285. In theory, the GF100 does DP math at half the single-precision rate. However, Nvidia has elected to reserve that full double-precision rate for its professional-level Tesla products; GeForce cards are limited to a quarter of the chip’s native DP throughput.)

More troubling are the comparisons to the Radeon HD 5870. Yes, the GTX 480 is badly beaten in the FLOPS numbers by the 5870. That was also true of the comparison between the GTX 280 and Radeon HD 4870 in the prior generation, yet it was never a real problem, because Nvidia’s scheduling methods are more efficient than AMD’s. The sheer magnitude of the gap in FLOPS here is unsettling, but the areas of potentially greater concern include memory bandwidth, ROP rate, and texturing. Theoretically, the GTX 480 is only slightly quicker in the former two categories, since relatively low GDDR5 clock rates look to have hampered its memory bandwidth. And it’s substantially slower than the 5870 at texturing. As we’ve said before, the GF100 will have to make up in efficiency what it lacks in brute-force capacity, assuming the competition leaves room for that.

Clearly, the GF100 missed its targets on a number of fronts. The fact that it’s still tops in memory bandwidth and ROP rate illustrates how Nvidia’s strategy of building a very large chip mitigates risk, in a sense. Even when dialed back, the GF100 is in the running for the performance title. The question is whether capturing that title will be worth it—to Nvidia in terms of manufacturing costs and delays, and to the consumer in terms of power draw, heat, and noise. Last time around, the GT200 wasn’t really a paragon of architectural efficiency, but Nvidia was able to reach some fairly satisfactory compromises on clock speed, power, and performance.

GeForce GTX 480: The big daddy warms up his pipes

The GeForce GTX 480

For what it is, the GeForce GTX 480 is a rather impressive specimen. At first glance, it looks like any other high-end graphics card. Take stock, though, and several items stand out. The four heatpipes menacingly snaking up and back down into its cooling shroud suggest serious cooling horsepower—and the need for it. The plated metal grooves on the side of the card are handsome—and a heatsink surface. Above the dual DVI ports is a rather unusual Mini HDMI connector, surely chosen because it’s compact enough not to impinge on the venting area in the adjacent slot cover. (Nvidia expects cards to ship with adapters to regular-sized HDMI connectors, of course.) In all of these ways, the GTX 480 looks to be designed to expel and radiate heat as efficiently as any video card we can recall.

From left to right: Radeon HD 5850, GeForce GTX 470, GeForce GTX 480, Radeon HD 5870, Radeon HD 5970

In spite of those omens, the GTX 480 is pretty much exactly 10.5″ long, or ever-so-slightly shorter than a Radeon HD 5870. Unlike the 5870, though, the GTX 480 requires both a six-pin auxiliary power connector and an eight-pin one, because the board’s 250W max power rating exceeds the 225W available from a PCIe x16 slot and two six-pin connectors.

Remove the cooler, and you can see the source of all of the commotion: the GF100 chip under its expansive metal cap. Nvidia remains mum on the GF100’s exact die area, but it’s gotta be roughly the size of a TARP loan. Flanking the GF100 are 12 pieces of GDDR5 memory, totaling 1536MB.

The GTX 480’s cooler is a nifty bit of engineering in its own right, with five heatpipes that come into direct contact with the metal cap atop the GPU. Hint: do not touch said heatpipes. I measured one at 141° F with an IR thermometer. Moreover, at one point as I was uninstalling a card from our test system, I personally confirmed, with my fingertip, that this temperature is above the threshold of pain.

Nvidia expects the GeForce GTX 480 to sell for $499 at online retailers. That price positions the GTX 480 a step above the Radeon HD 5870, whose list price was supposed to be $399 until the reality of 40-nm supply problems intruded; the 5870’s prevailing street price now looks to be about $419. The closest competition for the GTX 480 may come in the form of upcoming 2GB variants of the Radeon HD 5870, such as the Eyefinity6 edition that’s likely to be priced at $499—or the Asus Matrix card we’ll show you on the next page.

Speaking of supply problems, AMD’s fastest graphics card, of course, is the dual-GPU Radeon HD 5970. That is, if you can find one. They’re currently out of stock at Newegg, with listed prices ranging from $699 to $739, well above the card’s initially projected $599 price tag.

Product availability is one of the big questions about the GeForce GTX 400 series, as well. When Nvidia first briefed us on the GTX 470 and 480, we were told to expect to see cards selling at online retailers within a week after last Friday’s official product announcement. Late last week, that time frame changed to the week of April 12. That’s well past the self-imposed first-quarter ship date Nvidia had pledged for the GF100-based cards at CES, which doesn’t inspire loads of confidence. Still, Nvidia’s Nick Stam told us last week that the firm is “building tens of thousands of units for initial availability.” Whether that many cards will truly be available in mid-April—and whether that supply will be sufficient to meet demand—is anyone’s guess. We’ll just have to wait and see.

GeForce GTX 470: Fermi attenuated

Ultimately, the card that may arouse more interest in the buying public is the GeForce GTX 470, another GF100 variant that’s been detuned a bit. For this product, Nvidia disables a second SM unit, along with a ROP partition and its associated memory controller. The resulting spec sheet reads: 448 ALUs, 56 texture units, 40 ROPs, and a 320-bit path to memory. The GTX 470’s core clock speed is 607MHz, and its 1280MB of GDDR5 memory ticks along at 837MHz—or 3348 MT/s.
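
Run the GTX 470’s cut-down configuration through the same sort of spec arithmetic (again, our own math; the 1215MHz shader clock is an assumption of roughly twice the core clock, consistent with the round-one numbers later in the review) and you get its theoretical rates:

```python
# Same back-of-the-envelope spec math, applied to the GTX 470's cut-down config.
sms, rops, bus_width = 14, 40, 320                    # enabled SMs, ROPs, memory bus (bits)
core_clk, shader_clk, mem_clk = 0.607, 1.215, 837e6   # GHz, GHz, Hz

print(round(rops * core_clk, 1))                      # ~24.3 Gpixels/s pixel fill
print(round(sms * 4 * core_clk, 1))                   # ~34.0 Gtexels/s INT8 texel rate
print(round(mem_clk * 4 * bus_width / 8 / 1e9, 1))    # ~133.9 GB/s memory bandwidth
print(round(sms * 32 * 2 * shader_clk))               # ~1089 GFLOPS single-precision
```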

This is a smaller, 9.5″-long card with a 215W max power rating and only two six-pin aux power plugs. With an expected e-tail price of $349, the GTX 470 should probably slot in between the Radeon HD 5850 and the 5870. However, the 5850 long ago left its initial list price of $259 in the dust and has climbed into the $299-329 range, not far from where the GTX 470 is expected to land.

Surround Fermi?

The GTX 470 sports the same set of display outputs as the GTX 480: two DVI ports and a Mini HDMI connector. We should note, though, that the GF100’s display block can only drive two outputs at a time. Nvidia announced plans at CES to counter AMD’s Eyefinity feature with a triple-monitor Surround Gaming capability, along with 3D Vision Surround, which will incorporate support for 3D glasses to add the impression of depth. Thanks to the GF100’s display output limitations, you’ll need a pair of GTX-400-series graphics cards in SLI in order to drive three monitors. That’s a shame, since a card like this is easily powerful enough to drive at least three multi-megapixel displays competently in modern games, as AMD’s Eyefinity initiative has proven. On the flip side, given our experiences with 3D Vision, we expect you’d need two very fast GPUs in order to get decent performance across three displays with it.

Driver support for both Surround Gaming and 3D Vision Surround is still pending. Nvidia tells us it will add these features in its 256 driver release, slated for “early April.” This driver will purportedly come out of the gate with bezel compensation—a capability AMD has only recently added to its Catalyst drivers. Still, given the fact that AMD has a broad lineup of Radeon HD 5000-series cards capable of supporting three monitors and a six-monitor Eyefinity6 card whose release is imminent, Nvidia has miles to go to catch up on the multi-monitor front.

Team AMD’s new ringer

Since it’s had Radeon HD 5870 cards in the market for half a year, AMD has the luxury of meeting the new GeForces with some specially tailored competition. The card pictured above arrived in Damage Labs just as the GF100 cards did, and it’s a Radeon HD 5870 that’s hopped up on Adderall. Part of Asus’ Matrix series, this 5870 has a custom cooler and a pair of eight-pin power connectors. Together, those things should allow a fair bit of additional overclocking headroom. Also, right out of the box, the Matrix card should be a little faster than a vanilla 5870 thanks to its 894MHz core clock—44MHz above stock—and 2GB of onboard RAM.

Cards like this one should be available for purchase in early April. We don’t have pricing yet, but I’d expect them to be priced at or below the GeForce GTX 480’s $499 mark.

We’ve included the Matrix 5870 2GB in our tests, alongside a regular Radeon HD 5870. The Matrix card is labeled as “Radeon HD 5870 2GB,” but keep in mind that its performance may be more influenced by its bumped-up core clock speed than by the additional video RAM. In fact, memory may be a limiting factor in this card’s performance, since its clock speed hasn’t budged from the usual 1200MHz.

Test notes

We’ve concentrated our tests of these new GeForces primarily on games this time around, rather than delving into the specifics of the GPU architecture. That’s due in part to limited time with the cards prior to publication, and in part to my dissatisfaction with the current state of the tools we have to measure these things. With luck, we should be able to go back and take a closer look at some architectural specifics at a later date.

For now, in order to give you the broadest comparisons possible, we’ve broken our testing into two rounds. Round one incorporates test data from our Radeon HD 5830 review, which includes results from 13 different graphics cards dating back to the Radeon HD 3870 and the GeForce 7900 GTX. To those results, we’ve added the GeForce GTX 470, 480, and the Matrix 5870. In round two, we’ve concentrated our attention on newer games, higher quality settings, DirectX 11, and a few directed tests. This round includes some higher-end hardware, including dual GTX 480 cards in SLI, 5870s in CrossFire, and some single-card multi-GPU solutions.

You can see our basic test configurations for round one in the testing methods section of our 5830 review. Our test config for the newer cards was the same, with the sole exception of video drivers. We tested the GTX 470/480 with the ForceWare 197.17 drivers and the Matrix 5870 with the Catalyst 10.3a preview drivers—complete with the latest profile update—for both rounds. The other cards in round one used slightly older drivers. That may be notable because AMD says it’s delivered some general performance improvements for all Radeon HD 4000- and 5000-series GPUs in its latest public driver release. However, I suspect the pre-release drivers we used for the 5830 review already incorporated many of those optimizations, and our results would seem to bear that out.

For round two, our test systems were configured per the information below.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
Motherboard: Gigabyte EX58-UD5
North bridge: X58 IOH
South bridge: ICH10R
Memory size: 12GB (6 DIMMs)
Memory type: Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1333MHz
Memory timings: 8-8-8-24 2T
Chipset drivers: INF update 9.1.1.1015, Rapid Storage Technology 9.6.0.1014
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.6069 drivers
Graphics:
- Radeon HD 5850 1GB with Catalyst 8.712-100313a-097309E (10.3a preview) drivers and profile update 1.0
- Asus EAH5870 Radeon HD 5870 1GB with Catalyst 8.712-100313a-097309E (10.3a preview) drivers and profile update 1.0
- Dual Radeon HD 5870 1GB with Catalyst 8.712-100313a-097309E (10.3a preview) drivers and profile update 1.0
- Asus Matrix Radeon HD 5870 2GB with Catalyst 8.712-100313a-097309E (10.3a preview) drivers and profile update 1.0
- Radeon HD 5970 2GB with Catalyst 8.712-100313a-097309E (10.3a preview) drivers and profile update 1.0
- Asus ENGTX285 TOP GeForce GTX 285 1GB with ForceWare 197.13 drivers
- Asus ENGTX295 GeForce GTX 295 2GB with ForceWare 197.13 drivers
- GeForce GTX 470 1280MB with ForceWare 197.17 drivers
- GeForce GTX 480 1536MB with ForceWare 197.17 drivers
- Dual GeForce GTX 480 1536MB with ForceWare 197.17 drivers
Hard drive: WD Caviar SE16 320GB SATA
Power supply: PC Power & Cooling Silencer 750W
OS: Windows 7 Ultimate x64 Edition with DirectX runtime update February 2010

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Diamond, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Round 1: Running the numbers

                             Peak pixel    Peak bilinear    Peak bilinear    Peak memory    Peak shader arithmetic (GFLOPS)
                             fill rate     INT8 filtering   FP16 filtering   bandwidth      Single-issue    Dual-issue
                             (Gpixels/s)   (Gtexels/s)      (Gtexels/s)      (GB/s)
GeForce 8800 GT              11.2          39.2             19.6             64.6           392             588
GeForce 9800 GTX             10.8          43.2             21.6             70.4           432             648
GeForce GTS 250              12.3          49.3             24.6             71.9           484             726
GeForce GTX 260 (216 SPs)    18.2          46.8             23.4             128.8          605             907
GeForce GTX 275              17.7          50.6             25.3             127.0          674             1011
GeForce GTX 285              21.4          53.6             26.8             166.4          744             1116
GeForce GTX 470              24.3          34.0             17.0             133.9          1089            --
GeForce GTX 480              33.6          42.0             21.0             177.4          1345            --
Radeon HD 3870               13.6          13.6             13.6             73.2           544             --
Radeon HD 4850               11.2          28.0             14.0             63.6           1120            --
Radeon HD 4870               12.0          30.0             15.0             115.2          1200            --
Radeon HD 4890               14.4          36.0             18.0             124.8          1440            --
Radeon HD 5750               11.2          25.2             12.6             73.6           1008            --
Radeon HD 5770               13.6          34.0             17.0             76.8           1360            --
Radeon HD 5830               12.8          44.8             22.4             128.0          1792            --
Radeon HD 5850               23.2          52.2             26.1             128.0          2088            --
Radeon HD 5870               27.2          68.0             34.0             153.6          2720            --

We’ve already looked at the GTX 480’s theoretical capacities in some detail, but the table above gives us a wider view and includes the GeForce GTX 470, as well. These numbers are instructive and sometimes helpful for handicapping things, but delivered performance is what matters most, so we won’t fixate on them too much. For instance, note how the GeForce GTX 470’s basic specs compare to its likely chief rival, the Radeon HD 5850. The pixel fill rate and memory bandwidth of the two cards are comparable, with a slight edge to the GeForce in both categories. The 5850 counters with roughly half again the peak texturing rate and nearly double the peak shader arithmetic rate, though. The balances here are very different in theory, but I doubt the real-world performance will diverge nearly that dramatically.

We can use some synthetic tests to give us a better sense of things. We’re not especially enamored with 3DMark Vantage overall, but its directed benchmarks will have to do for now.

The results of this test tend to track pretty closely with theoretical memory bandwidth numbers, and the new GeForces appear to follow suit. That puts the GTX 480 at the top of the heap, with the GTX 470 settling in just below the Radeon HD 5850. Of course, ROP rates are crucial for things other than just raw pixel fill rates, most notably for antialiasing performance. We’ll look at that aspect of things in the following pages.

We learned in the process of putting together our Radeon HD 5830 review that 3DMark’s texture fill rate test really is just that and nothing more—textures aren’t even sampled bilinearly. The test does employ FP16 texture formats, but the result is to put GeForce GTX 200-series GPUs (and older Nvidia parts) at a disadvantage, since they can only sample FP16 textures at the rate they filter them, while newer Radeons can sample FP16 textures at full speed. Fortunately, the GF100 can also sample FP16 formats at full speed, so it’s not handicapped by the peculiarities of this test.

In fact, both GF100 cards prove to be quite efficient here, coming close to their projected peak rates. Then again, the Radeons look to be similarly efficient, with markedly higher peak throughput.

These directed shader tests have long been a study in contrasts, with the architectures from rival camps showing their relative strengths on different workloads. If anything, that contrast is heightened by the arrival of the GF100. The Radeons rule the Perlin noise and parallax occlusion mapping tests, while the new GeForces dominate the GPU cloth and particle simulations. If we can learn anything from these results, I’d say the lesson is that the Radeon HD 5870’s gaudy advantage in peak FLOPS rate on paper doesn’t necessarily translate into superior shader performance.

Round 1: DiRT 2

This excellent racer packs a nicely scriptable performance test. For round one, we tested at the game’s “high” quality presets with 4X antialiasing in DirectX 9 mode. (We’ll look at DX11 in round two.) Because this automated test uses computer A.I. and involves some variance, we tested five times at each resolution and have reported the median results.

Our first game test does not produce a happy outcome for the new GeForces. At the highest resolution, the GeForce GTX 480 ties the Radeon HD 5870. The GeForce GTX 470 is in a dead heat with the Radeon HD 5850, too.

Round 1: Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

This sort of result may be what Nvidia was counting on when it set the prices for its new graphics cards. The GTX 480 and 470 take the top two spots at the highest resolution, followed surprisingly enough by the GTX 285.

I’m not entirely sure what to make of the minimum frame rates reported by the Borderlands performance test. I wouldn’t put too much stock in them, though, when the GTX 470 posts a higher minimum than the GTX 480 in a couple of cases.

Round 1: Left 4 Dead 2

In Left 4 Dead 2, we got our data by recording and playing back a custom timedemo comprised of several minutes of gameplay. We had L4D2‘s image quality settings maxed out, with multi-core rendering enabled, along with 4X AA and 16X anisotropic filtering.

The Radeons are faster at lower resolutions, but as the display resolution climbs and the GPU becomes unquestionably the primary performance constraint, the GTX 480 rises to the top spot—only by a paltry two frames per second, though. The GTX 470 doesn’t appreciably outperform the Radeon HD 5850, either.

Round 1: Modern Warfare 2

Call of Duty: Modern Warfare 2 generally runs pretty well on most modern PC hardware, but it does have some parts where lots of activity and heavy use of shader effects can slow it down. We chose to test performance in one such area, where you’re in a firefight inside of an office building. This close-quarters fight involves lots of flying debris, smoke, and a whole mess of enemy soldiers cooped up with your squad in close proximity.

To test, we played through this scene for 60 seconds while recording frame rates with FRAPS. This firefight is chaotic enough that there’s really no hope of playing through it exactly the same way each time, although we did try the best we could. We conducted five play-throughs on each card and then reported the median of the average and minimum frame rate values from all five runs. The frame-by-frame results come from a single, representative test session.

We had all of MW2’s image quality settings maxed out, with 4X antialiasing enabled, as well. We could only include a subset of the cards at this resolution, since the slower ones couldn’t produce playable frame rates.

The trend we’ve established in our prior round-one tests continues here: the new GeForces just aren’t appreciably faster than their closest Radeon competitors—even though they cost more. Disappointingly, the GeForce GTX 480 really isn’t that much faster than the GeForce GTX 285, either.

Round 2: Unigine Heaven demo

And now we begin round two, with a little more focus on high-end solutions and DirectX 11. As you have hopefully gathered by now, one of the distinguishing characteristics of the Fermi architecture is its potential for geometry processing throughput. Nvidia has made a big bet here on the usage model for real-time graphics shifting in the direction of much, much more geometric detail. Generally speaking, the Radeon HD 5870 seems to be quite good at tessellation, although once you get to a certain point with a very large number of triangles in a scene, the GF100 ought to outperform the 5870 thanks to its quad rasterizers and the like.

Few current games (and quite likely, not many in the near future) will really push on this point. Even if they do use tessellation, they’re not likely to put an inordinate strain on the geometry processing throughput of Radeon HD 5000-series GPUs. To do that, we’ll turn to a synthetic benchmark: Unigine’s Heaven 2.0 demo.

This DirectX 11 demo looks pretty nice, overall. One of the major highlights of the demo is its use of tessellation, and the new 2.0 release takes that even further than earlier versions. I think this demo presents us with a nice opportunity to peek into the Fermi architecture’s geometry processing abilities, but I do have some reservations to note.

Although the Heaven demo does push through a lot of triangles, it’s apparently not especially smart about how it does so. To my eye, the three levels of tessellation available in the demo, “moderate,” “normal,” and “extreme,” are visually indistinguishable unless you switch to a wireframe view. That suggests, well, that one shouldn’t use the higher levels of geometry subdivision, since they’re not really helping.

Once you do switch into wireframe view, you can begin to see the problem. Here’s an example from the demo.

That’s a tremendous amount of detail—large portions of the screen are just white with polygon edges. Nifty, right? But look at the close-up of the hull of the ship, and you can see that its silhouette isn’t a smooth curve as one would expect with a nicely tessellated object. Instead, major transitions between large polygons create visible seams, while all of the detail goes toward subdividing the geometry inside of those large, essentially flat triangles, where it’s pretty much useless.

Toggling tessellation on and off for this object reveals that tessellation imparts a kind of “inflated” look to it, where the white areas covered by lots of detail tend to swell as if they’d been exposed to a Joe Biden speech. This is one use of tessellation, I suppose, but it doesn’t seem to be a very good one. One hopes Nvidia won’t be tempted to persuade game developers to make inappropriate use of tessellation just so it can demonstrate the hidden virtues of the Fermi architecture. True progress on this front will require perceptible and unequivocal visual improvements, not just… inflation.

Nevertheless, we do have a heaping load of triangles, and we can see how the GF100 handles them compared to the competition.

The GF100 cards perform quite well in the Heaven demo, and they become relatively stronger when we move from the “normal” tessellation level to “extreme.” It is nice to see that this attribute of the Fermi architecture pays off in a measurable way. Of course, whether or not that will translate into higher image quality and better performance in real games any time soon is another question, one whose answer may not be so positive.

Round 2: Antialiasing scaling with Far Cry 2

My thought here was to show you how performance scaled at different antialiasing levels, because this has been a point of distinction between AMD and Nvidia GPUs in recent years. Starting with the 4000 series, Radeons have handled 8X multisampled AA with a fairly small performance hit compared to 4X AA. Meanwhile, the GeForce GTX 200 series has seen a steeper drop in frame rates when going from 4X to 8X MSAA. (The obvious solution with GeForce cards has been to use one of Nvidia’s coverage-sampled AA modes, which achieve similar image quality with less of a performance hit. However, doing so complicated direct comparisons between GPU brands, since AMD has no direct analog to CSAA.) Nvidia expects the GF100 to remedy that problem thanks to improved color compression in its ROPs.

Trouble is, I’m not sure these results show us what we expected to see. The GeForce GTX 285 doesn’t slow down inordinately when asked to run in 8X MSAA mode, upending our expectations. The contest between the Radeon HD 5870 2GB and the GeForce GTX 470 may be instructive, though. The 5870 2GB churns out more frames per second without antialiasing and with 2X AA enabled. At 4X and 8X, the GTX 470 proves quicker. I’d like to confirm this with a similar set of directed tests using another game, but perhaps the GF100’s ROPs have been brought up to snuff in this respect.

Round 2: DiRT 2

Our second round of DiRT 2 tests raises the ante by switching to DirectX 11 and cranking up the in-game visual quality settings to the “Ultra” preset. We’re also using a newer version of DiRT 2 with some graphical tweaks, so these results truly aren’t comparable to our round-one set. In DX11, this game tessellates the surfaces of the water puddles on the track so they behave more realistically. Tessellation is also used to increase the detail of the characters in the crowd beside the track.

Once we get to the highest resolution, the GTX 480 once again is no faster than the Radeon HD 5870 in this game, although the GTX 480 SLI config scales better than two 5870s in a CrossFire pairing.

Round 2: Battlefield: Bad Company 2

I considered getting Crysis Warhead into the mix of games we’d test for this review—until I got a good look at Bad Company 2. I think we have a new winner in the ongoing sweeps for best-looking PC game. BC2 looks how the Crysis games wanted to look, before they got sidetracked by all of that on-screen high-frequency noise.

BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

Since these are all relatively fast graphics cards, we turned up all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Here we are again, with a major new game that really takes good advantage of a modern PC’s capabilities, and the new GeForces are no faster than their less-expensive rivals from AMD. Heck, the GTX 470 performs almost exactly like the GeForce GTX 285.

Contrary to appearances, though, Nvidia gets points for a better multi-GPU implementation in BC2. Although the CrossFire solutions produce higher performance numbers, both the 5970 and the dual 5870s exhibited image quality issues in this game, even with the latest profile update from AMD.

Incidentally, the average frame rates you’re seeing in the graph above may seem relatively low for the slower cards, but notice that the minimum frame rates are fairly high. All of these cards delivered a fluid, playable experience during our testing. Our frame-by-frame plots illustrate that fact visually.

Round 2: Metro 2033

If Bad Company 2 has a rival for the title of best-looking game, it’s gotta be Metro 2033. This game uses DX10 and DX11 to create some of the best visuals on the PC today. You can get essentially the same visuals using either version of DirectX, but with DirectX 11, Metro 2033 offers a couple of additional options: tessellation and a DirectCompute-based depth of field shader. If you have a GeForce card, Metro 2033 will use it to accelerate some types of in-game physics calculations, since it uses the PhysX API. We didn’t enable advanced PhysX effects in our tests, though, since we wanted to do a direct comparison to the new Radeons. Perhaps next time. See here for more on this game’s exhaustively state-of-the-art technology.

Where a card was capable, we tested it three ways: with DX10, with DX11, and with DX11 plus tessellation and the advanced depth-of-field shader activated. Also, out of curiosity, I made a late addition, testing the GTX 480 and the 5870 2GB with only tessellation enabled, without the advanced depth-of-field shader. Otherwise, we had Metro 2033‘s best visual quality settings selected, along with the game engine’s built-in adaptive antialiasing scheme.

To get repeatable results with FRAPS, we decided to test this game a little bit differently. We stood in one spot, in the scene shown above, and captured frame rates over a 20-second period. This scene is packed with non-player characters and is relatively intensive compared to many areas in the game, so consider these frame rates more of a likely minimum. Many areas of the game will run faster than this particular spot does.

Without the advanced features enabled, this game runs at about the same speed with DX10 and DX11, and the new GeForces are a smidgen quicker than the closest competing Radeons. Enabling both tessellation and advanced depth of field exacts a hefty performance penalty from every solution we tested, and frankly, the visual differences are tough to detect. I wouldn’t bother with those features.

As I said, out of curiosity, I tried using just tessellation on a couple of cards. In that configuration, only three frames per second separate the GTX 480 from the Radeon HD 5870 2GB, and neither card really manages acceptable frame rates.

Power consumption

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 2560×1600 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead because we’ve found that this game’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

Well, that’s not very good. At idle, the GTX 470’s power draw isn’t too scary, but the GTX 480 pulls more juice than the dual-GPU Radeon HD 5970. Two idle GTX 480s in SLI draw just 20W less than a Radeon HD 5850 does while running a game.

The new GeForces draw substantially more power when running a game, too, than the competing Radeons. You’ve gotta take power circuitry inefficiencies into account, of course, but our GTX 480 system pulls 105W more under load than the same system with a Radeon HD 5870 installed. Wow.
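
To put that 105W delta in perspective, here’s a quick and deliberately rough estimate of what it might mean on a power bill. The 105W figure comes from our measurements; the daily gaming hours and electricity rate below are illustrative assumptions, not data.

```python
# Rough annual running-cost impact of the GTX 480 system's extra draw under load.
extra_watts   = 105      # measured delta vs. the Radeon HD 5870 system
hours_per_day = 4        # assumed time spent gaming per day
usd_per_kwh   = 0.11     # assumed electricity rate, $/kWh

kwh_per_year = extra_watts * hours_per_day * 365 / 1000
print(round(kwh_per_year), "kWh, roughly", round(kwh_per_year * usd_per_kwh), "dollars per year")
```

Game more hours per day, or leave the rig crunching around the clock, and the gap widens proportionally.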

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The GF100 cards’ higher power draw numbers translate pretty directly into higher noise level readings on our meter. In spite of some slick engineering in the GTX 480’s cooler, the card is quite a bit noisier than a stock Radeon HD 5870. The only single-GPU solution that’s louder is Asus’ overclocking-oriented Matrix 5870 2GB, which is tuned to keep temperatures down.

I should note that I did all of my GTX 480 SLI testing with the cards installed in adjacent PCIe x16 slots. Nvidia’s reviewer’s guide suggests spacing the cards apart if possible, but doing so would have sacrificed eight lanes of PCIe connectivity on our X58 motherboard—and possibly some performance. The GTX 480 SLI config might have been quieter—and slower—if I had taken Nvidia’s advice.

For what it’s worth, in case the graphs don’t really convey this, two GTX 480s in SLI are really frickin’ loud.

GPU temperatures

We used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, we recorded temperatures on the primary card.

Looks like Nvidia has tuned its fan control profiles to achieve lower noise levels at the cost of higher GPU temperatures. The GTX 480 runs 12° hotter than the GTX 285, and the GTX 470 is a tick hotter still. With that said, Nvidia hasn’t really forged any new territory. The Radeon HD 5870 runs at similar temperatures, and it gets even warmer in a CrossFire team.

We don’t usually report idle GPU temperatures because they can vary quite a bit, depending on the conditions, and aren’t always primarily determined by fan speed control profiles. I should point out, though, that both our GTX 470 and 480 cards tended to drop back to much lower temperatures as they idled on our open-air test bench. We didn’t see the sort of constant 90° temperatures that we saw from, say, the first wave of Radeon HD 4850s.

Conclusions

These two new GeForces draw more power, generate more heat and noise, and have higher price tags than the closest competing Radeons, but they’re not substantially faster at running current games. For many, that will be the brutal bottom line on the GeForce GTX 470 and 480. Given the complexity and the rich feature sets of modern graphics processors, that hardly seems fair, but the GF100 is facing formidable competition that made it to market first and is clearly more efficient in pretty much every way that matters. The GF100’s major contribution to real-time graphics, beyond the DirectX 11 features that its competitor also possesses, is an increased geometry processing facility that has little value for today’s games and questionable value for tomorrow’s. As a graphics geek, I can’t help but admire this aspect of the GF100, but I think it will be difficult for gamers to appreciate for quite some time—perhaps throughout the useful life of these graphics cards.

Then again, given the supply problems and inflated prices that we’ve seen in the graphics card market over the past six months, we’re just glad to see Nvidia back at the table. Even if the value propositions on the GeForce GTX 470 and 480 aren’t spectacular, they’re a darn sight better than zero competition for AMD.

Also, I suspect some folks will still find these graphics cards attractive for a host of pretty decent reasons. If consumer-level GPU computing takes off, the GF100 may be the GPU to have. We haven’t formally tested its compute prowess against the latest Radeons, but the GTX 480’s exceptional performance in 3DMark’s cloth and particle simulations is a positive indicator. Nvidia is also constantly pushing on initiatives that can give its GPUs an exclusive advantage over any competitor, whether it be games with advanced PhysX effects or 3D Vision—or just plain old driver solidity and instant compatibility with the newest games, an area where I still think Nvidia has an edge on AMD.

Then again, AMD seems to be making inroads with game developers, which is what happens when you’re first to market with a whole family of DX11 GPUs.

For what it’s worth, the GF100 may not be a disappointment in all markets. With its geometry processing throughput, it should make a fantastic Quadro workstation graphics card. GF100-based Tesla cards could still succeed in the realm of dedicated GPU computing, too. The Fermi architecture really is ahead of any of its competitors there for a number of reasons that can’t be ignored, and the question now is whether Nvidia can build a considerable business around it. The firm seemed to be expecting huge progress in this regard when it revealed the first details of this architecture to us.

We’re curious to see how good a graphics chip this generation of Nvidia’s technology could make when it’s stripped of all the extra fat needed to serve other markets: the extensive double-precision support, ECC, fairly large caches, and perhaps two or three of its raster units. You don’t need any of those things to play games—or even to transcode video on a GPU. A leaner, meaner mid-range variant of the Fermi architecture might make a much more attractive graphics card, especially if Nvidia can get some of the apparent chip-level issues worked out and reach some higher clock speeds.

Comments closed
    • Tamale
    • 10 years ago

    I’m all for efforts to increase the search for protein-folding breakthroughs, but I don’t quite understand why people would pay more for better folding performance. It’s not like getting more units done per day is going to be something you’ll be able to actually enjoy.. it’s just a number in a database somewhere.

    Getting higher framerates in a game you play very much changes the feel of the game.. it makes everything else you do while playing more enjoyable. Getting more units crunched in a huge distributed project like f@h doesn’t really do anything ‘for’ you..

    Am I the only one who finds this whole thing a little odd?

    • Arbie
    • 10 years ago

    One thing people should consider before buying a GTX480 or 470 is what “running hot” really means, at least with these products. It’s no secret that even after locking out a lot of shaders, the GF100 (Fermi) yields are very low. This is attributed to 1) huge die size and 2) TSMC 40nm process difficulties.

    A large die size increases the number of chips that will be lost to unavoidable wafer defects. The effect of the process problems is more complex, but the result appears to be that operating voltage must be higher than originally intended in order to get enough transistors working to make a chip functional. Because power goes up at least as fast as voltage, and usually much faster than that, this makes the chip run much hotter.

    What you end up with is a very shaky design. Since yield is such a problem with GF100, you can bet that a significant percentage of the chips that pass the automated testing will only do so because of the elevated voltage. Given the number of transistors, the impracticality of thoroughly testing such complex chips over time, and the negative effects of high temperatures on reliability, I would not be surprised if GF100 board failures turn out to be much higher than average even for a high-performance video card.

    Considering that risk in addition to the modest average performance increase that the GTX480 shows over the ATI 5870, and the much larger amount of power it requires and heat that it dissipates into a PC chassis, the Nvidia card looks to me like a very poor choice.

    • Rakhmaninov3
    • 10 years ago

    Edit: What I initially typed about the GTX 4xx’s wasn’t family-friendly.

    • theboltski
    • 10 years ago

    awsome, nice card. nvidia is always better than ATI. hope to pick one up and see how much it kicks ATI’s a-hole in crysis 2.

      • ub3r
      • 10 years ago

      lol. Good luck finding one.
      And if you do find one, be prepared to mortgage the house.

        • theboltski
        • 10 years ago

        haha, i just got a bunch of PM’s calling me all sorts of insults aimed at americans. i should mention that here in australia, nvidia has far better imports and volume. i bought my 8800GT 512MB for $30 less than a 3850 512MB. sadly, we also get crappy coolers here from ATI products. Nvidia are much larger cards, but always have much better quality and cooling and warranty.

          • anotherengineer
          • 10 years ago

          Well maybe that’s the way it is down under. However if Nvidia was ALWAYS better than ATI, then ATI would not exist today, as they would have gone bankrupt or been bought out.

            • MadManOriginal
            • 10 years ago

            “...or been bought out” o_0 (I know they weren’t bought because of their poor performance which is what you’re saying, I just thought this was a funny thing to say.)

        • ub3r
        • 10 years ago

        Well im from Aust too. Lets wait and see how long before MSY stocks them.
        Current best prices are $712 AUD at Mtech.
        http://www.staticice.com.au/cgi-bin/search.cgi?q=gtx480&spos=3

        PS: i also got those sad PM’s.

      • DrDillyBar
      • 10 years ago

      Crysis; Classic.

      • Sunburn74
      • 10 years ago

      Radeon 5850 oced 5% is equal to the gtx 470 whilst still being cooler, less power hungry, and quieter. Come on man. My 5850 black edition (factory oced 5 percent) at idle runs at 20 percent fan speed (inaudible) and at load at 28 percent fan speed (almost inaudible, but still quieter than a gtx 260 at idle at the lowest possible temp). To me it’s temps and noise, because the temps from the gpu will heat up my chipset and the general air in my box and increase the total noise of the cpu fan as it ramps up and the psu fan as it ramps up too. The noise is just unacceptable.

      Considering that on ebay you can buy powercolor 5850’s new in box for $279 shipping included, I don’t see how anyone can even remotely think Nvidia has won this round or even has a decent product out there fighting for space. I’m no fanboy (i’ve owned 3 ati cards and 2 nvidia cards over the last 8 years with the two nvidia cards being the most recent), but geez its obvious that the gtx 470 and 480 in their current form are not what Nvidia intended to release and obviously are a last ditch effort to at least get SOMETHING out the door.

        • Fighterpilot
        • 10 years ago

        I agree,my XFX 5850 runs whisper quiet and is idling right now on 34C.
        It maxes out at 54C when gaming.
        Cypress is the clear winner of this generation of GPUs.

          • Silus
          • 10 years ago

          Hehe in here the HD 2900 XT was definitely the best option over the similarly performing 8800 GTS 640, that consumed less power, generated less heat and was quieter:

          https://techreport.com/ja.zz?id=183296

          Love the bias :)

            • BoBzeBuilder
            • 10 years ago

            I wonder how long it took you to find that post. It was years ago. Or maybe you keep records.

    • Fighterpilot
    • 10 years ago

    Imagine if Thermi had shipped with NVidia’s recent “Heat em Up” drivers.
    LOL.

      • Farting Bob
      • 10 years ago

      That may have resulted in wildfires being started in every fanboy’s home, spreading quickly to engulf half the country.

      Thanks to ridiculous prices here in the UK (470 = 5870 prices) nvidia will be lucky to sell more than a dozen cards until they can work out how to make a profit and still remain competitive at the mid and mid-high price ranges.

      • Kaleid
      • 10 years ago
        • OneArmedScissor
        • 10 years ago

        LOL! That’s the best yet.

        • Suspenders
        • 10 years ago

        Thank you! The scene changes so suddenly…I haven’t laughed that hard in a while 😉

    • darkpulse
    • 10 years ago

    Well Scott is right. even though fermi is a good gpgpu it dosent matter for *[

    • Satyr_2
    • 10 years ago

    Scott,

    Fantastic review.

    Quality information is better than the “rush to take advantage of the media hype” stuff that i’ve read over the past few days.

    I was really hoping that team green would produce a clear winner, but now, i suspect that a lot of NVidia fans will have to think seriously about their next purchase.

    Me included

    • playboysmoov
    • 10 years ago

    Damage,

    Could you do a value proposition chart like you did with the core i3 and i5 reviews?

    I would love to see how this lays on a scatter plot with the core i7 builds, say in a sweeter spot or utility player config. Just a thought….might generate front page spots at digg and slashdot again.

    • anotherengineer
    • 10 years ago

    Wow over 310 posts on Fermi!!!! It succeeded in getting lots of attention at least 😉

    And yeah a post over 310 😉

    So Scott, when can we expect the 5870 eyefinity review?? 😀

    Hey Prime1 check this out, pretty cool eh!!

    http://www.techpowerup.com/119073/NVIDIA_CUDA_Emulator_for_every_PC.html

      • flip-mode
      • 10 years ago

      Obvious typo is obvious.

        • anotherengineer
        • 10 years ago

        lolz, I could try to pawn that 4870 off as april fools, but that was just me making an obvious typo/brainfart lol

        changed to 5870 my bad

    • Freon
    • 10 years ago

    I think you nailed the conclusion overall. Not an utter failure, but also not terribly exciting or world moving. A tough sale compared to ATI’s offerings, especially if their prices slip just to their original MSRPs.

    You can see there is potential. Good DX11 performance, though some of that gain disappears at 2560×1600 in games, which is disappointing. Strong on tessellation. Again, I’m not sure the extreme tessellation levels where the 470/480 really start to win aren’t just blatant overkill and waste. I find it likely the computing will be strong, but the unanswered question is whether that will matter to the target audience. A decent dose of “promise.”

    The noise level of the 470 isn’t that bad, but the 480 is starting to get loud and looks bad compared to the 5870. 5db+ is a big difference. Probably not awful, I’ve lived with my single slot 4850 for a while, but I certainly would rather have a quieter card when possible.

    An extra 100w under load between the 480 and 5870 means rethinking a PSU purchase and spending a few extra bucks. This adds to cost. I suppose you could also argue over the course of a few years of ownership it will cost more to power. Back of hand calculation gives me an extra $20-35/year between the 480 and 5870. Not insignificant, though it depends a lot on if you keep your machine on 24/7, use standby, etc. This still adds to overall cost.

    I’m definitely not buying one based on the price/performance now, nor the promise of future potential. I did see a few 5850s for sale yesterday on Newegg for ~$280 and a 5870 or two for under $400, though, so maybe Fermi is functioning correctly for me, which is to scare ATI enough to give their workers some more hours to reduce this “scarcity” with their cards.

    Also, 5870 2GB for $499 looks like a tough sale…

      • khands
      • 10 years ago

      Yeah, I’d like to know what AMD was thinking with that one, it provides so little benefit and is IMO way overpriced. Biggest benefit is single card 6-way eyefinity, but even then it’d be more prudent to just get two 5670’s/5750’s most of the time.

    • dpaus
    • 10 years ago

    THIS!! IS!! SPARTA!!

    Edit: crap….

      • flip-mode
      • 10 years ago

      Bwa ha ha ha ha ha!!!!

        • dpaus
        • 10 years ago

        At least one of us tried!

          • flip-mode
          • 10 years ago

          Yeah, I so totally wasted it 😥 I’ll do better next time.

    • green
    • 10 years ago

    while i would never buy a card like this (the ones that look like a brick in the case) i will say it’s not too bad
    it’ll be interesting to see if optimus comes to the desktop side of things. at which point idle temps will get lower
    i’m sure in half a year the drivers will mature and the scores will improve like they did with the 58xx
    i don’t see tessellation as a big thing in the future as you’ll be asking devs/coders to mix with artists
    i do see gpgpu going larger scale (not necessarily with cuda but with opencl and direct compute)
    i don’t see 3d vision a much of a plus (I don’t want to have to wear glasses for the effect)
    the temps do need a fair bit of taming when it’s pushed
    but overall the gtx480 is to nvidia as the hd2900xt was to ati

    but overall… all this has happened before and will happen again

      • douglar
      • 10 years ago

      Not so sure. The delay may be because of poor yields, but Fermi has been on the software driver’s workbench for 4-5 months already. The drivers will get better, sure, but I wouldn’t expect a large additional increase in driver performance.

      In my opinion, the Fermi launch is exactly what things would have been like if Nvidia had held back the 5800 until May 2003 instead of releasing it in January. They improved the drivers to make the product competitive. They got thermal solutions into place that were not an embarrassment. And never produced a compelling product until the next die shrink.

      Unfortunately for the AIB makers, the Fermi experience isn’t coming after years of solid sales. This situation is after 2-3 years of bad times for Nvidia AIB makers. While Nvidia is going to get by with OEM sales and high-end workstation products, the BFGs and the XFXs are going to have to switch to ATI or go under.

    • NIKOLAS
    • 10 years ago

    Nice review and after giving you guys a serve in previous reviews, I am glad that you did the charts without SLI and Crossfire results blighting the single card results.

    Made for much more enjoyable reading.

    • spigzone
    • 10 years ago

    ?? … The fightin’ BFG … ??

    Apparently sucking hind GTX 470/480 teat, this most boutique of the Nvidia AIBs is surely in dire straits with only 30,000 total GTX 4xx cards available (according to Charlie) and the GTX 2xx long EOL’ed. Newegg currently lists 7 vendors for the GTX 4x0 cards and BFG isn’t among them.

    Unless they get AMD to take them on, how can they survive?

      • douglar
      • 10 years ago

      This is a rough economic climate for AIBs that depend on Nvidia. Two and a half years of drought takes its toll.

      First there was 2008, which was just a bad year for everyone. Then there was 2009, when the GTX 260, 275, & 285 chips failed to sustain market share against the 4xxx series. And now the first half of 2010 is going to be completely devoid of any viable quantity of competitive product.

      If this current situation continues, BFG isn’t going to be the only company to go under.

    • blackheart
    • 10 years ago

    Please delete this comment

      • VILLAIN_xx
      • 10 years ago

      Well, this was a reply to #273. lol.

      It’s unlikely they will die, but they do need to pick up some slack… AMD/ATI probably won’t be making better products either because of the mediocre competition. Would you be satisfied then?

    • blackheart
    • 10 years ago

    I really hope NVIDIA dies. Who cares? They have never had any worthy VGAs around: stupid driver issues, cards failing much more often. No professional would use NVIDIA, ever. All our workstations in our company are using Radeons; there is NOTHING from NVIDIA, nor do I see any reason to acquire any product from them.

    As if it wasn’t enough, no gamer will want a Geforce 470/480, only the worst fanboys, the NVIDIA fangroup will slowly get smaller until it disappears.

    These cards aren’t even here yet and they already need a new product to be competitive… This has got to be a joke, seriously. If you use the newer 10.3 driver, the Radeons will probably smoke these Fermies; don’t forget the huge overclocking headroom.

    They need another year or two to design a new competitive product, and the newer Radeons are coming by the end of the year and will probably be at least 50% faster than the previous generation… No chipsets, no GPUs, NVIDIA will end up being sold to VIA. DEATH.

      • Palek
      • 10 years ago

      Are you the evil(er?) twin of PRIME1?

        • MadManOriginal
        • 10 years ago

        Nah, he’s Bizarro PRIME1.

        http://en.wikipedia.org/wiki/Bizarro_World

          • Palek
          • 10 years ago

          Thank you for that very important and educational lesson in pop culture. 🙂 Bizarro PRIME1 it is!

      • Meadows
      • 10 years ago

      Incorrect.

      • Silus
      • 10 years ago

      At least you’re honest, unlike most of the ATI/AMD fanatics around here.

      But for the sake of your “argument” (if one can call it that), let’s assume NVIDIA does take such a hit with this launch that it won’t be able to lift itself back up (unlikely, since they have no debt, one bad product cycle certainly won’t kill them, and they have their hands in other markets as well).
      If they “die”, guess who will pick up the graphics division? That’s right: Intel. And you know what that means? AMD’s current world of hurt in the CPU division will be extended to the GPU division as well, just like the R600 days.

      Anyway, that’s an unlikely scenario, since they won’t die as you seem to want (at least you don’t say you want competition, but secretly hope for them to “die” like most around here). If you haven’t noticed, despite the problems they had with this chip, this architecture certainly has “legs” in it. A chip with disabled units, suffering from leakage and lower clocks, is beating AMD’s best GPU. And if the rumors about GF100 having more TMUs than those shown at PR events are true, then once they fix the leakage problems, enable all the units, and increase the clocks, GF100 will be on the heels of the HD 5970.

        • poulpy
        • 10 years ago

        The OP is obviously retarded but to get to your own point -and beat a dead horse- yes it’s a very interesting architecture on a technical level which is strong in both gaming and GPGPU. That’s a given, and as a geek I really enjoy reading about it.

        Now, there’s no such thing as a free lunch or miracles, and on this one, because of the sheer complexity of the 3B-transistor design and TSMC’s poor track record at 40 nm, it’s very unlikely IMO that Nvidia will improve GF100 much in the short term, if at all.

        The power consumption, thermals, and noise won’t just wither away, and even if Nvidia manages to re-re-spin the design in a couple of months, they’d surely face AMD’s own re-spin, which even in a dual-GPU card configuration would consume less, cost less, and perform better.

        Nvidia tried to bite off a bit too much here. This will be a great design for GPGPU computing through Tesla products, but I’m not convinced it’ll be a hit with gamers nor a money-making product for Nvidia in that market.

          • Silus
          • 10 years ago

          “Short term” in these things means 3 to 6 months, but there’s no doubt it will be done. This situation reminds me of ATI’s R520 (although the performance difference between it and the 7800 GTX 256 was smaller than the lead Fermi has over Cypress). It was late, but overall it had a lead in performance, which NVIDIA eliminated almost immediately with the 7800 GTX 512. ATI had no choice but to release the X1900s 3 months later, even on the same process (90 nm), since they knew they were having too many problems with R520 and were going to be late, so they worked on another chip in parallel.

          If it isn’t “Fermi 2”, it’s likely a revamped Fermi 1, with the things that need to be fixed actually fixed, and maybe some of its “fat” (the parts that aren’t graphics-related) removed.

            • poulpy
            • 10 years ago

            ~3 months would be short term; ~6 months would be normal execution IMO for a refresh, especially when the initial product clearly needs it.

            Then sure, they both have teams working in parallel, and a delay for Team A doesn’t always mean the same delay for Team B, but you still can’t say “it’s OK, because once they fix the leakage, the frequencies, the power consumption, and the noise, this card will kick ATI’s current offering”.

            Because:
            – you’re talking about a 3B-transistor chip with a very complex architecture (that already went through, what, 3 respins?) and no change in the process within that time
            – they also have to allocate R&D to work on all the other lower-end cards; that’s only the flagship and its wingman out
            – in 3/6 months ATI -if needed- will also have their own revision or even a new architecture out, and that’s what Fermi will have to fight against

            Time will tell, but -again- as interesting as Fermi is for the geek, I doubt its value for gamers and its ability to make Nvidia money (professional market apart). If it at least delivers a stable base for a more efficient Fermi 2, that’s already something.

      • blackheart
      • 10 years ago

      I’m just upset my GTX 285 got fried rofl.

      Probably going to see a lot of Fermies ending up the same way with so much heat.

      I’d be happy enough to just get Jen-Hsun Huang killed on my die remarks.

      • revparadigm
      • 10 years ago

      Wishing the death of any competitor in an already sparsely populated industry is lose-lose for real customers in every way imaginable. Only rabid fanboyz launch into such rhetoric without thinking through the consequences of a market where one company has free rein to throw out uninspired “safe” designs without the threat of losing money & pride. I personally wish there were 15 companies to pick and choose from, with the best one ending up in my PC for that release cycle only.

      Then on to the next round, next year…

      Please, God, let it be 🙂

    • outcast
    • 10 years ago

    Thanks for your hard work… now i know the 4xx is not for me at this time… well, i just lost my will to upgrade to a dx11 GPU… maybe next generation, or after some serious price cuts…

    • wira020
    • 10 years ago

    Almost forgot: does the card really idle at 50MHz core? Meaning even starting an internet browser would jump the card back up to max clocks again?

    • wira020
    • 10 years ago

    Nice review. i just don’t understand how you got the 5850 to that temp… seems too low to believe… and thanks for explaining about tessellation in the Unigine benches… i always wondered why people said that benchmark isn’t an ideal stand-in for games…

    • thermistor
    • 10 years ago

    I’ll say it again…I expect that nVidia will slash prices to their AIB vendors to keep themselves afloat. I fully expect cheap last-gen 2xx series in a month or two.

    The lower-power variants cut from a huge die will not be all that profitable, whereas ATI’s strategy of a reasonable die size, plus the 4770 experience that set them up for a successful shrink, means they’re making good bucks on everything they sell.

    Then again, if there were ever a moment when it’s safe to screw up, this is it: console ports are not driving big performance gains.

    Then again, a successful Win7 means that DX11 has a real future ahead of it, unlike DX10(.1), and even that red-headed stepchild Vista gets DX11. At some point, sooner rather than later, last-gen cards will be undesirable.

    nVidia has to walk a narrow path to come out on the other side unscathed.

    • Da_Boss
    • 10 years ago

    Why do I get the feeling that in a year or two, the games and applications will start to really take advantage of nVidia’s overly ambitious Fermi? That in the future, their design choices will pay huge dividends?

    I mean, I’d never be dumb enough to spend $500 on that notion.. just saying. Maybe the risk would pay off.

    Then again. That’s what they said about HD 2900 XT as well…

      • OneArmedScissor
      • 10 years ago

      In “a year or two,” a card like this will be in the $100 range.

      It never pays off to buy the high end in an attempt to “future proof” yourself.

    • JoJoBoy
    • 10 years ago

    Great review, Scott, much more entertaining than other reviews. The GTX 470 at $349 will sell by the truckload. It has plenty of performance in games and folding, and OK power/heat usage. The worst thing about this is that the AMD 5850 will not fall in price much anytime soon. Both companies will enjoy large margins for a while during this chip supply problem.

      • Skrying
      • 10 years ago

      Did you read a different review? The GTX 470 offers either the same or sometimes worse performance than the HD 5850. The HD 5850, on the other hand, draws less power, produces less heat, makes less noise, and costs less.

      The HD 5850 is an even better deal in light of the GTX 470 than it was at launch. That’s insane, especially considering its price has actually gone up. The HD 5850 is really a shining example of the highest-end part scaling down nearly perfectly. The GTX 470 is the complete opposite.

        • imtheunknown176
        • 10 years ago

        He didn’t say it was faster than a 5850. I don’t think it will sell by the truckload just because of its computing power. I would buy one over a 5850 because I’m sure it will be a beast of a folder. There are probably other people who would do the same thing (assuming they were looking for a new card).

          • Da_Boss
          • 10 years ago

          Really? Considering you can find 5870s for under $400? You’d get a 470? I guess it depends on how much folding you do with respect to gaming. If I were looking, I’d be looking for a gaming card first.

          To each their own, I guess.

            • imtheunknown176
            • 10 years ago

            My gaming resolution stops at 1920*1200, so yeah the $50+ I would spend on getting a 5870 isn’t exactly worth it to me (especially when weighed against the computational potential). If I knew someone that was buying a card purely for gaming I would recommend the 5850.

        • JoJoBoy
        • 10 years ago

        Right now I would not recommend the GeForce GTX 470 over the HD 5850, but I would have no problem saying it’s a fine choice for some people. As imtheunknown176 stated, it is supposed to be great at GPGPU-related things. Anandtech found the GTX 480 to be 3.5 times faster than the GTX 285 in Folding@Home. Personally, I have my computer folding about 8 hours a day; I’m lucky to get 2 or more hours a day playing games. My whole point was that it is a competitive card against the rest of the market, so Nvidia will not see much reason to lower its price unless AMD drops all of its HD 58x0 cards back to their original prices. And seeing that the HD 58x0 series appears to be the better choice for most, AMD will not see much reason to lower its prices or margins either.

          • Skrying
          • 10 years ago

          But it’s not competitive. At all, really. Yes, it can potentially do GPGPU work significantly better, but besides Folding@Home, what would people have to benefit from that? Additionally, at what point does this substantial power draw start to scare people off? Beyond GPGPU it is inferior in every way. That’s not competitive.

            • JoJoBoy
            • 10 years ago

            I’m not sure how you believe it won’t be competitive at all. According to the benchmarks in this review, the HD 5850’s performance is 96.4% of what the GTX 470 scores. The benchmarks show that the GTX 470’s frame rate numbers are better… I would call that very competitive, not inferior in every way.

      • just brew it!
      • 10 years ago

      Actually, no… they will likely /[

    • FuturePastNow
    • 10 years ago

    I am impressed. It’s not the kind of impressed that Nvidia was going for, I’m sure, but still, impressed.

      • MadManOriginal
      • 10 years ago

      Are you talking about the grill-type impression you get when you touch the top of the card while running Furmark?

        • squeeb
        • 10 years ago

        hahaha, that’s pretty good…

    • flip-mode
    • 10 years ago

    We all have to admit, it is an ambitious design. It is quite bold, in every sense. If it weren’t for the dang power consumption… if it weren’t for TSMC’s 40nm fail… if it weren’t for that disabled shader cluster… But even still, this is, as Mr. Wasson said, a fascinating beast, regardless of whether it succeeds or fails.

    • jamsbong
    • 10 years ago

    Fermi seems to be good only at the GPU computing stuff, like pumping out more triangles and doing better at Folding@home.

    At the moment, the things that are slow in computing are engineering simulations like CFD and HD video encoding. We know that these two types of work are easily handled by a high-end GPU that can crunch TFLOPS’ worth of data in seconds.

    However, if I want to encode a 2-hour video, I’d have to endure the dustbuster noise that Fermi makes and pay the electric bill. Nah… that’s annoying and not for my quiet, comfy home.

    For engineering, I can connect to a server full of Fermi chips that does my expensive computing, and I don’t have to listen to the noise it makes, feel the heat it produces, or pay the electric bills it demands. That is a good thing.

    Fermi is worth it as an engineering number cruncher. As a gaming or video-encoding card? I’d prefer a quieter card that consumes less power (and produces less heat). The people who bought ATI Cypress must be laughing with joy knowing how good an investment it has been.

    • Convert
    • 10 years ago

    Uh, this review is downright damning for Fermi, performance-wise.

    If I were remotely interested in a video card this expensive, I don’t see any reason at all to go with an Nvidia card.

    I don’t think there is any doubt about the card’s computing power, but for the majority of users out there, the benefits do not outweigh the downsides.

    • rUmX
    • 10 years ago

    I’ve been waiting for the TR review. Very well written and a great choice of games tested. Thanks Scott for the awesome review!

    Kyle from [H] mentioned that Nvidia will be skipping the B stepping and going straight to Fermi 2. Does this mean there won’t be a Fermi refresh and we’ll see a GTX 580 instead?

    Does this also mean that Nvidia will abandon the low-end/mid-range DX11 cards to AMD as well? Since this is one behemoth of a monster, I don’t expect lower-end iterations to materialize. I wouldn’t be surprised if Nvidia re-spun either GT200b or G92 with DX11 to compete at the low end.

    • leor
    • 10 years ago

    This is more of a test card than anything that will make a big impact. Nvidia fanboys will buy the card, as well as the uninformed who get caught up in the marketing, but as Scott said in the review, it’s likely that by the time the new features in this card actually get used there will be a new gen out.

    What this may provide is an architecture to build on similar to what ATI’s done with Cypress to make the next gen faster to release and better in terms of performance scaling.

    • colinstu
    • 10 years ago

    So… it looks like the 5870 is the card to get.

    • playboysmoov
    • 10 years ago

    After reading this review, I hope Huang doesn’t hang himself. I was in the market for a GTX 470; now I’m going to spend the extra dough on an HD 5870. It performs better, runs cooler, and uses less energy. It may be bigger, but I have a HAF 932, so it will fit.

    This is a major win for ATI. Not only are the cards late, but they also demand a huge price premium over ATI’s current HD 5800 products, even at their inflated prices.

    Looking at my makeshift table below, I don’t know how the boys in green could have justified their suggested MSRPs for a product of this quality.

    Even at the going street rates, the GTX 470 is only 60-70 bucks cheaper than an inflated-price HD 5870.

    Suggested MSRPs:

    HD 5850 $269 vs. GTX 470 $349
    HD 5870 $379 vs. GTX 480 $499

    If the 5800 series ever goes back to its suggested retail prices, then Nvidia is going to be in big trouble.

    I need to find out what flavor of dope the boys at nvidia smoke because it must be some good stuff! Wow! Major fail!

      • mcforce0208
      • 10 years ago

      i lol’ed!!!

    • Ardrid
    • 10 years ago

    Fantastic article as usual, Scott. I’m sure you put in some serious man hours to get this thing cranked out. You deserve a nice break and a cold one, sir.

    • douglar
    • 10 years ago

    It would have been nice to see dual-monitor power consumption numbers from Damage. Some reports show significant Fermi power increases if a second display is enabled, even when the second screen is just showing the Windows desktop: a 40% power increase at idle and a 15% increase at load.

      • MadManOriginal
      • 10 years ago

      I second that. It would be easy to check too. I know when I had a GTX 260 that it would *not* downclock on the desktop when two monitors were hooked up. As soon as I unplugged one of the monitors the card downclocked and I am willing to bet the same thing is happening here.

    • StuG
    • 10 years ago

    I have to say those CrossFire temps don’t resemble my CrossFire temps at all… and I’m in a low-CFM, silent-case situation.

    They never break 90C…

      • khands
      • 10 years ago

      What’s the other hardware you’re using? They’ve got an i7 965 Extreme, and 6 DIMMs in there.

    • BooTs
    • 10 years ago

    flip-mode is on thewhatnow?

    edit: reply fail.

    • swaaye
    • 10 years ago

    The chip is just too much for the manufacturing process. Amazing architecture, but while designing this baby they guessed wrong on TSMC’s future capabilities.

      • NeelyCam
      • 10 years ago

      Yes; this is mostly a fail by NVidia – not TSMC.
      Although TSMC is not without blame…

        • WaltC
        • 10 years ago

        Agreed; this is all so reminiscent of NV30, after which Nvidia didn’t hesitate to blame the fabs for its failure. Nvidia never did have much to say about the fact that NV4x differed dramatically from NV3x in design, and accordingly wasn’t “hampered” by inept fabs… ;)

    • tejas84
    • 10 years ago

    I have to say PRIME1 is right. Fermi is a lot better than some of the TR readers wish to give it credit for.

    Scott’s conclusion was hardly that Fermi was a damning failure. Far from it, like me, he admires the GF100 architecture!

    I admit that TSMC is an epic fail company though… Can’t wait for GF to put them out of business.

      • PRIME1
      • 10 years ago

      l[

        • flip-mode
        • 10 years ago

        Now that you mention it, it seems like a dead ringer for the x1800xt

        • Meadows
        • 10 years ago

        g{

          • khands
          • 10 years ago

          If lying caused pain I’d have killed a lot of people by now.

      • NeelyCam
      • 10 years ago

      /[

        • flip-mode
        • 10 years ago

        GloFo is better than TSMC just as Intel is better than GloFo.

        • anotherengineer
        • 10 years ago

        Indeed. I think some people just don’t realize how complex taping out these chips in a chunk of silicon is.

        Also, I think we hear more about TSMC because both ATI and Nvidia said there weren’t enough cards due to TSMC’s problems, and then TSMC came out and admitted it.

        If there were a shortage of Nehalems due to the fab process, Intel would probably say it’s due to a transition rather than manufacturing defects, and they could, since it’s an in-house operation.

        • Shining Arcanine
        • 10 years ago

        Intel has been able to do 32nm without issue. Just the fact that something is hard does not mean that you get partial-credit if you fail and someone else can do it instead.

          • just brew it!
          • 10 years ago

          Intel has the best process technology in the industry, period. And the deep pockets (due to their dominance of the x86 market) to ensure that it stays at the bleeding edge. What’s easy for them is difficult (or impossible) for pretty much everyone else.

          • NeelyCam
          • 10 years ago

          Not without issues, but without outsiders knowing about them.
          Also, Intel has very very deep pockets and an army of engineers to deal with said issues.

            • Shining Arcanine
            • 10 years ago

            Maybe Nvidia should talk to Intel to see if they would fabricate chips for them. I am sure that Intel would love the income that would come from it, especially since it will hurt AMD, which currently owns ATI.

            • wibeasley
            • 10 years ago

            Those two companies haven’t been very friendly the last two years. I’m not sure nvidia would want a competitor to have in-depth access to its IP.

            Maybe I’m wrong, though. Especially if the usual foundries aren’t able to do what nvidia needs.

            Does Intel even manufacture product for other companies?

            • NeelyCam
            • 10 years ago

            No – Intel fabs are only for Intel products.

            • OneArmedScissor
            • 10 years ago

            Except when they have a joint venture’s name on them. :p

            • NeelyCam
            • 10 years ago

            If you mean Intel/Micron, then yeah I guess you’re right. 🙂
            … somehow I don’t think there will ever be a fab with a big green NVIntel logo on the front lawn, though.

            • reactorfuel
            • 10 years ago

            The chip designers and process people can talk honestly, too. TSMC’s got reasons to gloss over any potential problems (nobody’s going to transition to a new process if there are big known issues), while at Intel, they can talk and figure out workarounds for those known issues. Any chip designer who contracts out the fab work has to feel around in the dark for potential problem areas (complete with expensive, time-consuming test wafer designs). If you’ve got in-house fab capability, on the other hand, you can know about the problems /[

            • NeelyCam
            • 10 years ago

            I’m sure you’re right. TSMC never had to be open and honest because there was no competition.. Maybe things will change with GF entering the foundry market.

            • reactorfuel
            • 10 years ago

            Even with competition, dishonesty can pay. If one foundry is honestly saying “we’ve got problems A, B, and C,” and the other’s lying through their teeth and saying, “everything’s pretty much OK, no major problems,” people are likely to go with the liars. By the time it comes out that they’ve actually got a bunch of problems, the customers are already pretty much locked in.

            Boeing’s not a semiconductor company, but they provide an excellent example of how this stuff can work. Their high-tech 787 super-jetliner has hit all kinds of delays because they outsourced everything they could. Their suppliers promised zero problems, while the internal departments provided realistic projections. Executives went with the outsourcing option, because hey, zero problems! Surprise surprise, the outsourced parts are ridiculously late, because “we can do that no problem!” turned into “just give us a couple of years to learn to do it, first.” Outsourcing core business activities is generally a pretty bad idea.

    • flip-mode
    • 10 years ago

    s[

      • Darkmage
      • 10 years ago

      Let’s not forget the rest:
      Price (Newegg): HD 5870 $419 | GTX 480 $499 | 480/5870 = 119%

      Assuming the 480 comes in at the projected retail price and the 5870 stays at the current inflated level.

      Whatever. I’m an ATI fanboy of many years and EyeFinity is a must for me. I’m pulling the trigger on a 5870 tomorrow.

        • indeego
        • 10 years ago

        Curious, why is Eyefinity a must?

          • Darkmage
          • 10 years ago

          I have size issues. 🙂

          I’m already running two displays and the videos of three displays running MW2 have me hooked. I suppose EyeFinity isn’t a “must have” feature, but it’s here, it’s available and I find it marginally more valuable than the GPU acceleration my video suite would gain if I switched to NVidia. I don’t spend a lot of time editing DVDs, but I spend a lot of time killing zombies.

          Hmm… properly reworded, I suppose “EyeFinity is a feature that is in production and tips the balance towards ATI for my next purchase.” is an accurate statement.

          Edit: “Next purchase” was this morning. Picked up a 5870 for $389, so the prices are starting to drop already.

            • indeego
            • 10 years ago

            I don’t even have a 30″ screen (I have a 24″ 1920×1200 and a 20″ 1600×1200), and I can’t imagine going wider while still sitting 2 feet away. I would think fewer screens at higher resolution would trump more (3+) screens at lower resolution any day of the week, both in terms of gaming and productivity.

            • SomeOtherGeek
            • 10 years ago

            Maybe it will benefit those that have bad eyes? I know I would.

            • indeego
            • 10 years ago

            Bad eyes, uh, what? How does putting bezels in your face help eyesight? If you had bad eyes, a larger monitor is probably a better way to go, and perhaps increasing the font DPI…

            • SomeOtherGeek
            • 10 years ago

            I didn’t know you were talking about bezels, but ok. Just the fact that you could sit back and view something at a distance is a nice thought. Of course, I would never get something like this. Yep, a 30″ would be the way to go – having the higher resolution plus being able to see it, just makes sense. Of course, I would need the 1000+ bucks to get it and it is just not worth it, so I will stick with the 1280X1024 with the DPI at 125, like you said.

            • grantmeaname
            • 10 years ago

            you can get a 28″ 1920*1200 monitor for around 300 bucks. A few of us have the Hanns-G HG281D, and it’s wonderful for my uses. Someone also recently bought the HP 27.5″ monitor, and I think they were relatively happy with it.

            • derFunkenstein
            • 10 years ago

            not to mention the bezels present if you try multi-monitor gaming. I don’t get Eyefinity, but stereoscopic 3D isn’t that interesting to me, either, and that’s the feature that Sony (with the PS3 and their 3D TVs) and nVidia (as of semi-recently) is pushing. Meh.

    • Da_Boss
    • 10 years ago

    Well over 160 people managed to say “ouch” before I did.

    Reading that article, I can’t think of a single good reason to get a GTX 470 or 480, given the current market landscape.

    That being said, this doesn’t bode well for that price war I was hoping for. AMD has no reason to lower prices as they’re already in a terrific position. Then again, I’m spiteful. Knowing nV can’t survive lowering its prices, I’d drop mine just to slowly squeeze them out of profitability and market share.

      • mcnabney
      • 10 years ago

      If AMD knows anything about business, and after Hector I was really starting to doubt that, they should drop their prices only to the point where they can sell all they can make. Any cheaper and cards won’t be available; any more and inventory will build up. Make no mistake, due to chip size alone they have a lot of room to move on price if they chose to, but it would make no sense to start a price war just to harm a competitor.

        • Da_Boss
        • 10 years ago

        Well I see it as gaining valuable market share and ‘mindshare’.

        I’d bet that most people who buy an AMD card this generation will be more inclined to buy one the next. Also, I bet AMD can still make a decent profit even if the 5000 series were 20-50 dollars cheaper, which is something nVidia cannot claim.

        Going forward, it’d be more worthwhile to have a much higher share of current-gen cards and a slightly lower profit margin per card than a more even split with a higher margin per card. They’ve already got 99% of the DX11 market share; why stop now?

    • anotherengineer
    • 10 years ago

    WOWZERS

    Look at all the posts.

    My question is, are there any reviews with the gtx 470 and 480 showing folding performance??

    And we need a poll here, who is going to get one??

    • michael_d
    • 10 years ago

    The review is wrong in these areas:

    -Battlefield: Bad Company 2 is NOT nearly as good looking as Crysis.
    -Metro 2033 is very playable with DX11 and tessellation enabled; it only becomes unplayable when Advanced DOF is enabled.

    • 5150
    • 10 years ago

    Can anyone here justify why they care more about one brand than another?

      • flip-mode
      • 10 years ago

      Some people feel they have been burned by one or the other. Some people feel Nvidia innovates more than ATI. Some people like Nvidia because it used to be allied with AMD. Some people used to like ATI because it used to be allied with Intel. Some people (me) hate Nvidia’s behavior, some people love Nvidia for their drivers, some people hate ATI for their drivers, some people love Nvidia for their Linux drivers, some people love ATI for their attitude toward open source, some people love Nvidia from the days when OpenGL mattered, some people love ATI from the Radeon 9700 Pro days, some people love Nvidia for SLI… I’m sure there are more reasons than I could think of…

        • esterhasz
        • 10 years ago

        Am I blind or is there no public information on Fermi die size?

        The 58xx chip is reported to be 334 mm², and if you scale by the 2.15B vs. 3B transistor count, that would make for a little under 500 mm². But smart transistor packing can reduce the needed die space. Anyway, with a die that big, the chance of losing chips to TSMC’s still-less-than-perfect process is pretty high, even with the one shader group disabled. It will be interesting to see what kind of margins NVDA can squeeze out there… looking forward to the Q3 earnings conference…

        edit: sorry, wrong thread…
        But while I’m at it: I like AMD for going from $2.50 to $9.30 in a year…
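
        For reference, a quick Python sketch of the die-area scaling estimate above; the only assumption is that GF100 reaches roughly the same transistor density as Cypress, which is exactly the rough-guess part:

            # Scale Cypress's reported 334 mm^2 / 2.15B transistors up to GF100's 3.0B.
            cypress_area_mm2 = 334.0
            cypress_transistors = 2.15e9
            gf100_transistors = 3.0e9

            density = cypress_transistors / cypress_area_mm2   # transistors per mm^2
            print(f"Estimated GF100 die area: ~{gf100_transistors / density:.0f} mm^2")  # ~466 mm^2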

    • Vaughn
    • 10 years ago

    i’m really glad i picked up a 4890 last fall. I will look at ati’s next refresh as a possible upgrade if I find games that make my current card struggle. i’m gaming at 1680×1050!

    • indeego
    • 10 years ago

    /[

    • End User
    • 10 years ago

    I’m going to hold onto my GTX 260 216 until ATI releases their Southern Islands update.

    • Mat3
    • 10 years ago

    Are the double-precision FLOPS numbers in the table on page 1 correct? Isn’t the DP rate on current Radeons exactly 2/5 of the single-precision number, given how four of the five stream cores work in pairs to do it? Also, is the 480 number correct there?

      • Damage
      • 10 years ago

      Cypress’ DP rate is 1/5th of its SP rate. As I understand it, at peak, all five ALUs in a block can execute one SP MADD per clock. In DP, the four “thin” ALUs in a block work together to execute just one DP MADD per clock; the “fat” ALU doesn’t help. So one DP MADD happens per clock peak, versus five SP MADDs.

      I believe the numbers for everything else are correct. I explain about the crippling of GeForce variants of the GF100 in the text.

      • thecoldanddarkone
      • 10 years ago

      No, I’m pretty sure each block (of five ALUs) can do one DP op, so it’s 2,720/5 = 544. When it comes to Fermi, it’s 1/2 the speed of single precision. However, Nvidia handicapped it, and it’s actually limited to 1/4 of that.

      1350/2/4=168.75

      “Those of you familiar with the Fermi architecture may wonder why double-precision math performance is only doubled versus the GTX 285. In theory, GF100 does DP math at half the single-precision rate. However, Nvidia has elected to reserve all of that double-precision power for its professional-level Tesla products. GeForce cards will get only a quarter of the DP math rate”

      • Mat3
      • 10 years ago

      I think I’ve got it now. Went back to the 4870 review. The fat ALU can do one add or mul; the four slim ones team up for one add. So essentially 2/10, hence 1/5. I got mixed up with how they team up for 64-bit.
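
      Tying the two explanations together, here is a minimal Python sketch of the peak-rate arithmetic, using only the single-precision figures quoted in this thread (in GFLOPS) and the ratios described above:

          # Peak DP rates derived from the SP peaks quoted in the thread.
          hd5870_sp = 2720.0   # Cypress single-precision peak, GFLOPS
          gtx480_sp = 1350.0   # GTX 480 single-precision peak as quoted above, GFLOPS

          hd5870_dp = hd5870_sp / 5        # four "thin" ALUs team up for one DP op -> 1/5 of SP
          gf100_dp_native = gtx480_sp / 2  # Fermi's native DP rate is half its SP rate
          gtx480_dp = gf100_dp_native / 4  # GeForce parts are capped at 1/4 of that

          print(hd5870_dp, gf100_dp_native, gtx480_dp)  # 544.0 675.0 168.75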

    • moriz
    • 10 years ago

    here’s an interesting question:

    which is faster, GTX470, or 2x HD5770?
    which is faster, GTX480, or 2x HD5850?

    from a value proposition point of view, i think the dual radeon solutions end up superior in almost every way.

      • Prion
      • 10 years ago

      I would gladly and willingly pay extra to not have to deal with a dual-GPU solution.

      • TaBoVilla
      • 10 years ago

      I’m seriously considering CrossFire 5770s for my next rig, just waiting for the 1GB models to hit $100-110 apiece, but without GF100 midrange derivatives on the near horizon, it looks like I’m going to have to wait until the summer. Also eyeing triple 24” displays in portrait, as that would serve me well for music production.

      CrossFire 5770s offer performance at or above a 5870, and if the game doesn’t scale, a single 5770 is still great.

    • glynor
    • 10 years ago

    Thank you, Scott. As I expected, this is *[

      • shalmon
      • 10 years ago

      good point…i agree

      • ET3D
      • 10 years ago

      I think that after waiting months for these cards a few more weeks don’t make that much of a difference to most people. That’s probably the main reason people don’t comment about the paper launch.

        • paulWTAMU
        • 10 years ago

        that and we’re not eagerly awaiting these, at least not after the review. I’m still running an 8800 *sigh* talk about an upgrade itch.

          • Lans
          • 10 years ago

          Well, since they’re late already, how about doing a proper launch? I still don’t like to see any company get cut any slack for paper launches…

      • TaBoVilla
      • 10 years ago

      I think this paper launch is OK because the products being launched are meh, and people have realized that the best option has been in the channel for 6-7 months now.

      A paper launch is bad when the promised parts have been reviewed and proven to bring substantial improvements over previous products, making people desire them and halting purchase decisions until availability.

      The only people waiting for Fermi right now are people who want to tessellate 2x faster.

    • darryl
    • 10 years ago

    Well, I waited for, and read, many reviews of the new Nvidia cards. For purely gaming purposes (my main interest), I’ll have to revert to ATI for my next GPU after many years in the green camp. It’s about the old price/performance equation. Initially I didn’t want to spend the money for the GTX 480 and was hoping the 470 might be a stronger contender. It doesn’t have enough separation from even the 5850 to be a consideration for me. How very sad. Nearly all other reviews state the same conclusion as here. Nvidia has lost its way, imo. Long live the King, but he has no clothes.

    • Fighterpilot
    • 10 years ago

    #124: If you play FPS games you’d notice the smoothness once you start looking around. The HD 5850 is also very quiet. It adds a crispness to the game world I never had with the 4850.
    But a GTX 275 is still a really good card, so I don’t think it would be all that much better unless you are going to play Crysis or some other card-buster game.
    Wait for Fermi 2 🙂 or the hybrid Cypress/Northern Islands chip we may see later this year.

    • PRIME1
    • 10 years ago

    Upwards of 30% faster depending on the game. Better feature set.

    A true next-gen part.

    Not as hot or loud as some have been suggesting.

    I guess we will have to wait until next year to see ATI’s answer.

      • Fighterpilot
      • 10 years ago

      <crickets chirp quietly>
      some tumbleweed drifts by…..

      • SubSeven
      • 10 years ago

      And now, at long last, you can heat your 2,700 sq. ft. home while playing your favorite video game!!!!

        • PRIME1
        • 10 years ago

        Yeah, that extra 3 degrees over the 5870 should make a real difference. If I wanted that, I would have bought a 4870 instead of a GTX 260.

          • Jigar
          • 10 years ago

          But what about the power consumption, mate?

            • PRIME1
            • 10 years ago

            Same as what I said above.
            http://www.techreport.com/articles.x/16229/13

            The 4870 also consumed more power than my 260, although to its credit the 4870 was slower. For years ATI has had the hotter, louder, more power-hungry, slower card. Now it seems they just have the slower card.

            • glynor
            • 10 years ago

            Must…. Resist…. Feeding…. Troll….

            l[

            • Welch
            • 10 years ago

            Just going to put it out there, as my buddy Steve put it…

            “Sure, they finally released Fermi, sure it might be a few frames (LITERALLY) faster than the ATI equivalent (and not as fast as the 5970) which produces less heat/noise and uses less power……. All of that doesn’t make a difference since I’ve been able to kick ass in games with my 5850 for well over 1/2 a year now”

            So…. stick that in your pipe and smoke it.

            To be fair, this is a first generation where Nvidia attempted to push their cards in a completely new direction (tessellation and GPU computing). The latter, I think, is going to be a failure as far as the majority of the market is concerned. Sure, you’re going to have those of you who will find a way to fold on it, and that’s awesome. I do think that tessellation will become a new “staple” of the game-development community, as long as they learn to use it where it matters, as Scott mentioned in his review. I believe Nvidia is counting on GPU computing and tessellation as its main selling points. It’s going to take some refinement to bring the power consumption to a much more modest level than it is now, which would solve their heat and noise issues for the most part.

            But as it stands right now…..

            Fermi = [BIG RED STAMP FLYING FOR THE FERMI WHITE PAPER] FAILURE

            • TaBoVilla
            • 10 years ago

            prime’s comments have reached comical status

            • RumpleForeSkin72
            • 10 years ago

            I would have to agree… He’s on the [H] forums as well and is far more humble there, where the reviews tend to be a little more intensive.

            • MadManOriginal
            • 10 years ago

            He’s also more humble there because the admins don’t take sh*t and are much more banhappy.

          • SubSeven
          • 10 years ago

          You see, buddy, the difference is that on the 5870 the fan is running at low levels and can easily be turned up to greatly reduce the temps. Sadly, the same can’t be said of the 480 or the 470, as they already sound like windblowers. Also, look at the stock cooling of Nvidia vs. AMD. Nvidia’s coolers are far larger and more impressive but are still insufficient, which suggests how much heat the chips generate. Look at the coolers used by AMD’s vendors… much better. Hopefully the same will be true for Nvidia. Besides, I like the 5850, which is cool, quiet, and runs circles around the very impressively cooled GTX 260.

          • shalmon
          • 10 years ago

          I find it interesting that TR’s review shows a 3-degree difference between the 480 and 5870 under load…

          yet anand’s review shows a temperature difference of 6 to 17 degrees:

          http://anandtech.com/show/2977/nvidia-s-geforce-gtx-480-and-gtx-470-6-months-late-was-it-worth-the-wait-/19

          and guru3d’s review mirrored a temperature difference of 18 degrees:

          http://www.guru3d.com/article/geforce-gtx-470-480-review/13
          http://www.guru3d.com/article/radeon-hd-5870-review-test/13

          We also see a substantial idle temperature difference of 12 degrees in anand’s review, and 10 degrees in guru3d’s.

          Aftermarket coolers have seen even further reductions in the 5870’s load and idle temperatures:

          http://www.guru3d.com/article/his-5870-v-icooler-turbo-review/6

          …is it possible for someone to make a better cooler for the 480 without adding even more cost to the card?

            • SubSeven
            • 10 years ago

            On point, my man. The problem is that Nvidia’s stock cooler seems to be of nice quality. So either the cooler is horrible, it’s fastened poorly, or the cooler is truly excellent while the chip itself is a Bunsen burner. If the cooler is already good, then any aftermarket cooler that does a better job of cooling will probably be larger and more expensive, thus adding to the card’s cost.

            • anotherengineer
            • 10 years ago

            Ya, I mentioned the cooler also. It does seem pretty decent, with 4 or 5 direct-touch heat pipes; I thought it would run cooler than it did. I mean, the cooler seems better than the 5870’s cooler from a visual perspective.

            As for the discrepancies from site to site, there is a lot to consider when it comes to measuring temperatures.

            Did the cards all have/use the same voltage monitoring chip?
            How accurate is the chip?
            What did the reviewer use to measure the temps?
            What was the dry and wet bulb room temperature at the time of testing?
            Was it on an open bench or in a case?
            etc, etc.

            As you can see, there are lots of variables.

      • 5150
      • 10 years ago

      Forget it, too easy.

      • djgandy
      • 10 years ago

      Twice as many transistors, 30% faster. Way to fail.

        • flip-mode
        • 10 years ago

        Only 50% more trannies.

          • Krogoth
          • 10 years ago

          ITS A TRAP!

          • djgandy
          • 10 years ago

          GeForce GTX 285 1.4B
          GeForce GTX 480 3.0B

      • Meadows
      • 10 years ago

      Where is it “upwards”? Where is it even 30%?

        • Voldenuit
        • 10 years ago

        Upwards is the new downwards.

        In other news, we have always been at war with Eastasia.

        • derFunkenstein
        • 10 years ago

        At 2560×1600 in Borderlands, 61 fps is 138% of the 44 fps obtained by the 5870. So it does reach it at one point.

          • Meadows
          • 10 years ago

          I agree, although let’s not forget that Borderlands is one of those “biased titles”, and on top of that, the minimum framerates suck on /[

            • derFunkenstein
            • 10 years ago

            I don’t disagree with any of that, you just didn’t seem to believe it really did hit 30% somewhere. :p

      • rechicero
      • 10 years ago

      Do you really get paid for this? Wow!

      • anotherengineer
      • 10 years ago

      Those must be the reasons why XFX is NOT making a gtx 480 card 😉

      I was wondering why they didn’t have a Fermi card; now I know 🙂

        • kroker
        • 10 years ago

        Fudzilla claims that it wasn’t XFX who decided not to launch Fermi, but that it was Nvidia who denied them the new chips as punishment for not being Nvidia-exclusive anymore: http://www.fudzilla.com/content/view/18278/65/

        Though it sounds weird, I think this is more likely than XFX suddenly deciding not to launch Fermi. XFX even had the box design for Fermi. Maybe there’s more to the story; maybe XFX did other things to anger Nvidia as well, such as leaking information? Whatever the case and whoever made that decision, I think this will hurt the relationship between XFX and Nvidia even more.

          • anotherengineer
          • 10 years ago

          Ah, that makes more sense for sure. Sad, though; punishing them over not being Nvidia-exclusive is just being a crybaby in my book, if it’s true.

      • SomeOtherGeek
      • 10 years ago

      Prime1, you need to knock it off! Seriously!

      You don’t need to make things any worse than they are. So you like (love) nVidia, fine, but going borderline crazy over it will not help you or anyone else. You need to drop this charade and move on. Maybe you bought this card; great, no problem, share your info, but you don’t have to defend it, cuz there is really nothing to defend.

      Or maybe you like being a troll?

        • danny e.
        • 10 years ago

        i’m fairly sure he just enjoys being a troll… no one could be that illogical without being put in a home.

      • grantmeaname
      • 10 years ago

      I agree with all of this but the feature set. I was actually very surprised by the relatively low temperature and noise levels. However, pricing remains to be determined, GPGPU holds no value for me, and it consumes a TON of power. Also, I would have trouble voting with my feet (so to speak) for nVidia’s corporate practices.

      • Kaleid
      • 10 years ago

      New benchmark:
      http://img36.imageshack.us/img36/1957/gpuo.png

      Nvidia in the lead

    • StashTheVampede
    • 10 years ago

    Semiaccurate may not have been 100% correct, but their sources were close enough to draw the conclusion I have: Fermi is fast, but not significantly faster than anything on the market today.

    It’s a shame that Fermi isn’t really much faster in MOST benchmarks than the previous single GPU monster from Nvidia. Wouldn’t nearly a year of time get the card roughly 2x of the previous generation?

    Over time, Fermi may show why it’s superior in games, but that’s clearly not the case for what users can do with it now.

      • OneArmedScissor
      • 10 years ago

      “Wouldn’t nearly a year of time get the card roughly 2x of the previous generation?”

      Nope. Even with many newer games, 8800/9800 GTs with 112 shaders are still roughly equal to, or only slightly faster than, 9600 GTs with 64 shaders… but the game will still run on either. That’s not always the case, but it still says quite a bit about the software side of things.

      Rather than adding a handful of shaders that are easily made use of, each shrink now adds hundreds more than the last. As a software developer, what would you do with it? It does not appear that anyone has that answer.

      Games are only going to be designed to use what they need to run. I don’t think anyone is going to be too terribly interested in spending tons of time, and therefore, money, just trying to create some lighting effect that eats 200 shaders.

        • StashTheVampede
        • 10 years ago

        My point is this: the latest tech should be roughly 2x the previous generation’s initial tech.

        The GTX 480 should be roughly 2x the GTX 280, no? Is that not the case with Fermi?

          • OneArmedScissor
          • 10 years ago

          It’s significantly more than twice as powerful as the previous GPUs at some things.

          None of those things are games. My point is that that has already been the case for a very long time, and it will only get worse.

            • swaaye
            • 10 years ago

            Diminishing returns, in action. The more complex things get the easier it is to stop up the pipes.

            And some of what Fermi is good at isn’t really beneficial for games. If all you care about is gaming, it definitely has some “useless extras” taking up space.

            It’s interesting how 5870 has massively higher potential shader performance compared to Fermi, but it doesn’t work out like that except in some corner cases and even gets smacked around sometimes. Unique architectures with unique caveats.

            • StashTheVampede
            • 10 years ago

            This is where I’m out then: games.

            That’s why I’m buying the card. If the new card isn’t faster at what I’m going to use it for, then it’s not worth it.

            • swaaye
            • 10 years ago

            This is NVIDIA’s new direction. They aren’t building just for graphics apps now. This beast is intended just as much for GPGPU as it is for graphics. They are trying to find/create a market among Intel and AMD CPUs with their GPU.

    • torquer
    • 10 years ago

    Pretty disappointing. On paper and in theory it’s a great chip, and I had high hopes. I try to be really objective about these things – I’ve been an Nvidia guy since I replaced my Voodoo 3 3500TV back in the day. But, I was also an AMD guy for years until Intel swayed me back with the Conroe chips.

    For the first time I’m considering making a switch to the red team. I’ve got a GTX 275 that works pretty fantastically well for everything I throw at it, but it has been a while since my last upgrade and I’m feeling the itch. Sure, I could wait 6+ months for the inevitable die shrink of Fermi, but who knows whether that will make the difference that the 5900 did for the 5800 series.

    So, I’ll pose this question to the fanboys and astute observers alike: is it worth it to upgrade from a GTX 275 to a 5850?

      • Voldenuit
      • 10 years ago

      l[

        • torquer
        • 10 years ago

        Well I think that over the past generations there really hasn’t been a compelling reason to switch. Maybe for some, just not enough for me. That doesn’t mean I think ATI has made crap – I was just used to Nvidia and there wasn’t enough of a gulf in performance/features for *me* to switch. Now there may be. The fact that I’m considering switching (and switched from AMD to Intel) should show I’m not a stalwart hardware partisan 🙂

        And no, I have no serious need for DX11; I’m really only concerned with raw performance. I’m not CPU-limited, and I run dual 2048×1156 LCDs. I’m getting “good enough” performance on everything I do, but sometimes I just like to upgrade to get more; rationality probably doesn’t come into play.

      • flip-mode
      • 10 years ago

      Not unless you NEED DX11. And even then, the 5850 doesn’t seem like enough of a jump for my tastes. A 5870, maybe. But no, I’d hold on to the GTX 275 if I were you, until it’s not giving you enough performance.

      edit: meant to say 5870, not 5970

      • just brew it!
      • 10 years ago

      q[

      • Sunburn74
      • 10 years ago

      Dunno… prices have actually fallen for 5850s. You can grab them for 280-289 now. Factor in the resale value of your GTX 275 and you may end up paying less than 50 bucks for the upgrade.

      I dunno… I don’t agree with Tech Report’s idle and load noise levels for their 5850s. My 5850 is the quietest thing in my case (and I specifically built a quiet system). I literally cannot hear it at idle or under load, and I have never seen its fan speed above 30 percent. It’s usually at 21-23 percent until about 70 degrees, and then it goes up a smidgeon.

      I have, however, heard the GTX 480 under load (HardOCP.com has video recordings of all the fan noises so you can listen to them ramp up), and it literally sounds like a buzz saw.

      • Farting Bob
      • 10 years ago

      You think Fermi will get a die shrink to 32nm (is that the next step?) in just 6 months? Lol. They are still having problems with 40nm production; it’ll be a while before we see 3B-transistor chips on a smaller process.

      • clone
      • 10 years ago

      l[

    • dpaus
    • 10 years ago

    l[<"you could even call it late for dinner"<]l Make that lunch, since AMD's already eaten it.

    • codedivine
    • 10 years ago

    Typo on 1st page: professional-level Tegra should be professional-level Tesla. I do not expect DP on Tegra to be more than on GTX 480 😀

    • Triskaine
    • 10 years ago

    HEADS UP TO SCOTT :

    Nice test, Scott, but you got the ROP rate wrong. GF100 can only rasterize 32 pixels per clock (just as many as Cypress), which comes out to a pixel rate of 22.4 Gpixels/s for the GTX 480. The other 16 ROPs are for extra AA efficiency and cannot contribute to the pixel rate when AA is not used.
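
    As a quick sanity check on that figure, here is a one-line Python sketch of the arithmetic; the ~700 MHz core clock is the assumption implied by the 22.4 Gpixels/s number:

        # Pixel rate = pixels rasterized per clock x core clock
        pixels_per_clock = 32
        core_clock_hz = 700e6  # GTX 480 core clock implied by the 22.4 Gpixels/s figure
        print(pixels_per_clock * core_clock_hz / 1e9, "Gpixels/s")  # 22.4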

    • Longshot099
    • 10 years ago

    Great review and thanks for posting 8800GT numbers! Gives me a nice basis for comparison for evaluating upgrades. While there is no chance I get any card that costs more than $200, it can be fun to dream….

    • NewfieBullet
    • 10 years ago

    It’s very interesting that the GTX 480 under load draws 31W more than the 5970, when the TDP for the 5970 is 294W and Nvidia states 250W for the 480. I think Nvidia has some ’splainin’ to do.

    It’s also interesting that two 480s cause a 10 dB rise in noise. On an open bench I would expect 3 dB. My guess is that the power supply must be ramping its fan up to 11 as well.
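
    The 3 dB expectation comes from the usual rule for adding identical, uncorrelated noise sources; a small Python sketch of that rule:

        import math

        # Doubling identical noise sources adds 10*log10(2) dB;
        # a +10 dB rise would imply roughly ten times the sound power.
        def db_increase(n_sources: int) -> float:
            return 10 * math.log10(n_sources)

        print(f"two identical cards: +{db_increase(2):.1f} dB")       # ~ +3.0 dB
        print(f"power ratio behind +10 dB: {10 ** (10 / 10):.0f}x")   # 10x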

      • OneArmedScissor
      • 10 years ago

      TDP ratings for graphics cards are even worse than with CPUs.

      While CPUs likely will never hit an actual power draw equal to their TDP rating, many graphics cards can easily exceed it. The chips may be the same, but the cards are all different, and of course, whatever figure AMD or Nvidia give is going to be the most forgiving.

      It’s a terrible metric for just about any given comparison.

    • Jigar
    • 10 years ago

    I am so happy that i bought HD5850…

    • Dposcorp
    • 10 years ago

    Another fine review. Thanks Scott and the rest of the TR crew.

    It’s nice to see the pluses and minuses of the total GPU being pointed out, since FPS is never the ultimate judge.

    • ub3r
    • 10 years ago

    Just wait until the next driver and bios update. These will run heaps cooler, and thrash AMD in all benchmarks.

    Now that the hardware is sorted, they can really focus on the firmware, software side.

    I’m predicting at least 20% more performance by the time the drivers mature.

      • Voldenuit
      • 10 years ago

      I’m not sure if your post was meant to be ironic, so I’ll just read it the only way that makes sense.

      (Just in case you’re curious: /[

    • MaxTheLimit
    • 10 years ago

    So the layout seems to be like this:
    5970
    GTX480
    5870
    GTX470
    5850… and then it’s AMD down from there for a bit (if we are talking current-gen products)

    I wonder if the reception for the 480/470 would have been warmer if they had come out around the same time as the 5870/5850. Yes, they are hot, loud, and power-hungry. But the 480 seems to be more powerful than the 5870 (2GB included), and the 470 seems to be more powerful than the 5850. It’s hard to say at this point what the prices will be at release (I expect some price jockeying when the 480/470 hit the shelves). What it comes down to is personal value. What matters to you? What games do you play? What features do you like? Do you care about noise, heat, or power consumption? Are you betting on tessellation being big for you?

      • travbrad
      • 10 years ago

      I definitely think the reception would be better if it had come out around the same time as the 5800 series. It just looks bad when you barely outperform a 7-month-old card (for more money, noise, power, and heat). I think the whole situation of the GPU market at the moment is pretty disappointing though, which just adds to the poor reception for these new cards.

      I like to at least double (if not triple) my performance every time I upgrade. To double my performance right now I would need a $350 video card, whereas I spent $100 on my current card 1 1/2 YEARS ago. 3.5X the cost for double the performance, 18 MONTHS later…that’s terrible.

      That is completely the opposite of how computer hardware normally works. It’s a sad state of affairs that the performance/dollar hasn’t improved at all in 18 months (at least in my price range).

    • derFunkenstein
    • 10 years ago

    Does anybody make a relatively high-end graphics card that’s even remotely quiet under load? Looks like the fastest card you can get with relatively quiet load noise levels is a Radeon 5770.

    Let me explain. To keep little fingers from pushing buttons on the front of my PC, it has to sit up on my desk about 12″ from my head. Since we’ve not bought a house yet, my desk is in the living room of our apartment. In order to keep from disturbing my wife watching TV, I have to keep things as absolutely silent as I can.

    Meh.

      • NewfieBullet
      • 10 years ago

      If you replace the cooler on a 5850 with some aftermarket cooler such as the Arctic Cooling Accelero S1 Rev. 2, you might be able to quietly cool even a 5870. You’ll probably need to look at the system as a whole, though, since if your case can’t handle the heat this might actually make matters worse.

        • flip-mode
        • 10 years ago

        I think he wants quiet cooling on the retail product, not aftermarket. And that’s what I want too.

          • derFunkenstein
          • 10 years ago

          I think I found something. Sapphire’s got a line of “Vapor-X” cards, and the reviews have all been really solid in terms of temps and noise levels. The down-side is that the 5870 is $450 and the 5850 is out of stock at $320. Both of those are a bit rich for my blood, and Fermi does little to instill confidence in the idea of prices coming down.

      • Voldenuit
      • 10 years ago

      Up until last year, I had a Radeon 4850 with an Arctic Cooling Accelero S1 that was perfectly silent and ran Fallout 3 for hours passively cooled with no complaints.

        • derFunkenstein
        • 10 years ago

        The 4850 isn’t really that much faster than my 9800GTX+ (which has a factory OC). It’s using an Accelero L2, which is a sweet, silent cooler you can get for $12. It’s not sufficient for today’s high-end cards though.

      • d0g_p00p
      • 10 years ago

      You can go watercooling for just your video card, use an aftermarket VGA cooler, or use RivaTuner to tweak your fan settings. There are plenty of options to keep heat and noise in check. I use RivaTuner to scale my video card’s fan from 30% to 65%; everything below 60% I cannot hear. At 65% I can slightly hear the fan, and it keeps the GPU cool enough for 3D games for a couple of hours.

      edit: I mean RivaTuner, not ATi Tool

        • OneArmedScissor
        • 10 years ago

        Yes, adjust the fan speed. There should be plenty of room for that on 5850s. I’ve yet to encounter a video card with “ideal” fan settings out of the box.

        Heatsinks are much better across the board than they used to be. In most cases, you may not even see the temperature go up if you turn the fan down a little.

        TR’s load test makes more sense than a synthetic benchmark, but it’s still not indicative of what most people will see.

      • potatochobit
      • 10 years ago

      doesn’t the 5970 come with a waterblock?

      that should be whisper quiet?

    • spigzone
    • 10 years ago

    Benchmarks aside, Fermi’s still a DISASTER if the total chip availability number of 30,000 reported by Charlie is correct.

    If that number is anywhere near accurate, Nvidia would be losing money on each card sold, and what profits exist for the AIBs would be directly subsidized by Nvidia. With the longevity of the cards a big question mark, it’s little wonder XFX passed on the opportunity.

    Then there’s Kyle’s comment on the H forum that he heard there would be no B stepping for Fermi and they were going straight to Fermi II.

    So … Nvidia will trickle out 470/480 cards over the coming months, causing untold Fanboi and market frustration, and cede the high end to AMD for the rest of the year?

      • potatochobit
      • 10 years ago

      If Nvidia does not protect their market share soon, it will come back to bite them significantly in the following years.
      So even if it’s at a slight loss right now, they need to get something on the table.

        • spigzone
        • 10 years ago

        Maybe, but how much market share is protected with such limited availability? And there’s the impending massive frustration and backlash from tens of thousands of people trying for weeks and months to get a card and never succeeding, because the numbers just aren’t there.

        Nvidia is going to lose most of those customers to AMD this generation anyway, but using this strategy they are also going to permanently lose a whole lot of Fanbois.

          • poulpy
          • 10 years ago

          q[

            • swaaye
            • 10 years ago

            Eh most get over it. Even the 3dfxers have come around I think. It took many years of course. 😉

            Many have been forced to face the truth, including PowerVR, Rendition and Matrox extremists. There were even S3 fans back in the day.

            Some of these guys just don’t have the intellectual faculties to competently judge something through empirical evidence. I’m sure that a large portion of the populace fits in that category, because they are stupid, too lazy, or simply have some sort of psychological defect. 🙂

            • leor
            • 10 years ago

            It was a sad day when I transitioned from my beloved G400 MAX to a Ti4200 (OCed to 4600 speeds).

            I was also greatly let down by Parhelia (never bought it, of course), but am very happy now with my 2x 5850s bought at launch for $249 each.

            • flip-mode
            • 10 years ago

            300

            Second time!
            http://www.techreport.com/discussions.x/17618 s[

            • dpaus
            • 10 years ago

            Nope. Nice try, though. (besides, you were wasting it)

            • flip-mode
            • 10 years ago

            Huh? Look after my name, where it says (#300). I did waste it though. Should not have used “reply”.

            • dpaus
            • 10 years ago

            Curse you, Red Baron!

            • wibeasley
            • 10 years ago

            Were you squatting on the post to make sure it was assigned #300? I’m curious why that message was edited?

            • flip-mode
            • 10 years ago

            No, wasn’t squatting at all. Just right place at right time. Edited it to test the s[

            • wibeasley
            • 10 years ago

            In that case, congratulations on the accomplishment. One more 300 and you’ll be on fire.

            • flip-mode
            • 10 years ago

            Heh, unfortunately, now I’ll be tempted to try for it.

            • Mr Bill
            • 10 years ago

            Good call, that. I went from the G400Max to the P750. It really was a great pic but just too slow. Then I bought a couple of Radeon X800 GTOs for the old AGP iron, and just recently a Radeon HD 4770 for my first PCIe build.

            • BlackStar
            • 10 years ago

            My Rendition Verite still holds a soft spot in my heart. Unbelievable image quality compared to the Riva 128s and Voodoo 1s of that era.

            • derFunkenstein
            • 10 years ago

            at least back then you could get dedicated 3D cards to make up for performance issues. :p

        • Freon
        • 10 years ago

        High end doesn’t make the bulk of the market share. They can still say “we have the fastest single chip card” with the GTX 480 and use that halo to convince the OEMs to buy $50-100 versions of their old 8800 architecture in droves.

        ATI really needs to push their 5650 and lower-end products to the OEMs, and the OEMs are like the aircraft carriers in the ocean of graphics: they don’t turn very fast.

          • Anvil
          • 10 years ago

          Well from what I can see, ATI’s really making a good push with their mainstream cards, especially on the laptop front. Availability seems to be a whole lot better compared to the previous generation there.

    • donkeycrock
    • 10 years ago

    I was a bit disappointed that your review took so long, but now I’m happy again.
    I feel as though you used the most meaningful games for your comparisons, unlike a lot of other sites.
    I also think that Nvidia tells reviewers which games to use, so that their product performs the best.
    So cheers to you for not following orders, and for giving us what we want to know!

    • dpaus
    • 10 years ago

    Between Larrabee and Fermi, it’s been a tough year to be in the graphics chip biz. While it may not be an epic win – or not even a win at all – I still say kudos to NVidia for taking the chance, because doing that is what pushes boundaries and leads to real progress. Whether they find a way to salvage it or decide to deep-6 it like Intel did with Larrabee (or maybe didn’t do…) and start all over, they will at least have learned much from this painful experience.

    That doesn’t mean I won’t savage them if they try to spin this into a win, though.

      • crsh1976
      • 10 years ago

      “Between Larrabee and Fermi, it’s been a tough year to be in the graphics chip biz.”

      AMD would disagree. 😉

        • Peldor
        • 10 years ago

        1 out of 3 ain’t good. And the consumer pays more because of it.

    • cegras
    • 10 years ago

    Thanks for the review.

    • mcforce0208
    • 10 years ago

    Thanks for the excellent review. Just one thing: I have two 5870s in CrossFire with a double spacing between them, and the temps don’t get anywhere close to those mentioned in the article. Max, by a long shot, is 70 degrees C. I have them in an Antec 900 with fair cooling.

    • flip-mode
    • 10 years ago

    Good job Mr. Wasson; thank you for the article.

    GF100 doesn’t look terrible, but it doesn’t look as good as anyone hoped. This is by no means a Geforce FX 5800 redux, but _[

      • shank15217
      • 10 years ago

      GPGPU performance is castrated even more than GPU performance on the GTX 480. Even if it is double the GT200, it’s certainly not what Nvidia claimed for this generation, and they just gave the competition some breathing room.

      • OneArmedScissor
      • 10 years ago

      “TSMC is cause for worry. Their 40nm process – when will it become healthy?”

      Never. It’s over a year old now. It always sucked and always will. AMD just invested a lot in working around it, while Nvidia hoped and prayed the problem would go away.

      • PRIME1
      • 10 years ago

      That was an oddly rational post. I mostly agree with you (other than the line about gaming performance).

        • flip-mode
        • 10 years ago

        If you were less interested in shilling Nvidia and more interested in objectivity, you’d probably feel that way about the majority of my posts.

    • Chrispy_
    • 10 years ago

    Congratulations Scott, that is by far the most useful review of Fermi that I have seen on the web, and what’s a five day delay when the cards aren’t likely to be available for at least a fortnight?

    I particularly agree with the last paragraph of your conclusion. Nvidia shot for the moon and missed, but if they just recalibrate their next design with more emphasis on graphics and less on GPGPU features, I suspect they’ll reach competitive levels again, in terms of performance/manufacturing cost.

    So Fermi is at least competitive at the suggested retail prices?

    Given the more expensive PCB and the extra cost of those fancy coolers, I’m guessing that the 470 and 480 will be the X800PE of this generation: so many cards go to reviewers and OEMs for their flagship desktops that they realistically do not exist for consumers wanting to buy one. I’d even stick my neck out and say that each Fermi card is being sold at a loss to nvidia.

    I hope Fermi has been a learning exercise for nvidia. I hope the successor to Fermi isn’t the size of a credit card, and I hope it won’t need a 2KW PSU.

    • crsh1976
    • 10 years ago

    Decent performance, but too bad they come to the party roughly six months after the 5800 series; one could argue both the 480 and 5870 are about the same age, but multiple delays have put Nvidia at a disadvantage.

    The power consumption of the 480 however is totally whacked, that’s just insane.

    • poulpy
    • 10 years ago

    It’s all nice and well but.. where is Prime1?!

      • flip-mode
      • 10 years ago

      Reading every single word, memorizing every single chart, obsessively pulling out of context and saving every single positive snippet, rationalizing the irrational, and reading the direct mailer Nvidia sends to its viral marketers: “Nvidia Confidential: Talking Points for the GTX 480 and GTX 470”, and taking high-blood pressure and anti-depressant meds. Er, well, that’s what I’m imagining him doing.

      • TaBoVilla
      • 10 years ago

      thinking this is a “botched” review too..

        • Voldenuit
        • 10 years ago

        That’s not a botched review…

        *This* is a botched review!

        http://i.imgur.com/kWRhF.jpg

        Benchmarking. The way it’s meant to be played.

          • TaBoVilla
          • 10 years ago

          OMG LOL 0.1 FPS difference equates to 1.5 inches bar graph difference!

          In alienbabeltech’s defense, they did add a disclaimer below and also show graphs that do begin the X axis at 0.

          They are vendor agnostic as far as I remember.

          • SubSeven
          • 10 years ago

          OMG. I’m sorry but that just made me laugh. I’m at work and I started laughing like a hyena; people were looking at me like I was an alien or something. At any rate, good one!

          • grantmeaname
          • 10 years ago

          hahahahahaha.
          Oh, what a sad world we live in.

        • Fighterpilot
        • 10 years ago

        lol +1 nice flamebait.
        Permi1 adds TR to that list?

        • willyolio
        • 10 years ago

        well, at least they didn’t “botch” the Borderlands benchmarks.

      • jdaven
      • 10 years ago

      I think I saw him sneaking out of a Frys with an XFX Radeon 5970 4GB carefully hidden under his coat. The only reason I knew what it was is because the end of the gun packaging was sticking out of his jacket. Upon closer examination, it turns out the poor guy had already made a custom under-the-shoulder sling to hold the gun casing. Looks like he made it from his mom’s underwear.

      Out of curiosity about whether he was going to smash the video card to bits in a fit of jealous rage, I followed him home. Peeking through the window, I saw him carefully place the gun packaging in a custom-made box (looks like his dad’s old fishing tackle box). He then sat down in the corner and started masturbating in front of the tackle box, quietly whispering, “Why? Why couldn’t it have been you, Nvidia?” while tears streamed down his face.

      I quickly got the hell out of there with the hope that the images burned into my skull would one day vanish from my mind.

      One day…

      Oh god, why didn’t I go to Best Buy that day instead.

        • Jigar
        • 10 years ago

        That was uncalled for; most TR regulars like that guy.

          • poulpy
          • 10 years ago

          Don’t know about the forums because I never go there, but over here he never contributes anything other than pure PR-Fanboi-biased dribble.
          As a person who comes around here quite often, I definitely opt out of that “most TR regulars like that guy” group. I even preferred snakeoil if we go down that road; that was pure brainless trolling not taking itself seriously.

        • PRIME1
        • 10 years ago

        ATI fans ^ gotta love em.

          • MadManOriginal
          • 10 years ago

          NV fans^ gotta pity them.

            • 5150
            • 10 years ago

            Reply fail.

            • mcforce0208
            • 10 years ago

            Haha, this rivalry is just too funny!

          • SubSeven
          • 10 years ago

          This reminds me of something about kettles and black pots….

        • Darkmage
        • 10 years ago

        Dude, take that crap to the locker room at junior high. I have my own issues with PRIME1.. but even I don’t reveal my own deep personal psychoses like you just did when I call him on his cheerleading.

    • wingless
    • 10 years ago

    I really hoped to see some GTX470 Folding@Home performance. In fact, I hope TR gets around to doing a full GPGPU suite of tests with the cards to show performance and power usage. These cards are touted as being GPGPU champions just as much as gaming champs so a review like that would make sense.

      • liquidsquid
      • 10 years ago

      They may need to upgrade the power to their labs before they can run those! My gosh, with the power draw of those beasts, it is obvious your bang per watt would be terrible.

    • jokinin
    • 10 years ago

    That GTX480 gives me a GeForce FX-ish feeling: more power hungry, noisy, not necessarily faster than the competition, more expensive, and with some new possible features that aren’t being implemented right now.
    Well, I guess that gives me the confirmation that I will skip this generation of GPUs and wait to see what the next one has to offer before upgrading.

    • flip-mode
    • 10 years ago

    Page 10: Far Cry 2 AA graphic does not indicate screen resolution, nor do I see it in the text…???

    • Voldenuit
    • 10 years ago

    Ow. Ow. Ow. Those Metro 2033 hatched bar graphs hurt my eyes.

    The information needs to be conveyed in a more palatable fashion, please.

    No surprises here. If anything, Scott is kinder to the 480 than many other reviews on the web, and definitely not as scathing towards the 470 as some/most.

    So far, though, we have not seen Fermi’s superior computation and tessellation power help it /[

      • flip-mode
      • 10 years ago

      Looked fine to me.

    • Fighterpilot
    • 10 years ago

    The Cypress 58** series cards are the winners of this generation.
    All things considered…it’s a better chip.
    Kinda surprised the Catalyst 10.3 drivers weren’t used for the HD 5850 and 5870 1GB.
    They are the best performance drivers from ATi in a long time and I think most were expecting to see them here.

      • Shining Arcanine
      • 10 years ago

      You cannot say that it is a better chip without specifying at what. Tech Report’s review completely ignored GPGPU performance and workstation performance, both of which are fields in which it should excel far beyond anything from ATI. Quite frankly, I think Proteomics research is more important than a few more FPS in Crysis, so that makes Fermi the better chip.

        • TheEmrys
        • 10 years ago

        Aside from F@H, I don’t know of anyone who has any use for GPGPU. It’s just too much of a niche right now. Also, it’s looking more and more like it depends on how the GPGPU program is coded more than on which offering is stronger.

        • just brew it!
        • 10 years ago

        GPGPU performance does not matter at all to most people, and workstation users aren’t going to put up with the noise level of these cards.

          • Voldenuit
          • 10 years ago

          Historically, all the workstations I’ve used have been *horrendously* noisy. IBM PowerPC 604, SGI Irix, some Dell Dimension thing we had at work that sounded like a jet engine taking off (ironically, we were using it for aircraft CAD/FEA).

          The only place where workstation silence would have been crucial would be in sound production, and those places tend not to use “big iron” stuff/companies.

          Of course, this may have changed in recent years.

        • MrJP
        • 10 years ago

        This is a review of the Geforce GTX 470 and 480 graphics cards. They are for gaming. Tech Report evaluated how good they are at playing games relative to other products that are for playing games. I’m sure Nvidia sell many splendid workstation and GPGPU products, but that’s not what these are primarily intended for.

    • Stargazer
    • 10 years ago

    Page 1:
    q[

      • Voldenuit
      • 10 years ago

      Tesla is a rebranded Tegra? :p

        • Stargazer
        • 10 years ago

        Or maybe Tegra 3 will use an integrated GTX480? 🙂

          • Voldenuit
          • 10 years ago

          The new iPhone 3DB (3rd Degree Burns)!

          Where the Lighter app is no longer just cosmetic…

    • Firestarter
    • 10 years ago

    Welp, no scatterplot! Would you TR editors mind awfully providing one when the GTX480/470 hit the shelves? <3

    • Krogoth
    • 10 years ago

    GTX480 is only worth it if you want absolute GPU performance while price, noise, heat are not a concern.

    Otherwise, go for a 5850/5870 for high-end GPUs. They are almost as fast while running a lot cooler and being somewhat quiet.

    It is FX 2.0 all over again: an overambitious design that got plagued by manufacturing difficulties. I just hope that Nvidia manages to tweak the design into a more workable form in the next refresh. The second iteration of the FX series wasn’t that bad. It should be the same for Fermi, if not better, since the architecture has some potential (excellent AA/AF scaling).

    The Tesla/Quadro flavors of the GTX480 are going to rock the HPC/workstation world. The folding nuts are going to go crazy over it, because according to Anandtech’s review it has four times the yield of a GTX285.

    • rechicero
    • 10 years ago

    I’m somewhat disappointed with this review…

    DiRT, Borderlands and Left4Dead are the way to go (resolution-wise).

    Battlefield and Modern Warfare 2 are not IMHO. A review is supposed to inform you about the product performance you can expect, but in these games that’s only true for the 1% with a 2560×1600 monitor.

      • Farting Bob
      • 10 years ago

      In MW2 for example, all the tested cards and their nearest rivals had good playable framerates at 2560×1600 with 4x AA. So from that you can quite easily conclude “these cards will be damn quick on my smaller monitor”.

      I saw some reviews (techpowerup one example) that had a huge list of games tested and all games were tested at resolutions as low as 1280×1024. Being told this new card gets 280fps at that res is completely pointless.

    • djgandy
    • 10 years ago

    mmm that 8800GT is sure lookin good.

    • Kaleid
    • 10 years ago

    Rename to thermi please.

      • LordEkim
      • 10 years ago

      No, Fermi is a good name; the card runs hot because it is still in the fermentation stage.

      Later we can expect the Alco line of cards; some Fermi cards will have the option of a firmware upgrade (only if fermented long enough).

        • just brew it!
        • 10 years ago

        When I see “Fermi” I don’t think “fermentation”; I think “Fermilab” (the DOE high energy physics laboratory where I used to work, and which I can see out the window of the place I work now). Given that the Tevatron particle accelerator at Fermilab consumes 70 MW (megawatts) of electricity, I’d say it’s even more apt than “fermentation”! 😀

          • Voldenuit
          • 10 years ago

          I liked legitreview’s “Thermi” best :p.

          • dpaus
          • 10 years ago

          Wasn’t it the Fermi pile just north of Chicago that nearly went critical back in the 50s?

            • just brew it!
            • 10 years ago

            I’m not aware of any near-miss events like that, but they did bury the thing in a forest preserve southwest of the city when they were done with it. IIRC a few years ago they decided to cap off a couple of wells in the immediate vicinity of the burial site, when they detected slightly elevated levels of radioactivity in the local groundwater.

            Linkage:
            http://en.wikipedia.org/wiki/Chicago_Pile-1
            http://en.wikipedia.org/wiki/Site_A/Plot_M_Disposal_Site

            (Sorry for the off-topic tangent; I now return you to your regularly scheduled programming, “As The GPU Turns”.)

            • dpaus
            • 10 years ago

            Just before we go back… The error was mine; although the reactor was indeed named “Fermi” it was located north of Detroit, not Chicago, and the event happened in 1966, not “the 50s”:

            http://en.wikipedia.org/wiki/We_Almost_Lost_Detroit

            And clearly, the loss would have been the City of Windsor, not Detroit, who nobody even cared about in 1966 :-)

            • derFunkenstein
            • 10 years ago

            Fermi seems an even more appropriate name now, given temps and power draw. :p

            • TaBoVilla
            • 10 years ago

            someone mentioned “fail-me” on the forums… I kinda liked that one, maybe jensen calls it that way..

      • Konstantine
      • 10 years ago

      How about Burn-me?

    • Ushio01
    • 10 years ago

    Where’s the progress? The GTX 480 equals or is slower than the GTX 295, just as the GTX 470 matches or is slower than the GTX 285.

    After over a year I expect performance increases, and I don’t see that here. So are these really new chips or just more rebranding?

      • Kulith
      • 10 years ago

      RTFA..?

      8-9-10

      • just brew it!
      • 10 years ago

      q[

    • Richie_G
    • 10 years ago

    Great review, went well with my morning tea. I’m relieved that there were no nasty surprises which would have burst my bubble of vindication for having bought a 5850 already.

    I agree and suspect that many games will not take advantage of many of the new features this gen brings to the table, since so many are tied to console development now.

    It seems to me that the behemoth single chip architecture has hit its cap with current fabrication techniques. That leaves AMD in a comfortable position (with gamers at least) until we see nVidia overhaul their design.

      • just brew it!
      • 10 years ago

      q[

    • Edvin1984
    • 10 years ago

    Well, I got a pair of 5870s on day one for $349.95 with free shipping thanks to Newegg, and I cannot say how glad I am that I went and bought on day one, lol. Usually I get screwed on day-one purchases, but being that I’d had an Nvidia card for as long as I can remember, and there was nothing coming out, I got these bad boys. I was in the market to build a new monster computer, and it turns out I did quite well.

    I am not happy to see Nvidia do so poorly, since this means nothing more than that ATI does not have to try as hard, and that is not so good for the consumer. If you are in the market for a new GPU, I see no reason why you should not buy the better power, noise, and dollar value that is the 5000 series from ATI.

    PS: Yes, I can play Crysis at fairly good FPS lol

    • Palek
    • 10 years ago

    It would be interesting to see if it was actually possible to cook an egg on the Fermi heatsink. You can time it, too, and we have a new benchmark on our hands: the “Eggmark!”

      • Kurkotain
      • 10 years ago

      +1

      • MrJP
      • 10 years ago

      Surprisingly, it’s not quite hot enough:

      http://www.legitreviews.com/article/1264/1/

        • Palek
        • 10 years ago

        That is awesome! 🙂 Legit Reviews deserve some sort of an award for this.

        • wira020
        • 10 years ago

        I laughed my ass off… we need more of those…

      • SubSeven
      • 10 years ago

      An egg cooks at about 160F or 71C. The heatsink itself may not be warm enough to cook the egg, but I’m willing to bet that the heat pipes are more than enough for the job.

        • LawrenceofArabia
        • 10 years ago

        Well Scott writes in the article the heatpipes reached around 60C, so not quite enough.

          • SubSeven
          • 10 years ago

          I was thinking more along the lines of the heat pipes under the hood.

    • phez
    • 10 years ago

    nvidia has failed on the 480.

    but not so much on the 470 …

    • Unckmania
    • 10 years ago

    I don’t care about the very high end. I just wish Nvidia had created proper competition for the 5850: a proper successor to the 260 (216).
    Maybe a variant later can solve that problem and the other problems.

    I have to agree with Scott that Nvidia having their drivers ready the same day that games come out is a big advantage. I’m a day-one gamer, so it would really suck if I had to wait for my card maker to make a proper driver.

    Damn. This is not going to be good enough to lower prices. AMD is still king and has no good reason to lower them. It would be interesting if they did, because that would turn Fermi from a “bad choice” into a “stupid choice”.

    • Spotpuff
    • 10 years ago

    Looks like a 5850 is the card to have. Disappointing after such a long wait and all the fuss Nvidia made.

    • Pettytheft
    • 10 years ago

    Well worth the wait. Great review.

    • SomeOtherGeek
    • 10 years ago

    Wow! First of all, Scott, thanks for the post and the laughs!

    Well, here it is and it is as expected… Too bad, though. nVidia blew it in so many different ways, it is unspeakable. The price, heat, performance and power consumption are nothing but disappointing. I just can’t for the life of me figure out why nVidia would be so, um, stupid about releasing a product like this. But then history has a way of repeating itself… Even in the business sector, I think this will be questioned.

    Whatever, I enjoyed the read if not the results. Looking forward to you doing minute reports on these cards using other avenues. Like folding. 😉 Maybe you can create a miracle cooler that will kill the heat/noise and OC it to kill three birds with one stone?

      • just brew it!
      • 10 years ago

      q[

        • Voldenuit
        • 10 years ago

        Which is why ATI is being smart by having a backup plan like Southern Islands.

        As much as everyone is bagging nvidia for Fermi right now, part of the blame has to go to TSMC who overpromised and underdelivered on 40nm.

    • marvelous
    • 10 years ago

    Way too power hungry for a single GPU.

    Then there’s the heat.

    I’m getting a 5850.

      • DrDillyBar
      • 10 years ago

      If mine exploded, I agree.

    • Prestige Worldwide
    • 10 years ago

    Excellent review, been looking forward to it a lot and it delivered!

    • MrJP
    • 10 years ago

    Nice review, Scott. You must be glad to see the back of it! I hope it wasn’t the addition of the 2GB 5870 that delayed things, as that seems to be a pretty insignificant step over the standard 5870.

    If there is going to be a follow-up, how about the performance/cost and performance/Watt plots as pioneered in the CPU reviews?

    • Meadows
    • 10 years ago

    By Shakti, finally. What everyone expected, too.

    • alphaGulp
    • 10 years ago

    Looking at the pics comparing the tessellation levels from the Unigine dealio from the anandtech article, it seemed like the rooftops (where you see the tiles go from being flat to being 3-D) were an area where the effect is nicely applied (took me a few min to find that :>).

    Interestingly enough, the Anandtech article also mentions how the ECC memory implementation in Fermi did not require the typical extra wiring (instead of 8 bits for each byte in the path, you would have 9); instead, they store the result of the ECC calculation in a section of the RAM (about 1/9th of it). I’m not doing a good job of explaining it, but basically this means there is little effect on mainstream products (which don’t use ECC), with the downside being that when ECC is turned on it isn’t as efficient as the typical implementation.
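
    As a rough illustration, taking that 1/9th figure at face value on a hypothetical 1.5GB card (not a number from the article):

    $1536\ \text{MB} \times \tfrac{8}{9} \approx 1365\ \text{MB}$ usable with ECC enabled.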

    Although the Fermi cards are far from overwhelming, it does seem like Nvidia is very well placed for its next tick (or tock). The folding@home results on the anandtech site were astonishing (a bit hard to believe, actually)…

    • BoBzeBuilder
    • 10 years ago

    Wow, I must say this review was well worth the wait.

    Still, I see no reason why I should upgrade from my 8800GTX from 2006. It runs everything perfectly (except Crysis).

      • flip-mode
      • 10 years ago

      Yes, you have a card with unique longevity there. I think that and the Radeon 9700 Pro are the only two cards in history that, in retrospect, I would find worth the $300+ asking price.

    • ssidbroadcast
    • 10 years ago

    I liked how the cooler on that 5870 2GB looked. Pretty sexy.

      • Joerdgs
      • 10 years ago

      Too bad it doesn’t do a good job at keeping the noise down.

        • flip-mode
        • 10 years ago

        This.

          • khands
          • 10 years ago

          It also didn’t get nearly the gains I was hoping for at higher resolutions. I wonder what other differences there are between the 5870 and 5870 2GB besides just the RAM, since in one or two of the tests it came out slower. Also, this is the first time I’ve seen the 5870 2GB used in a review; has anyone seen it elsewhere?

            • Stargazer
            • 10 years ago

            q[

    • nsx241
    • 10 years ago

    Given the consistency of the reviews so far, nvidia will probably lower the prices down to 5870/5850 levels after the initial orders (i.e., those with deep pockets) are through.

    • Bauxite
    • 10 years ago

    TR-isms:

    later than a stoner to study hall
    an awful lot of time, treasure, and transistors
    but it’s gotta be roughly the size of a TARP loan
    it’s a Radeon HD 5870 that’s hopped up on Adderall
    tend to swell as if they’d been exposed to a Joe Biden speech

    More coins next time!

      • ssidbroadcast
      • 10 years ago

      q[

    • Fighterpilot
    • 10 years ago

    Thanks for the review Scott.
    Based on that and other reviews, it seems the Fermi cards are quite good, but hard to recommend over the ATi alternatives. Price, performance per watt and noise levels just aren’t as good.
    Just a small query:
    On the Left4Dead page you said:l[

    • Konstantine
    • 10 years ago

    Holy crap!!!

    It is as I said in a comment of mine in a news thread on this site:
    “The GTX 380 won’t exceed the performance level of the 5870.” Of course, at that point, it was not known yet that Nvidia would skip the 300 series.

      • d0g_p00p
      • 10 years ago

      nVidia did not skip the 300 series. It’s out on the market but I think it’s OEM only.

    • danny e.
    • 10 years ago

    looks like the HD 5850 is the card to have… good performance, low power, low noise, low temps.

      • MadManOriginal
      • 10 years ago

      Yeah, it seems that way. Just not enough push in graphics these days. It’s kind of funny seeing so many reviews where the only resolution used, or the only meaningful one (because below it everything is fine), is 2560×1600. Maybe ‘a lot’ of enthusiasts have 1920×1200/1080 screens, but the number with 2560×1600 is a lot smaller, I’m sure, and that’s not to mention more normal people.

      Sadly, what ends up missing due to this is easy-to-find information on what card is good or sufficient for a given lower resolution.

        • danny e.
        • 10 years ago

        Yeah, few care about 2560×1600. It’s nice to have those benchmarks as well, but I’m guessing the vast majority play games at 1920×1200 or even 1680×1050.

        The 5850 needs to drop down to $250. Then it’d be perfect. It’s crazy that I picked it up for $285 and now it’s $300.

          • Bauxite
          • 10 years ago

          Don’t forget 2560×1440!

          But yes, 1920×1200 was de-facto ‘high end’ for a good few years with 24″ panels.

          HDTVs have made its 16:9 bastard child 1080p ubiquitous, and it will probably be the standard resolution for eons in tech time. Considering the convergence of tv+computer, it might have a run comparable to NTSC/PAL even though bigger and better show up.

            • UberGerbil
            • 10 years ago

            Yes, if I were writing specs for the hardware or the games, I simply wouldn’t worry about anything above 1920×1080. If you can hit decent framerates at higher resolutions, great, but as a design goal 1080 is going to be the sweet spot for a long, long time.

      • d0g_p00p
      • 10 years ago

        Yep, I have one and hope to score a second one for cheap CrossFire now that nVidia has unleashed their beast. The last couple of generations of video cards have been awesome (8800, 4800 & 5800 series), and I hope the trend will stay the same.

        • danny e.
        • 10 years ago

        Unfortunately, Nvidia’s “answer” may fail to make prices drop much for a while.

    • NeelyCam
    • 10 years ago

    Good lord those HD58xx’s are amazing…

    Fun exercise:

    1) Estimate the system’s idle power consumption without graphics cards: use the difference between HD5870 and HD5870-Crossfire.

    2) Use that to estimate the idle power consumption of a single GTX480

    3) Look at the GTX480 idle in SLI. Whoops!

    Could it be that even in idle these things heat up so much that in SLI they heat up each other enough to cause an extra 29% in idle power?
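
    A minimal sketch of that estimate, with made-up placeholder wattages (the real idle figures would come from the review's power chart):

    # NeelyCam's three-step estimate, with hypothetical numbers for illustration.
    idle_5870 = 140.0        # system idle, one HD 5870 installed (placeholder)
    idle_5870_cf = 160.0     # system idle, HD 5870 CrossFire (placeholder)
    idle_gtx480 = 170.0      # system idle, one GTX 480 installed (placeholder)
    idle_gtx480_sli = 230.0  # system idle, GTX 480 SLI, as measured (placeholder)

    # 1) The CrossFire delta approximates one HD 5870's idle draw,
    #    so the rest of the system idles at roughly:
    card_5870 = idle_5870_cf - idle_5870
    system_only = idle_5870 - card_5870

    # 2) Estimate a single GTX 480's idle draw the same way.
    card_gtx480 = idle_gtx480 - system_only

    # 3) Two of those "should" add up to this; compare with the measured SLI figure.
    predicted_sli = system_only + 2 * card_gtx480
    excess_pct = 100 * (idle_gtx480_sli - predicted_sli) / predicted_sli
    print(f"GTX 480 idle estimate: {card_gtx480:.0f} W")
    print(f"Predicted SLI idle: {predicted_sli:.0f} W vs. measured {idle_gtx480_sli:.0f} W ({excess_pct:+.0f}%)")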

      • just brew it!
      • 10 years ago

      q[

    • xtremevarun
    • 10 years ago

    Ah, finally the TechReport review…mmmm….
    If Fermi had been released at the same time as the 5870, perhaps reviewers might have been more pleased with the card. Six months late, with not-so-good temperatures and power load, makes it a lil underwhelming.

      • NeelyCam
      • 10 years ago

      I wonder if this means another set of price hikes for 58xx’s…?

    • MadManOriginal
    • 10 years ago

    Page 10 –

    q[

      • Damage
      • 10 years ago

      Fixed. Thanks.

    • bdwilcox
    • 10 years ago

    Seems like a good card o[

      • HisDivineShadow
      • 10 years ago

      Be careful, you don’t want your butt to get burned by all that raging heat coming off it…

      …or do you?

      • MadManOriginal
      • 10 years ago

      I lol’d. Nice timing given the username for the prior comment.

    • d0g_p00p
    • 10 years ago

    What a wonderful thing to see in my browser. I have been waiting for the TR write-up on Fermi. Kudos, Scott; now time to read.

    • reactorfuel
    • 10 years ago

    Interesting that the extra gig of video RAM doesn’t seem to provide a big performance boost on the 5870. Other review sites were suggesting that the 5870 might be VRAM-limited in certain scenarios, and it looks like that might not be the case.

    Also, a moment of silence for the passing of Crysis as the GPU benchmark to end all GPU benchmarks? Kinda feels strange reading a review without it…

      • HisDivineShadow
      • 10 years ago

      Good thing Crysis 2 comes near the end of this year? Unless it suffers from too much portalitis…

    • UberGerbil
    • 10 years ago

    Oh boy, there are going to be some epic comment threads on this one….

      • Kurotetsu
      • 10 years ago

      Not really. Certain people will simply declare the benchmarking to be flawed, because it doesn’t give the 470 and 480 a constant blowjob, and ignore it while continuing to cling to Guru3D, the last bastion of neutral and accurate GPU testing.

        • UberGerbil
        • 10 years ago

        “Certain people” rarely stop there. What makes them “certain” makes it hard for them to stop themselves.

      • DrDillyBar
      • 10 years ago

      I too wait. 🙂

      • ssidbroadcast
      • 10 years ago

      I think we’re gonna break 300 comments.

        • Chrispy_
        • 10 years ago

        We’ll definitely break 300 comments if, at 77 comments, Flip-Mode has already contributed more than 10% of them.

        Someone needs to unplug that guy’s keyboard. Srsly.

        • Meadows
        • 10 years ago

        I hope nobody gets kicked into a pit.

        • grantmeaname
        • 10 years ago

        you, sir, are a visionary.

        • Krogoth
        • 10 years ago

        300 comments? That’s weak sauce.

        We used to get 500+ comments on articles like this one when Anonymous Gerbil was around. 😉

          • grantmeaname
          • 10 years ago

          yeah, why did he quit posting anyways?

            • Krogoth
            • 10 years ago

            You are new here.

            Anonymous Gerbil was an anonymous account that TR used before the jazz.ca facelift back in 2004. It also had the fun of “Duke Nuked”, an account that Damage made and used to nuke front-page posts. 😉

      • flip-mode
      • 10 years ago

      Letters wearing off my keyboard. Off to read first.

    • grantmeaname
    • 10 years ago

    Second!

    • Ricardo Dawkins
    • 10 years ago

    Nice card. Too expensive.

      • HisDivineShadow
      • 10 years ago

      And too hot. Too loud. Too few to hit the market. Too much disabled. Too much too much.

      It won’t be the end of nVidia because they survived the FX series and that was by far worse than these cards. Honestly, the high end doesn’t matter as much for us single monitor users in an age where games are either console ports or about to be.

      Bring on the mid-range, bring on the competition. Get the price wars ignited again. (I wouldn’t mind those high end refreshes ATI and nV had to be working on, either. nV’s six months late. That’s almost time again for a new high end and this one’s going to barely hit the market with numbers in the “tens of thousands.”)
