Nvidia’s GeForce GTX 460 graphics processor


We’ve been following the story of the Fermi architecture for the better part of a year now, since Nvidia first tipped its hand about plans for a new generation of DirectX 11-class GPUs. Fermi’s story has been one of the more intriguing developments over that span of time, because it involves great ambitions and the strains that go with attempting to achieve them. Nvidia wanted its new top-of-the-line GPU to serve multiple markets, both traditional high-end graphics cards and the nascent market for GPUs as parallel computing engines. Not only that, but Fermi was to be unprecedentedly capable in both domains, with a novel and robust programming model for GPU computing and a first-of-its-kind parallel architecture for geometry processing in graphics.

Naturally, that rich feature set made for a large and complex GPU, and such things can be deadly in the chip business—especially when a transition to a new architecture is mated with an immature chip fabrication process, as was the case here. Time passed, and the first Fermi-based chip, the GF100, became bogged down with delays. Rumors flew about a classic set of problems: manufacturing issues, silicon re-spins, and difficult trade-offs between power consumption and performance. Eventually, as you know, the GF100 arrived in the GeForce GTX 470 and 480 graphics cards, which turned out to be reasonably solid but not much faster than the then-six-month-old Radeon HD 5870—which is based on a much smaller, cheaper-to-produce chip.

Whoops.

The GF100, though, has a lot of extra fat in it that’s unnecessary for, well, video cards. We wondered at that time, several months ago, whether a leaner version of the Fermi architecture might not be a tougher competitor. If you’ll indulge me, I’ll quote myself here:

We’re curious to see how good a graphics chip this generation of Nvidia’s technology could make when it’s stripped of all the extra fat needed to serve other markets: the extensive double-precision support, ECC, fairly large caches, and perhaps two or three of its raster units. You don’t need any of those things to play games—or even to transcode video on a GPU. A leaner, meaner mid-range variant of the Fermi architecture might make a much more attractive graphics card, especially if Nvidia can get some of the apparent chip-level issues worked out and reach some higher clock speeds.

Sounds good, no? Well, I’m pleased to report that nearly all of that has come to pass in the form of a GPU known as the GF104. What’s more, the first graphics cards based on it, to be sold as the GeForce GTX 460 768MB and 1GB, are aimed directly at the weak spot in the Radeon’s armor: the $199-229 price range.

A new Fermi: GF104

The GF104 GPU is undoubtedly based on the same generation of technology as the GF100 before it, but to thrust them both under the umbrella of the same architecture almost feels misleading. In truth, the GF104 has been pretty radically rebalanced in terms of the number and type of functional units onboard, clearly with an eye toward more efficient graphics performance. We’ll illustrate that point with a high-level functional block diagram of the GPU. If you’d like to compare against the GF100, a diagram and our discussion of that GPU are right here.

Block diagram of the GF104. Source: Nvidia.

These diagrams are becoming increasingly hard to read as the unit counts on GPUs mushroom. Starting with the largest elements, you can see that there are only two GPCs, or graphics processing clusters, in the GF104. The GF100 has four. As a result, the number of SMs, or shader multiprocessors, is down to eight. Again, GF100 has twice as many. The immediately obvious result of these cuts is that GF104 has half as many raster and polymorph engines as the GF100, which means its potential for polygon throughput is substantially reduced. That’s very much an expected change, and not necessarily a major loss at this point in time.

Another immediately obvious change is a reduction in the number of memory controllers flanking the GPCs. The GF104 has four memory controllers and associated ROP partitions, while the GF100 has six. What you can’t tell from the diagram is that, apparently, 128KB of L2 cache is also associated with each memory controller/ROP group. With four such groups, the GF104 features 512KB of L2 cache, down from 768KB on the GF100. The local memory pools on the GF104 are different in another way, too: the ECC protection for these memories has been removed, since it’s essentially unneeded in a consumer product—especially a graphics card.

Our description so far may lead you to think the GF104 is simply a GF100 that’s been sawed in half, but that’s not the case. To understand the other changes, we need to zoom in on one of those SM units and take a closer look.

Block diagram of an SM in the GF104. Source: Nvidia.

Each SM in the GF104 is a little “fatter” than the GF100’s. You can count 48 “CUDA cores” in the diagram above, if you’re so inclined. That’s an increase from 32 in the GF100. We’re not really inclined to call those shader arithmetic logic units (ALUs) “cores,” though. The SM itself probably deserves that honor.

While we’re being picky, what you should really see in that diagram is a collection of five different execution units: three 16-wide vector execution units, one 16-wide load/store unit, and an eight-wide special function unit, or SFU. By contrast, the GF100’s SM has two 16-wide execution units, one 16-wide load/store unit, and a four-wide SFU block. The GF104 SM’s four dispatch units represent a doubling from the GF100, although the number of schedulers per SM remains the same.

The end result of these modifications is an SM with considerably more processing power: 50% more ALUs for general shader processing and double the number of SFUs to handle interpolation and transcendentals—both especially important mathematical operations for graphics. The doubling of instruction dispatch bandwidth should help keep the additional 16-wide ALU block occupied with warps—groups of 32 parallel threads or pixels in Nvidia’s lexicon—to process.

One place where the GF104’s SM is less capable is double-precision math, a facility important to some types of GPU computing but essentially useless for real-time graphics. Nvidia has retained double-precision support for the sake of compatibility, but only one of the three 16-wide ALU blocks is DP-capable, and it processes double-precision math at one quarter of its usual rate. All told, with one third of the ALU capacity running at one quarter speed, the GF104 handles double-precision math at just 1/12 of its single-precision rate.
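For a rough sense of scale, here’s a back-of-the-envelope sketch of that arithmetic using the 1350MHz shader clock of the GTX 460 cards we’ll discuss shortly. The resulting figures are our own estimates, not official Nvidia specs:

```python
# Back-of-the-envelope double-precision estimate for a GTX 460 (our own math,
# not an Nvidia spec). One of the three 16-wide ALU blocks per SM handles DP,
# at one quarter of its single-precision rate: 1/3 * 1/4 = 1/12 of peak.
alus = 336            # ALUs enabled on a GTX 460 (7 SMs x 48)
shader_clock = 1.35   # GHz
sp_gflops = alus * 2 * shader_clock   # 2 FLOPs/clock for a fused multiply-add
dp_gflops = sp_gflops / 12

print(f"Single precision: ~{sp_gflops:.0f} GFLOPS")  # ~907
print(f"Double precision: ~{dp_gflops:.0f} GFLOPS")  # ~76
```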

Another big graphics-related change is the doubling of the number of texture units in the SM to eight. That goes along nicely with the increase in interpolation capacity in the SFUs, and it grants the GF104 a more texturing-intensive personality than its elder sibling.

Boil down all of the increases here and decreases there versus the GF100, and you begin to get a picture of the GF104 as a chip with a rather different balance of internal graphics hardware—one that arguably better matches the demands of today’s games.

|         | ROP pixels/clock | Textures filtered/clock | Shader ALUs | Triangles/clock | Memory interface width (bits) |
|---------|------------------|-------------------------|-------------|-----------------|-------------------------------|
| GF100   | 48               | 64                      | 512         | 4               | 384                           |
| GF104   | 32               | 64                      | 384         | 2               | 256                           |
| Cypress | 32               | 80                      | 1600        | 1               | 256                           |

The GF104 is a smaller chip aimed at a broader market than GF100, of course, so some compromises were necessary. What’s interesting is where those compromises were made. ROP throughput (which determines pixel fill rate and anti-aliasing power) and memory interface width are each reduced by a third, while the shader ALU count drops by a quarter. The triangle throughput for rasterization (and tessellation, via the polymorph engines) is cut in half. Yet texturing capacity holds steady, with no reduction at all. When you consider that Nvidia’s shader ALUs run at twice the frequency of the rest of the chip and are typically more efficient than AMD’s, the GF104’s balance begins to look quite a bit like AMD’s Cypress, in fact.
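Here’s a quick sanity check on those ratios, computed straight from the per-clock unit counts in the table above:

```python
# Quick sanity check on the cuts, using the per-clock unit counts from the table above.
gf100 = {"ROP pixels/clk": 48, "texels filtered/clk": 64, "shader ALUs": 512,
         "triangles/clk": 4, "memory bus (bits)": 384}
gf104 = {"ROP pixels/clk": 32, "texels filtered/clk": 64, "shader ALUs": 384,
         "triangles/clk": 2, "memory bus (bits)": 256}

for unit, full in gf100.items():
    reduction = 1 - gf104[unit] / full
    print(f"{unit:20s} {reduction:4.0%} smaller")
# ROP pixels/clk        33% smaller
# texels filtered/clk    0% smaller
# shader ALUs           25% smaller
# triangles/clk         50% smaller
# memory bus (bits)     33% smaller
```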

That said, Nvidia is unquestionably following its own playbook here. A couple of generations back, the firm reshaped its enormous G80 GPU into a leaner, meaner variant. In the process, it went from a 384-bit memory interface to 256 bits, from 24 ROP units to 16, and from four texture units per SM to eight. The resulting G92 GPU performed nearly as well as the G80 in many games, and it became a long-running success story.

Sizing ‘er up

|         | Estimated transistor count (millions) | Approximate die size (mm²) | Fabrication process |
|---------|----------------------------------------|----------------------------|---------------------|
| G92b    | 754                                    | 256                        | 55-nm TSMC          |
| GT200   | 1400                                   | 576*                       | 65-nm TSMC          |
| GT200b  | 1400                                   | 470*                       | 55-nm TSMC          |
| GF104   | 1950                                   | 320*                       | 40-nm TSMC          |
| GF100   | 3000                                   | 529*                       | 40-nm TSMC          |
| RV770   | 956                                    | 256                        | 55-nm TSMC          |
| Juniper | 1040                                   | 166                        | 40-nm TSMC          |
| Cypress | 2150                                   | 334                        | 40-nm TSMC          |

Chip die sizes are interesting because they tell us something about how much it costs to produce a chip and about the efficiency of its architecture and design. We’d like to compare the GF104 to its stablemates and competitors to get a better sense of things. Uniquely among the major players in PC semiconductors, though, Nvidia refuses to divulge die sizes for its chips. That’s a quirky thing, in a very Nvidia sort of way, since finding out a chip’s die size isn’t especially difficult. Heck, I’d have measured the GF104 myself by now if my X-Acto knife blade could wedge in just a little further under the metal cap that covers it. I’ll get there eventually.

In the meantime, we have our highly scientific “find the most widely reported number that looks right to you” method of obtaining Nvidia die sizes. This information could be wrong, especially in the case of a new chip like the GF104, but it’s probably not far off. I’ve added asterisks to the table above for die sizes gathered from around the web.

To give you more of a sense of things, the pictures below show chips where possible and “integrated heat spreaders”—that is, metal caps—where necessary. The quarter is there as a size reference and is not an FCC requirement for video cards. Based on its estimated transistor count (which comes from Nvidia) and process node, the size of the chip package, and its rumored die area culled from here, the GF104 looks to be very close in total area to Cypress, though more oblong in shape. Whether the GF104 can reach the same performance heights as Cypress does aboard the Radeon HD 5870 is an open question. Its mission in the GeForce GTX 460 is quite a bit more modest.

Juniper

RV770

Cypress

The GF104’s metal cap

The 55-nm G92b

The 65-nm GT200 under its metal cap

The GF100’s metal cap

The cards

The first GF104-based graphics card, the GeForce GTX 460, will come in two flavors. Both will use a scaled-back GF104 chip with one of its eight SMs disabled, leaving a total of 336 ALUs and 56 texels per clock of filtering capacity. The two share common clock speeds: a 675MHz core, 1350MHz shader ALUs, and 900MHz (3600 MT/s) GDDR5 memory.

The tastier flavor of the GTX 460 is the 1GB version, which has 32 ROPs, 512KB of L2 cache, and a 256-bit path to memory. The card’s max power requirement, or TDP, is 160W, and it requires two 6-pin auxiliary power inputs. Nvidia says this card will sell for $229.

The more accessible version is the 768MB card. Since it’s down one memory interface/ROP partition, it has 24 ROPs, a 192-bit memory path, and 384KB of L2 cache. This version has a slightly lower 150W TDP, but it also requires dual 6-pin power inputs. Accepting this card’s smaller memory size, lower bandwidth, lesser ROP throughput, and smaller cache will save you 30 bucks off the list price, since it should sell for $199.

We’d really prefer that Nvidia had used its magical powers of branding here to set a bright and shining line between these two products. You’re giving up a lot more than 256MB of memory by going with the 768MB version, and we understand that “GeForce GTX 455” is available. You know there will be folks who pick up a GTX 460 768MB without realizing it’s a lesser product. The GTX 460 768MB isn’t bad, but such confusion isn’t good for consumers.

Pictured above are a couple of GeForce GTX 460 cards. The one on the left is Nvidia’s reference design, and the one on the right comes from Zotac. (The reference card is a 768MB version, but the reference 1GB card looks just the same.) Nvidia’s rendition of the GTX 460 has a pair of DVI outputs and a mini-HDMI connector. Zotac’s offering is simply better, with full-sized HDMI and DisplayPort outputs, along with dual DVI ports.

Here’s a surprise: the GTX 460 supports bitstream audio over HDMI for Dolby True HD and DTS-HD Master Audio. I believe that’s a first for Nvidia graphics cards, and it should make the GTX 460 a nice candidate for an HTPC system.

Notice that these two cards have different coolers. The reference design uses a fan with a Zalman-esque heatsink beneath it, while Zotac’s card employs a blower. Nvidia expects some board vendors to use its reference cooler, but obviously, others will not. We have no problem in theory with a blower like the one Zotac uses; lots of high-end graphics cards move air efficiently and quietly with blowers. However, the reference cards’ fans are very noticeably—and measurably—quieter. We think it’s possible this particular blower on our Zotac card may have a bad bearing or something, because it did tend to rattle a bit at times, but we only have the one card to judge by.

The GTX 460 sports a very compact board design, with a total length of only 8.25″. As you can see from the pictures above, that’s quite a bit shorter than some cards in this price class, most notably the Radeon HD 5830. Since AMD didn’t produce a reference PCB design for the 5830, most manufacturers have based their cards on the Radeon HD 5870 PCB. That makes the 5830 a relatively lengthy card for this class, and it could present fit problems in smaller cases. Even though it’s based on a much larger GPU, the GTX 460 is no longer than the Radeon HD 5770.

The competitive landscape

Speaking of the competition, we should probably map out the landscape before we move on. Heck, we’ve gotten email reminders from both AMD and Nvidia during the past few days to help us sort out the situation. Both firms gave us current expected pricing on their product lineups, so we can share that with you. We also dug up the best available prices on Newegg as of late last week.

We’ve had to add another column to the table below in order to deal with some bizarre behavior by Nvidia, its partners, and online retailers. You can’t just pull up a listing on Newegg and check out the prices of GeForce cards. Instead, you must “click to see price in cart.” Once you’ve done so, you’ll see both the price you’ll pay for that individual product and a potential net price based on a mail-in rebate offer. I hate mail-in rebates; it’s a shady practice that depends on many customers not getting paid. Nvidia has apparently gone all in on the rebate thing, though. You can barely buy a GeForce at its purported list price without going that route, so we’ve added a column to show the net price after rebate.

|                        | Pixel fill rate (Gpixels/s) | Filtering rate (Gtexels/s) | Memory bandwidth (GB/s) | Mfr’s expected list price | Street price | Net after MIR |
|------------------------|-----------------------------|----------------------------|-------------------------|---------------------------|--------------|---------------|
| GeForce GTX 460 768MB  | 16.2                        | 37.8                       | 86.4                    | $199                      |              |               |
| GeForce GTX 460 1GB    | 21.6                        | 37.8                       | 115.2                   | $229                      |              |               |
| GeForce GTX 465        | 19.4                        | 26.7                       | 102.6                   | $249                      | $279         | $249          |
| GeForce GTX 470        | 24.3                        | 34.0                       | 133.9                   | $329                      | $349         | $329          |
| GeForce GTX 480        | 33.6                        | 42.0                       | 177.4                   | $499                      | $499         | $459          |
| Radeon HD 5770         | 13.6                        | 34.0                       | 76.8                    | $149                      | $149         |               |
| Radeon HD 5830         | 12.8                        | 44.8                       | 128.0                   | $199                      | $199         |               |
| Radeon HD 5850         | 23.2                        | 52.2                       | 128.0                   | $299                      | $289         |               |
| Radeon HD 5870         | 27.2                        | 68.0                       | 153.6                   | $399                      | $389         |               |
| Radeon HD 5870 2GB     | 27.2                        | 68.0                       | 153.6                   | $499                      | $499         |               |

We can make a few observations based on these prices and specs.

Zotac’s GeForce GTX 465

The GeForce GTX 465 was introduced just over a month ago—heck, this is our first test of the thing—but it has little reason to exist now that the GTX 460 is here. Nvidia tells us it has cut the price on the GTX 465 to $249 so the two products can coexist, but I think that’s marketing code for, “We’re clearing out our remaining inventory.”

XFX’s Radeon HD 5830

We weren’t especially taken with the Radeon HD 5830 when it debuted at between $240 and $269. Now that it’s solidly down to $199, though, it’s not such a raw deal anymore. Might even be a decent one! Obviously, the 5830 is the closest competition for the GTX 460, priced exactly opposite the 768MB variant.

Other competitors worth watching include the Radeon HD 5770, which packs an awful lot of bang for the buck at $149, and the Radeon HD 5850 at $289. If the GPU market becomes cutthroat competitive again, I could see AMD lowering prices on the 5850 to match the GTX 460. Eventually. Maybe.

Beyond that, AMD and Nvidia had several things to say about EyeX and Physfinity and… eh, I forget. Truth is, there are reasons to choose one brand of GPU over another, but both have their merits. For all of their complexity and variance, today’s GPUs really are made to do essentially the same things. I’m not saying one should buy a video card based solely on price and performance. There’s sticker color to be considered, after all. But the DX11 offerings from both major players are awfully similar these days in terms of graphics feature sets and image quality.

Test notes

Somewhat unusually, all but one of the cards we’re testing run at the base clock speed specified for the product by the GPU maker. Oftentimes, board makers will range beyond that base clock a little bit to make their products more distinctive, but that’s happening less often these days for various reasons. The one exception in the group today is Asus’ ENGTX260 TOP SP216, whose core and shader clocks are 650 and 1400MHz, respectively, and whose memory speed is 2300 MT/s. The GTX 260 displayed uncommon range during its lifespan, adding an additional SP cluster and getting de facto higher clock speeds on shipping products over time. The Asus card we’ve included represents the GTX 260’s highest point, near the end of its run.

Similarly, the Radeon HD 4870 we’ve tested is the later version with 1GB of memory.

Many of our performance tests are scripted and repeatable, but for a couple of games, Battlefield: Bad Company 2 and Metro 2033, we used the FRAPS utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each FRAPS sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from FRAPS for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.
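Conceptually, folding those five passes into a single number is nothing exotic; the minimal sketch below is our own illustration of the idea rather than our actual test scripts:

```python
# Minimal sketch (our illustration, not the actual test scripts): fold several
# FRAPS passes of the same 60-second sequence into one average frame rate.
from statistics import mean

def average_fps(passes):
    """passes: one list of per-second FPS samples (from FRAPS) per test run."""
    return mean(mean(run) for run in passes)

# Usage: average_fps(five_runs_for_this_card) -> roughly the figure reported per card
```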

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.

Our test systems were configured like so:

| Component       | Specification                                                               |
|-----------------|-----------------------------------------------------------------------------|
| Processor       | Core i7-965 Extreme 3.2GHz                                                  |
| Motherboard     | Gigabyte EX58-UD5                                                           |
| North bridge    | X58 IOH                                                                     |
| South bridge    | ICH10R                                                                      |
| Memory size     | 12GB (6 DIMMs)                                                              |
| Memory type     | Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz                   |
| Memory timings  | 8-8-8-24 2T                                                                 |
| Chipset drivers | INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014                  |
| Audio           | Integrated ICH10R/ALC889A with Realtek R2.49 drivers                        |
| Graphics        | Radeon HD 4870 1GB with Catalyst 10.6 drivers                               |
|                 | Gigabyte Radeon HD 5770 1GB with Catalyst 10.6 drivers                      |
|                 | XFX Radeon HD 5830 1GB with Catalyst 10.6 drivers                           |
|                 | Radeon HD 5850 1GB with Catalyst 10.6 drivers                               |
|                 | Asus Radeon HD 5870 1GB with Catalyst 10.6 drivers                          |
|                 | Asus ENGTX260 TOP SP216 GeForce GTX 260 896MB with ForceWare 258.80 drivers |
|                 | GeForce GTX 460 768MB with ForceWare 258.80 drivers                         |
|                 | Zotac GeForce GTX 460 1GB with ForceWare 258.80 drivers                     |
|                 | Zotac GeForce GTX 465 1GB with ForceWare 258.80 drivers                     |
|                 | GeForce GTX 470 1280MB with ForceWare 258.80 drivers                        |
| Hard drive      | WD Caviar SE16 320GB SATA                                                   |
| Power supply    | PC Power & Cooling Silencer 750 Watt                                        |
| OS              | Windows 7 Ultimate x64 Edition, DirectX runtime update June 2010            |

Thanks to Intel, Corsair, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Sapphire, Zotac, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

|                           | Peak pixel fill rate (Gpixels/s) | Peak bilinear INT8 texel filtering rate (Gtexels/s)* | Peak memory bandwidth (GB/s) | Peak shader arithmetic (GFLOPS) |
|---------------------------|----------------------------------|------------------------------------------------------|------------------------------|---------------------------------|
| GeForce GTS 250           | 12.3                             | 49.3                                                 | 71.9                         | 484                             |
| GeForce GTX 260 (216 SPs) | 18.2                             | 46.8                                                 | 128.8                        | 605                             |
| GeForce GTX 275           | 17.7                             | 50.6                                                 | 127.0                        | 674                             |
| GeForce GTX 285           | 21.4                             | 53.6                                                 | 166.4                        | 744                             |
| GeForce GTX 460 768MB     | 16.2                             | 37.8                                                 | 86.4                         | 907                             |
| GeForce GTX 460 1GB       | 21.6                             | 37.8                                                 | 115.2                        | 907                             |
| GeForce GTX 465           | 19.4                             | 26.7                                                 | 102.6                        | 855                             |
| GeForce GTX 470           | 24.3                             | 34.0                                                 | 133.9                        | 1089                            |
| GeForce GTX 480           | 33.6                             | 42.0                                                 | 177.4                        | 1345                            |
| Radeon HD 4850            | 11.2                             | 28.0                                                 | 63.6                         | 1120                            |
| Radeon HD 4870            | 12.0                             | 30.0                                                 | 115.2                        | 1200                            |
| Radeon HD 4890            | 14.4                             | 36.0                                                 | 124.8                        | 1440                            |
| Radeon HD 5770            | 13.6                             | 34.0                                                 | 76.8                         | 1360                            |
| Radeon HD 5830            | 12.8                             | 44.8                                                 | 128.0                        | 1792                            |
| Radeon HD 5850            | 23.2                             | 52.2                                                 | 128.0                        | 2088                            |
| Radeon HD 5870            | 27.2                             | 68.0                                                 | 153.6                        | 2720                            |

*FP16 filtering is half rate.

The numbers above represent theoretical peaks for the GPUs in question. Delivered performance, as we’ll see, is often lower. These numbers are interesting, though, in various ways. For instance, the GeForce GTX 465 trails the GTX 460 1GB in every category of note.
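If you’re wondering where those peaks come from, here’s a minimal sketch that reproduces the GTX 460 rows from the unit counts and clock speeds quoted earlier. The arithmetic is our own, and it assumes two operations per ALU per clock for a fused multiply-add:

```python
# Reproduce the GTX 460 rows of the table above from unit counts and clock speeds.
# Our own arithmetic; it assumes 2 FLOPs per ALU per clock (fused multiply-add).
def peaks(rops, tex_units, alus, bus_bits, core_mhz, shader_mhz, mem_mtps):
    return {
        "pixel fill (Gpixels/s)": rops * core_mhz / 1000,
        "filtering (Gtexels/s)":  tex_units * core_mhz / 1000,
        "bandwidth (GB/s)":       bus_bits / 8 * mem_mtps / 1000,
        "shader math (GFLOPS)":   alus * 2 * shader_mhz / 1000,
    }

print(peaks(rops=24, tex_units=56, alus=336, bus_bits=192,
            core_mhz=675, shader_mhz=1350, mem_mtps=3600))  # GTX 460 768MB
print(peaks(rops=32, tex_units=56, alus=336, bus_bits=256,
            core_mhz=675, shader_mhz=1350, mem_mtps=3600))  # GTX 460 1GB
# 768MB: 16.2 Gpixels/s, 37.8 Gtexels/s, 86.4 GB/s, ~907 GFLOPS
# 1GB:   21.6 Gpixels/s, 37.8 Gtexels/s, 115.2 GB/s, ~907 GFLOPS
```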

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so I’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance.

Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well.

The GTX 460 is true to its specs, outperforming the GTX 465 and nearly matching the GTX 470 in these texture filtering tests. Interestingly, the Radeon HD 5830 is substantially faster than the GTX 460 when doing bilinear filtering, but the GTX 460 becomes relatively stronger as the filtering quality ramps up. The crossover point looks to be 4X anisotropic filtering. Go beyond that, and the GTX 460 is clearly faster. Newer Radeons do have slightly higher filtering quality than recent GeForces, but the difference is hard to detect.

As I’ve noted before, the Unigine Heaven demo’s “extreme” tessellation mode isn’t a very smart use of DirectX 11 tessellation, with too many triangles and little corresponding improvement in image quality. I think that makes it a poor representation of graphics workloads in future games and thus a poor benchmark of overall GPU performance.

Pushing through all of those polygons does have its uses, though. This demo should help us tease out the differences in triangle throughput between these GPUs. To do so, we’ve tested at the relatively low resolution of 1680×1050, with 4X anisotropic filtering and no antialiasing. Shaders were set to “high” and tessellation to “extreme.”

Here’s the one spot where the GeForce GTX 465 has an edge on the 460. Since it’s based on a cut-down GF100, the GTX 465 can process more polygons per clock than the GTX 460. Even so, the GTX 460 768MB is 25% faster than its direct rival, the Radeon HD 5830, and beats out the much more expensive Radeon HD 5870, as well.

Aliens vs. Predator

The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

For these tests, we turned up all of the image quality options to the max, with two exceptions. We held the line at 2X antialiasing and 8X anisotropic filtering simply to keep frame rates in a playable range with most of these graphics cards. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

The 5830 and GTX 460 768MB are neck and neck, with no notable separation between them. The GTX 460 1GB and GTX 465 are locked in effective parity, as well.

Just Cause 2

I’ve already sunk more hours than I’d care to admit into this open-world adventure, and I feel another bout coming on soon. JC2 has some flashy visuals courtesy of DirectX 10, and the sheer scope of the game world is breathtaking, as are the resulting view distances.

Although JC2 includes a couple of visual effects generated by Nvidia’s CUDA GPU-computing API, we’ve left those disabled for our testing. The CUDA effects are only used sparingly in the game, anyhow, and we’d like to keep things even between the different GPU brands. I do think the water simulation looks gorgeous, but I’m not so impressed by the Bokeh filter used for depth-of-field effects.

We tested performance with JC2’s built-in benchmark, using the “Dark Tower” sequence.

Three frames per second separate the GTX 460 768MB and the 5830. I’ll let you decide whether that margin matters. I should note that the GTX 460 twins bracket the GeForce GTX 260 here. Less than two years ago, GTX 260 cards like this one sold for around $300, so we are seeing a little bit of progress on the price-performance front, even though progress seemed to stall during the first half of 2010.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2’s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

DiRT 2: DX11

This one is very nearly a clean sweep for the GTX 460 768MB over the Radeon HD 5830, right up to the point where the GTX 460 runs out of memory in DX11 at 2560×1600. Other than that little hiccup, both cards provide playable frame rates at all of the resolutions tested in both DX9 and DX11. Heck, in all but the last graph there, the GTX 460 1GB mixes it up with the Radeon HD 5850.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

Since these are all relatively fast graphics cards, we turned up all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

Have a look at the lines plotted in that last graph above for the GTX 460 768MB and the 5830. What an incredibly close contest.

Metro 2033

If Bad Company 2 has a rival for the title of best-looking game, it’s gotta be Metro 2033. This game uses DX10 and DX11 to create some of the best visuals on the PC today. You can get essentially the same visuals using either version of DirectX, but with DirectX 11, Metro 2033 offers a couple of additional options: tessellation and a DirectCompute-based depth of field shader. If you have a GeForce card, Metro 2033 will use it to accelerate some types of in-game physics calculations, since it uses the PhysX API. We didn’t enable advanced PhysX effects in our tests, though, since we wanted to do a direct comparison to the new Radeons. See here for more on this game’s exhaustively state-of-the-art technology.

Yes, Virginia, there is a game other than Crysis that requires you to turn down the image quality in order to achieve playable frame rates on a $200 graphics card. Metro 2033 is it. We had to dial back the presets two notches from the top settings and disable the performance-assassinating advanced depth-of-field effect, too.

We did leave tessellation enabled on the DX11 cards. In fact, we considered leaving out the DX10 cards entirely here, since they don’t produce exactly the same visuals. However, tessellation in this game is only used in a few specific ways, and you’ll be hard pressed to see the differences during regular gameplay. Thus, we’ve provisionally included the DX10 cards for comparison, in spite of the fact that they can’t do DX11 tessellation.

The Fermi-based GPUs have the advantage here, so much so that the GTX 460 1GB essentially matches the Radeon HD 5870. The Radeon HD 5830, meanwhile, falls behind both the GeForce GTX 260 and the Radeon HD 4870—although it is doing tessellation and using DX11, while they are not. The more noteworthy outcome may be the fact that the GTX 460 achieves playable frame rates, with a low of 30 FPS, while the 5830 doesn’t quite cut it. That’s the sort of difference one would notice while gaming.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We couldn’t enable antialiasing, because the game’s Unreal Engine doesn’t support it.

Here’s another case where the GTX 460 1GB tangles with the pricier Radeon HD 5850 and holds its own. The GTX 460 768MB outclasses the Radeon HD 5830, too, although the 5830 still delivers playable frame rates at 1920×1080. At 2560×1600, the 5830 averages below 28 FPS, which is going to feel sluggish. The GTX 460 768MB is in safer territory.

Power consumption

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at a 1920×1200 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead because we’ve found that this game’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

Overall, the new GeForces look quite decent on power draw. Interestingly, the Radeon HD 5850 draws less power under load than either flavor of GTX 460, yet the slower Radeon HD 5830 draws more. There’s a simple reason for that: the GPU on the 5830 has more units disabled, but it also runs at a higher clock speed than the 5850. Higher clock frequencies increase power draw, and higher voltages are often required to reach them. With a larger board and higher clocks, the 5830 isn’t particularly efficient.

The GF104 looks like real progress for Nvidia. The GTX 460 1GB pulls less power at idle and under load than the GTX 465 or the GTX 260, yet it usually matches or outperforms them both.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

These results tell a couple of important stories. First, the difference in sound levels between the GTX 460 768MB and 1GB cards comes from that noisy blower on our Zotac review unit. Nvidia’s reference cooler is much quieter, as are most other video cards. I should say that we also have a GTX 460 1GB reference card, and its noise levels are similar to the 768MB card’s—nice and quiet.

The only thing quieter under load, in fact, is XFX’s custom cooler on its Radeon HD 5830, which is actually a very similar design.

GPU temperatures

We used GPU-Z to log temperatures during our load testing. We had to leave out the GTX 260, though, because it was reporting some obviously incorrect values.

Happily, with relatively low power draw, the reference GTX 460 cooler can remain quiet while keeping GPU temperatures in check.

Conclusions

Boy, is it refreshing to have strong competition at the $200 mark again. The contest between the Radeon HD 5830 and the GeForce GTX 460 768MB is a narrow but sure win for Nvidia. The GTX 460 768MB is noticeably faster in several games, and it’s more power efficient than the 5830, too. Nvidia’s stock cooler for the GTX 460 is blessedly quiet (unlike the Zotac one, sadly), and this much shorter board design should fit into even the most painfully cramped cases from the likes of Dell and HP.

This is a price point at which we want to recommend a video card, a traditional home of good values, but it’s been rough going for much of this year. The GTX 460 768MB isn’t really a revelation, but it is an improvement over the lone DX11 option we’ve had at this price to date. Also, the GTX 460 768MB is generally a superior choice to DX10 cards like the GeForce GTX 260 that have been haunting our sense of video card value. We can now be free of those ghosts, thank goodness.

You can decide for yourself whether it’s worth an additional 30 bucks for the GTX 460 1GB over the 768MB version. In light of what happened in our DiRT 2 DX11 tests at 2560×1600, though, you’ll probably want the larger memory size of the 1GB card if you have a four-megapixel display. Of course, in that case, you’ll probably want an even beefier video card or a second GPU. With some of these newer DX10 and DX11 games, I do think there’s a case to be made for a high-end GPU config once again. One thing I would have a hard time justifying, though, is spending $299 for a Radeon HD 5850 when you can pick up a GTX 460 1GB—which is faster in Metro 2033 and Borderlands and competitive enough elsewhere—for $229. I think it’s finally time for AMD to cut its prices. Just dropping the 5850 back to its, ahem, original introductory price of $259 would be a good start.

We’re curious to see how far the GF104 can go when pushed to its logical limits. With all eight SMs enabled and clock speeds raised as feasible (within the usual power and thermal constraints), could the GF104 challenge the Radeon HD 5870 and the GeForce GTX 470? Nvidia claims the GTX 460 has plenty of clock speed headroom, so it seems possible. We need to try overclocking this thing and see how it handles, but we’ve not yet had time. Some quick math suggests a fully enabled GF104 at 750MHz would have a comparable ROP rate and shader power to a GTX 470, with better texturing throughput. We’re very much intrigued to see where this chip goes next.
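For the curious, here’s that quick math spelled out, in the same spirit as the earlier peak-rate sketch. These are speculative figures for a part Nvidia hasn’t announced, assuming a 750MHz core clock and the usual 2X shader clock:

```python
# Speculative peaks for a hypothetical fully enabled GF104 at 750MHz core and
# 1500MHz shader clocks, next to the GTX 470's published numbers. Our own math,
# not an announced product.
full_gf104 = {
    "pixel fill (Gpixels/s)": 32 * 0.750,            # 24.0
    "filtering (Gtexels/s)":  64 * 0.750,            # 48.0
    "shader math (GFLOPS)":   384 * 2 * 1500 / 1000, # 1152
}
gtx470 = {"pixel fill (Gpixels/s)": 24.3,
          "filtering (Gtexels/s)": 34.0,
          "shader math (GFLOPS)": 1089}

for key, value in full_gf104.items():
    print(f"{key:24s} GF104 (full, 750MHz): {value:6.1f}   GTX 470: {gtx470[key]:6.1f}")
```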

Comments closed
    • indeego
    • 9 years ago

    Just bought the 1 GB variant of this for $150 after MIR (which I don’t mind). That is $80 less than the launch price, less than 6 months after release. Not bad.

    • jwolberg
    • 9 years ago

    I have been thinking about getting a new card lately. Right now I have a 9800 GTX which does the job for the most part. I run my games at my monitors native resolution of 1920×1200 and typically have to set everything to medium with AA either off or set very low and usually get in the mid 20s to low 30s in FPS.

    After reading this review as well as many others and checking out pricing ( as well as the 10% coupon Newegg is now offering on Fermi based cards ) it’s really hard to ignore.

    Think it’s worth an upgrade or should I wait until next year? There isn’t much on the roadmap for the rest of 2010.

    • can-a-tuna
    • 9 years ago

    Tips for updating the benchmark test suite: Left4Dead 2, Just Cause 2, Bioshock 2. A lot of people actually plays those games (unlike “Borderlands”).

      • flip-mode
      • 9 years ago

      That Borderlands comment hurt my feelings.

    • Flying Fox
    • 9 years ago

    Damnit, seems like only that Zotac 1GB model has the DisplayPort. Is it really so hard? DP is what makes it for me if I want to get the U2311: DP for computer, and HDMI->DVI for digital cable.

    • phez
    • 9 years ago

    Is TR going to do an SLI review on these cards? I’m seeing numbers of 100% scaling in some reviews, which is astounding. (techpowerup)

      • Krogoth
      • 9 years ago

      100% scaling is impossible. 80-90% is more realistic for maximum efficiency (mostly found in synthetic benchmarks). The typical case is closer to 40-70% depending on the game, resolution and AA/AF level.

        • Meadows
        • 9 years ago

        Only impossible if you think it is, Alice.

          • Krogoth
          • 9 years ago

          It is the nature of the beast.

          The rendering engine will always suffer from some overhead due to synchronization. (texture data, frame buffer, shaders etc.) Otherwise, you will suffer from graphical artifacts.

          It is easier for synthetic benchmarks to work around this, because they are predictable. This not the case with normal games where the content can vary. This is why SLI/CF are heavily driver and game depended.

    • flip-mode
    • 9 years ago

    Scott, I don’t know if you are still reading comments here, but I’m wondering about possibly including, in future reviews, dual gpu solutions when they make a lot of sense.

    For example: dual 5770 make tons of sense – it’s nearly 5870 performance but possibly for substantially less cost.

    But, how does it do in Borderlands, for example, where a single 5770 is a dog.

    Just wondering.

    Maybe there’s a justification for a special article here that looks at the current state of dual gpu configs – which one’s provide good value, which one’s provide a smoother experience at this point, which ones have issues with certain games….

    Dunno, just thinking out loud…

    • 1love4all
    • 9 years ago

    its all about good yield through small die size (good for…

    • phez
    • 9 years ago

    What I want to know is, should I save my money for GTX 475 (or whatever the next card will be) ?

    Or even wait for ATI’s newest? When are those expected?

      • Wintermane
      • 9 years ago

      Frankly at this point id save my money for the 500 series cards as they cant be all that far away and to be blunt there isnt anything out right now that actualy matters that needs any of these cards.

        • ew
        • 9 years ago

        Good news. You can just change the 4 to a 5 and you’ll have the 500 series card!

          • bdwilcox
          • 9 years ago

          nVidia would never do such a thing, good sir!

            • JustAnEngineer
            • 9 years ago
            • willyolio
            • 9 years ago

            of course they wouldn’t. they’d change the name from 4 to 6. the ones currently in the 3 series would be turned into 5’s.

            • bdwilcox
            • 9 years ago

            Blasphemer!!! We at nVidia do not rename, rebadge, or remarket!!! This may be beyond your comprehension, you being a…

    • esterhasz
    • 9 years ago

    in the end, one question remains – is the 460 good enough to make you want to buy nvidia stock?

    (the market seems to tentatively say yes, NVDA is up 4% today but 11$ is still far away from $38 in late 07)

      • psyclone
      • 9 years ago

      NVIDIA stock, like all the chip makers are getting a huge boost from the server end and cloud computing. Look at the Intel guidance today, server demand overall is strong and the high end even stronger. When I first heard about TESLA I thought it bunk, but evidently I was wrong. I knew that I was wrong when Intel went to the trouble to spout a bunch of FUD on gpu vs cpu computing. 😉

    • glynor
    • 9 years ago

    There’s a tiny little mistake in the article (but one that confused me momentarily when I was reading through last night):

    …

      • Damage
      • 9 years ago

      Doh.. changed it to eight, as intended. Thanks.

        • glynor
        • 9 years ago

        No problem. Thanks again, Scott. Great work.

    • Bensam123
    • 9 years ago

    I want to buy a new graphics card, but I have no reason to. My 4870 works just fine playing current games on high settings. The gaming industry blows. *sigh*

    I think the small outlet opening on the Zotac cooler may have something to do with the noise levels, just a thought.

      • indeego
      • 9 years ago

        High Settings at low resolution, yes, it does work well. These cards are for 1920 and above resolutions and high settings.

        • flip-mode
        • 9 years ago

        So? If it works for him it works for him, and he seems to be saying that it works for him.

        • Bensam123
        • 9 years ago

        1680×1050 4xAA 16xAF

        Not the highest resolution and anything over 4xAA I consider overkill, so I guess?

          • FuturePastNow
          • 9 years ago

          You probably will not have to upgrade that 4870 until you get a higher-resolution monitor.

            • Kaleid
            • 9 years ago

            I’d say probably not until new consoles are released.

            • Bensam123
            • 9 years ago

            Heh, you should visit the graphics forums on here.

            Besides that though, no I probably wouldn’t have needed to upgrade till new consoles are released seeing as everything is a port now anyway and it’s designed to run with five year old hardware.

    • jackbomb
    • 9 years ago

    Thanks for the GTX 460, GTX 560, GTS 650, and Mobile GTX 580 review! 🙂

    • Grape Flavor
    • 9 years ago

    Bwahaha. Just as I suspected, GTX 460 SLI (not tested on TR) outperforms GTX 480 for less cash.

    Now the question is when will Nvidia will follow up with a “full” GF104 with 384SPs and higher clocks. And will AMD slash prices on the 5850? How do those perform in crossfire? And will the HD 6000 series be out for back-to-school?

    The next few months should be very interesting for anyone looking to blow some money on an enthusiast PC, and the GTX 460 is a great start.

    • FuturePastNow
    • 9 years ago

    Does… does that Zotac card have a picture of C-3PO on it?

    • ElderDruid
    • 9 years ago

    Very interesting that TR didn’t give an Editor’s Choice to this GPU, unlike every other review site. What does it take, guys?

      • Skrying
      • 9 years ago

      I would say it would take a card that performs better than the competition that it’s following nearly a year later, and not one that’s even with the competition.

      • flip-mode
      • 9 years ago

      Well, the fact that its not more performance for the money than has been available for a good long time now (260, 5830) is a possible factor.

      • SNM
      • 9 years ago

      Does TR ever give out Editor’s Choice awards to GPUs? I only recall them appearing in roundup-style articles.

      • Anonymous Coward
      • 9 years ago

      AMD has been relaxing on the beach waiting for this thing to show up, maybe they’ll bother lowering prices now. Nvidia doesn’t need any award.

      • Meadows
      • 9 years ago

      I take it you’re familiar with American education then, but I’m afraid adequacy is not grounds for an award.

      • Damage
      • 9 years ago

      Yeah, we don’t typically do editor’s choice awards for new CPUs and GPUs. Something like 90% of all new products in those categories would get an award if we did, since the pricing and positioning are fresh, the tech is new, and odds are good that it’s probably the best CPU/GPU we’ve reviewed yet.

      An example of why not related to this product: What if AMD cuts the 5850 to $219 next week? We can’t go back and give the 5850 an award or take away the GTX 460’s…. but we should in a case like that.

      Another issue is that we’re often reviewing reference cards, as we were here. I don’t think I could give that Zotac, with its loud blower, an award, yet it was the only retail version of the GTX 460 we tested.

      I know these problems can be managed, but we’ve generally tried to save our awards for cross-product comparisons and round-ups instead. Keeps it simpler.

        • Flying Fox
        • 9 years ago

        I see no problem with that.

    • clone
    • 9 years ago

    FINALLY…… nvidia comes out with something that ATI should worry about.

    I’ve been waiting for the price drops that used to come shortly after both companies product had arrived but Nvidia poached it so badly ATI didn’t have to bother, thank heaven Nvidia’s finally offered something worthy a cppl weeks before SC2 comes out which gives both companies just enough time to jockey for pricing position…….if ATI doesn’t react to 460 and or Nvidia doesn’t drop the price to sub $200 before MIR’s not after that plague the industry nowadays I’ll give this whole gen a pass entirely.

    460 is faster than 5830 but it’s nothing earth shattering and the market has been stubborn in offering something compelling under $200……. while enough to get noticed Nvidia really needs to lead the market instead of trying to wiggle in between ATI’s legs so they can pick the scraps as they fall from the table.

      • d0g_p00p
      • 9 years ago

      StarCraft 2 runs fine on the last generation video cards. Anyone upgrading graphics to run SC2 is a fool

        • Vasilyfav
        • 9 years ago

        How about people upgrading their shitty Dell boxes from integrated chips?

        How about people upgrading from the lowest of the low cards like radeon 4350 or 8400 gs ?

        How about people who want to max out their eye candy on a 1920*1200 and still have 60+ fps ?

        You’re a fool if you think people DON’T upgrade their video cards for a game like Starcraft 2.

          • derFunkenstein
          • 9 years ago

          To be fair, you’re just making up scenarios that have nothing to do with the comment to which d0g_p00p replied. Equally fair here, the commenter we’re talking about is named d0g_p00p.

        • Flying Fox
        • 9 years ago

        What about the IGP people?

    • bdwilcox
    • 9 years ago

    Does anyone know if you still need to run a two-pin SPDIF cable from the PC’s soundcard to the video card for audio over HDMI or can the 460 pull sound though the PCI-Express slot like an ATI card can?

      • Majiir Paktu
      • 9 years ago

      It all runs through PCI-Express. Nvidia cards have been capable of this for a while, and starting with the 460 they can now run Dolby TrueHD and whatnot as well, which used to only be on AMD cards.

    • PenGun
    • 9 years ago

    Wonderful. My GTX 260 OC MAXCORE is not much slower and the GTX 260s will now be knocked off the $200 spot. I feel some SLI coming on.

    • SoulSlave
    • 9 years ago

    Should 5850 come down in price or not, things will be a LOT more interesting from now on…

    Nvidia can grow this chip to make it match faster (5850 ~ 5870) cards. And AMD will have to answer accordingly.

    Still I remember something being mentioned about southern islands time-frame for release being this semester… or am I mistaken?

    If Nvidia forces current-gen AMDs to a price-drop, and the answer comes in the form of a not much faster “next-gen” we could very well see Nvidia competing for the price crown like AMD did with the 4xxx’s

    • l33t-g4m3r
    • 9 years ago

    gotta read the metro 2033 fine print.
    The game’s not using full dx11 graphics, and yet we’re being shown benchmarks.
    Throw that whole section out. The numbers are completely worthless.
    Not to mention it uses phsyx, which is designed to slow down systems without a nvidia card, using x87 code, and that should be mentioned.
    The graphics are terrible looking on any other setting but high, and that’s what I use on my 4870.
    I get in the 20’s on average, and it’s playable at 1680×1050.
    I’m using the r_gi 1 tweak (global illumination, it’s off by default) and vsync.

    looks like nvidia still performs poorly when running out of ram. (dirt2)
    Ati’s handled that problem better for how many generations now?
    At least since the 9800, which was capable of playing doom3 on the ultra settings.

    Another point worth mentioning is that the 800mhz overclocked version should put out numbers that literally match the 5850.
    So if my math is correct, there is no point to buying a 5850 at current prices.

      • bdwilcox
      • 9 years ago

      But the overclocked version of the 460 is currently $250. I’m pretty sure the 5850 will be dropping to this range to compete with it (as it has a few times in various past SlickDeals). Personally, if they were equally priced I’d rather have the 5850 because I’ve come to trust ATI’s drivers more (can’t believe I just said that).

      • ClickClick5
      • 9 years ago

      I keep saying Stalker: CoP should be used instead of Metro 2033. Better DX11 look, no x87 code, no Nvidia tempering.

    • Farting Bob
    • 9 years ago

    In the UK your looking at about £160-170 for a 460, about the same for the 5830 and about £225 for a 5850. This is shaping up perfectly for a price war, providing TSMC can make enough chips that arent foobarred.

    • grantmeaname
    • 9 years ago

    First post

    • Majiir Paktu
    • 9 years ago

    …

      • gohan
      • 9 years ago

      seconded !!! hv a 8800 GTS 512 and its still holding on. hv to turn down the eye candy a bit and overclock the card but it can play most of the games at 1920 * 1200 with respectable frame rates

    • SoulSlave
    • 9 years ago

    Finally some real competition!

    I was waiting for these to arrive, too bad it took NVidia some 6 months…

    Now if AMD feels it’s time to do some price lowering I’ll be much more compelled to replace my good old 4850…

      • rhema83
      • 9 years ago

      I believe that is the biggest significance of the GTX 460. It’s about time the HD5850 gets sold for its launch price or less.

    • flip-mode
    • 9 years ago

    Interestingly enough, I bet G104 could give Cypress a run for the money.

    Nvidia has partially disabled G104 and almost admittedly underclocked it.

    Enable the missing cluster and clock to 850 and this thing might shadow the 5870 very effectively.

    What’s Nvidia up to with disabling that cluster? Clear out some inventory of 470 and 465 first and then surprise launch a fully operational g104?

    There’s all kinds of performance in this chip that’s been shackled. That’s pretty annoying.

      • tviceman
      • 9 years ago

      That is exactly their strategy. They need to clear out existing gtx465 and 470 stock. Once that happens, they’ll release an unlocked gf104 with higher clock speeds than current reference designs.

      I also bet Fermi II is coming sooner rather than later (much like what nvidia did between fx5800 and fx5900). Nvidia’s fall high end line-up will be a fully unlocked dual gf104 (gtx495), Fermi II gtx485, a fully unlocked gf104 gtx475, and then of course the existing gtx460’s.

      • Flying Fox
      • 9 years ago

      Charlie D was off on this one being a simple half-cut of GF100. However, he may not be too far from the truth that even the GF104 still has comparatively (vs AMD and vs its previous products) low yields. It is like they finally get a design that is at least sellable, so they want to proceed slowly and I wonder if disabling a group improve yields to a point that brings us this reasonable price.

      As subsequent manufacturing batches and minor process improvements on future runs, they may be ready to unleash the real chip on us, subject to what AMD is going to do with 5770/5830/5850 pricing, and the rumoured Southern Islands release in September.

      • shank15217
      • 9 years ago

      Enabling the eighth cluster would increase power consumption and overclocking would amplify that, making cooling a real problem.

        • tviceman
        • 9 years ago

        Yes because the gtx460 is really hot to begin with, has a huge power draw, and AMD had the exact same problem going from the hd5850 to hd5870.

        The real reason is that a fully unlocked GF104 would equal or outperform a gtx470.

          • Flying Fox
          • 9 years ago

          I think yields is a much simpler and logical answer.

            • flip-mode
            • 9 years ago

            Simpler? Dunno. More logical? How so? Logic is yes/no, not more/less, right? Is it logical not to overshadow the 470 that still has to be sold? Yes. Is it logical to die harvest? Yes. Is it logical to handicap the product so as not to make your previous product look so ill considered just months after launch? Yes. Yields is the simple answer, but who really knows how simple it is with Nvidia. I’d bet with a die this size that Nvidia got a fair percentage of fully functional dies back. The chip is almost exactly the same size as Cypress, and Nvidia has far more experience with 40nm now. I guess there’s more going on than yields here. If this card had been any faster than it is now, Nvidia wouldn’t have been able to sell hardly any of the remaining 470s.

            • esterhasz
            • 9 years ago

            I think yields a quite convincing argument when you take into account that nvidia’s board partners have been bleeding hard over the last year, some died, some went ATI – there’s a real need for a high-volume chip to please partners and something that can make up for lost market share and help with staying relevant for developers (just look at the steam hardware survey). This does probably not keep them from handpicking good chips for a more powerful update later this year…

            • flip-mode
            • 9 years ago

            I don’t see what Nvidia’s partners have to do with Nvidia’s yields.

            • esterhasz
            • 9 years ago

            That’s not too difficult, is it – board partners need to sell cards to make a living, right? In order to get those economies of scale effects working for you (some costs like package design, drivers, and marketing do not scale in a linear fashion with volume – you design the package once whether you sell 10 cars of 10M cards), you need to have high volume sales (high margin sales for low volume products can only get you so far), i.e. nvidia has to deliver a maximum number of chips. And disabling parts of these chips give you a fault margin, thus higher yields per wafer, thus more chips to starving (and very unhappy due to 480 not going too well) board partners. Board partners already have one high margin (if the margins are high, not so sure) low volume product (480), now they need something to make sandwiches with…

            • flip-mode
            • 9 years ago

            I know what you are saying, but I’m not convinced; here’s why:

            1. It’s TSMC 40nm – a process that was very troubled at the start but has had lots of time and experience to improve

            2. Nvidia has lots of experience under belt with GF100 on this same process.

            3. GF104 is much smaller than GF100, and, by estimates, even smaller than Cypress. ATI isn’t having any trouble making Cypress.

            4. The chip design is a simplification of GF100, so I’d think it unlikely that there’s anything inherent to the chip design that would be hurting yields, and, what’s more, building on point 2, Nvidia may have made adjustments to dramatically improve yields over GF100.

            5. If they were having bad yields, I’d have expected a dual launch – a product based on a fully enable chip along side the 460 – a “harvest” product.

            Unless Nvidia is…

            • esterhasz
            • 9 years ago

            ATI has sold close to 15M DX11 chips since the 5XXX series debuted. TSMC is not able to produce chips as fast as they can be sold (this is why they are currently investing so heavily in capacity), and the 40nm process is still far from perfect. If they can get yields from 40% to 55% (these are guesstimates, but probably not that far from reality), they will – obviously, they did.
            It’s not about competence, it’s about numbers – and believe me, the guys at Nvidia have fed everything that you and I can come up with into their little pocket calculators – it’s not rocket science after all (the chip part is, but not the economics)…

            • Flying Fox
            • 9 years ago

            If you buy Charlie D’s reports that even the chopped down GF104 is still difficult to manufacture (yes, they have more knowledge about the 40nm process now, but they are still behind AMD), then it will make a bit more sense.

            I’m pretty certain that they are binning and saving up the perfect chips. It is to be expected that in this first run they don’t have enough yet.

            • Triple Zero
            • 9 years ago

            Why should anyone ever believe what Charlie D has to say about NVIDIA since he is clearly anti-Team Green?

            • green
            • 9 years ago

            “…since he is clearly anti-Team Green?” Down to the point that he decides to ‘dig up’ 3 archived articles – “Why Nvidia’s Chips are defective,” “Why Nvidia’s duff chips are due to shoddy engineering,” and “What Nvidia should do now” – the day before the NDA lifted… The editor note says he does this from ‘time to time’ (which I can’t dispute), but I think everyone can remove all doubt that those 3 were brought up by “coincidence.”

    • can-a-tuna
    • 9 years ago

    Replace Borderlands finally with something that can be called a good benchmark game. It’s like testing with “Call of Juarez” all the time when we know ATI will kick butt with it every time. How about good old Crysis?

      • flip-mode
      • 9 years ago

      +1. Tis a good point.

      • travbrad
      • 9 years ago

      I’m not sure how Crysis is more relevant than Borderlands, at least for me. I can easily see myself playing Borderlands again, but I very much doubt I’ll ever play Crysis again. Yes it’s somewhat subjective, but Borderlands is much newer, has a decent play-length time, and some re-playability.

      I know it favors Nvidia hardware, but that’s no reason not to test with it. I think the most important factor in testing should be whether people are still likely to play the game or not. Is anyone out there seriously still playing Crysis?

      P.S. My last 4 cards have been ATi/AMD so I’m certainly not an “Nvidia fanboi”

    • potatochobit
    • 9 years ago

    I have been recommending the 260-216 to people who insist on nvidia for quite a while now as well

    It’s good to see that they finally have a replacement that is well priced and offers good performance for the money.

    • homerdog
    • 9 years ago

    http://www.rage3d.com/reviews/video/sapphire_hd5870_e6_crossfire/index.php?p=4
    http://forum.beyond3d.com/showthread.php?t=57870
    You can see ATI's awesome filtering optimizations at work:
    http://www.mental-asylum.de/files/aioff.gif
    http://www.mental-asylum.de/files/aion.gif

    • Bauxite
    • 9 years ago

    I smell price wars coming up! We all win in those.

    • spworley
    • 9 years ago

    No CUDA tests? None? That’s a big deal! Yet no site tested anything.
    Even Folding@Home or the n-body simulator would be interesting.

      • flip-mode
      • 9 years ago

      And thank god. I don’t mind CUDA reviews as a stand-alone feature, but think of what you’re asking of the reviewers – taking time away from the major focus of a major, traffic generating review to focus on a feature that a slim (but sometimes vocal) few even care about.

      Then, look a little deeper even: this card /[

        • Shining Arcanine
        • 9 years ago

        Could you specify what GPGPU logic it is that Nvidia removed? The specialized logic on the card that could be removed is required for graphics processing, so I fail to see exactly what it is that Nvidia could remove without affecting the card’s ability to do graphical rendering.

        Putting it another way, graphics are a special case of GPGPU, so saying that GPGPU capabilities have been removed would imply that graphics capabilities have also been removed, because the graphics processing is dependent on the same logical circuits that do GPGPU.

          • reactorfuel
          • 9 years ago

          Did you read the review? Some salient quotes:

          “One place where the GF104’s SM is less capable is double-precision math, a facility important to some types of GPU computing but essentially useless for real-time graphics. Nvidia has retained double-precision support for the sake of compatibility, but only one of those 16-wide ALU blocks is DP-capable, and it processes double-precision math at one quarter the usual speed. All told, that means the GF104 is just 1/12 its regular speed for double-precision.”

          “What you can’t tell from the diagram is that, apparently, 128KB of L2 cache is also associated with each memory controller/ROP group. With four such groups, the GF104 features 512KB of L2 cache, down from 768K on the GF100. The local memory pools on the GF104 are different in another way, too: the ECC protection for these memories has been removed, since it’s essentially unneeded in a consumer product—especially a graphics card. “
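          For what it’s worth, the 1/12 figure falls straight out of those two quotes, assuming the SM has three of those 16-wide ALU blocks (which is what makes the fraction work out). A quick sanity check in Python:

              # One of three 16-wide ALU blocks per SM can do DP, at 1/4 of its normal rate.
              dp_vs_sp = (1 / 3) * (1 / 4)
              print(f"DP throughput relative to SP: 1/{round(1 / dp_vs_sp)}")   # -> 1/12

              # L2 cache: 128KB per memory controller/ROP group, four groups on GF104.
              print(f"Total L2: {4 * 128} KB")                                  # -> 512 KB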

            • Shining Arcanine
            • 9 years ago

            Ah, okay. I missed that. It is sad to see that Nvidia crippled the card’s GPGPU capabilities, although it makes sense because it would otherwise have been a threat to sales of Nvidia’s Tesla cards. :/

            • Skrying
            • 9 years ago

            Huh? This card wouldn’t even be possible without those GPGPU extras taken out. The card you’re looking for is already on the market. It’s sold as a GTX 470 or 480 currently.

          • SNM
          • 9 years ago

          You remember when we discussed how Fermi made a crappy graphics card because it had a lot of GPGPU-specific junk gumming up the works? As in whole functional units and memory caches that a graphics card just can’t use?
          Yeah, some of those got taken out since this is new silicon.

    • kamikaziechameleon
    • 9 years ago

    I remember when ATI came from behind with the 4000 series, dropping their flagship card (4870) to $300 at launch. Nvidia hasn’t been able to compete against that and still fails to.

    They also seem too keen on clinging to old architecture; the 8800 GT has been rebranded like 1000 times by now.

    Nvidia needs to see they aren’t that attractive a product right now and really try to underbid AMD, or else they’ll never gain market share.

      • Preyfar
      • 9 years ago

      As much as I’m NOT a fan of rebranding, you have to admit that the G92 was one of the most versatile chips in the history of PC graphics cards. It was fast, efficient, and had lasting power for several years at a pretty solid price point.

      That said, thank god we can finally move past it.

    • d0g_p00p
    • 9 years ago

    This looks like a great card. I wonder why nVidia decided to cut out one of the SMs. If it had all of them enabled, for a total of 384 shader ALUs, this card would be the killer price/performance card.

    I have a HD5850 and would love to go back to the green team because I like their drivers much more. An “uncut” version of the GTX 460 would have pushed me to sell my 5850 and get the 460.

    • HurgyMcGurgyGurg
    • 9 years ago

    Nice to see competition again, but my HD 4870 will probably hold well until the next console generation comes out.

      • ClickClick5
      • 9 years ago

      Ditto. There is really nothing out there that is /[

    • swaaye
    • 9 years ago

    Oh dear. My 8800 may need to be replaced now. I just need to figure out some games to play that run unacceptably slow at 1360×768.

      • Jigar
      • 9 years ago

      I recently upgraded from an 8800GT to an HD5850. I would suggest you wait for the next generation. For that resolution the 8800GT is still a strong card.

    • beck2448
    • 9 years ago

    Competition is great!! I’m still waiting for the next generation of Fermi which rumor has it will be sick!

    • ronch
    • 9 years ago

    I’m not really an Nvidia fan, and I normally go with ATI products, but ATI doesn’t really care about me, and having good competition once again from Nvidia will make sure I can buy ATI products at down-to-earth prices. So, go ATI and Nvidia! Duke it out! 🙂

    • codedivine
    • 9 years ago

    I am a little unclear on the double-precision throughput. From the article:

    l[

      • Damage
      • 9 years ago

      That’s correct.

        • codedivine
        • 9 years ago

        Thanks for the clarification!

      • urbain
      • 9 years ago

      This benchmark seems kinda fishy. I’ve seen many benchmarks where the GTX470 is either slower than or on par with the 5850, but in this benchmark it seems that the GTX470 is even faster in BFBC2.
      The other noticeable thing is the power consumption and heat levels, which show nearly no difference between the HD5000 and GTX 400 series, which is not correct.
      Overall I would say Scott is 100% paid off by Nvidia.

        • Jigar
        • 9 years ago

        LOL, nice one.

    • wira020
    • 9 years ago

    After reading some reviews from sites that had plenty of samples, I kinda wonder why Nvidia didn’t set the stock clock at 750MHz or 800MHz. Most that did overclocking tests in their reviews managed to get it above an 800MHz core clock. It’s already good enough at the current clock setting, but they could have blown ATI out of the competition at this price point. I guess this is just to let their AIB partners gain more profit…

      • poulpy
      • 9 years ago

      Or because they got burned with seriously awful yields and crazy power consumption for the last 4 months or so they decided to play it safe both ways this time around.

    • Beomagi
    • 9 years ago

    Now all we need are fast single slot solutions competing at this price point 😉

      • derFunkenstein
      • 9 years ago

      yeah good luck with that. :p At 150W you won’t see this or a Radeon 5830 in a single-slot config for a while. You might see a GF106 product in that range, but most likely they will be around the 9800GT in performance.

    • derFunkenstein
    • 9 years ago

    I’m impressed. Even the 768MB variant is PLENTY fast for people with 1920×1200 or 1920×1080 monitors. And the price is good.

    The big question is going to be availability, which you need plenty of in order to keep prices at MSRP (rather than seeing higher prices due to demand).

    • anotherengineer
    • 9 years ago

    Hmmm I wonder if the 5770 will drop a bit?? I also wonder if Prime1 will come back now??

      • sweatshopking
      • 9 years ago

      Everyone knows Prime1 was just Damage’s alt account.

    • maroon1
    • 9 years ago

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814261075 – This GTX 460 1GB not only costs $219 but is also factory overclocked.

      • flip-mode
      • 9 years ago

      NICE!

      • Fighterpilot
      • 9 years ago

      I’m surprised you missed first post…

      • crabjokeman
      • 9 years ago

      Nice, but it’s out of stock.

    • wingless
    • 9 years ago

    I love the HDMI 1.4 and DisplayPort ports on this thing! At these prices this card will make a nice, cheap (relatively) upgrade for a lot of folks coming from old 8000 series Nvidia or 3000/4000 series ATIs.

    ATI, hurry up and lower your prices!

      • anotherengineer
      • 9 years ago

      Depends. All I do is play CSS occasionally, so my 4850 will be with me for a long time yet (probably till it dies).

    • Spotpuff
    • 9 years ago

    On page 4, the (presumably) 768MB model is listed as 768GB in the table.

    That’s a lot of memory!

      • dmjifn
      • 9 years ago

      Nah, he just left off the fact it has a Sandforce controller.
      If Intel can put GPUs on its CPUs…

    • Skrying
    • 9 years ago

    Two thoughts:

    1. I’m very happy to see Nvidia produce some competitive cards to go against the Radeon HD5000 series cards. Lowers prices across the board!

    2. I have a HD5770 and I feel zero need to upgrade. The card is almost always dead last in these graphs, but I have no desire for a faster card.

      • khands
      • 9 years ago

      It’s one of those “do I really need max settings?” things.

        • dmjifn
        • 9 years ago

        Me: Yes. It works remarkably well to drown the pain of having dropped 2-3 bills. 🙂

    • flip-mode
    • 9 years ago

    Finally! Thank you Nvidia! Now there is a reason and even a necessity for AMD to make some price adjustments to the 5850.

    And power consumption looks good!

    Yay. Maybe now the price stagnation will clear from the video card scene.

    Why do Radeon 5Ks suck so much at Borderlands? Look at that 5770! I bet the 4850 512 is as fast! Fek. I’m having a blast playing on the 4850. I must have dynamic shadows turned off or something because it plays really well even at 1920×1080. Looks fine though!

      • flip-mode
      • 9 years ago

      [H]eh, [H] [h]as a [h]uge [h]ard on for t[h]is card.

      AMD will [h]ave to respond to t[h]is or lose sales.

        • Dashak
        • 9 years ago

        …what are you doing.

          • poulpy
          • 9 years ago

          Whatever that is it’s seriously confusing both my head and the link to the post!!1111

          Regarding ATi lowering prices, I wouldn’t hold my breath just yet as, even if getting a competitive product from nVidia is the first step, it all boils down to the volume nVidia can muster and whether or not ATi is already selling everything TSMC can painfully churn out for them…

            • flip-mode
            • 9 years ago

            Not tough. From [H]ardOCP. I simply used [H] or [h] at every instance of H or h. But, because of the way the comments page uses the [] square brackets, some weird results occurred. It looks like the issue was always with the closing bracket ( ] ) when followed by non-whitespace.

            Edit: Ah, there we go. I got it to work by putting an additional closing escape sequence after the ones that were having problems.

      • ssidbroadcast
      • 9 years ago

      flip-mode is on point. Summed up everything I wanted to say so well that I can go to the bank without writing up my own 2¢.

    • bfar
    • 9 years ago

    Interested to see where this will go when they decide to use the locked parts of this chip and raise the clock speed. A GTX 475 to answer Southern Islands, maybe? Highly likely.

    As it stands tho, anyone with a gtx260/275/280/285 continues to have poor upgrade choices imo.

    • esterhasz
    • 9 years ago

    yeah, that’s the general impression I have, too but I’d love to see a closer look from a hardware-savvy website – the best information you can get on pro stuff is from forums and there are many diverging experiences (drivers, system configurations, etc.), and it’s a hassle to bring all the info together. So, sure, I know it’s a lot better to get a Quadro than just a 460 for 3Ds but how big is the difference, really? (not that I’m looking for an answer here in the comments, but these would be questions a graphics card review could bring up)

    I don’t know, there’s twenty articles from twenty sites on a graphics card launch, all using the same games, saying pretty much the same thing but when I want to know whether there’s a significant difference between a 5770 and a 460 for Photoshop filters, nothing…

    With CPUs there’s a wider variety of applications taken into account, and the exclusive reliance on games really starts to look a bit dated…

    EDIT: sorry meant to reply to #9…

    • joselillo_25
    • 9 years ago

    Does the 1GB model take away more system memory from a 32-bit Vista PC than the 768MB model?

      • StuG
      • 9 years ago

      None of them should take any memory away from your OS….

      • sweatshopking
      • 9 years ago

      Yes, video RAM is included in the memory limit of x86 OSes. It has to do with memory addressing: if you have a 1GB card, you will only have roughly 2.5GB available for your RAM.
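      Rough arithmetic, assuming a machine with 4GB of RAM installed and a ballpark half-gig of other device reservations (the exact numbers vary by board):

          # 32-bit client OS: RAM and device apertures all have to fit under 4GB.
          address_space_gb = 4.0
          vram_aperture_gb = 1.0   # a 1GB card mapped into that space
          other_mmio_gb = 0.5      # ballpark for chipset, PCIe, and other devices

          usable_ram_gb = address_space_gb - vram_aperture_gb - other_mmio_gb
          print(f"usable system RAM: ~{usable_ram_gb} GB")   # roughly 2.5GB, as above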

        • jackaroon
        • 9 years ago

        If you have a 32 bit OS and the total of RAM + VRAM exceeds 4 gigs, yes.

          • ClickClick5
          • 9 years ago

          Thus my internal enjoyment when I see a sad soul running a 32-bit OS with 4GB of RAM and two 1GB vid cards. Sad.

          Or a Core i7 with XP… and they think XP is taking full control of the CPU…

          These people cannot be helped. (The argumentative ones, anyway.)

    • Sahrin
    • 9 years ago

    I’m not wild about the dismissive attitude with which TR is treating a crucial decision factor like sticker color. Once again, the unbelievable naiveté of TR in thinking that rational individuals are primarily concerned only with the quantitative performance of a GPU is laughable.

    • l33t-g4m3r
    • 9 years ago

    I’m actually interested in this.
    ATI has disappointed me in their $200 range, and also by not supporting tessellation on their 10.1 cards.
    I think nvidia’s cooler is better designed than the 5830’s too.
    If it’s going to take up 2 slots, exhaust the air out the back.

    side note:
    Metro 2033 undoubtedly runs better on nvidia because of physx, and TWIMTBP.
    ATI owners are stuck running hobbled x87 physx code, and who knows what other performance sucking hacks.
    I don’t trust any TWIMTBP game benchmarks. They’re rigged.
    Remember the 3DMark cheating fiascos?
    TWIMTBP games aren’t any different. Cheating is cheating.

      • Goty
      • 9 years ago

      q[

    • JustAnEngineer
    • 9 years ago

    http://www.youtube.com/watch?v=IGqwqxRF598
    http://www.youtube.com/watch?v=7edeOEuXdMU

    • StuG
    • 9 years ago

    This card will be good until the HD6800 shows that the Fermi architecture was still too late to the party. These cards are the Fermi refresh, now competing with what will be AMD’s old cards. Plus AMD has a lot of wiggle room when it comes to price wars; I felt they thought the GTX480 was going to start one… but it didn’t, so they have been sitting fat, getting recommendations and sales the entire time.

      • Flying Fox
      • 9 years ago

      We’ll see if and when SI shows up. It is one thing to boast on paper, like the Fermi and so many architectures from years ago (remember how the unified shader architecture of the R600 was going to change things forever too?). It is another thing altogether to be actually able to fabricate the chip with good yields and to get them to market at reasonable prices.

    • yogibbear
    • 9 years ago

    Only just purchased an Asus GTX 260 216 SP blah blah blah (same one as in the review) like a few weeks ago for $180 AUD. Score. No reason for me to get one of these. [Only just got rid of my 8800GT… might have to stick it back into my PC though to play Mafia 2]

    • Krogoth
    • 9 years ago

    Finally, Nvidia managed to make a competitive product out of its Fermi architecture.

    460 reminds me of HD 3850. A mid-range rework of a high-end design (2900XT) that got lambasted for its power consumption, delays and underwhelming performance for what it could do on paper.

      • willyolio
      • 9 years ago

      yeah, fermi made me think of the hd2k fiasco all the way through. i didn’t expect them to make any more high-end fermis, just tweaked “good enough” midrange cards out of it.

      • flip-mode
      • 9 years ago

      What’s this? An “on point” from Krogoth! This day is a rarity! lol, just kidding kroggy.

    • mczak
    • 9 years ago

    “That goes along nicely with the increase in interpolation capacity in the SFUs”. Sure of that? Since the MUL is gone in the SFUs for fermi, speculation was that interpolation is now probably done in the main alus (Evergreen uses main alus for that too).

    • link626
    • 9 years ago

    that zotac 460 card looks like it has pretty poor ventilation with that 2nd dvi port blocking the grille. looks like a piss poor hsf design

    • Fighterpilot
    • 9 years ago

    The GTX460 looks pretty good across the board after reading most of the reviews out so far.
    AMD needs to re-establish the 5850 at a competitive price if they want to keep their newfound market share.
    Props to the Green team on this one.

    • GreatGooglyMoogly
    • 9 years ago

    How many monitors can these cards drive at the same time, and in which configuration? Couldn’t find that info in the article.

      • Farting Bob
      • 9 years ago

      I believe all fermi derivatives can only support 2 per card.

        • Game_boy
        • 9 years ago

        That’s right. You need two cards if you want to do 3 monitors as in 3D Vision Surround.

          • GreatGooglyMoogly
          • 9 years ago

          So if you don’t want to do any Vision blabla surround gaming, you still can only drive 2 monitors with that 1 GB card that has 4 ports? Laaaaame.

    • toyota
    • 9 years ago

    So under load it has about the same noise and power consumption levels as the GTX260? And it’s not much faster in some games than the GTX260? But the GTX460 costs MORE than I paid for my GTX260 18 months ago?

      • Flying Fox
      • 9 years ago

      Keep in mind some GTX260 scores did not have the DX11 eye candy turned on.

      • Coran Fixx
      • 9 years ago

      Yeah, 15 months for me. I guess if someone didn’t have a video card it might be exciting, but I’m just not getting the reason to upgrade to this card (260-216 owner).

      • travbrad
      • 9 years ago

      Yeah, that’s one of the reasons I still haven’t upgraded my 4830. I bought it for $95 ages ago, and there is STILL nothing in that price range that is faster, somehow…

    • KShader
    • 9 years ago

    I think you need to get that Xacto knife out, because your source for the GF104 die size was one from March that was basing it on 256SPUs, not the much higher number it ended up as, and was based on a square, not rectangular die.

    The die size is somewhat important, since if the die is still bigger than the Cypress die, then that leaves an opening for pricing the HD5850 to compete with the GTX460 from a production perspective. At that price, you would find it a lot more difficult to justify the GTX460. Unfortunately only about half of the reviews have included HD5850 results alongside the HD5830, which was easily expected to be outclassed, but they included the pricier GTX470 more often than not.

    Until this launch there has been no pressure for AMD to lower its prices. Now that there is this pressure, what will their reply be, and what could it be?

    If the die is smaller and board costs are not significantly higher, then prices might drop and still remain profitable.

    In any case the GTX460 is a good launch and should bring a good dose of competition to the marketplace.

    • Farting Bob
    • 9 years ago

    Pleasantly surprised by the Green camp for once; this is what they needed 6 months ago.
    Hopefully we will get to see a bit of a price war in the $150-250 range now that both players have fairly similar-performing cards in that range.

      • wira020
      • 9 years ago

      Me too, this is certainly a breath of fresh air after a few launches I had considered failures… BTW, good article TR. I’m just not so sure about the choice of Catalyst version… that driver didn’t even work for me…

      • JustAnEngineer
      • 9 years ago

      Remember that you’re comparing them to cards that came out in September 2009. This is what NVidia needed back then to be competitive. I’m delighted that they’ve finally caught up. Give it a month and a half for competition to drive prices down on both sides, and there may be some good deals by Labor Day, 2010.

    • esterhasz
    • 9 years ago

    Thanks for the review! The 460 looks good indeed…

    It would be great though to have some non-gaming benchmarks in there. Adobe CS5 would be really, really interesting (it seems to favor CUDA, but does it make a difference? Hard to tell), and a look at how the non-pro cards fare in 3ds Max or Maya would be more than welcome (for students, Quadros are often a little too pricey).

    With the 460 being really quiet, this may be a budget option to put in a workstation…

      • KShader
      • 9 years ago

      CS5 doesn’t officially support Fermi yet.

      The hack works, but the GTX460 would perform similarly to all other cards in the other, OpenGL-accelerated parts.

      Only MPE would change and only if you hack it to do so.

      For GPGPU you would want something that plays well to both architectures (MilkyWay@Home does while F@H does not).

      3ds Max and most other OpenGL pro apps are very disappointing on gaming cards; workstation drivers make a big difference in the majority of professional OpenGL apps. DirectX versions are more related to hardware power than drivers, and there solid gaming cards have done well.

    • Game_boy
    • 9 years ago

    This product has a shelf life of 2-3 months, because of AMD’s Southern Islands generation. GF106 and GF108 are coming out in August/September, so they have a shelf life of even less.

    So, they have the price/performance and power win at that one price point, lower margins than Cypress (which can sell its top bin for $400). And Nvidia’s next generation is coming when? Q2 ’11 for 28nm?

    They did manage to get power consumption under control well though.

      • bdwilcox
      • 9 years ago

      From all the rumors I could find, GF106 and GF108 have much lower performance envelopes (and prices to match) so the GF104 should remain a viable product for a while. I’m sure nVidia will play with pricing and market position, but I don’t see GF104 going anywhere any time soon.

        • Game_boy
        • 9 years ago

        I meant that GF106 and GF108 would have even less time before AMD creates new products up against them than GF104 would, not that GF104 would be against GF106 and GF108.

        AMD’s January conference call said there would be a full lineup refresh in the second half of this year, and SemiAccurate has said it will arrive in September and feature Evergreen shaders with a new uncore that will remove the bottlenecks.

    • toyota
    • 9 years ago

    /[

    • bdwilcox
    • 9 years ago

    /[

      • StuG
      • 9 years ago

      +1

      • phez
      • 9 years ago

      You say the 460 is underwhelming for the money, then go on to say that ATI should drop the price of the 5850 /[

        • bdwilcox
        • 9 years ago

        The 460 is underwhelming. All it does is match its direct AMD competitor, the 5830, in both price and performance. You’re supposed to beat your direct competitor, not match it.

        So now AMD has three options to counter the 460. They can keep the 5830 as the 460’s direct competition but drop the 5830’s price to make it more attractive (beat the 460 in price). Or they can get rid of the 5830 and replace it with a better performing 5840 at the same price point (beat the 460 in performance). Or, finally, they can lower the 5850’s price to compete against the 460 1GB (beat the 460 1GB in performance) and counter the 460 768MB with either aforementioned 5830 price-drop or new 5840. But, no matter what, the 5850’s price will have to come down since its performance premium just doesn’t warrant a $100 price premium over the 460.

          • flip-mode
          • 9 years ago

          I’m not underwhelmed. This card is better than the 5830. If you want underwhelming, refer to the 5830 or the 5770.

            • bdwilcox
            • 9 years ago

            As I indicated in my first post, the 5830 was a miscarriage, no doubt. But being 3 FPS better than a miscarriage doesn’t make a card very attractive when it costs the same or more. I just saw a Sapphire 5830 for $175. If I were looking for a card in this performance envelope, I would jump on that far quicker than a 460, since I wouldn’t be getting much more for the $25 premium the 460 is bringing with it (plus the 5830 comes standard with 1GB of memory, making it more future-proof).

            • flip-mode
            • 9 years ago

            There are good reasons to pick the 460 instead of the 5830, even if it costs a little more. Lower power, smaller card, and some very *[

          • PixelArmy
          • 9 years ago

          Equal? It’s 5-25% faster than the 5830! In a lot of those games it looks like it’s battling the 5850, not the 5830. In Metro 2033, TWIMTBP or not, it’s taking on the 5870… This is your definition of equal?!?

            • douglar
            • 9 years ago

            Metro and Borderlands favor the 460 for sure. My issue is that I have no interest in playing Metro and Borderlands didn’t have any replay value compared to Counterstrike or L4D2, both of which are ATI favorites. Of course with those games, you don’t need a lot of video card period.

            • tviceman
            • 9 years ago

            Yeah it doesn’t take much more than a fart to power l4d or counterstrike.

            • Welch
            • 9 years ago

            I wouldn’t really count on this being only a “release price” for the 460. It’s rare to see a company put out a product late like this, price it at release, and then lower it yet again a week later. The only way they would is if ATI (as mentioned before) were to price match the 5850 to the 460. Short of that, you can bet on the $200 price tag sticking around for some time, although you may find the occasional manufacturer’s mail-in-and-wait-for-the-check rebate. I wouldn’t go as far as to call it underwhelming… I’d probably characterize my reaction to the 460 as indifferent… who cares, it’s not really a STEAL or anything.

            I’m with BD and Flip on this… wait it out and reap the rewards of a reduced 5850 :). It’s been around for a while and is well overdue for a price slash.

            • tviceman
            • 9 years ago

            Tech articles suggest AMD doesn’t have enough production capacity to meet demand should they substantially lower their prices on the HD5850 and 5870. Apparently Nvidia has secured and bought up a very large share of the 40nm production at TSMC, in part to help prevent a price war.

            • bdwilcox
            • 9 years ago

            Did you happen to look at more than two benchmarks? Or how about more than just this review? I’ve probably looked at 25 reviews across the web now and most benchmarks have the 460* about 3 FPS over the 5830. Sure, you’ll find some outlier benches where the nVidia architecture is favored, but most games give it about a 3 FPS advantage over the 5830. If only judging by the benches you’re looking at, even a GTX 260 would probably best a 5870. So take a look around. There’s quite a list of reviews in today’s Shortbread.
            * By this, I mean the 768 MB version. The 1GB version is quicker but also $40 more, pushing it out of the 5830’s price point for the time being.

            • PixelArmy
            • 9 years ago

            Of course I looked at more than those two TR benchmarks and looked at multiple sites… (Way to argue like I hadn’t…)

            Based on TechReport and AnandTech (37 charts)… (If I knew how to format a table for the comments section, I’d try and format the spreadsheet.)

            With Google Docs magic… Versus the HD5830…
            In terms of absolute fps, the GTX 460 768MB is on average 4.43 fps faster and the GTX 460 1GB is 9.18 fps faster. I don’t know a good way to average %, but computing the % diff per game, then averaging those, results in the 460s being 10.41% and 21.83% faster, respectively. (Alternatively, calculating % from the total sum of fps gives +9.6% and +19.8%.)
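            To make the methods concrete, here’s the same calculation sketched in Python with placeholder fps numbers (not the actual TR/AnandTech data); a geometric mean of the per-game ratios is arguably the cleanest way to average percentages:

                from math import prod

                # Placeholder per-game fps, NOT the real review numbers.
                gtx460 = [55, 42, 70, 33]
                hd5830 = [48, 40, 58, 31]

                ratios = [a / b for a, b in zip(gtx460, hd5830)]

                mean_of_pct = sum(r - 1 for r in ratios) / len(ratios) * 100   # average of per-game % diffs
                geo_mean_pct = (prod(ratios) ** (1 / len(ratios)) - 1) * 100   # geometric mean of ratios
                pct_of_totals = (sum(gtx460) / sum(hd5830) - 1) * 100          # % from summed fps

                print(f"mean of per-game %: {mean_of_pct:.1f}%")
                print(f"geometric mean:     {geo_mean_pct:.1f}%")
                print(f"% from summed fps:  {pct_of_totals:.1f}%")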

            *[
