Nvidia’s GeForce GTS 450 graphics processor

Generally, we’d open a review like this one by reminding you of the recent history of video cards and graphics chips like this one, setting the proper context for everything that comes next. Today, though, I have a plane to catch, and I anticipate writing a lot of the commentary on the following pages from an oddly crouched position in a coach-class seat while stale air tainted with the faint smell of farts blows in my face.

The source of our rush is a long, intense week spent with Nvidia’s new graphics card, the GeForce GTS 450. Priced at around $130, this card is Nvidia’s answer to the Radeon HD 5700 series—if you can call it an “answer” after the competition has been on the market for a full year. Regardless of the timing, though, the GTS 450—and the GPU behind it—is a potentially attractive proposition for those who lack the resolve and (display) resolution to spend more than the cost of three big-name games on the hardware required to play them. Keep reading for our detailed testing and incredibly rushed text on the GTS 450.

Yep, it’s yet another Fermi derivative

The graphics chip behind the GeForce GTS 450 is the third variety of the DirectX 11-class Fermi architecture that Nvidia has brought to the desktop market. As you may know, many chip designs these days are essentially modular, and can be scaled up and down in size and features to meet different goals.

The chip that powers the GTS 450, known as the GF106, is perhaps best thought of as roughly half of the GF104 GPU used in the GeForce GTX 460. Where the GF104 has two GPCs, or graphics processing clusters, the GF106 has just one, so it has half the triangle setup rate of the GF104—and just one fourth that of the big daddy, the GF100.

A block diagram of the GF106 GPU. Source: Nvidia.

Inside of that GPC are four shader multiprocessor blocks, or SMs, arranged essentially as they are in the GF104. That means each SM has 48 stream processors (Nvidia likes to call them “CUDA cores”; we do not) and a texture block capable of sampling and filtering eight texels per clock. In total, then, the GF106 has 192 SPs and can filter 32 texels per clock. For compatibility reasons, this GPU has the ability to process double-precision floating-point math, but only at one-twelfth the rate it can handle single-precision math, again like the GF104. (The GF100 is much more formidable, but it serves different markets.)

If you look closely at the diagram above, you’ll notice that the GF106 bucks expectations for a mid-range graphics chip in a couple of notable ways. Rather than the expected pair of 64-bit GDDR5 memory interfaces, the GF106 has a trio. Correspondingly, it has three ROP partitions, each capable of outputting eight pixels per clock, rather than the two ROP partitions one might expect. The 50% wider memory interface and ROP partitions give the GF106 substantially more potential oomph than competitors like AMD’s mid-range Juniper GPU.
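For the chip-counting crowd, here’s a quick sketch of how those per-unit figures roll up into the GF106’s totals. It’s just the arithmetic implied by the paragraph above, expressed in Python; nothing in it is measured or taken from Nvidia documentation.

```python
# Rolling up the GF106's per-unit figures (from the text above) into chip totals.
# Paper arithmetic only; nothing here is measured.

SMS_PER_GPC = 4            # one GPC containing four shader multiprocessors
SPS_PER_SM = 48            # stream processors ("CUDA cores") per SM
TEXELS_PER_SM_CLK = 8      # bilinear texels sampled/filtered per clock per SM

ROP_PARTITIONS = 3         # each paired with a 64-bit GDDR5 memory controller
PIXELS_PER_PARTITION = 8   # ROP output per partition per clock
BITS_PER_CONTROLLER = 64

total_sps = SMS_PER_GPC * SPS_PER_SM                          # 192 SPs
total_texels_per_clk = SMS_PER_GPC * TEXELS_PER_SM_CLK        # 32 texels/clock
total_pixels_per_clk = ROP_PARTITIONS * PIXELS_PER_PARTITION  # 24 pixels/clock
memory_bus_bits = ROP_PARTITIONS * BITS_PER_CONTROLLER        # 192-bit interface

print(total_sps, total_texels_per_clk, total_pixels_per_clk, memory_bus_bits)
```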

GPU       Estimated transistor    Approximate die    Fabrication
          count (millions)        size (mm²)         process node
G92b        754                   256                55-nm TSMC
GF100      3000                   529*               40-nm TSMC
GF104      1950                   331*               40-nm TSMC
GF106      1170                   240*               40-nm TSMC
RV770       956                   256                55-nm TSMC
Juniper    1040                   166                40-nm TSMC
Cypress    2150                   334                40-nm TSMC

Of course, the extra power comes at a price, as you can see with a quick glance at the transistor count and die size numbers in the table above. The GF106 is quite a bit larger than Juniper, all told.

Incidentally, since Nvidia doesn’t divulge die sizes, we’ve put asterisks next to some of the figures in the table. We’ve simply gone with the best published numbers we can find for GF100 and GF104, but since it lacks a metal cap, we were able to measure the GF106 at roughly 15 mm by 16 mm, or 240 mm². We may be off by less than a millimeter in each dimension with our quick sizing via wooden ruler, but we’re pretty close.

The larger chip size likely translates into higher manufacturing costs for Nvidia, but it doesn’t necessarily translate into higher prices for folks buying graphics cards based on it. We’re just showing you this information for the sake of chip-geekery. Following further in that vein, we have some similar-sized pictures of the two chips below, shown next to a U.S. quarter to celebrate American hegemony and also to provide a size reference.

GF106

Juniper

The intriguing thing about the GF106 is that, like all of the Fermi-derived graphics processors to date, we’ve not yet seen a product based on a fully enabled version of the chip. The GTS 450, as we’re about to find out, only uses a portion of the GPU’s total power. We’re dying to know whether Nvidia has been producing gimpy implementations of its DX11 graphics chips out of necessity (due to manufacturing and yield issues), for strategic reasons (keeping a little juice in reserve), or some combination of the two (and what combination, really, which is the key question). We don’t know yet, but we do get to use a lot of parentheses in the interim, which is its own reward.

Introducing the GTS 450

For the GTS 450, Nvidia has elected to disable the GF106’s third memory controller and ROP partition, so the card effectively has a 128-bit path to memory and 16 pixels per clock of ROP throughput. That allows the GTS 450 to meet the Juniper-based Radeon HD 5700 series head-on with very similar specifications.

Here’s a look at the GeForce GTS 450 reference design from Nvidia. Retail cards should be based on it, but will differ to one degree or another. The GPU on this card is clocked at 783MHz (its double-pumped SMs thus run at 1566MHz), with a memory clock of 900MHz—or 3.6 Gbps, as is the fashion for reporting quad-data-rate GDDR5 speeds. Onboard are eight memory chips—four on the front and four on the back—totaling 1GB of capacity. You’ll also notice two empty pads on the top side of the board, visible above. Two more empty pads are on the back, raising the likely prospect of a full-on GF106 card based on this same PCB design.
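As a sanity check on those numbers, here’s a small Python sketch that derives the card’s effective rates from the clocks quoted above. The per-chip memory density is our inference from the eight chips and 1GB total; everything else comes straight from the text.

```python
# Deriving the reference GTS 450's effective rates from the clocks quoted above.
# The 128MB-per-chip figure is inferred from eight chips totaling 1GB.

core_clock_mhz = 783
shader_clock_mhz = core_clock_mhz * 2          # double-pumped SMs: 1566MHz
memory_clock_mhz = 900
gddr5_rate_gbps = memory_clock_mhz * 4 / 1000  # quad data rate: 3.6 Gbps per pin

bus_width_bits = 128                           # two of the GF106's three 64-bit controllers
memory_bandwidth_gbs = bus_width_bits / 8 * gddr5_rate_gbps  # 57.6 GB/s

memory_chips = 8                               # four on the front, four on the back
capacity_gb = memory_chips * 128 / 1024        # 1GB total

print(shader_clock_mhz, gddr5_rate_gbps, memory_bandwidth_gbs, capacity_gb)
```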

The reference GTS 450 has Nvidia’s now-standard complement of twin dual-link DVI ports and a mini-HDMI output. Board makers may deviate from this formula, as we’ll see. All GTS 450 cards should only require a single, six-pin auxiliary power input, though, since the card’s max power rating, or TDP, is 106W.

GTS 450 cards running at stock clock frequencies are already selling online for Nvidia’s suggested price of $130. That squarely positions the GTS 450 against the Radeon HD 5750, which dipped as low as $120 this past weekend in order to welcome the GTS 450.

For just ten bucks more, or $140, you can grab the Asus ENGTS450 TOP card pictured above, with considerably higher clock rates: a 925MHz GPU core, 1850MHz shaders, and 1GHz/4 Gbps memory. Nvidia often leaves board makers with some leeway for higher clock speeds at higher prices, but this is a bit of a funny move, because the GF106 apparently has beaucoup headroom—and at $140, this version of the GTS 450 is pretty much a direct competitor for the Radeon HD 5770. This Sapphire 5770, for instance, sells at that same price.

As is obvious from the picture, the Asus TOP card has a custom cooler. What may not be so obvious, given the shrouding on both, is that Asus’ cooler is quite a bit beefier than the stock one, with more metal and a larger heatsink surface. Asus calls this its Direct CU cooler, due to the fact that the copper heatpipes (beneath the chrome plating) make direct contact with the surface of the GPU. Asus’ other enhancements over the reference board include a custom VRM design with a higher phase count, the ability to tweak the GPU voltage for overclocking via its Smart Doctor software, and a metal bracket across the top of the board to provide additional sturdiness. Oh, and Asus includes a full-size HDMI port, a VGA connector, and just one DVI output.

We have little patience for debating over five or ten bucks in an age when top-flight games run $60—heck, we’re lousy at reviewing video cards in this category, since we’d nearly always step up a notch or two—but if it were up to us to choose, we’d pick the $140 Asus TOP over the $130 stock card ten times out of ten. If that choice is too daunting for you, we hear MSI is splitting the difference by offering a GTS 450 at 850MHz/4 Gbps for $135. That should rouse you out of your stultifying indecision.

We took some flak for not including higher-clocked retail versions of competing Radeon cards in our recent SLI vs. CrossFire roundup, so when we set out to do this review—before Nvidia revealed the exact pricing of the GTS 450 to us—we went looking for a hot-clocked Radeon HD 5750 to serve as a comparison. The best we could find selling at Newegg was Sapphire’s Vapor-X variant, pictured above, which Sapphire kindly agreed to send us. This baby is clocked at 710MHz/1160MHz, up 10MHz from a stock 5750. The custom Vapor-X cooler on this card is pretty nice, but unfortunately, this product is currently selling for 150 bucks at Newegg. A mail-in rebate will knock that down to $135, net, but we think this thing’s asking price will have to drop in response to movement on other 5750 and 5770 cards, as well as the GTS 450’s introduction. We’ve included full results for the Vapor-X 5750 on the following pages, so you can see how the tweaked clocks and fancy cooler change things.

Some driver changes from Nvidia

Alongside the release of the GTS 450, Nvidia today is introducing a new generation of its driver software, release 260, that will bring some notable improvements for owners of various GeForce cards. The firm claims performance boosts for all GTS/GTX 400-series graphics cards in certain games, ranging from 7-29%. Often, such claims for new drivers are limited to very specific scenarios—as is the 29% number in this case, which applies to a certain game at certain settings—but we can’t deny that Nvidia has made tremendous progress in tuning the performance of Fermi-based GPUs since their introduction. These drivers should be another step forward.

Beyond that, the release 260 drivers enable bitstream audio output over HDMI, with support for 24-bit, 96 and 192kHz audio formats from compatible Blu-ray movies on GTX 400-series GPUs, as well as the GT 240/220/210. Both the Dolby TrueHD and DTS-HD Master Audio formats are supported.

Release 260 also brings a new user interface for the setup of multi-display configurations, and happily, the software for the funny-glasses-based GeForce 3D Vision is now packaged with the standard video driver.

All of these changes come in a new driver package, with an installer script that offers more control over which components are installed. In my experience, this installer is quite a bit quicker than the old one, which sometimes paused for minutes at a stretch for no apparent reason. Among the new choices in this script is a clean install option that purportedly “completely wipes out” older video drivers before installing new ones. That may help with troubleshooting—or simply satisfying those OCD urges—in some cases.

Our testing methods

Many of our performance tests are scripted and repeatable, but for a couple of games, Battlefield: Bad Company 2 and Metro 2033, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
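For the curious, reducing repeated runs to a reported number is nothing fancier than taking the median, as in this minimal sketch. The frame rates shown are made-up placeholders, not results from any card in this review.

```python
# Minimal sketch of how repeated benchmark runs become one reported number:
# we take the median, which shrugs off a single outlier pass.
# The frame rates below are made-up placeholders, not results from this review.

import statistics

runs_fps = [61.2, 58.7, 60.4]           # e.g., three passes of one scripted test
reported = statistics.median(runs_fps)  # 60.4 is what would land in the charts
print(reported)
```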

Our test systems were configured like so:

Processor          Core i7-965 Extreme 3.2GHz
Motherboard        Gigabyte EX58-UD5
North bridge       X58 IOH
South bridge       ICH10R
Memory size        12GB (6 DIMMs)
Memory type        Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings     8-8-8-24 2T
Chipset drivers    INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio              Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics           Radeon HD 5750 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                   Sapphire Radeon HD 5750 1GB Vapor-X with Catalyst 10.8 drivers & 10.8a application profiles
                   Sapphire Radeon HD 5750 1GB Vapor-X + Radeon HD 5750 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                   Gigabyte Radeon HD 5770 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                   XFX Radeon HD 5830 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                   EVGA GeForce GTS 250 Superclocked 1GB with ForceWare 260.52 drivers
                   GeForce GTS 450 1GB with ForceWare 260.52 drivers
                   Asus ENGTS450 TOP 1GB with ForceWare 260.52 drivers
                   Dual GeForce GTS 450 1GB with ForceWare 260.52 drivers
                   Gigabyte GeForce GTX 460 OC 768MB with ForceWare 260.52 drivers
Hard drive         WD RE3 WD1002FBYS 1TB SATA
Power supply       PC Power & Cooling Silencer 750 Watt
OS                 Windows 7 Ultimate x64 Edition with DirectX runtime update June 2010

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Sapphire, Zotac, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

                                     Peak pixel    Peak bilinear     Peak         Peak shader   Peak
                                     fill rate     INT8 texel        memory       arithmetic    rasterization
                                     (Gpixels/s)   filtering rate*   bandwidth    (GFLOPS)      rate
                                                   (Gtexels/s)       (GB/s)                     (Mtris/s)
GeForce GTS 250                      11.8          47.2              70.4          470           738
EVGA GeForce GTS 250 Superclocked    12.3          49.3              71.9          484           770
GeForce GTS 450                      12.5          25.1              57.7          601           783
Asus ENGTS450 TOP                    14.8          29.6              64.0          710           925
GeForce GTX 460 768MB                16.2          37.8              86.4          907          1350
Gigabyte GeForce GTX 460 768MB OC    17.2          40.0              86.4          961          1430
GeForce GTX 460 1GB                  21.6          37.8              115.2         907          1350
GeForce GTX 465                      19.4          26.7              102.6         855          1821
GeForce GTX 470                      24.3          34.0              133.9        1089          2428
GeForce GTX 480                      33.6          42.0              177.4        1345          2800
Radeon HD 5750                       11.2          25.2              73.6         1008           700
Sapphire Radeon HD 5750 Vapor-X      11.4          25.6              74.2         1022           710
Radeon HD 5770                       13.6          34.0              76.8         1360           850
Radeon HD 5830                       12.8          44.8              128.0        1792           800
Radeon HD 5850                       23.2          52.2              128.0        2088           725
Radeon HD 5870                       27.2          68.0              153.6        2720           850
Radeon HD 5970                       46.4          116.0             256.0        4640          1450

*FP16 filtering is half the INT8 rate.

The table above shows theoretical peak throughput rates for these video cards and some of their bigger siblings in some key categories. As always, we’ll remind you that these are just theoretical numbers; delivered performance will almost always be lower and will depend on the GPU architecture.
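If you’d like to reproduce the table’s math, the sketch below shows how we’d derive the stock GTS 450’s row from the unit counts and clocks discussed earlier. Treat it as our back-of-the-envelope reconstruction; tiny mismatches with the table (57.6 vs. 57.7 GB/s, for instance) come down to rounding of the memory clock.

```python
# Back-of-the-envelope reconstruction of the table's peak figures, using the
# stock GTS 450 as the worked example. Unit counts and clocks come from the
# earlier pages; small deltas vs. the table are rounding.

core_mhz, shader_mhz, mem_rate_gbps = 783, 1566, 3.6
rops, texels_per_clk, sps, bus_bits, rasterizers = 16, 32, 192, 128, 1

pixel_fill_gpix_s = rops * core_mhz / 1000            # ~12.5 Gpixels/s
texel_rate_gtex_s = texels_per_clk * core_mhz / 1000  # ~25.1 Gtexels/s
bandwidth_gb_s = bus_bits / 8 * mem_rate_gbps         # 57.6 GB/s (table says 57.7)
shader_gflops = sps * 2 * shader_mhz / 1000           # ~601 GFLOPS (MAD = 2 flops/clock)
raster_mtris_s = rasterizers * core_mhz               # 783 Mtris/s, one rasterizer

print(pixel_fill_gpix_s, texel_rate_gtex_s, bandwidth_gb_s,
      shader_gflops, raster_mtris_s)
```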

You’ll notice that the GTS 450 cards don’t lead the competing Radeons in any of the heavy-hitter categories like texture filtering rate, memory bandwidth, or shader arithmetic. The gap in peak shader arithmetic rate is especially daunting. That’s par for the course in this generation of GPUs, and Fermi-based chips have shown an ability to perform relatively well in the real world, regardless. We can measure a couple of these capabilities to get a sense of why that is.

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so we’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance.

Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well. Since this test isn’t compatible with SLI, we’ve omitted those results. We’ve also left the CrossFire config out of the line plot for the sake of readability.

The stock GTS 450 trails the Radeon HD 5770 with only bilinear filtering applied, but the GTS 450 gains strength as higher-quality filtering kicks in. At 16X aniso, the stock GTS 450 delivers more filtered texels than the 5750, and the higher-clocked GTS 450 TOP nearly matches the Radeon HD 5770.

As I’ve noted before, the Unigine Heaven demo’s “extreme” tessellation mode isn’t a very smart use of DirectX 11 tessellation, with too many triangles and little corresponding improvement in image quality. I think that makes it a poor representation of graphics workloads in future games and thus a poor benchmark of overall GPU performance.

Pushing through all of those polygons does have its uses, though. This demo should help us tease out the differences in triangle throughput between these GPUs. To do so, we’ve tested at the relatively low resolution of 1680×1050, with 4X anisotropic filtering and no antialiasing. Shaders were set to “high” and tessellation to “extreme.”

Fermi is the first GPU architecture to enable parallel processing of fundamental geometry in the graphics pipeline, which should help with handling high levels of tessellation in DirectX 11 games, but the GF106 chip in the GTS 450 has only a single rasterization engine, just like any 5000-series Radeon. As a result, the GTS 450 cards perform about like their direct Radeon competition. The GeForce GTX 460, with dual rasterizers, performs in league with the SLI and CrossFireX dual-GPU configs.

StarCraft II

We’ll start with a little game you may have heard of called StarCraft II. We tested SC2 by playing back a quarter-final match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

After capturing those results, we decided to concentrate our attention on the test data from the latter portion of the match, when the two sides had already completed their initial unit build-outs and were engaging in battle. This part of the match is much more graphically intensive and gives us a better sense of performance when it matters.

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

With only one or two frames per second separating the 5750 from the GTS 450 and the 5770 from the GTS 450 TOP, we’re willing to call this one a wash, more or less. For this class of game, nearly all of these cards are producing acceptable frame rates, with lows just under 30 FPS. This is at a relatively high resolution for a $130 graphics card, too.

The largest differences here are in the higher-end configs. The GeForce GTX 460 768MB, whose price Nvidia has just slashed to $170, outperforms the similarly priced Radeon HD 5830. Nvidia has the edge in multi-GPU scaling, too, as the GTS 450 SLI setup nearly doubles the performance of a single card, while 5750 CrossFireX performance is relatively sluggish.

Mafia II

The open-world Mafia II is another new addition to our test suite, and we also tested it with Fraps.

We tested at the settings shown above, and only after we’d gone down that path did we learn that turning on this game’s antialiasing option does something unexpected: it enables a 2X supersampled antialiasing mode, apparently. Supersampling touches every single pixel on the screen and thus isn’t very efficient, but we still saw playable enough frame rates at the settings we used. In fact, we need to look into it further, but we think Mafia II may also be using some form of post-processing or custom AA filter to further soften up edges. Whatever it’s doing, though, it seems to work. The game looks pretty darned good to our eyes, with very little in the way of crawling or jaggies on edges.

Although this game includes special, GeForce-only PhysX-enhanced additional smithereens and flying objects, we decided to stick to a direct, head-to-head comparison, so we left those effects disabled.

The Radeons look relatively stronger here, by a few FPS, in each price range. Only in SLI does a GeForce config come out on top.

Aliens vs. Predator

The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

To keep frame rates playable on these cards, we had to compromise on image quality a little bit, mainly by dropping antialiasing. We also held texture quality at “High” and stuck to 4X anisotropic filtering. We did leave most of the DX11 options enabled, including “High” shadow quality with advanced shadow sampling, ambient occlusion, and tessellation. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

Once again, the differences are small enough that we can call these results a tie at each price point, but the Radeons do have the slight advantage in each case.

Just Cause 2

I’ve already sunk more hours than I’d care to admit into this open-world adventure, and I feel another bout coming on soon. JC2 has some flashy visuals courtesy of DirectX 10, and the sheer scope of the game world is breathtaking, as are the resulting view distances.

Although JC2 includes a couple of visual effects generated by Nvidia’s CUDA GPU-computing API, we’ve left those disabled for our testing. The CUDA effects are only used sparingly in the game, anyhow, and we’d like to keep things even between the different GPU brands. I do think the water simulation looks gorgeous, but I’m not so impressed by the Bokeh filter used for depth-of-field effects.

We tested performance with JC2‘s built-in benchmark, using the “Dark Tower” sequence.

Given that, you know, the frame rates are almost identical, we’d call this one yet another tie between the GTS 450s and the Radeon HD 5700 cards. The GTX 460 768MB outduels the Radeon HD 5830 here, though.

DiRT 2: DX9

This excellent racer packs a scriptable performance test. We tested at DiRT 2‘s “ultra” quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they’re hardly distinguishable.

 

DiRT 2: DX11

Lots of results, but the pattern we’ve seen in prior pages isn’t substantially changed. At the highest resolution in both DX9 and DX11, the GTS 450 cards are bracketed, above and below, by the 5750 and 5770. Overall, the contests are close enough to be considered a tie at each price point—again, with the obvious exception that the GTX 460 768MB is faster than the Radeon HD 5830.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

 

OK, look, you get the idea about the 5750 and GTS 450 being a close match, and the GTS 450 TOP and 5770 also offering extremely similar performance, right? You’re also gathering that the GTX 460 768MB is superior to the Radeon HD 5830? Good. The pattern holds. Let’s move on.

Borderlands

We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We didn’t enable antialiasing, because the game’s Unreal Engine doesn’t natively support it.

Here’s one last game where we have a chance to see something different, and we kind of do: the GeForces are relatively stronger in Borderlands, which might make up for some of the times when the Radeons have had a minor advantage in other games, if you’re keeping score very closely at home.

Power consumption

Since we have a number of non-reference GeForce cards among the field, we decided to test them individually against Nvidia’s reference cards in this portion of the review, so we could see how custom coolers and clock speeds affect power draw, noise, and operating temperatures. The results should give us a sense of whether these changes really add value.

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

 

If the performance results left you looking for another factor to break the deadlock between the Radeons and GeForces, this might be it. Uh, kinda. The GeForces are more efficient at idle, to the tune of 5-10W at a system level, but they pull more power under load, leading to system-level power use that’s roughly 18-34W higher. Is that enough to matter? Let’s see what it does to noise and heat.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system’s noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

 

None of these cards are all that loud, and the differences in noise levels at idle are pretty minimal, overall. That’s partly because our move to a newer 7,200-RPM hard drive on our test rig has raised the system’s noise floor somewhat. The Radeons tend to be a little quieter at idle, and the GeForces are a little quieter under load—an interesting example of our noise results running counter to what one would expect from the power draw numbers. That goes to show that a good cooler can overcome a few watts of additional heat to dissipate.

The custom coolers on both the Asus GTS 450 TOP and the Sapphire 5750 Vapor-X fare well here, with the Vapor-X outperforming the stock AMD cooler and the GTS 450 TOP matching Nvidia’s stock cooler despite dissipating substantially more power under load.

GPU temperatures

We used GPU-Z to log temperatures during our load testing. For the multi-GPU options, we’ve reported the temperature from the primary GPU, which is generally the warmest.

Not only are the Asus and Sapphire custom coolers relatively quiet, but they keep the GPUs under them relatively cool, too. Doesn’t look to me like you’ll pay much of a penalty in terms of GPU temperatures or noise due to the GTS 450’s somewhat higher peak power draw.

Conclusions

Our performance results tell a story of remarkable equivalence, overall, between the two versions of the GeForce GTS 450 we tested and the competing Radeons. The Radeons may have a slight advantage in terms of overall performance, mathematically, but as we saw, the real-world difference between the two is often just a few frames per second, all but imperceptible.

Step back for a second, and the other part of the picture you’ll see is that all of these relatively inexpensive video cards offer reasonably decent performance in the latest games at common display resolutions like 1440×900 and 1680×1050—and we generally pushed the envelope on image quality, even venturing into 1920×1080 resolution at times. If you have a monitor with less than two megapixels of resolution, any of these video cards should allow you to play today’s games without too terribly many compromises.

Like we said earlier, we don’t really have much interest in debating the finer points of product pricing and value when there’s only ten bucks or so between the offerings. At present, street prices on the Radeon HD 5750 have dropped to $120 in certain cases, to greet the GTS 450. Whether a gap between these two products will remain in the long run is anyone’s guess. We do know that we would unequivocally pay the extra ten bucks to get the additional performance you gain in stepping up from a stock GTS 450 to the Asus TOP card we tested—or from a Radeon HD 5750 to a 5770. Then again, we’d also recommend stretching whenever possible from the $140 cards up to the $170 GeForce GTX 460 768MB, which was the fastest product in our test by a good margin and, we think, represents the best value, too.

Given that, Nvidia is in a pretty good position, and the addition of the GTS 450 only enhances it. Yet we can’t help but notice that it’s taken Nvidia a year since the introduction of the Radeon HD 5700 series to produce an essentially equivalent DirectX 11-class product—and the GTS 450 isn’t substantially better in any notable way, other than its access to Nvidia’s “graphics plus” features like PhysX and 3D Vision. Many of the new additions to the release 260 drivers—an installer with control over individual software components, bitstream audio support, a better UI for multi-monitor setup—are just Nvidia playing catch-up. Even now, the GTS 450 will only drive two monitors simultaneously, while nearly all Radeon HD 5000-series cards will drive three. We’re pleased to see a DX11-capable GeForce in this class of product, but it has indeed been a long time coming.

We expect AMD to unleash the Radeon HD 6000 series before the end of the year, and we’re left wondering whether Nvidia has kept enough potential in reserve in the GF106 and its other chips to allow it to meet that challenge.

Comments closed
    • beespark
    • 10 years ago

    You seem to have omitted a picture of the MSI N450 (http://www.hardocp.com/image.html?image=MTI4NDMyOTAyOHVsMUV1VFNRTHNfMV8xNl9sLmpwZw==) on the second page of your review.

    2 dual-DVI + full HDMI + DisplayPort + shroud on the Sapphire Radeon 5750 Vapor-X

    vs.

    2 dual-DVI + mini HDMI + Zalman-type heatsink design on the MSI board

    Advantage Radeon 5750…

    No biggie though, since your constituency can read B-)

    • clone
    • 10 years ago

    it’s a nice card, it’s very late, it’s priced a little high with HD 5750’s selling occasionally for $100 Cdn and HD 5770’s selling for $140.00 Cdn.

    still ATI should adjust their pricing if they want more marketshare given the maturity of the series it shouldn’t be a problem.

    • axeman
    • 10 years ago

    Something is mildly weird with the idle power consumption graphs. Whereas Nvidia is at the bottom for one 450, the SLI config is at the top. An extra 30 watts at idle for that second GPU is pretty lousy. Perhaps ATi is able to put the second graphics card into a lower power usage state than Nvidia’s drivers are when the system is idle?

    Oh, and the performance of that 5750 Crossfire setup for the power draw is pretty impressive, (cost aside), it’s managing to beat some single card setups that draw more power. I’d always thought one high-performance card would be more efficient.

    • Wintermane
    • 10 years ago

    I see once again you all miss the point.

    Because of the consoles they dont need a x50 chip that is all that oomphy. But what they did want was dx 11 and a cooler lower power smaller card. The 450 is .75 inches shorter then the 250 requires 44 watts less max power and the chip runs 6 degrees cooler, And while the base stock model does indeed have about 12 gb/sec less bandwidth.. as in 1/6th less bandwidth the chip has a much better onchip cache so realy its fairly much a wash not to mention most of the cards likely have mem that can run alot faster then stock anyway.

    The 450 will likely find its way into alot more systems then the 250 ever did simply because it FITS.

    As for amd vs nvidia … nvidia has money amd has debt that right there is the entire reason nvidia can manage to swing so bad and still be ok. Im far more worried about amd as they have notnhing left to sell if things go bad and frankly the economy looks like its going south again.

    • YeuEmMaiMai
    • 10 years ago

    considering that the 5770 pretty much pwns that card and is only $10 more and then you have the 5750 that is slighly below it for $110, Nvidia laid a big fat turd with this card lol

    • ZGradt
    • 10 years ago

    “Given that, Nvidia is in a pretty good position, and the addition of the GTS 450 only enhances it.”

    Where the “given” is paying 30 bux more for a 460 would be the smart thing to do. How does that enhance their position, and how is it a good one in the first place? They seem to be playing catch-up.

    It’s also kind of odd that the card being reviewed comes in dead last in some of the benchmarks. When I look at the numbers, I automatically think “well it’s slower than an [insert overpriced vid card], but at least it’s faster than a …”

    I really just shouldn’t read the reviews on low end hardware because it irks me that the newer cards can’t even compete with the last gen, but cost more just because they support the latest directx…

      • fr500
      • 10 years ago

      Not all consumers are educated nor enthusiasts, most will see the relatively high number, memory size and the GTS and will make a decision.

      • bittermann
      • 10 years ago

      Exactly…the review does not match the conclusion they gave? Same performance as the GTS 250 in some of the tests? What a disappointment. I so wanted this card to succeed but at most it is a $100 card….

    • d0g_p00p
    • 10 years ago

    I would have liked to see a GT200 based card tossed into this review. A GTX 260 would have been nice to see how it stacks up.

      • JustAnEngineer
      • 10 years ago

      GTX260 is obsolete. The one thing that the (admittedly extremely late) appearance of GeForce GTS450 finally does is drive the nail in the coffin of warmed-over DirectX 10.0 chips.

        • derFunkenstein
        • 10 years ago

        Are you saying that since it’s obsolete nobody owns one anymore and therefore we shouldn’t need one for upgrade comparison’s sake?

          • Meadows
          • 10 years ago

          Probably. In fact, I would’ve loved to see the ever-venerable 8800 GT (including a not-so-moderately overclocked version which I pretty much use), and I hope it shows up in one of the upcoming reviews.

          That’s a card that a bunch of people must own (along with its twin brother the 9800 GT, and their step-sister the 9600 GT) and I bet a good number of us are still waiting for direct number comparisons just to finally have enough reasons for an upgrade.

            • Farting Bob
            • 10 years ago

            Anand have their GPU benchmark suite with loads of cards tested on the same hardware, the 8800 is there for comparisons. No 450 on there yet but you can compare it to the rest of the 2xx and 4xx lineup, and competing ATI cards.

            http://www.anandtech.com/bench/GPU/88

      • flip-mode
      • 10 years ago

      GTX 260 and GTX 460 perform essentially the same. The 460 does beat the 260 by 10% or so sometimes, but overall they are very close.

        • Wintermane
        • 10 years ago

        While they may or may not perform the same the 460 is 2.25 inches shorter then the 260 and has a lower max power usage. Also much like the 450 vs 250 it looks like the chip is realty designed not to raw outperform the old gen but to do more while keeping the same frame rate at the same res/aa levels.

          • clone
          • 10 years ago

          the 450 was not built with the 260 as it’s primary focus, it was built to compete with the HD 57xx series while remaining profitable.

            • Wintermane
            • 10 years ago

            I was talking about the 460 vs 260 not the 450 vs 250 but again even there we know the 250 used waay too much power for a x50 card and the 450 uses a hell of alot less power. Thats vital for nvidia as too much power drain means not getting into mass produced systems with lousy cooling or cheap power supplies.

    • Lukian
    • 10 years ago

    Loved the review and the inclusion of 1920×1080 screen resolution.

    Only downside is I can’t compare the GTS 450 to the GTX 460 1GB, which is the logical option for only another $20-30 more.

    If you buy a GTX 460, I dare say it’s likely you’ll eventually buy a second one for use in SLI for higher screen resolutions (and future games) – a 768MB frame buffer is simply going to kill your performance over 1600×1200.

    I would love to see the GTX 460 1GB included, or a future review between the GTS 450 and GTX 460 1GB.

      • MadManOriginal
      • 10 years ago

      Please provide linkage to a GTX 460 1GB at $150-170 🙂 That is GTX 460 768MB range, on ‘deal’ for the lower end of the range, but it does certainly seem to be worth the moderate step up in price to the GTX 768MB. But you’re looking at near $100 more for a GTX 460 1GB and that’s also a huge price percentage increase although they certainly are great price/performance. Seeing the GTX 460 1GB would have been nice, yes, I’m sure we’ll see that from TR when these new game tests are done in future reviews. In the meantime there are other sites on the interwebs.

    • fr500
    • 10 years ago

    I have got an e8400@4Ghz and 2 8800GTX in SLI OC to 600Mhz. I wonder if upgrading will make sense at any point. I game at 1920×1200 btw.

    I skipped 9000 series obviously. I was tempted by the GTX275 till I tested it with an i7 920 and found it barely faster than my setup in games I used to play back then. Was perceptibly faster in GTA IV but it was due to the CPU I guess. I skipped the GTX200 series (except for a GTS250 in my living room PC for playing console ports).

    I like Nvidia better because there are actual resellers here in Ecuador so I don’t have to import the cards myself. This GTS 450 is not worth it if you have G92 cards. You won’t be running PhysX or DX11 eye candy with this. I guess a sensible upgrade for me would be a GTX460.

    According to review a GTX260 should be enough to best my current performance but I don’t buy it.

    Maybe an overclocked GTX460 or a GTX470, What do you think? 460 SLI?

    All I have to say is the 8800GTXs I got have been great for all these years, bought the first one on release date ($600) and got the other one from a friend at $100 a year later.

      • Kurotetsu
      • 10 years ago

      I’d replace the 8800GTXs with a GTX460 1GB or GTX470 if you can afford it. Not because of the performance (though you will see better performance with the GTX470 I imagine), but because getting rid of those cards should noticeably lower the power use and heat of your PC.

        • fr500
        • 10 years ago

        Well electricity is very very cheap so power consumption is not an immediate issue (but I do care for the environment so you have a point).

        Heat is fine too. Thing is back then when I was younger my parents bought me an Alienware PC. It was amazing for it’s time had an 4400+ X2 an 6800GTs in SLI. Upgraded to an 8800GTX but CPU wasn’t up to it.

        One day lightning struck and I was left with this (now In my opinion horrible but functional) case and an 8800GTX

        So I bought an e8400, a new mobo, 8GBs worth of Corsair Dominators (was young, had a good job who cares) and the extra 8800GTX. Now that was fast nothing could touch it….for 40 minutes or so. It would overheat and shutdown all the time. So I bought a Koolance EXOS 2, the required waterblocks and everything, My GPU temps dropped from 100c under load to 60c and the PC has been nice cool and stable since. 😀

        Anyway thanks for the advice, maybe you’re right GTX470 seems way to go. I guess I’ll wait a bit more for sandy bridge too. No point upgrading CPU now.

          • flip-mode
          • 10 years ago

          Yeah, it doesn’t sound like heat and power are an issue for you. Nice to see your cards treating you so well for so long.

    • ronch
    • 10 years ago

    Good to see Nvidia is at least matching ATI’s products. For the time being, at least. I usually go with ATI for the last 6 years or so, but I highly respect Nvidia and would like them to stay in the game. After all, the world’s best graphics processor engineers are either at ATI or Nvidia. From 15-20 graphics chip companies, we’re down to just 3.

    • indeego
    • 10 years ago

    Still no mid-high end cards that idle a total system anywhere near ~60W. If there ever was a lament for a blog, it is this. Since when are idles of 105W (from Tom’s review) to 130W (Anand’s) acceptable in this day and age?

      • OneArmedScissor
      • 10 years ago

      Welcome to the wonderful world of marketing. All of these sites are handed insanely overpowered PSUs and $300 motherboards from hell to do their benchmarks on. The X58 platform is also the most power hungry in general that there’s been in a long time.

      Most integrated graphics level platforms from the 45nm Core 2 era and on idle at about 40w. Q45, H55, 780G, 890GX, or whatever variation of those are all about the same. The less power hungry cards shown here use about 15-20w at idle.

      A reasonable computer could certainly idle at 60w with current video cards. Unfortunately, normal people don’t all have power meters on hand and they’ll never know the reality. A Kill-A-Watt is $20 well spent, if you ask me.

        • Voldenuit
        • 10 years ago

        ++++1

        Benchmark computers at most tech review sites are not representative of mainstream usage scenarios.

        About the only people that do “sane” power configurations are spcr, and they unfortunately don’t update that often.

        xbitlabs measures GPU power through pass-through voltage lines – they recorded a 5870 idling at 16W* (GPU only, down from the 46W figure originally quoted here), so there *is* a lot of progress in GPU power management these days. It’s just that tech sites are becoming (or staying) dissociated from reality.

        *EDIT: Turns out the 46W idle was for a *factory overclocked* 5870. The stock 5870 actually coasts at an unbelievable 16W! And that’s for a high end card! In contrast, my old 4870 runs at 60W when doing nothing...

          • indeego
          • 10 years ago

          I keep forgetting about xbit. Every time I see that site I’m impressed, but never bookmark. Now permanently bookmarked. Thanks.

        • MadManOriginal
        • 10 years ago

        +1, well said. I provided some concrete examples to back up your ideas.

      • Krogoth
      • 10 years ago

      Sorry, it will never happen.

      Performance GPUs are build for performance not for power savings.

      It is like expecting a Subaru Impreza WRX (performance at a value price) to get the same fuel economy as a Smart Car or any other econobox.

      If you want low power consumption you are going to have go into the low-end arena.

        • Voldenuit
        • 10 years ago

        The 2011 V6 Mustang is more powerful than the old V6 car (305 vs 210 hp) and gets better mileage (31 mpg highway vs 24).

        And yes, you can get midrange GPUs with low idle power these days. It’s the X58 chipsets and EE CPUs that are skewing the power picture.

      • MadManOriginal
      • 10 years ago

      It really depends upon the rest of the system. Go look at techpowerup or xbitlabs and their ‘video card only’ power draw numbers…the idle numbers for recent gen 40nm cards are quite low, especially for anything under the top tier cards. My Q9550@3.6 (overclocked to that with sub-VID voltage and all power saving features enabled…was a very nice CPU) idled at ~115W with an 8800GT and a single HD, that would be around 90W with a modern video card of equivalent or better performance. The overclock, being low voltage, added a mere 5-8W to the idle power draw.

    • derFunkenstein
    • 10 years ago

    Also, thanks for including StarCraft 2. Very cool. Just a quick dumb question, are you set to player cam following one guy around to keep the exact view of the game identical?

    • OneArmedScissor
    • 10 years ago

    As boring as it sounds, it’s too bad they never pulled off another g92 upgrade. Gobs of shaders obviously aren’t everything. It looks kind of like the GTS 240 was an attempt at that, but man, that thing blew it.

    • Meadows
    • 10 years ago

    g{

      • Krogoth
      • 10 years ago

      450 has far less memory bandwidth and texture processing power at its disposal than 250GTS. It is kinda unrealistic to expect it to outperform 250GTS/9800GTX. 450GTS is certainly more efficient given its limited resources.

        • mrksha
        • 10 years ago

        But whats the point of releasing gpu barely faster than 250GTS/9800GTX?

        This thing is so slow you can forget about running dx11 on it.

          • Krogoth
          • 10 years ago

          To make a profit of course.

          Remember this GPU isn’t meant for hardcore FPS freaks. That’s the 480 and 470’s job.

          G92b isn’t being produced anymore, and Nvidia needs to recoup its R&D costs with Fermi architecture. The GF106 is sufficient enough for its given role as a “G92b” replacement.

          On the plus side, AMD now has to do a price war with 57xx family. IIRC, many value-minded enthusiast were lamenting the fact that 57xx series since its launch and prior to GTS450 were overpriced for their given performance when compared to their 48xx predecessors.

            • MadManOriginal
            • 10 years ago

            That is true. Aside from the high-end the whole crop of cards over the last year haven’t increased price/performance or absolute performance versus the cards they were replacing in the lineups based upon street prices at launch. Part of that is thanks to the economy tanking pretty hard and all kinds of blowout deals on the old stock but I have to think that there is just something about the designs that contributes as well.

            • Krogoth
            • 10 years ago

            Laws of diminishing returns and physics my friend.

            GPUs are running into their own sets of issues where the good, old die shrinkage routine isn’t going to cut it.

            • MadManOriginal
            • 10 years ago

            I suppose it depends upon what you look for as ‘returns’ aside from performance alone. There does seem to be a DX11-compatible ‘performance tax’ as well just because of whatever additional silicon is required for it. (Fermi’s, and especially high-end Fermi’s, GPGPU ambition additions aside.) Just look at the HD5700s versus previous gen DX10.1 cards for a possible example.

    • FuturePastNow
    • 10 years ago

    So it’s no faster (and sometimes slower) than the ancient GTS 250, which now goes for $99. Boring.

    • Goty
    • 10 years ago

    Is anyone else completely dumbfounded that NVIDIA /[

      • flip-mode
      • 10 years ago

      [raises hand]

      • Krogoth
      • 10 years ago

      Because it costs more to produce and delivers very little gain over 480 version.

      Nvidia is waiting for another die shrink and architecture refresh to make another attempt.

        • flip-mode
        • 10 years ago

        C’mon son, he’s not just talking just about 480 i.e. GF100, but also GF104 and GF106, which are not fully enabled chips either.

          • Krogoth
          • 10 years ago

          Again, likely for the same reasons.

          The bottom line, TSMC’s 40nm process is very rough.

          AMD had to cutback the original “Cypress” and “Juniper” designs in order to make it more workable with TSMC 40nm process. This wasn’t the big as a deal, since they were the first guys on the market and the “cut-down” versions still delivered sufficient performance margins over their predecessors in most cases.

            • flip-mode
            • 10 years ago

            AMD’s Cypress and Juniper are “cut-downs” of what was originally intended? I’ve never heard that. I challenge you to provide a link.

            As for Nvidia’s chips – the GF104 and GF106 are much smaller than the GF100 and are similar to or smaller than the size of Cypress. Furthermore, TSMC has been working on 40nm for a while now and their process has to have improved some by now. In sum, I think Nvidia is up to more than just yield management – that may be part of the story but I don’t think it is the whole story. I’d bet Nvidia is starting to think about how to respond to Radeon 6K. Wouldn’t it be convenient to release fully enabled GF104 and GF106 as Geforce GTX 5s? Quite convenient.

            • Krogoth
            • 10 years ago
            • Goty
            • 10 years ago

            Now show us where in that article it is stated that the design is cut down, please. ATI didn’t cut down the design, they modified it to improve yields. There was no change in the number or amount of any functional unit.

            Now, even if ATI /[

            • flip-mode
            • 10 years ago

            He is talking out of his arse, as usual. I read the whole article – which essentially dealt with Cypress only, so it can’t be applied to Juniper – and there is no mention of Cypress being “cut down” or diminished in any unexpected way due to TSMC’s 40nm process.

            • Goty
            • 10 years ago

            Oh, I know; I just wanted to see his response.

            • Krogoth
            • 10 years ago

            Geez, you people are incredibly dense.

            AMD still did cut-downs. The only difference is that they did it before going full production with the design. The engineers at the time were still in the design phases of RV870. They got word on the yielding issues with TSMC 40nm via RV780 a.k.a 4770. They were allowed to make the call a make sight redesign to make it more suitable for TSMC 40nm process. The redesign pretty much was a cut-down on certain features they were deemed less important. The final form was still faster on paper then its predecessors.

            Nvidia did the opposite. They went ahead full with GF100 design and hope that it work out with 40nm process. This ends up not being the case. After a number of retapes, it becomes clear that a full GF100 wasn’t economically feasible (That’s why 512 core GF100 are practically non-existent). They were forced to make cut-downs on the production level.

            • derFunkenstein
            • 10 years ago

            You’re talking about designs that are trimmed down. This discussion has been about chips with features disabled.

            • Goty
            • 10 years ago

            ATI’s departure from making monolithic GPUs was in place long before the issues with TSMC’s 40G process cropped up. Can you give me specific features that were dropped because of process issues? I’ll need sources, mind you.

            • SiliconSlick
            • 10 years ago

            Goty ” ATI’s departure from making monolithic GPUs was in place long before the issues with TSMC’s 40G process cropped up. Can you give me specific features that were dropped because of process issues? I’ll need sources, mind you. ”

            Well, with all the poser power whining about nvidia – it seems strange the crybabies are still quacking for a fully enabled core that woudl – make the power whine louder !
            The next problem of course is, since ati already stretched their tiny architacture to the limit, they’ve got nothing left in it. It’s all on the table, and if Nvidia decides to unlock all the power in their cores, ati is TOAST.

            Now as for this what did ati drop thing… Look – ati dropped plenty because they don’t have cuda, they crushed Eran’s PhysX wrapper from NGOHQ, they have failed on OpenCL implementation with a cracked sdk es2.0 finally appearing years late… no ambient occlusion included with older series cards from a driver update (ati dumb cores can’t do it ) that nviida gave to the masses.
            Water tesselation is behind, bokeh filtering doesn’t work on ati, AA is crippled and disabled in far too many games for ati, and their drivers, especially 10.5, 10.6, 10.7, 10.8, and now even 10.9 have severe problems for 50%+ of the ati fanboy crowds..
            What did they “leave out or cut” ? WHO CARES ! LOOK AT WHAT THE RESULT IS….
            I guess if you just play a couple fps games only, get lucky and don’t have one of the many frequent and multiple ati card issues that are so endless it would take 2 full single spaced pages to just list the commonly known ones, have a burning hatred in your heart and deranged mind for nviidia because “they are arrogant and egotisitical” (in other words irritatingly confident and on top of it all, including drivers and releases), why then go ahead and save yer two cents on an ati card…
            If not, and that list is no where near complete, do something sane for once in your life and buy an Nvidia card. True, you can’t scream like a bleeding feminized vicitim forever, whine about the evil corporation and their “unfair business practices”, screamthey harm the children of the world, whine they cost more and you have to deliver 2 more papers on the paper route this week to make up for it, wail you want to teach them how to run a company (when they are profitable and ati isn’t), moan they hold back huamities progress, shriek folding at home is nothing, berate PhysX as a worthless fad while screaming dx11 is everything the world ever wanted but keeping it turned off with your cheapo ati card 100% of the time… you know…
            Yeah, I guess the whining, trolling, nvidia hating, spewing, psycho, raging red rooster riotous retarded wretching misconduct is worth the ten thousand driver crashes a year….

            • flip-mode
            • 10 years ago

            You are off topic. The topic here is that Krogoth makes sh~t up on a regular basis.

            • Waco
            • 10 years ago

            There are two possibilities:

            1. We are all incredibly dense and your level of comprehension is well above ours. Unlikely.

            2. You’re talking out your ass and backed yourself into a corner.

            I’ll go with number two. 🙂

        • w00tstock
        • 10 years ago

        Umm you do know it would cost exactly the same right? each and every gf100 chip costs exactly the same amount.

          • flip-mode
          • 10 years ago

          LOL! Hiding in plain sight it was.

          • Krogoth
          • 10 years ago

          No, it costs more to produce a fully working GF1xx chip because a smaller portion of the production yield is suitable. The larger portions have defects, fail to reach clock speed goals, or their thermal ceiling is too high. This is where Nvidia is forced to cut them down.

          Nvidia is just following a common practice in the production side of the semiconductor industry.

          The lack of fully working chips and abundance of “crippled” chips clearly indicate one thing. The GF1xx family are having a major yielding issue. Maybe, the early rumors of 1.7% yields weren’t just some wild exaggerations.

            • Meadows
            • 10 years ago

            No. They cost the same no matter how they perform. A pristine, absolutely faultless GF100 costs the same as a chip that doesn’t function whatsoever.

            Where the difference lies, is in /[

    • flip-mode
    • 10 years ago

    Wow, pretty underwhelming. Nvidia takes a full year to produce a challenger to Juniper, and when it finally does, the company actually disables part of the core in order to… not provide better performance than the competition; then the cards get priced so as not to… provide better value.

    OK, so the GTS 450 is a decent enough alternative to the 5770 if you want to stick with Nvidia and DX11 in that price range, but in terms of “make-up sex” with the faithful for making them wait a year for a product, Nvidia earns a giant FAIL.

    And there is no way it makes any sense not to purchase the 460 instead. Seriously, the price difference is less than the cost of a new game. Wow.

      • Voldenuit
      • 10 years ago

      l[

      • derFunkenstein
      • 10 years ago

      edit: I made a factually-incorrect statement. I withdraw my comment entirely.

        • flip-mode
        • 10 years ago

        Before I even get to read it? Bummer. PM it to me?

          • derFunkenstein
          • 10 years ago

          No, I said “nobody but TR was saying it was crippled” but then I re-read Anand’s review and found I was wrong. Tom’s actually says that it’s not crippled, but that’s Tom’s.

            • flip-mode
            • 10 years ago

            Ah, cool. Thanks for sharing.

    • derFunkenstein
    • 10 years ago

    Given how awesome the release of the GTX 460 was, I expected the GTS 450 to…well, not necessarily blow the doors off the 5770, but at least consistently beat it by 10-15%. This is a pretty disappointing release. The price drop on the 768MB 460 is pretty sweet though.

    • bimmerlovere39
    • 10 years ago

    Am I the only one who was actually most impressed by the performance of Sapphire’s cooler?

    That said, I welcome the GTS 450… hopefully it’ll help pull down pricing on the 5770s.

    • Krogoth
    • 10 years ago

    GTS 450 = Nvidia version of 5770.

    Enough said.

    • sweatshopking
    • 10 years ago

    “About NVIDIA
    NVIDIA (NASDAQ: NVDA) awakened the world to the power of computer graphics when it invented the GPU in 1999. Since then, it has consistently set new standards in visual computing with breathtaking, interactive graphics available on devices ranging from tablets and portable media players to notebooks and workstations. NVIDIA’s expertise in programmable GPUs has led to breakthroughs in parallel processing which make supercomputing inexpensive and widely accessible. The company holds more than 1,100 U.S. patents, including ones covering designs and insights which are fundamental to modern computing. ”

    and don’t forget it B-words! INVENTED THE GPU.

      • kvndoom
      • 10 years ago

      Al Gore would be proud.

        • ClickClick5
        • 10 years ago

        They seem to have mildly forgotten that….

        • pot
        • 10 years ago

        Except Al Gore never claimed to have invented the internet. That false meme needs to die.

        http://www.snopes.com/quotes/internet.asp

          • indeego
          • 10 years ago

          Not to mention that if any politician could lay claim to it, he’d be up there. Hell, Apple owes /[

    • Fighterpilot
    • 10 years ago

    Yeah, but 2 x HD 5750 in CrossFireX use less power under load than one GTS 450.
    Fermi is still a power hog.

    • Usacomp2k3
    • 10 years ago

    The SLI performance is the highlight here, IMHO.

      • Krogoth
      • 10 years ago

      For the same cost, you can get a 460 1GB or even a 5850, which would outperform it with lower power consumption and no long-term worries over driver compatibility and “micro-stuttering”.

    • can-a-tuna
    • 10 years ago

    So, are you using an overclocked press sample? Nvidia has been known to pull that stuff.

      • Voldenuit
      • 10 years ago

      TR is testing the 450 at both stock speeds and ASUS’ TOP factory overclock.

      The fact is that it’s easier to find a factory-overclocked Nvidia card than a factory-overclocked AMD one, across the entire product range.

      Guru3D did a pretty exhaustive 10-card roundup, including noise and power consumption for the overclocked cards, if you’re after more information.

        • JustAnEngineer
        • 10 years ago

        Hooray for TR testing at the stock clock speed!

      • derFunkenstein
      • 10 years ago

      hooray for not reading the article!

    • Voldenuit
    • 10 years ago

    Thanks for the timely review, Scott.

    If I have a niggle, it would be that very few people will be looking to pair a $130 card with a $999 Extreme Edition 3.2 GHz CPU.

    I understand that using the fastest CPU available is intended to remove any CPU bottleneck from the testing, and that everybody else does this too. It makes for comparable figures between reviews, but it may inflate the numbers compared to real-world usage.

    I know TR can’t possibly test every hardware configuration under the sun, but I’d love to see a separate page (or two) for midrange cards tested on, say, TR’s Econobox and Sweeter Spot builds.

    Since the guides constantly change, it may not be useful for historical comparison, but would still give a very good snapshot of the state of the industry at the time.

      • esterhasz
      • 10 years ago

      I wholeheartedly agree. It wouldn’t have to be the full battery of tests, but three games at two resolutions with a lower-end and a mid-range CPU would give people an idea of what they can expect from a graphics upgrade…

      • wagsbags
      • 10 years ago

      I’ve mentioned the need for this before as well. Tech articles usually answer the question “I’m upgrading, now what should I get?” but never “what performance can I expect from an upgrade?”

      • OneArmedScissor
      • 10 years ago

      It doesn’t really make much difference. A cheaper, overclocked CPU would be even faster.

      I would be more concerned with the specific type of CPU. The Bloomfield i7s are probably pretty prevalent among people visiting these sites, but they’re not necessarily the parts that get the least “in the way” of a graphics card. It would be nice to see a Lynnfield i7 used, but set to the same clock speeds as the EE chip, including the uncore.

      • derFunkenstein
      • 10 years ago

      At the resolutions TR runs its tests, you’re not going to see much variation between these results and your own results on an Athlon II X4. However, it’s clearly in everyone’s best interest to show which card is actually the fastest, relatively speaking, by keeping the CPU fast enough that it isn’t a bottleneck. Complaining about this is dumb.

        • willmore
        • 10 years ago

        It’s not quite that simple. It’s already been shown that Intel drivers will, in certain circumstances, move work from the GPU to the CPU if there is enough spare CPU to make it worthwhile. So the speed of the CPU *could* be a factor in the card’s performance, and not in the negative sense you imply; it could have a positive effect.

        Beyond the IGP class of cards, though, that’s pretty unlikely.

        That said, Apple did just release a video driver update that fixed some cases where CPU speed was holding back the GPU, and that was *not* on the low-end parts but on the high-end ones.

        It would be nice to see a few different CPUs with the same graphics cards to see if there is any differential due to the CPU.

    • MadManOriginal
    • 10 years ago

    q[

    • Kurotetsu
    • 10 years ago

    Ignore

    • Kurotetsu
    • 10 years ago

    The “GTS 460 768MB” in the bar graphs should probably be “GTX 460 768MB”?

    • CampinCarl
    • 10 years ago

    I’m really hoping for the Radeon HD6xxx series to come out before Christmas…my 4870 has treated me well so far, but I’m starting to get that upgrade itch. Probably can only manage to convince myself it’s okay to spend the money around that time.

    • Goty
    • 10 years ago

    10 MHz on the Sapphire 5750, eh? Careful, I think that may be pushing the clock speed a bit too far! The premiums for these shoddy overclocks on 5750s are absolutely ridiculous. For less money, you can get a 5770 with an extra SIMD enabled and another 150 MHz on the core clock compared to the 5750!

    • jackbomb
    • 10 years ago

    2nd sentence made me regret zapping a late night Chef Boyardee.

    • Buzzard44
    • 10 years ago

    Eh, a trivial performance increase over the GTS 250 (in many cases nonexistent). Color me unimpressed.

    However, I guess this means my 9800GTX isn’t by any means obsolete.

    At this price point, I don’t think the GTS 450 is either a good deal or a bad one. It’s just another point on the price/performance line.

      • Kurotetsu
      • 10 years ago

      I have a 9800GTX too, but I’m still kind of surprised that the GTS 450 didn’t do much better than it. You’d think a more advanced architecture and more stream processors would beat out a card that’s two to three years old.

        • willmore
        • 10 years ago

        I’m in the same boat as you guys. Looks like my 9800GTX+ will serve me just as well as a 450 would, sans the extra $90 I’d have to pay. Ah, the joys of clearing out “old” parts in favor of “new” (relabeled) ones. $49 for a 9800GTX+? Yes, please. Thank you, PNY.

        Maybe a “full” GF106 will look better? 50% more memory bandwidth and ROP throughput can’t hurt. I wonder how much of this chip’s lower power consumption comes from the shaders sitting idle, waiting on memory.

          • Ushio01
          • 10 years ago

          This is why I wish I had bought an 8800 GTX back in 2006: acceptable frame rates in every game at 1680×1050 for four YEARS.

            • Vasilyfav
            • 10 years ago

            Yeah, except an 8800 GTX would have run you $600 back in 2006 😛

            Nvidia’s top end has always been a terrible value at release.

            I’m glad I waited for cheapo dual cores and the 8800 GT.

            • willmore
            • 10 years ago

            What, $49 isn’t low enough for you for a 9800 GTX+?

            • swaaye
            • 10 years ago

            oops nevermind.

            • Chrispy_
            • 10 years ago

            I picked up a fully-functional G92 (8800GTS-512) in Jan 2008 for £150 ($230) and I have to say that I’m still not tempted to replace it.

            In almost three years, the best the industry can do is give me a 50% performance boost for another outlay of £150. No thanks.

            • swaaye
            • 10 years ago

            You can probably blame the fact that they have to also spend transistor budget on new features.

        • Krogoth
        • 10 years ago

        The GTS 250 and 9800 GTX have far more memory bandwidth, and their shader clusters are beefier on the texturing side (more texturing power per clock).

        The 450 is more efficient with its resources, but that efficiency isn’t going to make up for its deficit in memory bandwidth and stream-processing resources.
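
        For a rough sense of that bandwidth gap, the usual arithmetic is bus width in bytes times effective data rate. A minimal sketch in Python, treating the clocks as approximate reference specs (board partners vary):

        # Peak memory bandwidth = (bus width / 8 bytes) * effective data rate.
        # Clocks below are approximate reference specs, not measured values.
        def peak_bandwidth_gbs(bus_bits: int, effective_mts: float) -> float:
            """Peak bandwidth in GB/s from bus width (bits) and effective MT/s."""
            return bus_bits / 8 * effective_mts / 1000

        gts_250 = peak_bandwidth_gbs(256, 2200)  # 256-bit GDDR3 at ~1100 MHz (2.2 GT/s)
        gts_450 = peak_bandwidth_gbs(128, 3608)  # 128-bit GDDR5 at ~902 MHz (3.6 GT/s)

        print(f"GTS 250: ~{gts_250:.1f} GB/s")   # ~70.4 GB/s
        print(f"GTS 450: ~{gts_450:.1f} GB/s")   # ~57.7 GB/s

        On paper, that works out to roughly 20% more raw bandwidth for the older card.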

          • willmore
          • 10 years ago

          I guess there is still hope that enabling the extra memory partition will help pull this chip ahead of the 250/9800s.

          I wonder how fast these guys fold?

      • Triple Zero
      • 10 years ago

      Agreed. I have a GTS 250 and based on what I’ve read in this review there is no reason for me to consider upgrading to a GTS 450. The only reviewed game I own is Borderlands and the GTS 250 apparently performs better than the stock GTS 450 in that game. I guess I’ll wait for NVIDIA to release a fully enabled GF106 (GTS 455?) with a 192-bit memory interface instead of the 128-bit the GTS 450 uses and see how that one performs.
