
Nvidia’s GeForce GTS 450 graphics processor

Scott Wasson

Generally, we’d open a review like this one by reminding you of the recent history of video cards and graphics chips like this one, setting the proper context for everything that comes next. Today, though, I have a plane to catch, and I anticipate writing a lot of the commentary on the following pages from an oddly crouched position in a coach-class seat while stale air tainted with the faint smell of farts blows in my face.

The source of our rush is a long, intense week spent with Nvidia’s new graphics card, the GeForce GTS 450. Priced at around $130, this card is Nvidia’s answer to the Radeon HD 5700 series—if you can call it an “answer” after the competition has been on the market for a full year. Regardless of the timing, though, the GTS 450—and the GPU behind it—is a potentially attractive proposition for those who lack the resolve and (display) resolution to spend more than the cost of three big-name games on the hardware required to play them. Keep reading for our detailed testing and incredibly rushed text on the GTS 450.

Yep, it’s yet another Fermi derivative
The graphics chip behind the GeForce GTS 450 is the third variety of the DirectX 11-class Fermi architecture that Nvidia has brought to the desktop market. As you may know, many chip designs these days are essentially modular, and can be scaled up and down in size and features to meet different goals.

The chip that powers the GTS 450, known as the GF106, is perhaps best thought of as roughly half of the GF104 GPU used in the GeForce GTX 460. Where the GF104 has two GPCs, or graphics processing clusters, the GF106 has just one, so it has half the triangle setup rate of the GF104—and just one fourth that of the big daddy, the GF100.


A block diagram of the GF106 GPU. Source: Nvidia.

Inside of that GPC are four shader multiprocessor blocks, or SMs, arranged essentially as they are in the GF104. That means each SM has 48 stream processors (Nvidia likes to call them “CUDA cores”; we do not) and a texture block capable of sampling and filtering eight texels per clock. In total, then, the GF106 has 192 SPs and can filter 32 texels per clock. For compatibility reasons, this GPU has the ability to process double-precision floating-point math, but only at one-twelfth the rate it can handle single-precision math, again like the GF104. (The GF100 is much more formidable, but it serves different markets.)
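For the arithmetic-inclined, here's a quick sketch of how those per-SM figures roll up into chip-wide totals. Nothing here is exotic; the unit counts come straight from the specs above, and the script just makes the multiplication explicit.

```python
# Chip-wide totals for the GF106, rolled up from its per-SM resources.
# Unit counts are from Nvidia's published specs.

SM_COUNT = 4        # shader multiprocessors in the GF106's lone GPC
SPS_PER_SM = 48     # stream processors per SM, as in the GF104
TEXELS_PER_SM = 8   # texels sampled and filtered per clock, per SM

total_sps = SM_COUNT * SPS_PER_SM            # 192 stream processors
texels_per_clock = SM_COUNT * TEXELS_PER_SM  # 32 texels filtered per clock

# Double-precision math runs at one-twelfth the single-precision rate.
dp_ratio = 1 / 12

print(f"{total_sps} SPs, {texels_per_clock} texels/clock, "
      f"DP at {dp_ratio:.3f}x the SP rate")
```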

If you look closely at the diagram above, you'll notice that the GF106 bucks expectations for a mid-range graphics chip in a couple of notable ways. Rather than the expected pair of 64-bit GDDR5 memory interfaces, the GF106 has a trio. Correspondingly, it has three ROP partitions, each capable of outputting eight pixels per clock, rather than the two one might expect. The 50% wider memory interface and the extra ROP partition give the GF106 substantially more potential oomph than competitors like AMD's mid-range Juniper GPU.

          Estimated        Approximate    Fabrication
          transistor       die size       process
          count (millions) (mm²)          node
G92b      754              256            55-nm TSMC
GF100     3000             529*           40-nm TSMC
GF104     1950             331*           40-nm TSMC
GF106     1170             240*           40-nm TSMC
RV770     956              256            55-nm TSMC
Juniper   1040             166            40-nm TSMC
Cypress   2150             334            40-nm TSMC

Of course, the extra power comes at a price, as you can see with a quick glance at the transistor count and die size numbers in the table above. The GF106 is quite a bit larger than Juniper, all told.

Incidentally, since Nvidia doesn’t divulge die sizes, we’ve put asterisks next to some of the figures in the table. We’ve simply gone with the best published numbers we can find for GF100 and GF104, but since it lacks a metal cap, we were able to measure the GF106 at roughly 15 mm by 16 mm, or 240 mm². We may be off by less than a millimeter in each dimension with our quick sizing via wooden ruler, but we’re pretty close.

The larger chip size likely translates into higher manufacturing costs for Nvidia, but it doesn’t necessarily translate into higher prices for folks buying graphics cards based on it. We’re just showing you this information for the sake of chip-geekery. Following further in that vein, we have some similar-sized pictures of the two chips below, shown next to a U.S. quarter to celebrate American hegemony and also to provide a size reference.


GF106

Juniper

The intriguing thing about the GF106 is that, like all of the Fermi-derived graphics processors to date, we’ve not yet seen a product based on a fully enabled version of the chip. The GTS 450, as we’re about to find out, only uses a portion of the GPU’s total power. We’re dying to know whether Nvidia has been producing gimpy implementations of its DX11 graphics chips out of necessity (due to manufacturing and yield issues), for strategic reasons (keeping a little juice in reserve), or some combination of the two (and what combination, really, which is the key question). We don’t know yet, but we do get to use a lot of parentheses in the interim, which is its own reward.

Introducing the GTS 450
For the GTS 450, Nvidia has elected to disable the GF106’s third memory controller and ROP partition, so the card effectively has a 128-bit path to memory and 16 pixels per clock of ROP throughput. That allows the GTS 450 to meet the Juniper-based Radeon HD 5700 series head-on with very similar specifications.

Here’s a look at the GeForce GTS 450 reference design from Nvidia. Retail cards should be based on it, but will differ to one degree or another. The GPU on this card is clocked at 783MHz (its double-pumped SMs thus run at 1566MHz), with a memory clock of 900MHz—or 3.6 Gbps, as is the fashion for reporting quad-data-rate GDDR5 speeds. Onboard are eight memory chips—four on the front and four on the back—totaling 1GB of capacity. You’ll also notice two empty pads on the top side of the board, visible above. Two more empty pads are on the back, raising the likely prospect of a full-on GF106 card based on this same PCB design.
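If you're curious how those clocks become the peak rates we'll tabulate in a few pages, the arithmetic is simple enough to sketch out. The two-flops-per-SP figure assumes one fused multiply-add per clock, the usual convention for quoting peak shader math.

```python
# Back-of-envelope rates for the reference GTS 450, derived from its
# clocks. Assumes one fused multiply-add (2 flops) per SP per clock.

core_ghz = 0.783        # GPU core clock; the ROPs run here
shader_ghz = 1.566      # double-pumped SM domain
mem_mhz = 900           # GDDR5 command clock

data_rate_gbps = mem_mhz * 4 / 1000        # quad data rate: 3.6 Gbps/pin
bandwidth_gbs = data_rate_gbps * 128 / 8   # 128-bit bus -> ~57.6 GB/s
gflops = 192 * 2 * shader_ghz              # ~601 GFLOPS peak
pixel_fill = 16 * core_ghz                 # 16 pixels/clock -> ~12.5 Gpix/s

print(f"{data_rate_gbps:.1f} Gbps/pin, {bandwidth_gbs:.1f} GB/s, "
      f"{gflops:.0f} GFLOPS, {pixel_fill:.1f} Gpixels/s")
```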

The reference GTS 450 has Nvidia’s now-standard complement of twin dual-link DVI ports and a mini-HDMI output. Board makers may deviate from this formula, as we’ll see. All GTS 450 cards should only require a single, six-pin auxiliary power input, though, since the card’s max power rating, or TDP, is 106W.

GTS 450 cards running at stock clock frequencies are already selling online for Nvidia’s suggested price of $130. That squarely positions the GTS 450 against the Radeon HD 5750, which has dipped as low as $120 this past weekend in order to welcome the GTS 450.

For just ten bucks more, or $140, you can grab the Asus ENGTS450 TOP card pictured above, with considerably higher clock rates: a 925MHz GPU core, 1850MHz shaders, and 1GHz/4 Gbps memory. Nvidia often leaves board makers some leeway for higher clock speeds at higher prices, but this is a bit of a funny move, because the GF106 apparently has beaucoup headroom—and at $140, this version of the GTS 450 is pretty much a direct competitor for the Radeon HD 5770. This Sapphire 5770, for instance, sells at that same price.

As is obvious from the picture, the Asus TOP card has a custom cooler. What may not be so obvious, given the shrouding on both, is that Asus’ cooler is quite a bit beefier than the stock one, with more metal and a larger heatsink surface. Asus calls this its Direct CU cooler, due to the fact that the copper heatpipes (beneath the chrome plating) make direct contact with the surface of the GPU. Asus’ other enhancements over the reference board include a custom VRM design with a higher phase count, the ability to tweak the GPU voltage for overclocking via its Smart Doctor software, and a metal bracket across the top of the board to provide additional sturdiness. Oh, and Asus includes a full-size HDMI port, a VGA connector, and just one DVI output.

We have little patience for debating over five or ten bucks in an age when top-flight games run $60—heck, we’re lousy at reviewing video cards in this category, since we’d nearly always step up a notch or two—but if it were up to us to choose, we’d pick the $140 Asus TOP over the $130 stock card ten times out of ten. If that choice is too daunting for you, we hear MSI is splitting the difference by offering a GTS 450 at 850MHz/4 Gbps for $135. That should rouse you out of your stultifying indecision.

We took some flak for not including higher-clocked retail versions of competing Radeon cards in our recent SLI vs. CrossFire roundup, so when we set out to do this review—before Nvidia revealed the exact pricing of the GTS 450 to us—we went looking for a hot-clocked Radeon HD 5750 to serve as a comparison. The best we could find selling at Newegg was Sapphire’s Vapor-X variant, pictured above, which Sapphire kindly agreed to send us. This baby is clocked at 710MHz/1160MHz, up 10MHz from a stock 5750. The custom Vapor-X cooler on this card is pretty nice, but unfortunately, this product is currently selling for 150 bucks at Newegg. A mail-in rebate will knock that down to $135, net, but we think this thing’s asking price will have to drop in response to movement on other 5750 and 5770 cards, as well as the GTS 450’s introduction. We’ve included full results for the Vapor-X 5750 on the following pages, so you can see how the tweaked clocks and fancy cooler change things.

Some driver changes from Nvidia
Alongside the release of the GTS 450, Nvidia today is introducing a new generation of its driver software, release 260, that will bring some notable improvements for owners of various GeForce cards. The firm claims performance boosts for all GTS/GTX 400-series graphics cards in certain games, ranging from 7% to 29%. Often, such claims for new drivers are limited to very specific scenarios—as is the 29% number in this case, which applies to a certain game at certain settings—but we can’t deny that Nvidia has made tremendous progress in tuning the performance of Fermi-based GPUs since their introduction. These drivers should be another step forward.

Beyond that, the release 260 drivers enable bitstream audio output over HDMI, with support for 24-bit audio at 96 and 192kHz from compatible Blu-ray movies on GTX 400-series GPUs, as well as the GT 240/220/210. Both the Dolby TrueHD and DTS-HD Master Audio formats are supported.

Release 260 also brings a new user interface for the setup of multi-display configurations, and happily, the software for the funny-glasses-based GeForce 3D Vision is now packaged with the standard video driver.

All of these changes come in a new driver package, with an installer script that offers more control over which components are installed. In my experience, this installer is quite a bit quicker than the old one, which sometimes paused for minutes at a stretch for no apparent reason. Among the new choices in this script is a clean install option that purportedly “completely wipes out” older video drivers before installing new ones. That may help with troubleshooting—or simply satisfying those OCD urges—in some cases.

Our testing methods
Many of our performance tests are scripted and repeatable, but for a couple of games, Battlefield: Bad Company 2 and Metro 2033, we used the Fraps utility to record frame rates while playing a 60-second sequence from the game. Although capturing frame rates while playing isn’t precisely repeatable, we tried to make each run as similar as possible to all of the others. We raised our sample size, testing each Fraps sequence five times per video card, in order to counteract any variability. We’ve included second-by-second frame rate results from Fraps for those games, and in that case, you’re seeing the results from a single, representative pass through the test sequence.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and we’ve reported the median result.
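For clarity's sake, here's a minimal sketch of that reduction; the frame rates in it are placeholders, not measured results.

```python
# How repeated benchmark runs reduce to one reported number: take the
# median. The frame rates below are placeholders, not measured data.

from statistics import median

runs_fps = [41.2, 39.8, 40.5, 42.0, 40.1]   # e.g., five passes of one test
print(f"Reported: {median(runs_fps):.1f} FPS (median of {len(runs_fps)} runs)")
```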

Our test systems were configured like so:

Processor           Core i7-965 Extreme 3.2GHz
Motherboard         Gigabyte EX58-UD5
North bridge        X58 IOH
South bridge        ICH10R
Memory size         12GB (6 DIMMs)
Memory type         Corsair Dominator CMD12GX3M6A1600C8 DDR3 SDRAM at 1600MHz
Memory timings      8-8-8-24 2T
Chipset drivers     INF update 9.1.1.1025, Rapid Storage Technology 9.6.0.1014
Audio               Integrated ICH10R/ALC889A with Realtek R2.51 drivers
Graphics            Radeon HD 5750 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                    Sapphire Radeon HD 5750 1GB Vapor-X with Catalyst 10.8 drivers & 10.8a application profiles
                    Sapphire Radeon HD 5750 1GB Vapor-X + Radeon HD 5750 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                    Gigabyte Radeon HD 5770 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                    XFX Radeon HD 5830 1GB with Catalyst 10.8 drivers & 10.8a application profiles
                    EVGA GeForce GTS 250 Superclocked 1GB with ForceWare 260.52 drivers
                    GeForce GTS 450 1GB with ForceWare 260.52 drivers
                    Asus ENGTS450 TOP 1GB with ForceWare 260.52 drivers
                    Dual GeForce GTS 450 1GB with ForceWare 260.52 drivers
                    Gigabyte GeForce GTX 460 OC 768MB with ForceWare 260.52 drivers
Hard drive          WD RE3 WD1002FBYS 1TB SATA
Power supply        PC Power & Cooling Silencer 750W
OS                  Windows 7 Ultimate x64 Edition
DirectX runtime     June 2010 update

Thanks to Intel, Corsair, Western Digital, Gigabyte, and PC Power & Cooling for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, XFX, Asus, Sapphire, Zotac, and Gigabyte supplied the graphics cards for testing, as well.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Running the numbers

                                    Peak pixel    Peak bilinear     Peak        Peak         Peak
                                    fill rate     INT8 filtering    memory      shader       rasterization
                                    (Gpixels/s)   rate* (Gtex/s)    bandwidth   arithmetic   rate
                                                                    (GB/s)      (GFLOPS)     (Mtris/s)
GeForce GTS 250                     11.8          47.2              70.4        470          738
EVGA GeForce GTS 250 Superclocked   12.3          49.3              71.9        484          770
GeForce GTS 450                     12.5          25.1              57.7        601          783
Asus ENGTS450 TOP                   14.8          29.6              64.0        710          925
GeForce GTX 460 768MB               16.2          37.8              86.4        907          1350
Gigabyte GeForce GTX 460 768MB OC   17.2          40.0              86.4        961          1430
GeForce GTX 460 1GB                 21.6          37.8              115.2       907          1350
GeForce GTX 465                     19.4          26.7              102.6       855          1821
GeForce GTX 470                     24.3          34.0              133.9       1089         2428
GeForce GTX 480                     33.6          42.0              177.4       1345         2800
Radeon HD 5750                      11.2          25.2              73.6        1008         700
Sapphire Radeon HD 5750 Vapor-X     11.4          25.6              74.2        1022         710
Radeon HD 5770                      13.6          34.0              76.8        1360         850
Radeon HD 5830                      12.8          44.8              128.0       1792         800
Radeon HD 5850                      23.2          52.2              128.0       2088         725
Radeon HD 5870                      27.2          68.0              153.6       2720         850
Radeon HD 5970                      46.4          116.0             256.0       4640         1450

*FP16 filtering happens at half the INT8 rate.

The table above shows theoretical peak throughput rates for these video cards and some of their bigger siblings in some key categories. As always, we’ll remind you that these are just theoretical numbers; delivered performance will almost always be lower and will depend on the GPU architecture.
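If you'd like to check our math, nearly every number in that table follows from unit counts and clocks. Here's a short sketch that reproduces the GTS 450 and Radeon HD 5770 rows; the two-flops-per-ALU convention (one fused multiply-add per clock) applies to both camps, and the tiny bandwidth mismatch on the GTS 450 (57.6 vs. 57.7 GB/s) is just rounding in the quoted memory clock.

```python
# Reproduces two rows of the table above from unit counts and clocks.
# Convention: 2 flops (one fused multiply-add) per ALU per clock.

def peaks(px_per_clk, tex_per_clk, tris_per_clk, core_ghz,
          shader_ghz, alus, bus_bits, mem_gbps):
    return {
        "pixel fill (Gpix/s)": px_per_clk * core_ghz,
        "INT8 filtering (Gtex/s)": tex_per_clk * core_ghz,
        "bandwidth (GB/s)": mem_gbps * bus_bits / 8,
        "shader (GFLOPS)": alus * 2 * shader_ghz,
        "rasterization (Mtris/s)": tris_per_clk * core_ghz * 1000,
    }

# GeForce GTS 450: 16 px/clk, 32 tex/clk, 1 tri/clk, 783MHz core,
# 1566MHz shaders, 192 SPs, 128-bit bus at 3.6 Gbps
print(peaks(16, 32, 1, 0.783, 1.566, 192, 128, 3.6))

# Radeon HD 5770: 16 px/clk, 40 tex/clk, 1 tri/clk, a single 850MHz
# clock domain, 800 ALUs, 128-bit bus at 4.8 Gbps
print(peaks(16, 40, 1, 0.850, 0.850, 800, 128, 4.8))
```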

You’ll notice that the GTS 450 cards don’t lead the competing Radeons in any of the heavy-hitter categories like texture filtering rate, memory bandwidth, or shader arithmetic. The gap in peak shader arithmetic rate is especially daunting. That’s par for the course in this generation of GPUs, and Fermi-based chips have shown an ability to perform relatively well in the real world, regardless. We can measure a couple of these capabilities to get a sense why that is.

We’ve grown increasingly dissatisfied with the texture fill rate tool in 3DMark Vantage, so we’ve reached back into the cupboard and pulled out an old favorite, D3D RightMark, to test texture filtering performance.

Unlike 3DMark, this tool lets us test a range of filtering types, not just texture sampling rates. Unfortunately, D3D RightMark won’t test FP16 texture formats, but integer texture formats are still pretty widely used in games. I’ve plotted a range of results below, and to make things more readable, I’ve broken out a couple of filtering types into bar charts, as well. Since this test isn’t compatible with SLI, we’ve omitted those results. We’ve also left the CrossFire config out of the line plot for the sake of readability.

The stock GTS 450 trails the Radeon HD 5770 with only bilinear filtering applied, but the GTS 450 gains strength as higher-quality filtering kicks in. At 16X aniso, the stock GTS 450 delivers more filtered texels than the 5750, and the higher-clock GTS 450 TOP nearly matches the Radeon HD 5770.

As I’ve noted before, the Unigine Heaven demo’s “extreme” tessellation mode isn’t a very smart use of DirectX 11 tessellation, with too many triangles and little corresponding improvement in image quality. I think that makes it a poor representation of graphics workloads in future games and thus a poor benchmark of overall GPU performance.

Pushing through all of those polygons does have its uses, though. This demo should help us tease out the differences in triangle throughput between these GPUs. To do so, we’ve tested at the relatively low resolution of 1680×1050, with 4X anisotropic filtering and no antialiasing. Shaders were set to “high” and tessellation to “extreme.”

Fermi is the first GPU architecture to enable parallel processing of fundamental geometry in the graphics pipeline, which should help with handling high levels of tessellation in DirectX 11 games, but the GF106 chip in the GTS 450 has only a single rasterization engine, just like any 5000-series Radeon. As a result, the GTS 450 cards perform about like their direct Radeon competition. The GeForce GTX 460, with dual rasterizers, performs in league with the SLI and CrossFireX dual-GPU configs.

Starcraft II
We’ll start with a little game you may have heard of called Starcraft II. We tested SC2 by playing back a quarter-final match from a recent tournament using the game’s replay feature. This particular match was about 10 minutes in duration, and we captured frame rates over that time using the Fraps utility. Thanks to the relatively long time window involved, we decided not to repeat this test multiple times, like we usually do when testing games with Fraps.

After capturing those results, we decided to concentrate our attention on the test data from the latter portion of the match, when the two sides had already completed their initial unit build-outs and were engaging in battle. This part of the match is much more graphically intensive and gives us a better sense of performance when it matters.
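In practice, that just means trimming the per-second Fraps data to a window before averaging. Here's a minimal sketch with placeholder numbers; our actual session and window lengths differ.

```python
# Trim per-second Fraps data to the graphically intense portion of a
# session before averaging. The data and window below are placeholders.

def window_avg_fps(per_second_fps, start_s, end_s):
    """Average frame rate over [start_s, end_s) of a recorded session."""
    window = per_second_fps[start_s:end_s]
    return sum(window) / len(window)

session = [60.0] * 600                     # ~10 minutes of 1Hz samples
print(window_avg_fps(session, 420, 600))   # e.g., the final three minutes
```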

We tested at the settings shown above, with the notable exception that we also enabled 4X antialiasing via these cards’ respective driver control panels. SC2 doesn’t support AA natively, but we think this class of card can produce playable frame rates with AA enabled—and the game looks better that way.

With only one or two frames per second separating the 5750 from the GTS 450 and the 5770 from the GTS 450 TOP, we’re willing to call this one a wash, more or less. For this class of game, nearly all of these cards are producing acceptable frame rates, with lows just under 30 FPS. This is at a relatively high resolution for a $130 graphics card, too.

The largest differences here are in the higher-end configs. The GeForce GTX 460 768MB, whose price Nvidia has just slashed to $170, outperforms the similarly priced Radeon HD 5830. Nvidia has the edge in multi-GPU scaling, too, as the GTS 450 SLI setup nearly doubles the performance of a single card, while 5750 CrossFireX performance is relatively sluggish.

Mafia II
The open-world Mafia II is another new addition to our test suite, and we also tested it with Fraps.

We tested at the settings shown above, and only after we’d gone down that path did we learn that turning on this game’s antialiasing option does something unexpected: it enables a 2X supersampled antialiasing mode, apparently. Supersampling touches every single pixel on the screen and thus isn’t very efficient, but we still saw playable enough frame rates at the settings we used. In fact, we need to look into it further, but we think Mafia II may also be using some form of post-processing or custom AA filter to further soften up edges. Whatever it’s doing, though, it seems to work. The game looks pretty darned good to our eyes, with very little in the way of crawling or jaggies on edges.

Although this game includes special, GeForce-only PhysX-enhanced additional smithereens and flying objects, we decided to stick to a direct, head-to-head comparison, so we left those effects disabled.

The Radeons look relatively stronger here, by a few FPS, in each price range. Only in SLI does a GeForce config come out on top.

Aliens vs. Predator
The new AvP game uses several DirectX 11 features to improve image quality and performance, including tessellation, advanced shadow sampling, and DX11-enhanced multisampled anti-aliasing. Naturally, we were pleased when the game’s developers put together an easily scriptable benchmark tool. This benchmark cycles through a range of scenes in the game, including one spot where a horde of tessellated aliens comes crawling down the floor, ceiling, and walls of a corridor.

To keep frame rates playable on these cards, we had to compromise on image quality a little bit, mainly by dropping antialiasing. We also held texture quality at “High” and stuck to 4X anisotropic filtering. We did leave most of the DX11 options enabled, including “High” shadow quality with advanced shadow sampling, ambient occlusion, and tessellation. The use of DX11 effects ruled out the use of older, DX10-class video cards, so we’ve excluded them here.

Once again, the differences are small enough that we can call these results a tie at each price point, but the Radeons do have the slight advantage in each case.

Just Cause 2
I’ve already sunk more hours than I’d care to admit into this open-world adventure, and I feel another bout coming on soon. JC2 has some flashy visuals courtesy of DirectX 10, and the sheer scope of the game world is breathtaking, as are the resulting view distances.

Although JC2 includes a couple of visual effects generated by Nvidia’s CUDA GPU-computing API, we’ve left those disabled for our testing. The CUDA effects are only used sparingly in the game, anyhow, and we’d like to keep things even between the different GPU brands. I do think the water simulation looks gorgeous, but I’m not so impressed by the Bokeh filter used for depth-of-field effects.

We tested performance with JC2's built-in benchmark, using the "Dark Tower" sequence.

Given that, you know, the frame rates are almost identical, we’d call this one yet another tie between the GTS 450s and the Radeon HD 5700 cards. The GTX 460 768MB outduels the Radeon HD 5830 here, though.

DiRT 2: DX9
This excellent racer packs a scriptable performance test. We tested at DiRT 2's "ultra" quality presets in both DirectX 9 and DirectX 11. The big difference between the two is that the DX11 mode includes tessellation on the crowd and water. Otherwise, they're hardly distinguishable.

 

DiRT 2: DX11

Lots of results, but the pattern we’ve seen in prior pages isn’t substantially changed. At the highest resolution in both DX9 and DX11, the GTS 450 cards are bracketed, above and below, by the 5750 and 5770. Overall, the contests are close enough to be considered a tie at each price point—again, with the obvious exception that the GTX 460 768MB is faster than the Radeon HD 5830.

Battlefield: Bad Company 2
BC2 uses DirectX 11, but according to this interview, DX11 is mainly used to speed up soft shadow filtering. The DirectX 10 rendering path produces the same images.

We turned up nearly all of the image quality settings in the game. Our test sessions took place in the first 60 seconds of the “Heart of Darkness” level.

 

Ok, look, you get the idea about the 5750 and GTS 450 being a close match, and the GTS 450 TOP and 5770 also offering extremely similar performance, right? You’re also gathering that the GTX 460 768MB is superior to the Radeon HD 5830? Good. The pattern holds. Let’s move on.

Borderlands
We tested Gearbox’s post-apocalyptic role-playing shooter by using the game’s built-in performance test. We tested with all of the in-game quality options at their max. We didn’t enable antialiasing, because the game’s Unreal Engine doesn’t natively support it.

Here’s one last game where we have a chance to see something different, and we kind of do: the GeForces are relatively stronger in Borderlands, which might make up for some of the times when the Radeons have had a minor advantage in other games, if you’re keeping score very closely at home.

Power consumption
Since we have a number of non-reference GeForce cards among the field, we decided to test them individually against Nvidia’s reference cards in this portion of the review, so we could see how custom coolers and clock speeds affect power draw, noise, and operating temperatures. The results should give us a sense of whether these changes really add value.

We measured total system power consumption at the wall socket using our fancy new Yokogawa WT210 digital power meter. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead 2 at a 1920×1080 resolution with 4X AA and 16X anisotropic filtering. We test power with Left 4 Dead 2 because we’ve found that the Source engine’s fairly simple shaders tend to cause GPUs to draw quite a bit of power, so we think it’s a solidly representative peak gaming workload.

 

If the performance results left you looking for another factor to break the deadlock between the Radeons and GeForces, this might be it. Uh, kinda. The GeForces are more efficient at idle, to the tune of 5-10W at a system level, but they pull more power under load, leading to system-level power use that’s roughly 18-34W higher. Is that enough to matter? Let’s see what it does to noise and heat.

Noise levels
We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire system's noise level was measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card's highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

 

None of these cards are all that loud, and the differences in noise levels at idle are pretty minimal, overall. That’s partly because our move to a newer 7,200-RPM hard drive on our test rig has raised the system’s noise floor somewhat. The Radeons tend to be a little quieter at idle, and the GeForces are a little quieter under load—an interesting example of our noise results running counter to what one would expect from the power draw numbers. That goes to show that a good cooler can overcome a few watts of additional heat to dissipate.

The custom coolers on both the Asus GTS 450 TOP and the Sapphire 5750 Vapor-X fare well here, with the Vapor-X outperforming the stock AMD cooler and the GTS 450 TOP matching Nvidia’s stock cooler despite dissipating substantially more power under load.

GPU temperatures
We used GPU-Z to log temperatures during our load testing. For the multi-GPU options, we’ve reported the temperature from the primary GPU, which is generally the warmest.

Not only are the Asus and Sapphire custom coolers relatively quiet, but they keep the GPUs under them relatively cool, too. Doesn't look to me like you'll pay much of a penalty in terms of GPU temperatures or noise due to the GTS 450's somewhat higher peak power draw.

Conclusions
Our performance results tell a story of remarkable equivalence, overall, between the two versions of the GeForce GTS 450 we tested and the competing Radeons. The Radeons may have a slight advantage in terms of overall performance, mathematically, but as we saw, the real-world difference between the two is often just a few frames per second, all but imperceptible.

Step back for a second, and the other part of the picture you’ll see is that all of these relatively inexpensive video cards offer reasonably decent performance in the latest games at common display resolutions like 1440×900 and 1680×1050—and we generally pushed the envelope on image quality, even venturing into 1920×1080 resolution at times. If you have a monitor with less than two megapixels of resolution, any of these video cards should allow you to play today’s games without too terribly many compromises.

Like we said earlier, we don't really have much interest in debating the finer points of product pricing and value when there's only ten bucks or so between the offerings. At present, street prices on the Radeon HD 5750 have dropped to $120 in certain cases to greet the GTS 450. Whether a gap between these two products will remain in the long run is anyone's guess. We do know that we would unequivocally pay the extra ten bucks for the additional performance you get in stepping up from a stock GTS 450 to the Asus TOP card we tested, or from a Radeon HD 5750 to a 5770. Then again, we'd also recommend stretching whenever possible from the $140 cards up to the $170 GeForce GTX 460 768MB, which was the fastest product in our test by a good margin and, we think, represents the best value, too.

Given that, Nvidia is in a pretty good position, and the addition of the GTS 450 only enhances it. Yet we can't help but notice that it's taken Nvidia a year since the introduction of the Radeon HD 5700 series to produce an essentially equivalent DirectX 11-class product, and the GTS 450 isn't substantially better in any notable way, other than its access to Nvidia's "graphics plus" features like PhysX and 3D Vision. Many of the new additions to the release 260 drivers, such as an installer with control over individual software components, bitstream audio support, and a better UI for multi-monitor setup, are simply Nvidia playing catch-up. Even now, the GTS 450 will only drive two monitors simultaneously, while nearly all Radeon HD 5000-series cards will drive three. We're pleased to see a DX11-capable GeForce in this class of product, but it has indeed been a long time coming.

We expect AMD to unleash the Radeon HD 6000 series before the end of the year, and we’re left wondering whether Nvidia has kept enough potential in reserve in the GF106 and its other chips to allow it to meet that challenge.
