Nvidia’s GeForce GTS 250 graphics card

The history of Nvidia’s G92 graphics processor is a long one, as these things go. The first graphics card based on it was the GeForce 8800 GT, which debuted in October of 2007. The 8800 GT was a stripped-down version of the G92 with a few bits and pieces disabled. The fuller implementation of G92 came in December ’07 in the form of the GeForce 8800 GTS 512MB. This card initiated the G92’s long history of brand confusion by overlapping with existing 320MB and 640MB versions of the GeForce 8800 GTS, which were based on an entirely different chip, the much larger (and older) G80. Those cards had arrived on the scene way back in November of 2006.

As the winter of ’07 began to fade into spring, Nvidia had a change of heart and suddenly started renaming the later members of the GeForce 8 series as “new” 9-series cards. Thus the GeForce 8800 GTS 512 became the 9800 GTX. And thus things remained for nearly ten weeks.

Then, in response to the introduction of strong new competition, Nvidia shipped a new version of the G92 GPU with the same basic architecture but manufactured on a smaller 55nm fabrication process. This chip found its way to market aboard a slightly revised graphics card dubbed the GeForce 9800 GTX+. The base clock speeds on the GTX+ matched those of some “overclocked in the box” GeForce 9800 GTX cards, and the performance of the two was essentially identical, though the GTX+ did reduce power consumption by a handful of watts. Slowly, the GTX+ began replacing the 9800 GTX in the market, as the buying public scratched its collective head over the significance of that plus symbol.

EVGA’s GeForce GTS 250 Superclocked

Which brings us to today and the introduction of yet another graphics card based on the G92 GPU, the GeForce GTS 250. This is probably the card that, by all rights, the 9800 GTX+ should have been, because it consolidates the gains that switching to a 55nm fab process can bring. Although its base clock speeds remain the same as the 9800 GTX+—738MHz for most of the GPU, 1836MHz for the shaders, and 1100MHz (or 2200 MT/s) for the GDDR3 memory—the GeForce GTS 250 is a physically smaller card, at nine inches long rather than 10.5″, and it has but a single six-pin auxiliary power connector onboard.

The reduction in power connectors is made possible by a new board design that cuts power consumption sufficiently to make the second power input superfluous. We should note, though, that Nvidia rates the GTS 250’s max board power at 150W, right at the limit of the PCI Express spec for this power plug configuration.
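
For the curious, both figures are easy to sanity-check with a little arithmetic. Here’s a quick sketch, assuming the G92’s 256-bit memory interface and the usual PCI Express allotments of 75W from the x16 slot plus 75W from a single six-pin plug; those power figures come from the PCIe spec, not from Nvidia’s materials.

```python
# A rough sanity check, not Nvidia's math: the memory bandwidth implied by the
# GTS 250's GDDR3 clock, and the power budget behind the single 6-pin connector.

mem_mhz   = 1100          # GDDR3 clock; 2200 MT/s effective, since DDR moves data twice per clock
bus_bytes = 256 // 8      # assumed 256-bit memory interface -> 32 bytes per transfer
bandwidth_gbs = mem_mhz * 2 * bus_bytes / 1000
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f} GB/s")      # ~70.4 GB/s

slot_w    = 75            # PCIe x16 slot power limit, per the PCIe spec
six_pin_w = 75            # one 6-pin auxiliary connector
print(f"Available board power: {slot_w + six_pin_w} W")        # 150W, matching Nvidia's rating
```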

The GTS 250 is quite a bit shorter than the 9800 GTX

A single power connector will do, thanks

Along with the G92’s umpteenth brand name comes a price cut of sorts: the 512MB version of the GTS 250 will sell for about $130, give or take a penny, well below the price of 9800 GTX+ 512MB cards today. The GTS 250 also offers another possibility in the form of a 1GB variant, which Nvidia and its partners expect to see selling for about $150. That’s quite a nice price in the context of today’s market, where the GTS 250’s most direct competition, the Radeon HD 4850, sells for about $150 in 512MB form. Then again, things change quickly in the world of graphics cards, and Nvidia doesn’t expect GTS 250 cards to be available for purchase until March 10, a whole week from now.

Heck, they may have changed this thing’s name again by then.

There are some benefits to GPU continuity. As you can see in a couple of the pictures above, the GTS 250 retains the dual SLI connectors present on the 9800 GTX, and Nvidia says the GTS 250 will willingly participate in an SLI pairing alongside a GeForce 9800 GTX+ of the same memory size. Unfortunately, though, 512MB and 1GB cards can’t be mixed, and Nvidia’s drivers won’t treat a 1GB card as if it were a 512MB card for the sake of multi-GPU cross-compatibility, like AMD’s will.

The card we have in Damage Labs for review is EVGA’s GeForce GTS 250 Superclocked 1GB. Like many GeForce-based graphics cards, this puppy runs at clock speeds higher than Nvidia’s baseline. In this case, we’re looking at a fairly modest boost to a 771MHz core, 1890MHz shaders, and 1123MHz memory. You’ll pay about ten bucks for the additional speed; list price is $159. EVGA also plans to sell a 1GB card with clocks closer to stock speeds for $149. Odds are, neither of those cards will look exactly like the one in the pictures above, which is an early sample. EVGA intends for the final product to have a swanky black PCB and an HDTV out port, which our sample conspicuously lacks.

The Radeon HD 4850 goes to 1GB, too

When we go to review a new graphics card, we tend to look for the closest competition to compare against it. In this case, the most obvious candidate, at least in terms of similar specifications, seemed to be a Radeon HD 4850 card with 1GB of memory onboard. Several board makers now sell 4850 cards with a gig of RAM, and Gigabyte was kind enough to send us an example of theirs.

That handsome cooler is a Zalman VF830, which Zalman bills as a “quiet VGA cooler.” Gigabyte takes advantage of the thermal headroom provided by this dual-slot cooler to squeeze a few more megahertz out of the Radeon HD 4850. The end result is a GPU core clock of 700MHz, up 20MHz from stock, with a bone-stock 993MHz GDDR3 memory clock.

Right now, prevailing prices on this card are running about $189 at online vendors, well above the GeForce GTS 250’s projected price. I wouldn’t be surprised to see AMD and its partners cut prices to match or beat the GTS 250 in the next couple of weeks, but given current going rates, the new GeForce would seem to have a built-in price advantage against the 4850 1GB.

Test notes

You’ll have to forgive us. Since Nvidia sprung this card on us in the middle of last week, and since we rather presumptuously had plans this past weekend, we were not about to go and formulate a revised test suite and produce an all-new set of benchmark results for this card and thirteen or so of its most direct competitors, with all new drivers and new games. Instead, we chose a strategy that very much mirrors Nvidia’s, recycling a past product for a new purpose. In our case, we decided to rely upon our review of the GeForce GTX 285 and 295, published way back on January 15, for most of our test data.

This unflinchingly lame, sad, and altogether too typical exercise in sheer laziness and feckless ridiculosity nets us several wholly insurmountable challenges in our weak attempt at evaluating this new product and its most direct competitor. First and foremost, of course, is the fact that video card drivers have changed one or two entire sub-point-release revisions since our last article. So although we tested the GeForce GTS 250 and Radeon HD 4850 1GB with recent drivers, the remainder of our results come from well-nigh ancient and unquestionably much slower and less capable driver software, because everyone knows that video card performance improves 15-20% with each driver release. Never mind the fact that the data you will see on the following pages will look, on the whole, entirely comparable across driver revisions. That is a sham, a mirage, and our other results are entirely useless even as a point of reference.

As if that outrage weren’t sufficient to get our web site operator’s license revoked, you may be aware that as many as one or two brand-new, triple-A PC game titles have been released since we chose the games in our test suite, and their omission will surely cripple our ability to assess this year-and-a-half-old GPU. This fact is inescapable, and we must be made to suffer for it.

Finally, in a coup de grace fitting of a Tarantino flick, two of the games we used were tested at a screen resolution of 2560×1600, clearly a higher resolution than anyone with a $150 graphics card would ever use for anything. Ever. Do not be swayed by the reasonable-sounding voice in your ear that points out both games were playable at this resolution on this class of hardware. Do not be taken in by the argument that using a very high resolution serves to draw out the differences between 512MB and 1GB graphics cards, and answer not the siren song of the future-proofing appeal. Nothing about this test is in any way “real world,” and no one who considers himself legitimate as a gamer or, nay, a human being should have any part in such a travesty. You may wish to close this tab in your browser now.

Our testing methods

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least three times, and the results were averaged.

Our test systems were configured like so:

Processor: Core i7-965 Extreme 3.2GHz
System bus: QPI 4.8 GT/s (2.4GHz)
Motherboard: Gigabyte EX58-UD5
BIOS revision: F3
North bridge: X58 IOH
South bridge: ICH10R
Chipset drivers: INF update 9.1.0.1007, Matrix Storage Manager 8.6.0.1007
Memory size: 6GB (3 DIMMs)
Memory type: Corsair Dominator TR3X6G1600C8D DDR3 SDRAM at 1333MHz
CAS latency (CL): 8
RAS to CAS delay (tRCD): 8
RAS precharge (tRP): 8
Cycle time (tRAS): 24
Command rate: 2T
Audio: Integrated ICH10R/ALC889A with Realtek 6.0.1.5745 drivers
Graphics:
  Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Dual Asus EAH4850 TOP Radeon HD 4850 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Gigabyte Radeon HD 4850 1GB PCIe with Catalyst 9.2 drivers
  Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Dual Visiontek Radeon HD 4870 512MB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Asus EAH4870 DK 1G Radeon HD 4870 1GB PCIe + Radeon HD 4870 1GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Sapphire Radeon HD 4850 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  Palit Revolution R700 Radeon HD 4870 X2 2GB PCIe with Catalyst 8.12 (8.561.3-081217a-073402E) drivers
  GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
  Dual GeForce 9800 GTX+ 512MB PCIe with ForceWare 180.84 drivers
  Palit GeForce 9800 GX2 1GB PCIe with ForceWare 180.84 drivers
  EVGA GeForce GTS 250 Superclocked 1GB PCIe with ForceWare 182.06 drivers
  EVGA GeForce GTX 260 Core 216 896MB PCIe with ForceWare 180.84 drivers
  EVGA GeForce GTX 260 Core 216 896MB PCIe + Zotac GeForce GTX 260 (216 SPs) AMP²! Edition 896MB PCIe with ForceWare 180.84 drivers
  XFX GeForce GTX 280 1GB PCIe with ForceWare 180.84 drivers
  GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
  Dual GeForce GTX 285 1GB PCIe with ForceWare 181.20 drivers
  GeForce GTX 295 1.792GB PCIe with ForceWare 181.20 drivers
Hard drive: WD Caviar SE16 320GB SATA
OS: Windows Vista Ultimate x64 Edition
OS updates: Service Pack 1, DirectX November 2008 update

Thanks to Corsair for providing us with memory for our testing. Their quality, service, and support are easily superior to no-name DIMMs.

Our test systems were powered by PC Power & Cooling Silencer 750W power supply units. The Silencer 750W was a runaway Editor’s Choice winner in our epic 11-way power supply roundup, so it seemed like a fitting choice for our test rigs.

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Specs and synthetics

Before we get to play any games, we should stop and look at the specs of the various cards we’re testing. Incidentally, the numbers in the table below are derived from the observed clock speeds of the cards we’re testing, not the manufacturer’s reference clocks or stated specifications.

                              Peak pixel    Peak bilinear     Peak bilinear     Peak memory    Peak shader arithmetic (GFLOPS)
                              fill rate     texel filtering   FP16 texel        bandwidth      Single-issue    Dual-issue
                              (Gpixels/s)   rate (Gtexels/s)  filtering rate    (GB/s)
                                                              (Gtexels/s)
GeForce 9500 GT               4.4           8.8               4.4               25.6           90              134
GeForce 9600 GT               11.6          23.2              11.6              62.2           237             355
GeForce 9800 GT               9.6           33.6              16.8              57.6           339             508
GeForce 9800 GTX+             11.8          47.2              23.6              70.4           470             705
GeForce GTS 250               12.3          49.3              24.6              71.9           484             726
GeForce 9800 GX2              19.2          76.8              38.4              128.0          768             1152
GeForce GTX 260 (192 SPs)     16.1          36.9              18.4              111.9          477             715
GeForce GTX 260 (216 SPs)     17.5          45.1              22.5              117.9          583             875
GeForce GTX 280               19.3          48.2              24.1              141.7          622             933
GeForce GTX 285               21.4          53.6              26.8              166.4          744             1116
GeForce GTX 295               32.3          92.2              46.1              223.9          1192            1788
Radeon HD 4650                4.8           19.2              9.6               16.0           384             -
Radeon HD 4670                6.0           24.0              12.0              32.0           480             -
Radeon HD 4830                9.2           18.4              9.2               57.6           736             -
Radeon HD 4850                10.9          27.2              13.6              67.2           1088            -
Radeon HD 4850 1GB            11.2          28.0              14.0              63.6           1120            -
Radeon HD 4870                12.0          30.0              15.0              115.2          1200            -
Radeon HD 4850 X2             20.0          50.0              25.0              127.1          2000            -
Radeon HD 4870 X2             24.0          60.0              30.0              230.4          2400            -

(The Radeons have a single shader arithmetic figure; the single/dual-issue distinction applies only to the GeForces.)
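
Incidentally, those theoretical peaks fall straight out of each card’s observed clocks and its GPU’s unit counts. Here’s a minimal sketch for our two main contenders; the unit counts in it are the commonly cited G92 and RV770 configurations, an assumption on our part rather than something taken from the table.

```python
# Reconstructing the table's theoretical peaks for the two cards under review,
# using the observed clocks quoted earlier (EVGA GTS 250 Superclocked:
# 771/1890/1123MHz; Gigabyte Radeon HD 4850 1GB: 700/993MHz). Unit counts are
# the commonly cited G92/RV770 configurations, assumed rather than measured.

cards = {
    "GeForce GTS 250 (EVGA SC)": dict(rops=16, tmus=64, alus=128,
                                      core=771, shader=1890, mem=1123),
    "Radeon HD 4850 1GB":        dict(rops=16, tmus=40, alus=800,
                                      core=700, shader=700, mem=993),
}

for name, c in cards.items():
    pixel_fill = c["rops"] * c["core"] / 1000        # Gpixels/s
    texel_rate = c["tmus"] * c["core"] / 1000        # bilinear Gtexels/s
    fp16_rate  = texel_rate / 2                      # FP16 filtering runs at half rate on both chips
    bandwidth  = c["mem"] * 2 * (256 // 8) / 1000    # GB/s; GDDR3 transfers twice per clock on a 256-bit bus
    gflops     = c["alus"] * c["shader"] * 2 / 1000  # multiply-add counts as 2 flops per ALU per clock
    print(f"{name}: {pixel_fill:.1f} Gpix/s, {texel_rate:.1f}/{fp16_rate:.1f} Gtex/s, "
          f"{bandwidth:.1f} GB/s, {gflops:.0f} GFLOPS")

# The GeForce's dual-issue figure adds a co-issued MUL for 3 flops per clock:
# 128 * 1890MHz * 3 = ~726 GFLOPS. The results above land on (or within a tenth
# of) the corresponding rows in the table.
```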

The theoretical numbers in the table give the GeForce GTS 250 a clear advantage in texture filtering rates and memory bandwidth, while the Radeon HD 4850 has an equally sizeable lead in peak shader arithmetic capacity. But look what happens when we run these cards through 3DMark’s synthetic tests.

The 4850 1GB nearly matches the GTS 250 in the color fill test, which tends to be bound primarily by memory bandwidth, and the 4850 comes out on top in the texture fill rate test.

Meanwhile, the GeForce GTS 250 leads the 4850 in half of the shader processing tests, and our expectations are almost fully confounded. In this GPU generation, the theoretical peak capacities of the GPUs take a back seat to the realities of architectural efficiency. Although the G92 has more texture filtering potential and memory bandwidth on paper, the HD 4850 is stronger in practice. And although the 4850’s RV770 GPU has more parallel processing power than the G92, the GeForce tends to use its arithmetic capacity more effectively in many cases.

Far Cry 2

We tested Far Cry 2 using the game’s built-in benchmarking tool, which allowed us to test the different cards at multiple resolutions in a precisely repeatable manner. We used the benchmark tool’s “Very high” quality presets with the DirectX 10 renderer and 4X multisampled antialiasing.

Our two main contenders are very closely matched here. The Radeon HD 4850 1GB is faster at mere mortal resolutions, but the GTS 250 produces higher frame rates at four megapixels.

The most notable result here, perhaps, is the strong performance of these two new 1GB cards at 2560×1600, where even CrossFire and SLI configurations involving 512MB cards run out of headroom. Neither new card, in a single-GPU config, is really fast enough to be playable at this res, but the additional video RAM clearly brings an improvement, and these results suggest good things for multi-GPU configs with 1GB. (So do our results from higher-end multi-GPU configs involving 1GB cards, of course.)

Left 4 Dead

We tested Valve’s zombie shooter using a custom-recorded timedemo from the game’s first campaign. We maxed out all of the game’s quality options and used 4X multisampled antialiasing in combination with 16X anisotropic texture filtering.

The scaling theme we established on the previous page continues here: the 4850 is faster at the lowest resolution, and the GTS 250 grows relatively stronger as the resolution rises. Still, both cards produce nearly 60 FPS at 2560×1600, so playability is never in question.

Call of Duty: World at War

We tested the latest Call of Duty title by playing through the first 60 seconds of the game’s third mission and recording frame rates via FRAPS. Although testing in this manner isn’t precisely repeatable from run to run, we believe averaging the results from five runs is sufficient to get reasonably reliable comparative numbers. With FRAPS, we can also report the lowest frame rate we encountered. Rather than average those, we’ve reported the median of the low scores from the five test runs, to reduce the impact of outliers. The frame-by-frame info for each card was taken from a single, hopefully representative play-testing session.
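
To make the bookkeeping concrete, here’s a small sketch of how those five FRAPS runs get reduced to the numbers we report; the figures in it are made up for illustration, not pulled from our logs.

```python
import statistics

# Hypothetical per-run FRAPS results for one card: (average FPS, lowest FPS) per run.
runs = [(47.3, 24), (45.8, 22), (48.1, 27), (46.5, 21), (47.0, 25)]

avg_fps = statistics.mean(avg for avg, _ in runs)    # headline number: mean of the five run averages
low_fps = statistics.median(low for _, low in runs)  # median of the per-run lows, to blunt outliers

print(f"Average FPS: {avg_fps:.1f}   Reported low: {low_fps}")
```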

Both cards were fast enough to make play-testing, even at this high resolution and quality level, quite feasible. With that said, the low frame rate numbers in the twenties are a bit iffy, as is the feel of the game on these cards at this crazy-insane display resolution. On more reasonable 1920×1200 displays, either card should run this game fine. Just on the numbers, this is technically a win for the GTS 250, but it’s close enough to count as a tie in my book.

Fallout 3

This is another game we tested with FRAPS, this time simply by walking down a road outside of the former Washington, D.C. We used Fallout 3‘s “Ultra high” quality presets, which basically means every slider maxed, along with 4X antialiasing and what the game calls 15X anisotropic filtering.

The GTS 250 and the HD 4850 1GB produce identical median low frame rates of 34 FPS—eminently playable, in my book, at least in this section of the game. You can see how frame rates tend to rise and fall in a saw-tooth pattern as Fallout 3‘s dynamic level-of-detail mechanism does its thing.

Interestingly enough, the 4850 doesn’t seem to benefit much from having 1GB of memory, compared to the 512MB version, but the GTS 250 pretty clearly makes use of the additional video RAM. The 9800 GTX+ 512MB is much slower than the GeForce GTS 250 1GB, even in a dual-card SLI config.

Dead Space

This is a pretty cool game, but it’s something of an iffy console port, and it doesn’t allow the user to turn on multisampled AA or anisotropic filtering. Dead Space also resisted our attempts at enabling those features via the video card control panel. As a result, we simply tested Dead Space at a high resolution with all of its internal quality options enabled. We tested at a spot in Chapter 4 of the game where Isaac takes on a particularly big and nasty, er, bad guy thingy. This fight is set in a large, open room and should tax the GPUs more than most spots in the game.

I’m just guessing, but I think maybe Nvidia’s drivers are a little bit better optimized for this game than AMD’s. Call it a hunch. Then again, even the slowest Radeon here cranks out acceptable frame rates, so it’s hard to get too worked up about it. Chalk up a win for the GTS 250, I suppose, though.

Crysis Warhead

This game is sufficient to tax even the fastest GPUs without using the highest possible resolution or quality setting—or any form of antialiasing. So we tested at 1920×1200 using the “Gamer” quality setting. Of course, the fact that Warhead apparently tends to run out of memory and crash (with most cards) at higher resolutions is a bit of a deterrent, as is the fact that MSAA doesn’t always produce the best results in this game. Regardless, Warhead looks great on a fast video card, with the best explosions in any game yet.

The GTS 250 again has a slight edge over the Radeon HD 4850, which seems to be an emerging pattern. Neither card appears to benefit substantially from having 1GB of memory compared to its 512MB sibling.

Power consumption

We measured total system power consumption at the wall socket using an Extech power analyzer model 380803. The monitor was plugged into a separate outlet, so its power draw was not part of our measurement. The cards were plugged into a motherboard on an open test bench.

The idle measurements were taken at the Windows Vista desktop with the Aero theme enabled. The cards were tested under load running Left 4 Dead at 2560×1600 resolution, using the same settings we did for performance testing.

Recent GeForce cards have had some impressively low power consumption numbers at idle, and the GTS 250 continues that trend in surprising fashion, reducing power draw by 30W compared to the 9800 GTX+. That’s with the exact same 55nm G92 graphics processor and twice the memory capacity of the 9800 GTX+, even. Power draw is also down by 15W under load, and in both scenarios, the GeForce GTS 250 consumes less power than the Radeon HD 4850 1GB.

That said, the reductions in power use aren’t limited to the GeForces. Gigabyte’s Radeon HD 4850 1GB also represents measurable progress versus the 4850 512MB.

Noise levels

We measured noise levels on our test system, sitting on an open test bench, using an Extech model 407738 digital sound level meter. The meter was mounted on a tripod approximately 8″ from the test system at a height even with the top of the video card. We used the OSHA-standard weighting and speed for these measurements.

You can think of these noise level measurements much like our system power consumption tests, because the entire systems’ noise levels were measured. Of course, noise levels will vary greatly in the real world along with the acoustic properties of the PC enclosure used, whether the enclosure provides adequate cooling to avoid a card’s highest fan speeds, placement of the enclosure in the room, and a whole range of other variables. These results should give a reasonably good picture of comparative fan noise, though.

The GTS 250’s noise levels, both when idling and running a game, are some of the best we’ve measured in this round of tests. The new GeForce would look even better, relatively speaking, were it not up against some very quiet but essentially broken Asus custom coolers on the Radeon HD 4850 512MB.

Meanwhile, the strangely high noise levels for the Gigabyte Radeon HD 4850 1GB card, which match at idle and under load, are not a fluke. Although Gigabyte chose a nice, powerful Zalman cooler for this card, they did not see fit to endow this cooler with intelligent fan speed control. Or even kind-of-dumb fan speed control. In fact, there’s no fan speed control at all. When I asked Gigabyte why, the answer was: because this is an overclocked card. I wasn’t aware that eking out an additional 20MHz required the total destruction of a product’s acoustic profile, but that’s what’s happened here. And it’s a real shame. A real, puzzling shame.

GPU temperatures

I used GPU-Z to log temperatures during our load testing. In the case of multi-GPU setups, I recorded temperatures on the primary card.

With that fan spinning at 100% no matter what, the 4850 1GB certainly has some nice, low GPU temperatures. Meanwhile, the GTS 250 is easily quieter, but still keeps its temperatures well in check.

Conclusions

At this point in the review, Nvidia’s marketing department would no doubt like for me to say a few words about some of its key points of emphasis of late, such as PhysX, CUDA, and GeForce 3D Vision. I will say a few words, but perhaps not the words that they might wish.

CUDA is Nvidia’s umbrella term for accelerating non-graphics applications on the GPU, about which we’ve heard much lately. ATI Stream is AMD’s term for the same thing, and although we’ve heard less about it, it is very similar in nature and capability, as are the underlying graphics chips. In both cases, the first consumer-level applications are only beginning to arrive, and they’re mostly video encoders that face some daunting file format limitations. Both efforts show some promise, but I expect that if they are to succeed, they must succeed together by running the same programs via a common programming interface. In other words, I wouldn’t buy one brand of GPU over the other expecting big advantages in the realm of GPU-compute capability—especially with a GPU as old as the G92 in the mix.

One exception to this rule may be PhysX, which is wholly owned by Nvidia and supported in games like Mirror’s Edge and… well, let me get back to you on that. I suspect PhysX might offer Nvidia something of an incremental visual or performance advantage in certain upcoming games, just as DirectX 10.1 might for AMD in certain others.

As for GeForce 3D Vision, the GeForce GTS 250 is purportedly compatible with it, but based on my experience, I would strongly recommend getting a much more powerful graphics card (or two) for use with this stereoscopic display scheme. The performance hit would easily swallow up all the GTS 250 has to give—and then some.

The cold reality here is that, for most intents and purposes, current GeForces and Radeons are more or less functionally equivalent, with very similar image quality and capability, in spite of their sheer complexity and rich feature sets. I would gladly trade any of Nvidia’s so-called “graphics plus” features for a substantial edge in graphics image quality or performance. The GTS 250 comes perilously close to losing out on this front due to the Radeon HD 4850’s superior performance with 8X multisampled antialiasing. The saving grace here is my affection for Nvidia’s coverage sampled AA modes, which offer similar image quality and performance.

All of which leads us to the inevitable price and performance comparison, and here, the GeForce GTS 250 offers a reasonably compelling proposition. This card replicates the functionality of the GeForce 9800 GTX+ in a smaller physical size, with substantially less power draw, at a lower cost. I like the move to 1GB of RAM, if only for the sake of future-proofing and keeping the door open to an SLI upgrade that scales well. In the majority of our tests, the GeForce GTS 250 proved faster than the Radeon HD 4850 1GB, if only slightly so. If the Radeon HD 4850 1GB were to stay at current prices, the GTS 250 would be the clear value leader in this segment.

That’s apparently not going to happen, though. At the eleventh hour before publication of this review, AMD informed us of its intention to drop prices on several Radeon HD 4800 series graphics cards that compete in this general price range, mostly through a series of mail-in rebates. Some examples: this MSI 4850 512MB card starts at $159.99 and drops to $124.99 net via a mail-in rebate, and more intriguingly, this PowerColor 4870 512MB lists for $169.99 and has a $15 rebate attached, taking it to net price parity with the EVGA GTS 250 Superclocked card we tested. We hate mail-in rebates with a passion that burns eternal, but if these rebates were to last in perpetuity, the GeForce GTS 250 1GB at $149 would nevertheless be doomed.

For its part, Nvidia has indicated to us its resolve to be the price-performance leader in this portion of the market, so it may make an additional price move if necessary to defend its turf. Nvidia has limitations here that AMD doesn’t face, though, mainly because it doesn’t have the option of simply switching to GDDR5 memory to get twice the bandwidth. That is, after all, the only major difference between the Radeon HD 4850 and 4870. On the merits of its current GPU technology, AMD would seem to have the stronger hand.

What happens next is anybody’s guess. Just so long as Nvidia doesn’t rename this thing, I’ll be happy. There are GPU bargains ahead.

Comments closed
    • nerdrage
    • 11 years ago

    q[

    • Rakhmaninov3
    • 11 years ago

    I don’t think nVidia should ever invent another GPU, they should just keep re-naming the G92.

    • AustinW
    • 11 years ago

    “On the other hand, although its memory size is doubled, this card’s GDDR3 runs at 993MHz, down slightly from the 4850’s stock speed.”

    I’m pretty sure 993 MHz is the 4850’s stock memory speed…

      • Damage
      • 11 years ago

      Yeah, you’re right. I’ve corrected the text. Thanks.

    • no51
    • 11 years ago

    Hey Damage, can you provide a clear front shot of the card straight on (not angled)?

      • Damage
      • 11 years ago

      Sure. $500.

        • indeego
        • 11 years ago

        I’ll paypal you $20 if you do <.<

          • Damage
          • 11 years ago

          Cool. $520 net, and I am definitely doing this.

            • no51
            • 11 years ago

            Sorry, but a competitor offered to take the picture for me at 85% of the quality for 50% of the price.

    • TravelMug
    • 11 years ago

    Epic test notes.

      • Tamale
      • 11 years ago

      I wanna print them on fancy parchment, frame them in mahogany, and hang it on my office wall next to my PhD

      • FireGryphon
      • 11 years ago

      ++

      I love this website 🙂

      • rhema83
      • 11 years ago

      The reason why TR is my browser home page.

    • TurtlePerson2
    • 11 years ago

    For a rebrand this is a pretty good card. Lower power draw, slightly better performance. I don’t think I mind any of those things.

    • YeuEmMaiMai
    • 11 years ago

    wow Nvidia has to restort to renaming almost 2 year old tech to get people to buy it………..

    Thing’s a POS and no one should even consider it………

      • UberGerbil
      • 11 years ago

      Why? I know this is odd for TR, but in the real world lots of people go more than two years without buying a new video card. If you’re in the market for something in the $150 range that offers decent performance without using a lot of power or producing a lot of noise, why wouldn’t you consider it?

      • Meadows
      • 11 years ago

      Ever tried releasing your period key?……………………………………
      Works wonders……………………………………………..

      The G92, no matter how old, is still a much preferred midrange solution – believe it or not, 80% of gamers probably won’t genuinely need anything better, or even as good as it. The above percentage is my fabrication.

      …………………………………………………………………..

        • derFunkenstein
        • 11 years ago

        I’m on my period…………………………………………………………………………..

          • MadManOriginal
          • 11 years ago

          I’m on ur period key
          Makin dotz on yer screen

    • RickyTick
    • 11 years ago

    I liked the review, thanks.
    Someone please tell me why some of the new series Nvidia cards have the prefix GTS, and others are GTX? Does it just refer to low-end vs high-end? Thanks.

    • Vaughn
    • 11 years ago

    “Jerry Jerry”

    • MadManOriginal
    • 11 years ago

    I guess the ‘omg rebadge’ pre-release ranting should be tempered at least a little bit. This card is smaller and more efficient which is good. If NV allowed non-reference designs more readily this probably would have happened earlier.

    • Gerbil Jedidiah
    • 11 years ago

    After reading over the article a second time I couldn’t help but notice something. This new GTS 250 has some similarities to my “old” 8800GTS 512MB. They are both roughly the same size and use a 6 pin power connector. SO, it seems things are coming full circle.

    The more things change the more they stay the same?

    My 8800GTS is clocked at 745MHZ, so even that part is pretty similar.

    • SomeOtherGeek
    • 11 years ago

    Damage, awesome job!

    Are you planning to get rid of your readers by making them die of laughter? I laughed through most of it.

    Ok, let’s see if I got this right…? Next week, 2 cards will come out, the GTS 250+ (1 Gb version) and GTS 250- (512 Mb version), am I right?? Then 2 months later nVidia will see the GTS 250- as a negative statement and rename them both to GTS 250 +/- then another 2 months goes by and they will renamed then to GTS 250 like they originally thought last week? Am I right, wrong or just plain confused?

    Hey, you started it with l[

    • moose17145
    • 11 years ago

    On a random, and not really important note, under “Our Testing Methods”, in the “Memory Type” row, the link pointing to the exact modules being used appears to be broken. Either that or my internets are being stupid.

    Edit, the link to the OS being used also appears to be borked.

    • kilkennycat
    • 11 years ago

    (non-cosmetic) Differences between the GTS250 and the 9800GTX+ ??

    NONE with just 2 exceptions!!

    * The GTS250 1Gbyte memory option.
    * The GTS250 is missing the analog (DIN) HDTV connector.

    I have a eVGA 9800GTX+ “Superclocked” , purchased towards the end of January (2009) from Newegg $157 less $20 rebate. 512Mbytes memory, 756MHz clock. single 6-pin power connector, 9 inches long from rear-plate to end of circuit-board (9.25 inches from rear-plate to tip of fan shroud), dual SLI-conectors, SPDIF pass through (if you have a DVI -HDMI adapter).

      • thecoldanddarkone
      • 11 years ago

      power usage?

    • thebluebumblebee
    • 11 years ago

    Is there any guarentee that if you get a 512MB version that you are getting the lower power usage of the 1GB (does not make sense) version?

    • Konstantine
    • 11 years ago

    Somehow, the numbers don’t look genuine….Here are some points that i would like to get responses to, from the reviewer.

    -Why to use 8.12 drivers for the Radeon cards when the 9.2 driver has been posted for almost 2 weeks now.

    -According to anandtech, xbitlabs, and PCGH, the 4870 1G beats the 216 core in L.4.D, Crysis Warhead, and Fallout 3. Here, the 4870 loses…

    http://www.xbitlabs.com/images/video/gainward-hd4850-1024mb-gs/crysiswh.png
    http://images.anandtech.com/graphs/mutligpuupdate2gpufeb09_021209154227/18252.png
    http://images.anandtech.com/graphs/mutligpuupdate2gpufeb09_021209154227/18254.png

    -The option of the games is extremely rediciulous. Where’s Stalker Clear Sky, which is a great looking DX10.1 game, and where ATI cards do extremely well against Nvidia’s?

    http://www.xbitlabs.com/images/video/gainward-hd4850-1024mb-gs/stalkercs.png

    Why to use Dead Space…? It’s a badly coded, console game. I hope Techreport didn’t become another Hardocp….

      • danny e.
      • 11 years ago

      tone says a lot.. in this case it screams fanboy. if you want to be taken seriously, you have to lose your fanboy taint.

        • Konstantine
        • 11 years ago

        Accusing me of being a fanboi just because i have an objection is rude.

      • Damage
      • 11 years ago

      Please see the Test Notes section of the review.

        • Konstantine
        • 11 years ago

        I’ve just read it for the sixth time, in case i missed anything, and i can’t recognize what i have missed.The only card that you used the 9.2 driver to test with is the 4850 1 gigabyte.

          • Meadows
          • 11 years ago

          He made it clear that it doesn’t make a honking hint of a difference. And that they didn’t have time, and they’re still doing you a favour.

            • Konstantine
            • 11 years ago

            I do appreciate all the efforts that have been put into this review. But still, Using the latest drivers is a must. especially, with those new titles where The new drivers from AMD has brought some significant improvements in terms of performance, as you can see from my links.

            As for the shortage of time issue, i don’t think it was necessary to put all those graphics cards in this review.

            • Meadows
            • 11 years ago

            It’s not a must. End of discussion. It makes a difference, but it wasn’t worth it for this review.

            • Damage
            • 11 years ago

            Good luck arguing that one. What he’s really missing is that we did test the 4850 1GB with Catalyst 9.2.

            • Konstantine
            • 11 years ago

            I couldn’t disagree with you more.

            • Meadows
            • 11 years ago

            I couldn’t care less.

            • Damage
            • 11 years ago

            I could ban you both.

            • MadManOriginal
            • 11 years ago

            Gonna pull a Krogoth here…

            http://rebelrockrunners.org/gallery/d/15309-2/ban_him.jpg

            • Meadows
            • 11 years ago

            “Pull a ‘Krogoth'”, oh golden.

            • paulWTAMU
            • 11 years ago

            ROFL. There’s a reason I read TR.
            Seriously, about the drivers? So freaking what. They’re testing yet ANOTHER rebranded 8800GT against a well established competitor. The only real difference seems to be the power consumption (which is actually a plus in my book–I don’t like the trend towards cards that take as much juice as my microwave). So why bother doing a full review and taking that much time?

            • MadManOriginal
            • 11 years ago

            The reason all the other cards are in the review is because they’re carried over from previous reviews with earlier drivers, if that wasn’t done the results would take a lot longer to produce especially since Scott does some in FRAPS. You’re free to ignore those card results as irrelevant due to earlier drivers which would be the same as not testing them.

            I would however like to see standard clock testing. I remember the massive argument about overclocked results from the ‘$300 video card’ comparison. While valid at the time on the basis of what $300 would buy, getting a baseline for standard clocks in a broad article like this would add usefulness and longevity to the results.

            • JustAnEngineer
            • 11 years ago

            Agreed! I would REALLY like to see stock clocks included in the benchmarks. It’s okay to include the overclocked results if they’re in addition to the stock results and if they’re labeled as the premium cards that they are, but the stock performance is what’s going to matter to the majority of people shopping for these cards in the future.

      • grantmeaname
      • 11 years ago

      read the article, I’m begging you!

      • sammorris
      • 11 years ago

      The Dead Space figures are wrong. I don’t have any objection with any of the other tests, but those results indicate averages of 50fps or so, regardless of whether you use one, two or four ATI GPUs. I can assure you that is not the case, whether using normal CF or X2 cards, XP or Vista, 9.2, 9.1, 8.12, or even earlier drivers. If you’re going to use a poorly-coded game in a benchmark fair enough, but at least look to see if the numbers match what other people score.

        • Konstantine
        • 11 years ago

        Yes.Dead space is not a benchmarking game.But, my point is….that the results are fairly different from the ones at xbitlabs or anandtech.

          • Damage
          • 11 years ago

          But they are not running at the same clock speeds. RTA.

            • Konstantine
            • 11 years ago

            Oh…sorry, didn’t notice that.

      • Freon
      • 11 years ago

      I think it is perfectly reasonable to use slightly old data for non-competing segments. As long as the direct competition is up to date, so what if the 285 SLI benchmarks are 5% slower than with new drivers? This is about the 4850 1GB and GTX 250. The rest are there to give some general perspective to see how these fit in the price/performance spectrum.

      I don’t know anyone who plays STALKER CS, nor am I aware of the engine being used in many other (any?) games. It’s a really crappy game. It seemed like a shovelware update to the original. Yeah the skybox looks great, but I can’t see any other redeeming qualities. It does not seem like a benchmark contender.

      • eitje
      • 11 years ago

      I’m glad you came in here to post this. It just makes page 2 even funnier.

    • deruberhanyok
    • 11 years ago

    l[

      • DrDillyBar
      • 11 years ago

      Taking this full disclosure thing a bit seriously. 🙂

      • moose17145
      • 11 years ago

      rofl i know, i loved the test notes. The best part is, Konstantine is still bitching. you would have thought that after the test notes he would have taken the hint.

      Edit: I should however point out it would be nice to see a couple DirectX 10.1 games, like Stalker: Clear Skies. On the whole still a shame NVidia is stupid and wants to crush anything good for ATI. dX 10.1 really does give a nice performance boost where it can be used.

    • adisor19
    • 11 years ago

    If only MS would adopt OpenCL, this whole CUDA/Stream crap would finally be over and we would see more useful programs come out that would use the GPU power when needed. Alas, knowing MS, they will prolly come out with their own “standard” just to try and force the hand of the industry once again.

    Adi

      • Meadows
      • 11 years ago

      DirectX 11 “Compute Shader” from months ago says hi. You should read more.

        • UberGerbil
        • 11 years ago

        And Compute Shaders were in various MSDN prelim docs and WinHEC slides back in early 2007 (if not earlier). Which may well pre-date OpenCL, though I’m not enough of a fanboy to get worried about precedence since the desirable concept of a unifying GPGPU API was obvious long before that.

          • Meadows
          • 11 years ago

          DirectX was a very good move from Microsoft too, so I’m not sure what adisor is pissed off about. Maybe that Apple doesn’t have IP even remotely close to what DirectX has become.

          Similarly, if anyone can help GPGPU stuff evolve fast and correctly, it’s Microsoft.

            • UberGerbil
            • 11 years ago

            It has to be a 3rd party that isn’t invested in the hardware, that’s for sure. In theory an open SIG or similar kind of committee could work just as well, but the glacial pace of the OpenGL ARB was really what allowed Direct3D to walk away with the leadership (of course Microsoft’s investment in developer support was no small contributer as well) The jury’s still out on the Khronos Group, though so far they do seem to be moving much faster than the ARB did. They do have pretty much all the major players except MS, but a lot of them have one vested interest or another that may work at cross-purposes. Of course it’s not like having two standards is the end of the world either, as long as they’re both hardware-agnostic.

      • tfp
      • 11 years ago

      Yeah that will happen…

    • Freon
    • 11 years ago

    Head to head at the same price it does look like the GTS 250 is an overall buy over the 4850 1GB. But the question is will AMD counter by lowering 4870 1GB prices within reach. If you can snag one for just $20-30 more it looks mighty tempting.

    Glad to see there is still plenty of active competition in the market. A $150 video card really is high end performance wise on its own, even on 24″ monitors. Unless you’re a hardcore Crysis 1 fan or have a 30″ display anything in this price bracket will perform superbly.

    • yogibbear
    • 11 years ago

    This is going to be deja vu, but why would anyone upgrade from an 8800gt? (If they are not at > 1920×1080, and only use one monitor)

    Still looking forward to the next best thing. No one impresses me :(. The GT300 better not be the lovechild of two G92’s making bunnies in the dumpster out back behind Nvidia’s fabs.

      • Meadows
      • 11 years ago

      Their next generation thing will probably be an extended GT200 chip with more doohickeys and whatchamacallits and support for DirectX 11. And they’d better hurry up with it, they don’t have much time left until Windows 7 makes the OS scene implode.

      • grantmeaname
      • 11 years ago

      G92s would make baby G92s, not bunnies.

      Duh.

    • maroon1
    • 11 years ago

    Difference between GTS 250 and 9800GTX+

    -GTS 250 is shorter than 9800GTX+ (9inch vs 10.5inch)
    -GTS 250 use only one 6-pin connector, and it has lower power consumption than 9800GTX+ and even HD4850 at both idle and full load
    -GTS 250 has 1GB memory

    So, GTS 250 is not exactly the same thing as 9800GTX+ as some people had claimed, and it is not a bad video card

      • marvelous
      • 11 years ago

      Not a bad video card but when Nvidia releases the same card 3 times under a different name it tricks the misinformed.

      4870 will be $149.99. GTS 250 or whatever have a hill to climb when it’s pitted against 4870.

      • PRIME1
      • 11 years ago

      Had they not renamed it, people would have whined about that as well.

        • grantmeaname
        • 11 years ago

        Nobody *[

      • toyota
      • 11 years ago

      some of the 9800gtx+ models already have one 6 pin connector. also the gts250 will have 512mb models.

        • indeego
        • 11 years ago

        hah, so even cards named the same are different <.< Head esplodes.

    • flip-mode
    • 11 years ago

    The 8800GTS 512/9800GTX/9800GTX+/GTS250 looks like a decent card. If I were buying it would be a hard choice up against the 4850. I’d toss a coin, or just say screw it and pay a few more dollars for a 4870 or a GTX260. Or I’d save a few dollars and get a 4830 and buy a game (Fallout 3) with the saved cash.

      • Meadows
      • 11 years ago

      In essence, you are clueless.

        • khands
        • 11 years ago

        Or tNvidia and AMD/TI are compressing the 100-150 dollar market so much that the consumer is getting really confused. That would be my guess anyway.

          • Kurotetsu
          • 11 years ago

          I still don’t buy that.

          People who buy discrete video cards of this calibur are either going to be 1). gamers or 2). professional designers or content creators.

          For gamers, the buyer is going to do their research first and already know what kind of card they want (and there are PLENTY of sites like Techreport that you can get information from). Most of the community is already quite aware of what is a rebrand and what isn’t.

          Professionals don’t even look at this part of the market. The enterprise they work for has entire departments dedicated to figuring out what to buy with their assloads of cash. Nvidia and ATI have entirely different product lineups dedicated to these people. So rebrands at this level have no effect.

          Joe Sixpack doesn’t buy discrete video cards, he/she uses whatever is already in their pre-built Dell system. So naming rebrands have no affect on them.

          So…who exactly is confused here?

          EDIT:

          Well, I’ll add in Joe Middleman. A technically inclined user who doesn’t game overly much and isn’t a professional (I have no idea if this type actually exists or not). In their case, they’ll probably be content with good onboard graphics and ignore discrete cards about as much as Joe Sixpack. So, again, rebrands don’t affect them much.

            • thebluebumblebee
            • 11 years ago

            Ha!
            I watch Craigslist, an am amazed by the people who try to sell a video card that they bought at Buy More 😉 for some crazy high price because they did not know what a decent price for the item is. A local guy is trying to sell a 8800 Ultra for $550! If they don’t know what a realistic price is, they also have not researched it to understand the pros and cons of that particular card.

            • indeego
            • 11 years ago

            The reason people sell high on craigslist is there are dumb people that buy at those elevated prices on craigslist.

            Hence why you see a ton of Dell systems that people buy, and resell at $50-$150 profit <.<

        • flip-mode
        • 11 years ago

        Yeah? You post did little to clear that up.

      • sammorris
      • 11 years ago

      There are some valid things to consider, I for one would consider noise level, power consumption, and reliability of performance (e.g. despite the higher numbers in a lot of games, I take the 4870X2 over the GTX295 as it doesn’t fall to zero when 8xxAA is applied, vice versa compare the stability of frame rates for crossfire instead of a single nvidia card, even with a lower score the latter is more fluid)

    • Krogoth
    • 11 years ago

    GTS 250 is a good match for 4850. You cannot go wrong with either choice. The next tier (260/4870) is still more than sufficient provided that you do not have feed a 30″+ monster at native resolution.

    • Spurenleser
    • 11 years ago

    How is it, that TR is unreachable from time to time (as was the case during the last 45 minutes)? Is there some maintenance going on on a regular basis or is it a routing problem?

      • Meadows
      • 11 years ago

      As far as I know, the site does a 15-30 minute backup (varies) every day starting at 9 AM GMT sharp.

        • Mourmain
        • 11 years ago

        This is the only site I know of that has to go down for so long (or at all, actually) to do that. It seems highly anachronistic… I’m surprised somebody at TR isn’t worried about this at all.

        • Spurenleser
        • 11 years ago

        Ah, thanks for the info! Knowing what it is makes it easier to bear and easier to work around.

          • HurgyMcGurgyGurg
          • 11 years ago

          Yea, its a pain for someone like me living in Asia, get home at 5 o’clock sometimes and the site is down for 30 minutes.

        • indeego
        • 11 years ago

          I know of no modern operating system that requires http or a sql db to shut down completely during backup <.<

          • Meadows
          • 11 years ago

          Nevertheless, that’s what I heard around the site.

          • jackaroon
          • 11 years ago

          Mysql / InnoDB may be free & open, but for some reason it requires an offline backup without 3rd party for-pay software. Still not exactly “requires” the downtime, but it’s a possible explanation. As for why an open source database requires commercial software for such a basic feature, I have no explanation.

    • Meadows
    • 11 years ago

    I agree with your humorous remarks regarding testing conditions, but it’s still important to note that by using the latest nVidia drivers on the cards that were tested without it (can’t speak for AMD), you can see a performance increase anywhere from 5 to 15% /[

    • fpsduck
    • 11 years ago

    Gotcha! (page 6)

    Call of Duty 4: World at War

    CoD4: Modern Warfare
    CoD5: WaW

    • DrDillyBar
    • 11 years ago

    IF they don’t rename it. 😉

    • MadManOriginal
    • 11 years ago

    First paragraph – 88000 GT. 10 times better mystery card?

    Wow, the test notes page which I usually skim right through is worth a read. Sarcasm meter off the charts.

      • Meadows
      • 11 years ago

      That is seriously /[

        • MadManOriginal
        • 11 years ago

        Heh, that’s what I wrote at first but then I realized there are cards actually numbered over 9000.

          • Meadows
          • 11 years ago

          Yes, but those are simply over nine thousand.
          The GeForce 88000 GT is /[

        • Krogoth
        • 11 years ago
    • Fighterpilot
    • 11 years ago

    Nice power consumption numbers,guess the makers are finally getting the message on quieter,less power hungry cards.
    With ATI HD4870 now in the $149 price bracket it’s going to be a fierce contender for this new green card to deal with.
    For $19 more the Radeon is far superior.

      • Deanjo
      • 11 years ago

      Unless you run another OS other then windows as well. Right now nvidia rules the alternative OS market.

        • khands
        • 11 years ago

        That’s only really if you’re running Free BSD or a more exotic Linux distro though, they’re both pretty even on the more major one’s.

        • swaaye
        • 11 years ago

        I’ve found my TNT2 M64 to run Ubuntu real nice. heh. NVIDIA legacy driver works great. Who needs a bleeding edge card for an OS that’s not really about gaming or GPGPU? Most hardware (all?) is supported though if you have access to the proprietary binary drivers. They even worked on my notebook’s Mobility 9600.

          • Buzzard44
          • 11 years ago

          Wow, I thought I was the only one on this site with a computer still using a TNT2 M64. Good for most stuff, although it kinda struggles in newer games…..hehe.

    • ssidbroadcast
    • 11 years ago

    First—-MAN is my timing good.

    /goes to read article.

    Edit: HAHAHA! Your Test Notes this time are HILARIOUS!

      • ecalmosthuman
      • 11 years ago

      Commenting first and THEN reading the article actually indicates bad timing. 😉
