NVIDIA’s GeForce FX 5200 GPU

WHEN NVIDIA announced its NV31 and NV34 graphics chips, I have to admit I was a skeptic. The chips, which would go on to power NVIDIA’s GeForce FX 5600 and 5200 lines, respectively, promised full DirectX 9 features and compatibility to the masses. Who could resist?

Me, at least initially. Perhaps I still had a bitter taste in my mouth after the recycled DirectX 7 debacle that was the GeForce4 MX, or maybe it was NVIDIA’s unwillingness to discuss the internal structure of its graphics chips. Maybe it was merely the fact that I didn’t believe NVIDIA could pull off a budget graphics chip with a full DirectX 9 feature set without cutting corners somewhere.

Or maybe I’m just turning into a grumpy old man.

Well, NVIDIA may have pulled it off. Now that I have Albatron’s Gigi FX5200P graphics card in hand, it’s time to take stock of what kind of sacrifices were made to squeeze the “cinematic computing” experience into just 45 million transistors. Have NVIDIA and Albatron really produced a sub-$100 graphics product capable of running the jaw-dropping Dawn demo and today’s 3D applications with reasonably good frame rates? How does the card stack up against its budget competition? Let’s find out.

The NV34 cheat sheet
NVIDIA’s big push with its GeForce FX line is top-to-bottom support for DirectX 9 features, including pixel and vertex shaders 2.0, floating point data types, and gobs of internal precision. As the graphics chip behind NVIDIA’s GeForce FX 5200 and 5200 Ultra, NV34 has full support for the same DirectX 9 features as even the high-end NV30. What’s particularly impressive about NV34 is that NVIDIA has squeezed support for all those DirectX 9 features into a die containing only 45 million transistors—nearly one third as many as NV30.

Beyond its full DirectX 9 feature support, here’s a quick rundown of NV34’s key features and capabilities. A more detailed analysis of NV34’s features can be found in my preview of NVIDIA’s NV31 and NV34 graphics chips.

  • Clearly defined pipelines — NVIDIA has been very clear about the fact that NV34 has four pixel pipelines, each of which is capable of laying down a single texture per pass. Unlike NV30, whose texture units appear dependent on the kind of rendering being done, NV34 is limited to a single texture unit per pipeline for all rendering modes.

  • Arrays of functional units — NVIDIA has been coy about what’s really going on under the hood of its GeForce FX graphics chips. Instead of telling us how many vertex or pixel shaders each chip has, NVIDIA expresses the relative power of each graphics chip in terms of the amount of “parallelism” within its programmable shader. NV30 has more parallelism than NV31, which in turn has more parallelism than NV34. How much more? Well, NVIDIA isn’t being too specific about that, either.

  • Lossless compression lost — Unlike NV30 and NV31, the NV34 graphics chip doesn’t support lossless color and Z compression, which could hamper the chip’s antialiasing performance. The absence of lossless Z compression will also limit the chip’s pixel-pushing capacity.

  • 0.15-micron core — NVIDIA’s mid-range NV31 and high-end NV30 graphics chips are built on a 0.13-micron process, and both feature 400MHz RAMDACs. Since NV34 is targeted at low-end graphics cards, it’s built on a cheaper, more mature 0.15-micron process, which limits its RAMDAC speed to 350MHz. Still, only those running extremely high resolutions at high refresh rates should be limited by a 350MHz RAMDAC (see the rough pixel-clock sketch below).
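To put that 350MHz ceiling in perspective, here’s a rough back-of-the-envelope sketch of the pixel clock various display modes demand. The helper function and the ~32% blanking overhead factor are illustrative assumptions typical of analog display timings of the era, not NVIDIA figures.

    # Rough estimate of the RAMDAC pixel clock an analog display mode needs.
    # The 1.32 blanking overhead factor is a generic assumption, not a spec.
    def required_pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=1.32):
        return width * height * refresh_hz * blanking_overhead / 1e6

    for width, height, refresh in [(1600, 1200, 85), (2048, 1536, 75), (2048, 1536, 85)]:
        mhz = required_pixel_clock_mhz(width, height, refresh)
        verdict = "within" if mhz <= 350 else "beyond"
        print(f"{width}x{height} @ {refresh}Hz needs ~{mhz:.0f}MHz ({verdict} a 350MHz RAMDAC)")

By that math, even 1600×1200 at 85Hz sits comfortably inside the limit; it takes something like 2048×1536 at 85Hz before the slower RAMDAC becomes the bottleneck.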


NVIDIA’s NV34 graphics chip: DirectX 9 on a budget

With chip specifics out of the way, let’s take a peek at Albatron’s Gigi FX5200P.

Albatron’s Gigi FX5200P
Because the GeForce FX 5200 is a high-volume, low-cost product, it’s likely that cards from different manufacturers will be very similar to each other. Manufacturers will probably stick to NVIDIA’s reference design and differentiate their products through cosmetics and bundles more than anything else.

Today, we’re looking at Albatron’s Gigi FX5200P, which follows NVIDIA’s GeForce FX 5200 reference design. Check it out:

The Gigi FX5200P’s blue board should nicely match Albatron’s most recent motherboards, which sport the same color scheme.

Because the GeForce FX 5200 supports 64 and 128MB memory configurations, manufacturers have a little room here for potential differentiation. Albatron has taken the high road with its Gigi FX5200P and endowed the card with 128MB of memory. Bucking a recent trend, the Gigi FX5200P’s memory chips are all mounted on one side of the board.

The budget-minded Gigi FX5200P uses TSOP memory chips from Samsung that are rated for operation at speeds as high as 500MHz. Since the GeForce FX 5200 spec calls for a memory clock speed of only 400MHz, users could potentially overclock the Gigi FX5200P’s memory bus by 100MHz without exceeding the capabilities of the memory chips.
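For a sense of what that headroom is worth, here’s a quick sketch of the peak-bandwidth arithmetic. The helper function is mine; the 128-bit bus width and 400MHz effective memory clock come from the GeForce FX 5200’s specs, and the 500MHz figure is simply the memory chips’ rating, not a guaranteed overclock.

    # Peak memory bandwidth at the stock 400MHz (effective) memory clock and at
    # the 500MHz rating of the Gigi FX5200P's Samsung chips, assuming the
    # GeForce FX 5200's 128-bit memory bus.
    def peak_bandwidth_gbps(effective_mem_clock_mhz, bus_width_bits=128):
        return effective_mem_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

    stock = peak_bandwidth_gbps(400)   # 6.4 GB/s at the spec clock
    rated = peak_bandwidth_gbps(500)   # 8.0 GB/s if the chips hit their rating
    print(f"Stock: {stock:.1f} GB/s, at chip rating: {rated:.1f} GB/s (+{(rated / stock - 1) * 100:.0f}%)")

That works out to a 25% bump in peak memory bandwidth if the chips actually reach their rating.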

The Gigi FX5200P’s NV34 graphics chip is built on a 0.15-micron process and runs at only 250MHz. At that speed, a passive heat sink is all that’s needed to keep the chip cool, which is quite a contrast to the GeForce FX 5800 Ultra’s 70dB Dustbuster.

Like every other graphics card in its class, the Gigi FX5200P features a standard array of video output ports. NVIDIA has hinted that we may see some manufacturers offering versions of the GeForce FX 5200 with dual DVI output ports, but those cards may only surface in pre-built systems targeted at business users.

Bundle-wise, the Gigi FX5200P doesn’t have much to offer, but that’s to be expected. At this price point, there’s little room in the budget for extras. Albatron does manage to squeeze a composite-to-RCA video adapter and a video cable into the Gigi FX5200P’s box, along with a copy of WinDVD and a requisite driver CD.

All right, that’s enough gawking at pictures for now. It’s time to check out some benchmarks.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run three times, and the results were averaged.

Our test system was configured like so:

System
Processor: Athlon XP ‘Thoroughbred’ 2600+ (2.083GHz)
Front-side bus: 333MHz (166MHz DDR)
Motherboard: Asus A7N8X
Chipset: NVIDIA nForce2
North bridge: nForce2 SPP
South bridge: nForce2 MCP
Chipset drivers: NVIDIA 2.03
Memory size: 512MB (2 DIMMs)
Memory type: Corsair XMS3200 PC2700 DDR SDRAM (333MHz)
Sound: nForce2 APU
Graphics cards: GeForce4 Ti 4200 8X 128MB
                GeForce4 MX 460 64MB
                GeForce FX 5200 128MB
                Radeon 9000 Pro 64MB
                Radeon 9600 Pro 128MB
Graphics drivers: NVIDIA Detonator 43.45 (GeForce cards), ATI CATALYST 3.2 (Radeon cards)
Storage: Maxtor DiamondMax Plus D740X 7200RPM ATA/100 hard drive
OS: Microsoft Windows XP Professional
OS updates: Service Pack 1, DirectX 9.0

Today, we’re testing the GeForce FX 5200 against a wide range of graphics cards from ATI and NVIDIA. NVIDIA’s own GeForce4 MX 460 and ATI’s Radeon 9000 Pro are the GeForce FX 5200’s closest competitors, price-wise, though results for the GeForce4 Ti 4200 8X and Radeon 9600 Pro have been included to frame the results in a wider perspective.

In order to keep a level playing field, image quality-wise, I tested the NVIDIA cards with the “Application” image quality setting. The Radeon cards were tested using ATI’s “Quality” image quality option, which produces visuals roughly equivalent to NVIDIA’s “Application” setting.

A number of tests were run with 4X antialiasing and 8X anisotropic filtering enabled to illustrate the GeForce FX 5200’s performance with its image quality options pushed to their limits. Since the GeForce4 MX 460 is incapable of 8X anisotropic filtering, it was eliminated from those tests. Also, since my Radeon 9000 Pro features only 64MB of memory, it’s incapable of 4X antialiasing at resolutions above 1024×768.

The test system’s Windows desktop was set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Fill rate
Theoretical fill rate and memory bandwidth peaks don’t necessarily dictate real world performance, but they’re a good place to start. Here’s how the GeForce FX 5200 stacks up against the competition when it comes to theoretical peaks:

                     Core clock  Pixel      Peak fill rate  Texture units  Peak fill rate  Memory clock  Bus width  Peak memory
                     (MHz)       pipelines  (Mpixels/s)     per pipeline   (Mtexels/s)     (MHz)         (bits)     bandwidth (GB/s)
GeForce FX 5200      250         4          1000            1              1000            400           128        6.4
GeForce4 Ti 4200 8X  250         4          1000            2              2000            512           128        8.2
GeForce4 MX 460      300         2          600             2              1200            550           128        8.8
Radeon 9000 Pro      275         4          1100            1              1100            550           128        8.8
Radeon 9600 Pro      400         4          1600            1              1600            600           128        9.6

As you might expect from the cheapest graphics card of the lot, the GeForce FX 5200 has the humblest specifications. The card’s fill rates match up nicely with ATI’s Radeon 9000 Pro, but with a memory clock speed of only 400MHz, the GeForce FX 5200’s peak memory bandwidth looks like a potential bottleneck.
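Those peaks fall straight out of the clock speeds and widths: pixel fill rate is core clock times pixel pipelines, texel fill rate multiplies that by texture units per pipeline, and peak bandwidth is the effective memory clock times the bus width in bytes. Here’s a minimal sketch, using the per-card figures listed above, that reproduces the table’s numbers.

    # Reproduce the theoretical peaks from the table above:
    #   pixel fill rate  = core clock x pixel pipelines
    #   texel fill rate  = pixel fill rate x texture units per pipeline
    #   memory bandwidth = effective memory clock x (bus width / 8)
    cards = {
        # name: (core MHz, pixel pipes, texture units per pipe, memory MHz, bus bits)
        "GeForce FX 5200":     (250, 4, 1, 400, 128),
        "GeForce4 Ti 4200 8X": (250, 4, 2, 512, 128),
        "GeForce4 MX 460":     (300, 2, 2, 550, 128),
        "Radeon 9000 Pro":     (275, 4, 1, 550, 128),
        "Radeon 9600 Pro":     (400, 4, 1, 600, 128),
    }

    for name, (core, pipes, tex_units, mem, bus) in cards.items():
        mpixels = core * pipes              # peak fill rate, Mpixels/s
        mtexels = mpixels * tex_units       # peak fill rate, Mtexels/s
        bandwidth = mem * bus / 8 / 1000    # peak memory bandwidth, GB/s
        print(f"{name:20}  {mpixels:4} Mpixels/s  {mtexels:4} Mtexels/s  {bandwidth:.1f} GB/s")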

Of course, the above chart only reflects theoretical peaks; real-world performance can be a different beast altogether. How does the GeForce FX 5200 stack up in some synthetic fill rate tests?

The GeForce FX 5200’s single-texturing performance looks good, putting the card neck-and-neck with the Radeon 9000 Pro. However, when we look at multi-texturing performance, the GeForce FX 5200 falls off the pace, despite the fact that it and the Radeon 9000 Pro are both 4×1-pipe architectures with similar clock speeds. Could the GeForce FX 5200’s limited memory bandwidth be rearing its ugly head?

To put real world fill rate performance in perspective, let’s revisit those theoretical peaks.

With single texturing, only the GeForce4 MX 460 comes close to realizing all of its theoretical peak fill rate. In multitexturing, the GeForce FX 5200 stands out like a sore thumb. The FX 5200 is the only card that doesn’t come close to realizing all of its peak theoretical multitextured fill rate, suggesting that a memory bottleneck may be lurking behind the scenes.

Occlusion detection
The NV34’s lack of lossless color and Z-compression puts the GeForce FX 5200 at a distinct disadvantage in our occlusion detection test. The GeForce FX 5200 also lacks the “Early Z” occlusion detection algorithms that have all but eliminated overdraw in mid-range and high-end graphics chips.

With antialiasing and anisotropic filtering disabled, the GeForce FX 5200’s performance in VillageMark closely matches the GeForce4 MX 460 and trails the Radeon 9000 Pro. When we enable 4X antialiasing and 8X anisotropic filtering, the GeForce FX 5200’s performance is actually closer to that of the GeForce4 Ti 4200 8X, which puts it ahead of the Radeon 9000 Pro.

Pixel shaders
The GeForce FX 5200 is replacing a pixel shader-less GeForce4 MX line at the low end of NVIDIA’s graphics lineup, so it doesn’t have big shoes to fill. The GeForce FX 5200’s pixel shaders support the pixel shader 2.0 standard, and they can even offer more color precision and longer shader programs than the shaders on ATI’s Radeon 9600 Pro.

Despite its support for more advanced pixel shader programs than any of the other graphics cards we’re looking at today, the GeForce FX 5200’s pixel shader performance in 3DMark2001 SE scrapes the bottom of the barrel. The GeForce FX 5200’s scores are especially disappointing in the advanced pixel shader test, where the card is well behind the Radeon 9000 Pro.

In NVIDIA’s own ChameleonMark pixel shader benchmark, the GeForce FX 5200 is at the back of the pack, well behind the Radeon 9000 Pro. The GeForce FX 5200’s poor pixel shader performance thus far underlines the fact that capability and competence are two very different things.

Of the cards we’re looking at today, only the Radeon 9600 Pro and GeForce FX 5200 are capable of running 3DMark03’s pixel shader 2.0 test. Of course, the Radeon 9600 Pro costs more than twice as much as the GeForce FX 5200, and the cards’ relative performance reflects that price disparity. The fact that the GeForce FX 5200 is slow here no matter what resolution is used suggests that simple pixel fill rate isn’t much of a factor; the GeForce FX 5200’s pixel shaders are just slow. At the very least, they could definitely use a little more of that parallelism NVIDIA keeps talking about.

Vertex shaders
The reduced parallelism in the GeForce FX 5200’s programmable shaders no doubt hurts its performance in synthetic pixel shader tests, but are the card’s vertex shaders equally hindered?

Yes. In 3DMark2001SE, even the GeForce4 MX 460, which uses DirectX 8’s software vertex shader, turns in a better performance than the GeForce FX 5200. The GeForce4 MX 460 can’t complete 3DMark03’s vertex shader test, but the GeForce FX 5200 is still underpowered and outclassed compared to the Radeon 9000 Pro.

In 3DMark2001 SE’s transform and lighting tests, the GeForce FX 5200 is again at the back of the pack. The card is more competitive in the less complex single light scene, but with eight lights, it doesn’t come close to keeping up with the Radeon 9000 Pro or the GeForce4 MX 460.

Games
Synthetic feature tests are fun and all, but real-world performance is what’s really important. How does the GeForce FX 5200 fare?

Quake III Arena

With antialiasing and anisotropic filtering disabled in Quake III Arena, the GeForce FX 5200 runs with the Radeon 9000 Pro and is slower than the GeForce4 MX 460. With 4X antialiasing and 8X anisotropic filtering, the GeForce FX 5200 stays ahead of the Radeon 9000 Pro and even starts narrowing the performance gap with NVIDIA’s GeForce4 Ti 4200 8X.

Jedi Knight II

The GeForce FX 5200 fares much better in Jedi Knight II, especially with antialiasing and anisotropic filtering enabled. Managing nearly 60 frames per second at 1024×768 with 4X antialiasing and 8X anisotropic filtering is an impressive feat for a graphics card this cheap.

Comanche 4

In Comanche 4, the GeForce FX 5200 is sent to the back of the pack again. Without antialiasing and anisotropic filtering enabled, the GeForce FX 5200 is behind even the GeForce4 MX 460. With antialiasing and anisotropic filtering turned on, the GeForce FX 5200’s performance improves relative to its competition, but overall frame rates are really too low for even casual gaming.

Codecreatures Benchmark Pro

In Codecreatures, the GeForce FX 5200 is left looking a little underpowered again. The Radeon 9000 Pro’s behavior in this test is a little erratic, especially with antialiasing and anisotropic filtering enabled, so its results have been omitted.

Unreal Tournament 2003

The GeForce FX 5200’s performance in Unreal Tournament 2003 isn’t exactly awe-inspiring. The card is beaten handily by the GeForce4 MX 460 with low-quality detail settings, and the FX 5200 just trails the MX 460 with Unreal Tournament 2003’s high-detail settings. In all cases, the GeForce FX 5200 is well behind the Radeon 9000 Pro.

With 4X antialiasing and 8X anisotropic filtering enabled, the GeForce FX 5200 looks like a much better competitor for the Radeon 9000 Pro. However, the frame rates produced by both cards with antialiasing and anisotropic filtering enabled are really only high enough for gaming at low resolutions.

Problems in Unreal Tournament 2003
The GeForce FX 5200’s performance in Unreal Tournament 2003 isn’t particularly impressive, but there are problems beyond simple performance. The “Application” image quality setting in NVIDIA’s drivers forces trilinear filtering in 3D applications. It also creates all kinds of rendering problems for the GeForce FX 5200 in Unreal Tournament 2003. Check it out:


Unreal Tournament 2003 with “Quality” image quality settings


Unreal Tournament 2003 with “Application” image quality settings
The way it’s meant to be played?

If the “Application” image quality setting is changed to “Quality” or “Performance,” the scenes are rendered correctly. However, with the “Application” image quality option selected, the rendering problems persist with even the lowest in-game image quality settings and the latest Unreal Tournament 2003 patch. This same problem also occurs with the GeForce4 MX 460, but not with any other card I’ve tested.

So, while Unreal Tournament 2003 is certainly playable on the GeForce FX 5200, trilinear filtering with the “Application” image quality setting is apparently too much to ask.

Serious Sam SE
We used Serious Sam SE’s “Extreme Quality” add-on to ensure a mostly level playing field between the different graphics cards. In these tests, the Radeons are actually doing 16X anisotropic filtering, the GeForce4 Ti 4200 8X and GeForce FX 5200 are doing 8X anisotropic filtering, and the GeForce4 MX 460 is doing only 2X anisotropic filtering.

The GeForce FX 5200 is at the bottom of the pile in Serious Sam SE in terms of average frame rates. What happens when we look at frame rates over the course of the entire benchmark demo?

The GeForce FX 5200 is consistently the slowest card throughout the demo, but the fact that the card stutters at the start of the demo at higher resolutions sets off a few alarms. What happens when we crank up the antialiasing and anisotropic filtering options?

With 4X antialiasing and 8X anisotropic filtering, the GeForce FX 5200 bottoms out again. How do things look over the course of the benchmark demo?

At the start of the demo, it’s not pretty. The GeForce FX 5200 jumps all over the place in the first moments of the demo, a behavior that gets worse as we crank up the resolution. After its initial fit, the GeForce FX 5200 settles down and produces low but relatively consistent frame rates. These big performance dips indicate a severe playability problem with Serious Sam SE on the GeForce FX 5200. Perhaps a future driver update could fix the problem, but for now, the stuttering is hard to ignore.

3DMark2001 SE

Overall, the GeForce FX 5200 matches up with the GeForce4 MX 460 in 3DMark2001 SE. How do 3DMark2001 SE’s individual game test scores shape up?

The GeForce FX 5200 ties the GeForce4 MX 460 at the back of the pack in one of 3DMark2001 SE’s game tests, and sits behind the GeForce4 MX 460 in another. In the “Nature” game test, which requires at least DirectX 8-class hardware, the GeForce FX 5200 is barely half as fast as the Radeon 9000 Pro.

3DMark03
NVIDIA doesn’t think there’s really any point to using 3DMark03 to evaluate graphics cards, but we beg to differ. While it would be inappropriate for a graphics review to lean heavily on the results of any one graphics benchmark, when used in conjunction with as wide a variety of tests as we’ve assembled today, 3DMark03’s game tests do have value.

Only able to complete one of 3DMark03’s game tests, the GeForce4 MX 460 takes a bullet for the GeForce FX 5200 and ends up at the back of the pack. Let’s look at the individual game tests.

In 3DMark03’s first three tests, the GeForce FX 5200 can’t keep up with the Radeon 9000 Pro.

Only the Radeon 9600 Pro and the GeForce FX 5200 are capable of running the gorgeous “Mother Nature” game test, which requires DirectX 9-class hardware. Since the Radeon 9600 Pro costs more than twice as much as the GeForce FX 5200, it’s no surprise that its performance is much more impressive here.

3DMark03 image quality
The GeForce FX 5200 is capable of running 3DMark03’s DirectX 9-class “Mother Nature” game test, but that doesn’t mean the card’s output looks right. Check out the following pictures taken from frame 1799 of the Mother Nature game test. The first picture is from the reference software DirectX 9 renderer, the second is from ATI’s Radeon 9600 Pro, and the third is from the GeForce FX 5200.


DirectX 9’s software renderer


ATI’s Radeon 9600 Pro


NVIDIA’s GeForce FX 5200

You can click on the pictures and look at the high-quality PNG images to see the difference more clearly, but the GeForce FX 5200’s sky definitely looks “off,” even when compared with the Radeon 9600 Pro. Is NVIDIA falling back to lower color precision in order to improve performance? Probably, but for the GeForce FX 5200, they really need to. Just because the GeForce FX 5200 supports DirectX 9 applications doesn’t mean it will look as good as other DirectX 9-capable cards.

SPECviewperf
So far, our performance testing has focused on gaming and synthetic feature tests. How does the GeForce FX 5200 perform in workstation-class applications?

Pretty well, at least when compared with the Radeon 9000 Pro. At least for a budget workstation, the GeForce FX 5200 appears to be a solid pick.

Antialiasing
Next, we’ll isolate the GeForce FX 5200’s performance in different antialiasing and anisotropic filtering modes. We’ve already had a glimpse of the card’s performance with 4X antialiasing and 8X anisotropic filtering in our game tests, but this section will provide a more thorough look at the card’s performance with these image quality features.

Edge antialiasing

The GeForce FX 5200’s antialiasing performance isn’t particularly impressive, but the card fares relatively well against the Radeon 9000 Pro. The fact that the GeForce FX 5200 is, comparatively, so slow at 640×480 is a little disappointing, though.

Antialiasing quality: Radeon 9600 Pro
For some odd reason, the Radeon 9000 Pro refuses to change antialiasing levels in 3DMark03, so we’re using screenshots taken with a Radeon 9600 Pro as a comparative reference for antialiasing quality.


Radeon 9600 Pro: No antialiasing


Radeon 9600 Pro: 2x antialiasing


Radeon 9600 Pro: 4x antialiasing


Radeon 9600 Pro: 6x antialiasing

Antialiasing quality: GeForce FX 5200


GeForce FX 5200: No antialiasing


GeForce FX 5200: 2x antialiasing


GeForce FX 5200: 4x antialiasing

The GeForce FX doesn’t support 6X antialiasing like the Radeon 9600 Pro. With both cards at 4X AA, the FX 5200 doesn’t smooth out jagged edges as well as the ATI card.

Texture antialiasing
To measure texture antialiasing, I used Serious Sam SE with various texture filtering settings.

The GeForce FX 5200’s anisotropic filtering performance is well behind its competition; even the GeForce4 MX 460 manages better scores with its limited anisotropic filtering capabilities. Of course, the GeForce FX 5200 does have one advantage over the Radeon 9000 Pro: it can do trilinear and anisotropic filtering at the same time.

We only tested the GeForce FX 5200 with its “Application” image quality setting, but here’s how the card looks with its other image quality options:


GeForce FX 5200: Standard trilinear + bilinear filtering


GeForce FX 5200: “Performance” 8X anisotropic filtering


GeForce FX 5200: “Quality” 8X anisotropic filtering


GeForce FX 5200: “Application” 8X anisotropic filtering

As you can see, NVIDIA’s “Application” quality option produces the sharpest textures.

Anisotropic filtering quality: Radeon 9000 Pro
For comparative purposes, here’s the Radeon 9000 Pro with trilinear and “Performance” 8X anisotropic filtering. Because the Radeon 9000 Pro can’t do anisotropic and trilinear filtering at the same time, this is as good as it gets.


Radeon 9000 Pro: Standard trilinear + bilinear filtering


Radeon 9000 Pro: 8X anisotropic filtering

While 8X anisotropic filtering looks good on the Radeon 9000 Pro, the GeForce FX’s “Application” image quality setting, which uses anisotropic filtering and trilinear filtering, looks much better.

Texture filtering and mip map levels: GeForce FX 5200
Now let’s see exactly what NVIDIA is doing to texture filtering with these various quality slider settings. I’ve used Q3A’s “r_colormiplevels 1” command to expose the various mip-maps in use and the transitions between them.


GeForce FX: bilinear + trilinear filtering


GeForce FX: “Performance” 8X anisotropic filtering


GeForce FX: “Quality” 8X anisotropic filtering


GeForce FX: “Application” 8X anisotropic filtering

The GeForce FX’s “Performance” and “Quality” mip map transitions aren’t as smooth as the “Application” image quality setting, whose transitions are gorgeous.

Texture filtering and mip map levels: Radeon 9000 Pro


Radeon 9000 Pro: bilinear + trilinear filtering


Radeon 9000 Pro: 8X anisotropic filtering

As evidenced by the brutal mip map transitions, trilinear filtering is sorely missing on the Radeon 9000 Pro with anisotropic filtering enabled.

Overclocking
Unfortunately, there’s not much I can tell you about my overclocking experiences with the Gigi FX5200P. I was able to get the graphics core up to a stable and artifact-free clock speed of 300MHz with little effort, but any attempt at memory overclocking failed. That’s odd, especially considering that the card’s Samsung memory chips are rated all the way up to 500MHz. A little extra memory bandwidth could help the Gigi FX5200P’s performance, too.

Conclusions
There are really two issues to consider as I conclude this review. First, there’s the performance and features of NVIDIA’s NV34 graphics chip and the GeForce FX 5200 cards that make use of it. Second, we have the Gigi FX5200P, Albatron’s take on the GeForce FX 5200.


This is Dawn on the GeForce FX 5200


This is the Dawn we drool over

To NVIDIA’s credit, the GeForce FX 5200 largely makes up for the travesty that was—and still is—the GeForce4 MX. With the GeForce FX 5200, NVIDIA can claim full DirectX 9 feature support across its entire graphics line. Even the cheapest GeForce FX 5200s, which retail for as little as $67 on Pricewatch, support all the DirectX 9 features that make NVIDIA’s “Dawn” demo look so good, and that’s an impressive feat. However, it’s important to make a distinction between feature capability and feature competence. As we’ve seen in our testing, the GeForce FX 5200 is considerably underpowered in situations where even less technically capable graphics cards excel. Sure, the GeForce FX 5200 supports high precision data types, pixel and vertex shaders 2.0, and a host of other advanced features, but it doesn’t seem to perform particularly well when those features are exploited. So, while the GeForce FX 5200 is technically capable of all sorts of DirectX 9-class eye candy, I have to question just how well the card will handle future DirectX 9 games and applications. After all, a slideshow filled with DirectX 9 eye candy is still a slide show.

The GeForce FX 5200 isn’t as capable a performer as its feature list might suggest, but that doesn’t mean cards based on the chip aren’t worth picking up. At only $67 online, the GeForce FX 5200 is a few dollars cheaper than the Radeon 9000 Pro. For gamers, the Radeon 9000 Pro offers better and more consistent performance. However, for average consumers and business users, the GeForce FX 5200 offers better multimonitor software, more future-proof feature compatibility, and silent and reliable passive cooling. The GeForce FX 5200 is a great feature-rich card for anyone that’s not looking for the best budget gaming option.

So what about Albatron’s Gigi FX5200P offering?

Unfortunately, it looks like Albatron may have tried to cater to gamers a little too much with the Gigi FX5200P. The Gigi FX5200P retails for $95 on Pricewatch, which is pricey compared to GeForce FX 5200 cards from other manufacturers. The Gigi FX5200P does feature 128MB of memory, but since I wouldn’t recommend a GeForce FX 5200-based graphics card to budget gamers, I don’t see much point in having 128MB of memory on the board. With 128MB of memory, Albatron’s Gigi FX5200P is too slow to appeal to gamers and too expensive to compete with the $67 GeForce FX 5200 64MB cards that will appeal to budget-conscious businesses and consumers.

Albatron does, however, have plans for a whole slew of budget GeForce FX 5200-based offerings, including versions of the card with 64MB of memory, 64-bit memory buses, and even a PCI variant. Those cards should be cheaper than the $95 Gigi FX5200P and more appropriate for consumers and business users.

Comments closed
    • Anonymous
    • 16 years ago

    What a f*** was that??? My GeForce FX is 1.5x – 2x better than that Albatron. You have 3DMark2003 1024x768x32 1200 i have 1900 ?! How is that possible??? You dont now witch card to test??? Take the test from you site off,its bu*****t Learn benchmark. It was on 100% 64bit VGA. Test 128Bit and then make end dissions

    • Anonymous
    • 17 years ago

    This is almost off topic….

    Reading through the review I noticed the VillageMark scores for the ti4200 appeared a bit low. Running some quick tests on my 128mb ti4200 (not the 8x version) at default core and clock of 250/444 I got the following.

    Tested at 1024x768x32 both times

    No AA or AF
    My rig – 116fps
    TR – 98
    I’m 20% faster

    4X AA + 8X AF
    My rig – 25fps
    TR – 18fps
    I’m 40% faster

    Considering my system is only a 1.2ghz tbird with 256mb pc133, this is pretty weird. Like the TR rig, I have xp pro w/ sp1. So why am I getting higher frame rates with an inferior system? My memory is running 70mhz slower than the TR ti4200!!!

    My card is a good overclocker, so I retested at 250/512 (the TR speeds).

    No AA or AF
    My rig – 122fps
    TR – 98
    I’m 24% faster

    4X AA + 8X AF
    My rig – 25fps (weird, no increase)
    TR – 18fps
    I’m still 40% faster

    Does this make sense to anyone? Am I blessed with an overachieving rig :P I am running some old drivers, det 30.82

    Dan

      • Dissonance
      • 17 years ago

      If I had to guess, I’d bet on it being a driver thing. I’m using the “Application” setting with the latest official drivers, which at least for the FX series, offers much better image quality.

        • Anonymous
        • 17 years ago

        got it. I had trilinear filtering disabled when I ran my tests. After enabling it, my scores are in line with yours.

        Dan

    • Anonymous
    • 17 years ago

    #63, Buub:
    The *[

    • Anonymous
    • 17 years ago

    #60, Buub

    There are some notable omissions in your text. For example, ATi has been offering unique innovations since the original Radeon…some of the GF3’s LMA you mention reflected things that had already been done in the Radeon. It is maintaining a lead in both performance and features that was new with the 9700.

    Now, it is true that it takes significant competence to refine register combiners to the degree nVidia did for the GF 3, but the (very significant) importance of that was the threshold crossed by that accomplishment, not in the significance of engineering innovation present in it.

    What has been happening is that ATi, for example, has had the resources and engineers to innovate more in the given time period…this is not a new trend: they started behind, have consistently been implementing more improvements, and finally caught up on all fronts.

    What nVidia has been focusing on instead of design based innovation is process and memory technology based improvements (which of course then takes engineering to implement). The nv30 is indeed an ambitious project of this nature, but the limitations of technology restricted what they could do. In contrast, the R3x0 family is scaling to over 400 MHz (yes, the 9800 is capable of that…the competition does not require such a part yet, however) on a more limited process, and this is through designing effort by ATi. This relates to success in the mobile space as well.

    This is not to say that nVidia engineers are incompetent at all, it is just that their current philosophy seems to me to preclude them playing catch up on both fronts as ATi did (i.e., I expect NV40 to offer significant functionality advantages but fall short in performance compared to ATi’s offering at the time, but that is based on a whole body of speculation about the timeframe of upcoming parts from both companies that I can’t summarize effectively here. Note the apparent dependence on process (IBM) to implement this.

    BTW, the ‘flexibility’ of the nv30 is this:
    It has 4 (as opposed to the R300’s 8) fp processing units (also they do double duty for PS 2.0 texture ops, where the R300 has 8 other units for that). Clear advantage: these units have the capability to calculate at fp32 precision (though in the nv30 they can’t output 4 component fp32 results in one clock). There are a whole host of other advantages/disadvantages that can be discussed for them (besides the number of units), and I’ll refer you to searching beyond3d for further discussion (http://www.beyond3d.com/forum/viewtopic.php?t=5150 is an example, and there is an NV30/R300 speculative comparison article on the site as well).

    It has, in addition to this, less capable computation units for integer (12 bit fixed point) where the bulk of its performance is offered. Namely, the flexibility here is in a backwards direction, where you trade features for performance. Optimization efforts are focused on best making use of this (which current DX 9 PS 2.0 and ARB_fragment_program specifications don't allow very often), which is related to Carmack's Doom 3 discussion about nv30 specific (slightly leading R300 most of the time, sometimes losing) path versus ARB path (half the speed of R300) performance.

    The vertex processing is somewhat similar, except that vertex processing is all 32 bit, so it is much more successfully done (and actually fits well with the idea you echo that the nv3x is designed for workstation usage, since simple vertex processing is still very important there). Actually, vertex processing is where I think nVidia can establish a lead easily (they only fail now because performance drastically lowers with complex vertex processing in comparison to an R300 at a lower clock speed...i.e., they only "lose" in performance, which is a straightforward problem to address), the problem being that pixel processing is not exactly unimportant, and increasing chip real estate for both is a significant challenge...is IBM's process up to it? (something that, again, ties into a whole body of discussion, this time about speculation as to what nv40 will be).

      • Buub
      • 17 years ago

      Yes and the Opteron is a warmed-over Athlon… :)

      You can make any achievement look trivial if you cast it in the right light.

      I don’t think we need to beat the dead horse of nVidia pushing technologies like 32-bit color, Hardware T&L, hardware-assisted FSAA in a consumer card. Obviously both companies have some of the finest engineers in the industry. I think we agree on that.

      And yes you’re right to a point about them concentrating more on process/physical technology improvements. They have pushed that in the past to get ahead.

      But let’s not forget that nVidia basically drove the entire DX8.0 spec, which added a ton of new useful functionality to the API. Granted Microsoft may have moved in that same direction without them, but then again, they might have screwed it up too. This is Microsoft after all. :) And yes, I will acknowledge that ATI added some key contributions to shaders after that as well. The point is that they have contributed across the board. Some was evolutionary, some was revolutionary.

      Finally, you have to admit totally reinventing the GF core as a totally modular floating-point-enabled core with the advanced shader technology it has is a little bit more than just pushing process improvements. That’s a fundamental change in every aspect of the design. Yes ATI did it as well; I’m not saying nVidia is better because they did this obviously, I’m just saying that they had no problems designing and implementing the architecture (though the production, as we know, wasn’t so well executed).

      I think the workstation angle is the most accurate. Here is what I think was going on in the heads of the nVidia designers:

      “Let’s make the most-powerful kick-ass GPU ever developed.”

      But what they meant by this was to make the GPU that could do the most. Which is different than making a chip that will be the fastest in gaming. Their goal probably works really well on the Quadro (and evidentally it does by the reviews I have seen).

      But when it comes to gaming, maybe they should have had a two-chip implementation. The super-chip for Quadro, and a cut-down chip that was good at gaming but left some of the really advanced stuff behind.

      With their heads down in development trying to make the “most powerful GPU ever”, they lost sight of what it was they actually needed to create for their largest group of customers: gamers.

      This makes the most sense to me: nVidia trying to make the most powerful, most advanced GPU ever developed, and in the process making a chip that just wasn’t very good for gamers. ATI did better here because they were more focused on exactly what they needed to go after, and it paid off in a GPU that has just the right number and complexity of features to make it a very good high-end gaming card.

    • MaceMan
    • 17 years ago

    Actually, I agree; reviews of fanless videocards lumped together would be appealing. Or at least a way to compare them. I’m developing an almost irrational hatred of fans and would cough up some performance to be rid of them.

    On my list of things that would make my shorts twist is the ability to select (checkbox) cards from the roundup master list, and see them tossed onto 3-4 charts. You know, like how you can select 3-4 stocks and have them tracked on a little chart? You’d have to standardize the chart dimensions for a “static overlay” type of thing or… well, I’m yapping about things out of my league. Anyone have ideas how this could be done in a reasonable fashion (to get it rolling)?

    • Anonymous
    • 17 years ago

    I agree that 5200 sucks. But this review is unfair. Being fanless, it should be compared to ATI recent fanless offering, the Radeon 9200 – which also underperforms the Radeon 9000 pro. See beyond3d or other websites and you see that ATI does not perform any better in the fanless department either.

    Why can’t ATI or NVidia offer a reasonably fast fanless solution? Would be great for my cheap home theater PC. *sigh*

      • Dissonance
      • 17 years ago

      Given the small size of the fan on the Radeon 9000 Pro, I wouldn’t be surprised to see it running just fine with a passive heat sink the same size as what’s found on the 5200.

      Suggesting that we compare graphics cards based on cooling characteristics is a little dangerous, though. If we did that, we’d have nothing to compare the GeForce FX 5800 Ultra to because nothing from ATI requires nearly that amount of cooling.

      And let’s not forget that, against the Radeon 9000 Pro, this GeForce FX 5200 had a memory size advantage of 64MB.

    • Pete
    • 17 years ago

    Diss, allow me to point out a few minor mistakes, as is my annoying habit. ;)

    *Fill rate table, incorrectly listing the 9000P as having 4 texture units per pipe (should be 1).
    *3DM03: You used frame 1779 for the reference image, and 1799 for the cards. ๐Ÿ™‚
    *Also, you should note that the 9600 uses JGMS AA with gamma correction, while the 9000 uses OGSS AA with no gamma correction–so you can’t really substitute one for the other. The FX uses JGMS AA at 2x, and OGMS AA at 4x.

    I dunno, this 5200 performed a bit better than I thought it would. It’s still not a very good deal, IMO, and it should become a worse one when the 9600 non-Pro arrives in volume and its prices drop. The Dawn pics at the end were perfect–a succint summation of just how DX9 the FX 5200 is.

    • Terribatronic
    • 17 years ago

    There is one half decent 5200 and thats the asus 9520 , it comes with dual dvi ports and vivo and cost about 60% of what i paid for my ti4200vivo card about 9 month ago .

      • JustAnEngineer
      • 17 years ago

      But the GeForceFX 5200 doesn’t even provide 60% of the performance of your GeForce4Ti 4400. If you switched from what you have to anything less than a Radeon 9500 Pro, you would be *[

    • Anonymous
    • 17 years ago

    #42:
    crazybus, here are some approximate prices:

    $199 : 9600 Pro
    $129 : 9600
    $199 : 5600 Ultra
    $149 : 5200 Ultra

    Note that the $129 is from the Beyond3D review, and I couldn’t find confirmation elsewhere. A maximum of $149 might be a safer assumption, matching the proposed 5200 Ultra price.

    I think some confusion results from the current list prices for the 9000 Pro….and the “DX 9” monicker implying more value from the FX 5200 non Ultra.

    If you cut through the hype and keep in mind that the nv3x cores depend on using integer processing for full performance, and even then the nv34 doesn’t compete well with the 9000/9200, the monicker “DX 9” is more than slightly misleading. Without performance to back it up, it has trouble being anything other than a DX 8.1 part, and a really slow one to boot…unless you are someone who’d otherwise be stuck using software nv30 emulation for DX 9 level shader development, or some such, and couldn’t afford more than a 5200 for some reason (i.e., not a gamer), it seems it doesn’t really seem to be a good choice.

    Oh, little mentioned factoid : the ATi parts that support PS 1.4 offer more dynamic range than GF FX parts running PS 1.4 (due to that dependence on integer processing for performance for the FX parts…they can all definitely offer much more dynamic range, just not at a speed resembling their performance for any other task, so this seems to be disabled for PS 1.4 level shaders).

      • JustAnEngineer
      • 17 years ago

      Radeon 9600 hasn’t appeared yet. I am hoping for versions with fast RAM from Gigabyte or Hercules that can be overclocked beyond Radeon 9600 Pro defaults. ATI Radeon 9600 Pro is listed for pre-order at $190.

    • droopy1592
    • 17 years ago

    Any real specs for r400? The NV35’s 28GB of mem ain’t nuffin to suhneeze at.

    • Anonymous
    • 17 years ago

    FYI, someone over at the beyond3d forums posted the(a?) shader for the Dawn demo and over half of the instructions used DX 8 fixed-point instructions as opposed to DX9 floating-point.

    This is only possible by using Nvidia’s OpenGL extensions. DirectX 9’s PS2.0 and OpenGL’s ARB_fragment_shader don’t allow the mixture of fixed-point and floating-point instructions in the same shader.

    In order to reach even a fraction of their shading potential, NV3x cards *MUST* use the custom OpenGL extension and they must mix fixed-point with floating-point.

    • Anonymous
    • 17 years ago

    NV35 remains 4×2 architecture

    28.8 GB/s bandwidth

    By Fuad Abazovic: Tuesday 29 April 2003, 15:35

    AS WE SAID A WHILE BACK, the NV35 memory controller is a redesign. But as we previously said in our article memory controller was redesigned but there was little time to do many other modifications, and the limitation in the memory controller needed fixing, not the GPU speed nor the memory.
    A memory controller and support for DDR 1 at 450 MHz will give Nvidia 28.8 GB/sec real bandwidth while NV30 had only 16 GB/s, making this new card almost twice as fast if you’re counting memory bandwidth.

    The pipeline number will remain four while the NV35 will use two texture memory units (TMUs) that will give eight textures per clock but just if multi texturing is used.

    Redesigning memory controllers is complex work and Nvidia put a lot into it, since this is what a future design needed to outperform ATI’s efforts.

    But despite Nvidia’s efforts, we are certain that ATI still has many cards to play in this graphics battle. ATI can easily use memory clocked at, let’s say, 400MHz, and possibly give the NV35 trouble.

    We are sure that both Nvidia and ATI won’t quit this leadership fight as it’s a battle for mainstream business, with both sides showing equal determination

    Not the topic but just one question. Why the %^&$ did Nvidia opt for 4×2 again!

      • droopy1592
      • 17 years ago

      I don’t know either. Maybe they think all the next gen games will be multi texing

      • Autonomous Gerbil
      • 17 years ago

      My guess would be that they only had so much time to rework the chip that they had to make a call about what they could get done in the amount of time available. You’ve got to keep in mind that this isn’t a whole new chip, but a rework of their first failed attempt. ;)

      • Forge
      • 17 years ago

      Because damned near every core they’ve ever made is 4*2? GeForce 1 couldn’t be, they needed to keep die size down, but GF2 GTS and every core after that was. ATI, meanwhile, has done 2*3, 4*2, and now 8*1. It looks like NV hit a lucky combo out of the gates, and ATI went out and did research.

      I’m really starting to think that the chip designers (not driver writers, not sales guys, just the IC design techs) at ATI are godlike, while the IC techs at NV are simply parroting a lucky design, with ever-shrinking die sizes and ever-more-beta memory tech keeping that gimp design afloat. The .15 R300, with simple DDR1, matches or beats the .13 DDR2 GeForce FX 5800 Ultra in every category, even managing lower heat output while outperforming the NV30! …… PLUS Nvidia had the better part of a YEAR to study the R300 and then put out their response!

      I used to have such respect for NV, but they haven’t done anything admirable in far too long.

        • axeman
        • 17 years ago

        Yeah I agree, NVidia is becoming the new 3dfx, slow to adapt, trying to wring more out of old ideas instead of innovating. Its the same old top of the heap complacency that leads to being caught off guard. 3dfx caught off guard by the TNT/2 somewhat and then majorly by the Geforce, AMD’s Athlon obliterating the P3 performance wise at launch, I’m sure there’s more.

        • Buub
        • 17 years ago

        Forge, while I can agree with some of your statements, in general, I don’t think it’s terribly accurate overall.

        Yeah the nVidia guys might have gotten a little comfortable and not worked as hard as they should. But there’s nothing like strong competition to wake you up, and they now have that.

        q[

          • JustAnEngineer
          • 17 years ago

          If there are several million extra transistors in the GPU on a $400+ 3D video card hyped for over 12 months as the most revolutionary and greatest thing ever invented for 3D gaming, then those expensive heat-generating transistors darn sure ought to be doing something that is gaming related.

          Theoretically, GeForceFX 5800 Ultra is *[

    • dukerjames
    • 17 years ago

    so what does this mean to Nvidia?
    The dawn of cinematic real time CG ?
    or the dawn of down of Nvidia?

    • Anonymous
    • 17 years ago

    The Radeon 9000 beats the 9600 in the Serious Sam tests . . . I’m guessing that the results were accidentally switched?

      • Dissonance
      • 17 years ago

      Nope, they’re correct. Because I’m using Serious Sam SE’s “Extreme” quality settings, things look a little different on each card. The 9000 Pro is running 16X aniso and so is the 9600 Pro, but the 9600 Pro is doing trilinear filtering while the 9000 Pro is not because it can’t do aniso + trilinear.

    • Loie
    • 17 years ago

    http://tw.nvidia.com/view.asp?PAGE=exec_profiles if only it had contact info... :(

      • Autonomous Gerbil
      • 17 years ago

      Hmm, after looking at that page I have come to 2 conclusions. 1)Their Carmack must not be as good as the other Carmack, and 2)Dwight Dierks may have eaten the code that fixed the driver problems in UT2003.

    • crazybus
    • 17 years ago

    I think we should keep in mind that there is also a 5200 Ultra version clocked at 325/325, which will be going against the 9200Pro at 300/300. I guess we’ll see who wins that one.

      • Anonymous
      • 17 years ago

      except the 5200 (non-ultra) is getting creamed by the radeon 9000, what makes you think the 5200-ultra is going to be a stronger competitor for the 9600?

        • crazybus
        • 17 years ago

        It was my understanding that the 9200 Pro was going to be priced similarly to the 5200 Ultra.

          • Yahoolian
          • 17 years ago

          You mean 9600 pro..

    • Anonymous
    • 17 years ago

    I got a laugh out of the comments about “The way it’s meant to be played?” under the Unreal Tournament 2003 screenshot. It was also interesting to see what Dawn looked like in the conclusion. All this says a lot about what Nvidia is doing with both its hardware and drivers these days. I’m glad I don’t an Nvidia based card any more, but I hope they get back on track since competition in the marketplace is for the good of everyone.

    • Anonymous
    • 17 years ago

    I’m a bit confused by the concluding snapshots of Dawn.

    Is the GFFX 5200 running at a lower FP mode or other settings?
    Or does the card produce different output than the GF FX 5800 with identical settings?

    Or am I confused, and it’s just intended to show that the 5200 doesn’t quite live up to expectations?

    Please enlighten :)

    • Anonymous
    • 17 years ago

    Why the hell does Nvidia make it’s cards so strapped for bandwidth? Christ, the GF2 Pro and GF3 Ti200 have as much bandwidth as this POS! And when you consider that most OEMS will probably outfit their FX5200s with 64-bit memory buses, it just gets worse! If ATI can make a cheap, decent performing DX9 level card (the 9600) why can’t Nvidia?

    • Anonymous
    • 17 years ago

    its kinda shocking how weak this thing is, according to your graphs, this is not much more than a glorified 4MX (which was, really, just a glorified 2MX)… However, it’s cheap, so it just revamps the entry-line, making the clueless customer believe he/she’s getting so much more than just last year’s card because of the marketing motor pushing it forth.

    sad.. i used to be a die-hard nvidia fan, and now i’m considering ati for my card upgrade this year

    • muyuubyou
    • 17 years ago

    So it was you, Droopy? ;)

    But it was “tehn dolrah to ecspensiv” :D

      • droopy1592
      • 17 years ago

      Actually it was *[

    • YeuEmMaiMai
    • 17 years ago

    they actaully released that thing to the general public? Well at least the 9000Pro users are happy that they are still above the 5200 in terms of performance. Crap even my old ancient 8500 at 300/300 is faster than that peice of crap.

      • atryus28
      • 17 years ago

      Hey you leave that ancient 8500 alone. That’s what I’m still usin. Although I am running it @ 308/306

    • AmishRakeFight
    • 17 years ago

    you’re way too politically correct in your conclusion….couldn’t you just say “it sucks!”?

    • droopy1592
    • 17 years ago

    And to think, i invented that line:

    *[

    • Kurlon
    • 17 years ago

    Nope, is to escpensiv…

    Sorry about that, any ways, I’m still stuck with the ‘horror’ of a GeForce 2MX, pre speed binning of those parts, in a dual Xeon box. ’bout the only problems I’ve noticed is no DX9 support, wah, and I don’t run most games above 800×600. Looks like I’ll be waiting one more product cycle to upgrade at least. If I can push a Voodoo 1 through the Quake 3 era, I can get a few more months out of this GF 2MX. : )

    • Anonymous
    • 17 years ago

    OMG… Can I just throw up on Nvidia now?

      • eitje
      • 17 years ago

      that’ll teach them!
      where are Nvid’s goosebump & oily skin sliders now? HUH?
      there WAS NO lubing! there WAS NO chilly!

    • Anonymous
    • 17 years ago

    Since the focus of the review was the 5200 FX, I think it would have been a good idea to have the AF benchmarks with an extra “dashed” line (same color) for it for “Quality” AF. I mention this because it’s nearest competitor, the 9000, is shown to unfair advantage with the 5200 doing Application mode and trilinear + aniso at the same time.

    I know your screenshots at the end explained things thoroughly, but the graphs still leave an unfair impression of how they compare through the article before the reader gets to them.

    Also, I think you should have posted a bit more of a warning along with recommending 64-bit bus card at the end…that’s some pretty horrible bandwidth and fill rate that would result, I think.

    Also, before

      • Dissonance
      • 17 years ago

      Since I don’t recommend the 5200 for gaming at all, the limited bus bandwidth that a 64-bit bus would yield isn’t really an issue. Average consumers and business users wouldn’t notice the difference.

    • R2P2
    • 17 years ago

    page 2:
    q[

    • Anonymous
    • 17 years ago

    I agree with #9, it’s a marketing GPU. It’s designed for OEM’s who want to put ‘DirectX 9 compatible’ on their displays. Anyone who buys this as a standalone card is a fool; find a used or bargain sale GF3 or GF4 or Radeon card for the same price then wait the few years before DX9 matters for games ;)

      • JustAnEngineer
      • 17 years ago

      I can almost guarantee you that the sleazy system builders will advertize their systems “with GeForceFX graphics.”

        • wesley96
        • 17 years ago

        The horror implied here is just as bad as, if not worse than, those system builders putting GeForce4 graphics on the system that comes with GF4MX420.

          • JustAnEngineer
          • 17 years ago

          Yep. NVidia’s evil marketing geniuses are still hard at work. They developed a strategy of splitting the market into three tiers with a common brand, and it has worked very well for them.

          I have to admit that I’m a bit disappointed by the latest crop of video cards. Obviously, GeForceFX 5200 is a joke for all but very casual gaming. GeForceFX 5600 looks too expensive. I am still waiting to see if we get ODM (Gigabyte, Hercules, etc.) Radeon 9600 cards with unusually fast memory chips installed that might be highly overclockable for a reasonable price. As it has since its introduction, the Radeon 9500 Pro is still hanging in there as the best value for gaming.

          • JustAnEngineer
          • 17 years ago

          Also– Nevermind the “horror” of a GeForce4MX 440. Consider how many systems were purchased (and are still being purchased) with the GeForce4MX 420, with its totally-inadequate memory bandwidth.

            • StartX
            • 17 years ago

            Is Geforce 5200 faster that ATI RADEON 9000 only, not the pro…. Is it faster than Geforice 3 Ti200 64mb. ? I need help

    • muyuubyou
    • 17 years ago

    Is this GPU worse than the now-aged one in the XBOX? or is it for the higher resolutions…

    • atryus28
    • 17 years ago

    So this is basically a cheap POS card not worth blinking at. I won’t be seeing that time come back again. Not that the review itself was bad but regardless that felt like a waste.
    I couldn’t recommend this card to anyone.

      • eitje
      • 17 years ago

      it’s important to HAVE reviews like this, to SHOW people how it’s a waste of money. i can’t count the times i wished someone had done a review on something i was looking to purchase – and every time, i bet someone said “eh, this isn’t worth reviewing”.

        • atryus28
        • 17 years ago

        I didn’t say that it wasn’t worth ANYONE’s time just not mine hehe. I should have known better because nVidia’s current iterations of a video card suck. Seeing as you can still find a GF 2 gts and the same price as this 5200, it’s blantently obvious that it sucks the big one.

        I personally shouldn’t have read this review but I had nothing better to do at the time since I work third shift. I also have this uncanny habit of reading almost all of tech-reports reviews.

    • EasyRhino
    • 17 years ago

    Gee, the performance actually made the GF4MX look pretty good.

    And yes Diss, you ARE a grumpy old man :)

    • muyuubyou
    • 17 years ago

    I guess DX9 support here is a marketroid feature. Who needs DX9 when only old games run smoothly?

    Maybe they needed to release a fanless card to compensate for the leaf blower.

    • NeXus 6
    • 17 years ago

    I really see no point in releasing a DX9 video card that will not be any good for future gaming. For the the price, I suppose some people will think they’re getting a steal–until that new game comes out, which will probably only be playable at 800 x 600. And even then it will probably be choppy with all of the eye candy on. So, I don’t see the point other than NVIDIA taking advantage of consumers/gamers that don’t know any better. I realize there’s a market for that price range, but NVIDIA’s PR dept. is deceiving gamers into thinking that this video card will play those future DX9 games (with all of the eye candy ON) with acceptable frame rates. It was the year 2003 that last time I checked. And, playing a new game at even 800 X 600 is, IMO, ancient.

    My GeForce 3 Ti500, which is getting old, still beats this card for the most part. I get a 3DMark2001 score of 8000+ at default settings. Sad…

      • T_Dawg
      • 17 years ago

      I think you’re fooling yourself if you think you will be able to run next generation games (Doom3, Half Life 2, Deus Ex 2, etc) at high resolutions with the current generation of cards.

      Over the last couple years as hardware has out-paced software developement, I think some people have developed expectations that all games should be run at 1280 x 1024 w/ AA/AF at 60fps with all eye candy turned on., and this next generation of game engines are just gonna cripple the current generation of cards.

      You just need to look at the FPS you see in UT2k3 or 3dMark2k3 to see what to expect.

    • wesley96
    • 17 years ago

    I sum it up as ‘performance hopping between GF4MX460 and GF4Ti4200, while offering DX9 compliance’. Price is okay, but this is a lukewarm card.

    • Rakhmaninov3
    • 17 years ago

    Nice review! I was a bit over-excited ’cause of it’s cheap price and DX9 capability, but it looks like it’s a dog when it comes down to it, at least when you consider its sub-MX performance in so many apps. I think I’ll save the $67 and wait until I can afford a 9800:)

    • atidriverssuck
    • 17 years ago

    ok for budget users I guess? Bit disappointing tho.

    But wake me when they give me a good reason to replace a ti4200.

    • gordon
    • 17 years ago

    What’s wrong with good ole bar graphs, are they out of style now or something =/? Nice review btw and should the 5200 really be loosing to a dx7 card on a 4 year old core in any situation =/.

      • Dissonance
      • 17 years ago

      Bar graphs do a poor job conveying performance scaling across multiple resolutions, and AA and AF settings. What would take four or more bar graphs to represent can be easily shown on a single line graph.

        • indeego
        • 17 years ago

        But dude: They’re *[<_[

          • wesley96
          • 17 years ago

          Unless they’re bars of soap, they ain’t much of use, anyway. :)
