NVIDIA’s GeForce FX 5200 Ultra GPU

WHEN NVIDIA first introduced the GeForce FX 5200, its performance was horrid. Sure the chip supported all sorts of DirectX 9 goodies, but feature compatibility with DirectX 9 doesn’t guarantee competitive or even competent performance in DirectX 9-class applications or older games. To cure the GeForce FX line’s performance woes, NVIDIA released the Detonator FX 44.03 drivers. Cheats and optimizations for 3DMark03 in the Detonator FX drivers created some controversy. However, the drivers did dramatically improve performance in many applications without degrading image quality or otherwise breaking any rules. And there was much rejoicing.

Thus, I had a renewed interest in NVIDIA’s GeForce FX 5200 Ultra. The Ultra bolsters the GeForce FX 5200’s core clock speed by 75MHz and memory clock by 150MHz, which makes it look more competitive, on paper, with NVIDIA’s mid-range GeForce FX 5600 than with its sub-$70 sibling.

Today we’re looking at Inno3D’s Tornado GeForce FX 5200 Ultra. Does it deliver on the GeForce FX 5200 Ultra’s potential? Is it fast enough to challenge the GeForce FX 5600 or the competition from ATI? The answers might surprise you.

All you need to know about NV34
The GeForce FX 5200 Ultra uses NVIDIA’s NV34 graphics chip, which I described in my GeForce FX 5200 review back in April. NV34 occupies the low end of NVIDIA’s GeForce FX line, and it offers DirectX 9-class pixel and vertex shader programs. Like NV31, which is used in NVIDIA’s GeForce FX 5600 series, NV34 has four pixel pipelines capable of laying down one texture per rendering pass. However, NV34 differs from the rest of the NV3x line in a couple of key ways:

  • Manufacturing process – NVIDIA’s other NV3x graphics chips are built using newer 0.13-micron process technology, but NV34 is built using older, more established 0.15-micron technology. The 0.15-micron process sets NV34’s core clock ceiling lower than a 0.13-micron chip could manage (and a 0.13-micron chip would also consume less power), but the older process makes sense for relatively budget parts like the GeForce FX 5200 and 5200 Ultra.

  • No color or Z compression – Unlike the rest of the NV3x line, NV34 can’t do color or Z compression. The lack of color compression should hamper the chip’s performance primarily with antialiasing enabled, but the lack of Z compression will hurt across the board. Without lossless compression schemes, NV34 doesn’t make as efficient use of the bandwidth it has available, which reduces the chip’s overall effective fill rate (or pixel-pushing power). A rough sketch of that bandwidth math appears just below.

Those are the basics. You can find a more comprehensive analysis of the chip in my GeForce FX 5200 review and preview of NVIDIA’s NV31 and NV34 graphics chips.
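To put a rough number on the bandwidth argument from the bullet above, here’s a minimal back-of-the-envelope sketch in Python. The 12 bytes of framebuffer traffic per pixel (a 32-bit color write plus a Z read and a Z write) is my own simplifying assumption, not an NVIDIA figure, and it ignores texture fetches, caching, and overdraw entirely.

```python
# Rough estimate of how uncompressed color/Z traffic caps pixel throughput.
# Assumption (not an NVIDIA spec): every pixel costs a 4-byte color write
# plus a 4-byte Z read and a 4-byte Z write = 12 bytes of memory traffic.
BYTES_PER_PIXEL = 12

def bandwidth_limited_fill_rate(bandwidth_gb_per_s: float) -> float:
    """Return the bandwidth-limited fill rate in Mpixels/s."""
    return bandwidth_gb_per_s * 1e9 / BYTES_PER_PIXEL / 1e6

# GeForce FX 5200 Ultra: 10.4GB/s of raw memory bandwidth
print(round(bandwidth_limited_fill_rate(10.4)))  # ~867 Mpixels/s, under the 1300 Mpixels/s core peak
```

Under those assumptions, raw bandwidth runs out well before the core’s theoretical fill rate does, which is exactly the gap that color and Z compression are meant to close on the other NV3x chips.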

Inno3D’s Tornado GeForce FX 5200 Ultra
For gamers on a budget, the GeForce FX 5200 Ultra should be a pretty appealing option. Like most other GeForce FX 5200 Ultra cards, Inno3D’s rendition appears to follow NVIDIA’s reference design, so the Tornado will have to rely on bundled extras and low prices in order to compete.

Memory heat sinks have been used on a number of graphics cards from various manufacturers in the past few years, but quite honestly, I’ve yet to see a graphics card whose memory cooling solution made a huge difference even in overclocking performance.

Rear-mounted memory heat sinks can also create clearance problems with some motherboards, so Inno3D isn’t missing out by leaving the Tornado GeForce FX 5200 Ultra’s memory chips bare.

The card’s 128MB of memory is mounted on both sides of the board without any extra cooling. The Tornado GeForce FX 5200 Ultra uses Hynix BGA chips rated up to 350MHz (700MHz DDR).

One factor that could aid in any overclocking endeavors is the Tornado’s auxiliary power connector, which helps feed the board with juice. The card uses a standard 4-pin MOLEX connector. Inno3D doesn’t, however, include a MOLEX splitter cable with the Tornado. Some users, especially owners of small-form-factor systems, may have trouble finding a spare power connector.

Like many other manufacturers, Inno3D is essentially re-badging NVIDIA’s reference cooler. Although the non-Ultra GeForce FX 5200 can get away with only passive cooling, the Ultra’s higher clock speeds necessitate active cooling. Fortunately, the Tornado’s fan is no louder than an average graphics card fan; processor or power supply fan noise easily drowns it out.

Like just about everyone else, Inno3D equips the Tornado with VGA, DVI, and S-Video outputs. A DVI-to-VGA adapter is also included in the box for those who want to take advantage of NVIDIA’s nView software to run dual CRT monitors. For those looking to hook up the Tornado to a TV or home theater, Inno3D throws in an S-Video-to-composite video cable, but curiously no standalone S-Video cable. Included in the box but completely useless with the card is a VIVO adapter cable; the Tornado GeForce FX 5200 Ultra doesn’t support video input at all.

Inno3D includes a number of different software titles and games with the Tornado. In addition to the requisite driver CD and a disc full of game demos, there are also copies of 3DMark03 (standard version), Comanche 4, WinDVD 4, and WinDVD Creator. Honestly, I’d far rather see brand-new graphics cards bundled with recent games than with older titles. Graphics card manufacturers seem to have no problem getting the latest versions of software like WinDVD into the box, but I’ve yet to see anyone bundle a game that’s not at least a year old.

Now, don’t get me wrong—I’m a big fan of freebies in game bundles. However, I wonder if leaving out a copy of an older game like Comanche 4 might allow Inno3D to drop the price of the Tornado GeForce FX 5200 Ultra a little, or perhaps even to bundle in something more useful, like an S-Video cable.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run three times, and the results were averaged.

Our test system was configured like so:

System
Processor Athlon XP ‘Thoroughbred’ 2600+ 2.083GHz
Front-side bus 333MHz (166MHz DDR)
Motherboard Asus A7N8X
Chipset NVIDIA nForce2
North bridge nForce2 SPP
South bridge nForce2 MCP
Chipset drivers NVIDIA 2.03
Memory size 512MB (2 DIMMs)
Memory type Corsair XMS3200 PC2700 DDR SDRAM (333MHz)
Sound nForce2 APU
Graphics card GeForce FX 5200 Ultra 128MB
GeForce FX 5200 128MB
GeForce FX 5600 256MB
Radeon 9000 Pro 64MB
Radeon 9600 Pro 128MB
Graphics drivers NVIDIA Detonator FX 44.03, ATI CATALYST 3.2
Storage Maxtor DiamondMax Plus D740X 7200RPM ATA/100 hard drive
OS Microsoft Windows XP Professional
OS updates Service Pack 1, DirectX 9.0

Today I’ll be comparing the Tornado GeForce FX 5200 Ultra’s performance with the vanilla GeForce FX 5200, the GeForce FX 5600, and a couple of Radeons from ATI. Since the performance of Inno3D’s Tornado should accurately reflect the performance of all GeForce FX 5200 Ultra cards that run at NVIDIA’s prescribed core and memory clock speeds, I’ll be using Inno3D’s Tornado as a reference point for GeForce FX 5200 Ultras as a whole.

Recently, the issue of driver-based cheating and optimizing for specific tests has called into question the legitimacy of many of the test applications we use regularly here at TR. As always, we’ve done our best to ensure that we’re delivering clean, reliable benchmark numbers, and we have taken steps to circumvent known driver cheats in our testing today. We are also working on some new methods for graphics testing intended to sidestep common cheats; those methods will debut in a future article. Stay tuned.

The test system’s Windows desktop was set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Fill rate
Raw fill rate isn’t the be-all and end-all of graphics performance, but it’s a good place to get things started. How do the GeForce FX 5200 Ultra’s paper specs stack up against the competition we’ve assembled today?

                      | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s)
GeForce FX 5200       | 250 | 4 | 1000 | 1 | 1000 | 400 | 128 | 6.4
GeForce FX 5600       | 325 | 4 | 1300 | 1 | 1300 | 500 | 128 | 8.0
Radeon 9000 Pro       | 275 | 4 | 1100 | 1 | 1100 | 550 | 128 | 8.8
Radeon 9600 Pro       | 400 | 4 | 1600 | 1 | 1600 | 600 | 128 | 9.6
GeForce FX 5200 Ultra | 325 | 4 | 1300 | 1 | 1300 | 650 | 128 | 10.4

Because both the GeForce FX 5200 Ultra and GeForce FX 5600 share the same core clock speeds and basic 4-by-1 pipeline configuration, they have identical peak theoretical fill rates. However, NV31 and NV34 are different graphics chips; don’t expect the performance of these two cards to be identical because of these specs alone.

The GeForce FX 5200 Ultra may have the same single- and multi-texturing fill rate potential as NVIDIA’s GeForce FX 5600, but the Ultra’s packing 2.4GB/sec of extra memory bandwidth that should help it deliver on that theoretical fill rate promise. In fact, the GeForce FX 5200 Ultra has even more memory bandwidth available than ATI’s mid-range Radeon 9600 Pro, though the latter’s use of lossless color and Z-compression should enable it to make more efficient use of its bandwidth.
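As an aside, the peak numbers in the table are nothing more than products of clock speeds and pipeline counts; the memory clock column is the effective DDR rate. A quick sketch of the arithmetic in Python, using the GeForce FX 5200 Ultra’s figures from the table:

```python
# Theoretical peaks for the GeForce FX 5200 Ultra, from the spec table above.
core_mhz     = 325   # core clock (MHz)
pipes        = 4     # pixel pipelines
tex_per_pipe = 1     # texture units per pixel pipeline
mem_mhz      = 650   # effective (DDR) memory clock (MHz)
bus_bits     = 128   # memory bus width

pixel_fill = core_mhz * pipes                        # 1300 Mpixels/s
texel_fill = pixel_fill * tex_per_pipe               # 1300 Mtexels/s
bandwidth  = mem_mhz * 1e6 * (bus_bits // 8) / 1e9   # 10.4 GB/s

print(pixel_fill, texel_fill, round(bandwidth, 1))
```

Plug in the other cards’ clocks and the rest of the table falls out the same way.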

Is the GeForce FX 5200 Ultra’s raw real world fill rate as impressive as its theoretical peaks?

Sort of. It’s certainly delivering more single-texturing fill rate than the GeForce FX 5600, which is odd considering that NV31 and NV34 are both 4×1-pipe designs running at 325MHz. I’m inclined to blame the GeForce FX 5600’s lesser memory bandwidth for its poor performance, but the card’s multi-texturing fill rate is actually superior to the GeForce FX 5200 Ultra’s.

How do these real-world fill rates compare to those theoretical peaks we just looked at?

Not well. The big story here is how inefficient the GeForce FX line is with its theoretically available resources. None of the graphics cards we’re looking at today realizes anywhere near all of its single-texturing fill rate potential, but at least the Radeon cards are making efficient use of resources when it comes to multi-texturing. Unfortunately, the GeForce FX 5200 Ultra isn’t even realizing 75% of its multi-texturing fill rate potential in this synthetic test; that doesn’t bode well for the card’s performance.
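To put a number on that last observation: falling short of 75% efficiency means the measured multi-texturing result lands below roughly 975 Mtexels/s. A trivial check against the 1300 Mtexels/s peak from the spec table:

```python
peak_mtexels = 1300              # GeForce FX 5200 Ultra peak multi-texturing fill rate (spec table)
threshold = 0.75 * peak_mtexels  # the efficiency level the card fails to reach
print(threshold)                 # 975.0 Mtexels/s
```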

Occlusion detection
NVIDIA hasn’t been terribly forthcoming regarding what (if any) advanced occlusion detection algorithms NV34 uses to reduce overdraw. At the very least, the chip’s lack of lossless color and Z compression will be a bit of a handicap in any fill rate-bound scenario.

The GeForce FX 5200 Ultra essentially ties the GeForce FX 5600 without any anisotropic filtering or antialiasing enabled, but the Ultra falls slightly behind with 8X anisotropic filtering and 4X antialiasing.

Pixel shaders
Since NVIDIA isn’t revealing any concrete details about the number or relative strength of NV34’s pixel shaders, trying to predict the GeForce FX 5200 Ultra’s performance is difficult. Instead of speculating about “levels of parallelism” within the GeForce FX programmable shader, let’s check out some actual test results.

The GeForce FX 5200 Ultra’s clock speed advantage over the vanilla GeForce FX 5200 yields better performance in 3DMark2001 SE’s pixel shader tests, but all the GeForce FX cards are still well behind the curve in the advanced pixel shader test.

The GeForce FX 5200 Ultra cozies up with the GeForce FX 5600 in NVIDIA’s ChameleonMark benchmark, but both cards are dominated by the Radeon 9600 Pro.

In 3DMark03’s pixel shader 2.0 test, the GeForce FX 5200 Ultra is still nipping at the GeForce FX 5600’s heels, but all the GeForce FX cards are embarrassed by ATI’s Radeon 9600 Pro. Since it’s only a DirectX 8 part, the Radeon 9000 Pro can’t complete this test.

Vertex shaders

FutureMark’s vertex shader tests show the GeForce FX 5200 Ultra right behind the GeForce FX 5600 again. In fact, the GeForce FX 5200 Ultra actually pulls out ahead of its mid-range sibling at higher resolutions in 3DMark2001 SE’s vertex shader test. Even then, the GeForce FX 5200 Ultra is way behind the Radeon 9000 Pro.

In 3DMark2001 SE’s transform and lighting tests, which run as vertex shader programs on the cards we’re looking at today, the GeForce FX 5200 Ultra trails the GeForce FX 5600 a little more than it did in more targeted vertex shader tests. At least the GeForce FX 5200 Ultra is able to pull out ahead of the Radeon 9000 Pro this time around, especially in the one-light, high-polygon-count test.

Games
Synthetic feature tests are great and all, but how does the GeForce FX 5200 Ultra perform in real games?

Quake III Arena

In Quake III Arena, the GeForce FX 5200 Ultra is actually faster than the GeForce FX 5600 until we turn on anisotropic filtering and antialiasing. Without color and Z-compression, the GeForce FX 5200 Ultra looks like it’s getting a little bandwidth-deprived when anisotropic filtering and antialiasing are turned up. However, when those extra image quality features are disabled, the card’s better real world single-texturing fill rate may be enabling higher frame rates.

Jedi Knight II

In Jedi Knight II, the trend continues; the GeForce FX 5200 Ultra is faster than the GeForce FX 5600 without anisotropic filtering or antialiasing enabled, but slower with. The GeForce FX 5200 Ultra’s higher clock speeds give the card a nice performance boost over the vanilla GeForce FX 5200 and a clear advantage over the Radeon 9000 Pro.

Comanche 4

The trend continues in Comanche 4; it looks like those uninterested in anisotropic filtering or antialiasing may want to opt for a GeForce FX 5200 Ultra rather than a GeForce FX 5600.

Codecreatures Benchmark Pro

Codecreatures produces consistent results for the GeForce FX 5200 Ultra and GeForce FX 5600, which swap places depending on whether or not aniso and AA are enabled.

Unreal Tournament 2003

The GeForce FX 5200 Ultra isn’t quite able to pull out ahead of the GeForce FX 5600 in Unreal Tournament 2003 with aniso and antialiasing disabled, though the performance difference between the two cards is quite slight when high detail settings are used. To me, that’s almost counter-intuitive; I would have expected the less expensive GeForce FX 5200 Ultra to be a faster option in lower detail scenes.

The GeForce FX cards are nicely wedged between the two Radeons in this test, illustrating the huge performance discrepancy between the different architectures used in ATI’s low-end and mid-range graphics cards.

Serious Sam SE
We used Serious Sam SE’s “Extreme Quality” image quality add-on, which maximizes the anisotropic filtering level of each graphics card, for our testing. In this test, the GeForce FX cards are doing 8X anisotropic filtering while the Radeon 9600 Pro is at 16X. The Radeon 9000 Pro is also doing 16X anisotropic filtering, but it’s falling back to bilinear rather than trilinear filtering.

Again, the GeForce FX 5200 Ultra is faster than the GeForce FX 5600 with extra image quality options enabled. How do things look over the length of the benchmark?

The GeForce FX 5200 Ultra suffers the same stuttering at the start of the benchmark demo as the vanilla GeForce FX 5200. Thanks to our lovely graphs, NVIDIA is aware of the issue and has pledged to fix it in the next Detonator FX driver release. Apparently, the problem occurs only with NV34, NV17, and NV18-based graphics products, which suggests the NV34 might have a GeForce2-class core with a programmable shader tacked on.

With 8X aniso and 4X AA, the GeForce FX 5200 Ultra’s performance drops, and it settles in further behind the GeForce FX 5600. How does the card’s performance across the benchmark demo look?

Consistent. When we turn on anisotropic filtering and antialiasing, the GeForce FX 5200 Ultra’s stuttering problem gets even worse, and the card falls behind the GeForce FX 5600 across the board.

Splinter Cell

In Splinter Cell, the GeForce FX 5200 Ultra is faster than the GeForce FX 5600 yet again.

Looking at it second by second, the GeForce FX 5200 Ultra manages to stay out ahead of the GeForce FX 5600 enough to achieve a higher average frame rate.

The GeForce FX 5200 Ultra takes a rare win over the GeForce FX 5600 with 8X anisotropic filtering and 4X antialiasing. What’s going on when we look at the length of the benchmark demo?

Even with anisotropic filtering and antialiasing enabled, the GeForce FX 5200 Ultra wins this round overall. Perhaps the NV34 shares more with the Xbox’s graphics chip than it does with NV31? Splinter Cell was, of course, originally developed for the Xbox and may be better tuned for that platform.

3DMark2001 SE

In 3DMark2001 SE, the GeForce FX 5200 Ultra and GeForce FX 5600 virtually tie. Let’s look at the individual game tests.

In all but the “Nature” test, the GeForce FX 5200 Ultra is right up there with the GeForce FX 5600, and the 5200 Ultra is faster than the Radeon 9000 Pro throughout.

3DMark03
Recently, NVIDIA was caught “optimizing” anisotropic filtering levels in 3DMark03. Those optimizations affect even the latest 330 build of 3DMark03, but since they only come into play when anisotropic filtering is used, they won’t affect the results of our testing.

In a freshly patched 3DMark03 build 330, the GeForce FX 5200 Ultra trails the GeForce FX 5600. What about the individual game tests?

The GeForce FX 5200 Ultra and GeForce FX 5600 essentially tie in the first and last game test, but the GeForce FX 5600 has a distinct advantage in the middle two. The relatively poor performance of the entire GeForce FX line in the Mother Nature test is especially disappointing.

3DMark03 image quality
Because frame rates aren’t everything, let’s take a quick peek at the GeForce FX 5200 Ultra’s image quality in 3DMark03. There’s more going on here than you might expect.

Below are cropped JPEGs of frame 1799 of 3DMark03’s “Mother Nature” test. Click on the images for a full-size PNG of each screen shot.


DirectX 9’s reference rasterizer


NVIDIA’s GeForce FX 5600


NVIDIA’s GeForce FX 5200 Ultra

There are subtle differences between the image produced by Microsoft’s DirectX 9 reference rasterizer and those produced by the GeForce FX cards, but the real story here is the difference in image quality between the GeForce FX 5600 and the GeForce FX 5200 Ultra. Check out the difference between the water in the two pictures; something is definitely wrong with how the GeForce FX 5200 Ultra is rendering the scene.

SPECviewperf

In SPECviewperf, the GeForce FX 5200 Ultra nearly equals or bests the performance of the GeForce FX 5600 across the board. In all but the ugs test, where the Radeon 9600 Pro dominates, the GeForce FX 5200 Ultra is at or near the head of the class.

Antialiasing
We’ve been looking at the GeForce FX 5200 Ultra’s performance with and without 8X anisotropic filtering and 4X antialiasing throughout this review. Next, let’s break down these antialiasing modes and take a closer look at the GeForce FX 5200 Ultra’s performance across each.

Edge antialiasing

The GeForce FX 5200 Ultra’s antialiasing disadvantage is distinct in Unreal Tournament 2003. Across all the antialiasing modes it supports, the GeForce FX 5200 Ultra is slower than the GeForce FX 5600. The GeForce FX 5200 Ultra is, however, a much better performer with AA than the Radeon 9000 Pro or the vanilla GeForce FX 5200.

For image quality purposes, the different antialiasing modes offered by the GeForce FX 5200 Ultra are identical to those offered by the GeForce FX 5200. You can see how the GeForce FX 5200’s antialiasing image quality shots stack up here.

Texture antialiasing

When we specifically target anisotropic filtering performance, the GeForce FX 5200 Ultra actually pulls out ahead of the GeForce FX 5600, suggesting that the latter’s performance advantage with 8X anisotropic filtering and 4X antialiasing is primarily, if not exclusively, a result of the 5600’s superior antialiasing abilities.

Overclocking
Because some of us like to wring as much performance from our hardware as possible, I did a little overclocking with the Tornado GeForce FX 5200 Ultra. In testing, I was able to get the card to a stable core clock speed of 390MHz with an artifact-free memory clock speed of 703MHz. Remember that overclocking is never guaranteed; just because my sample card was able to achieve a 390/703 core/memory clock speed doesn’t mean that every Tornado GeForce FX 5200 Ultra will hit those speeds. I would, however, suspect that most cards will be able to hold a 700MHz memory clock, since the memory chips are rated for operation at that speed.
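For perspective, here’s what that overclock works out to against the Ultra’s stock 325MHz core and 650MHz memory clocks from the spec table (a quick Python calculation):

```python
stock_core, stock_mem = 325, 650   # stock GeForce FX 5200 Ultra clocks (MHz)
oc_core, oc_mem       = 390, 703   # stable clocks reached on our sample (MHz)

core_gain = (oc_core / stock_core - 1) * 100   # ~20% higher core clock
mem_gain  = (oc_mem  / stock_mem  - 1) * 100   # ~8% higher memory clock
print(f"core +{core_gain:.0f}%, memory +{mem_gain:.1f}%")
```

A 20% core bump paired with only about 8% more memory bandwidth may also help explain why the gains below are so modest.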

Unfortunately, although overclocking the Tornado GeForce FX 5200 Ultra was easy, the higher core and memory clock speeds don’t produce significantly better performance in Unreal Tournament 2003. Even with 8X anisotropic filtering and 4X antialiasing, our overclocking efforts only buy a couple of extra frames per second.

Conclusions
Inno3D’s Tornado GeForce FX 5200 Ultra isn’t yet available online, but GeForce FX 5200 Ultra cards from other manufacturers are going for as low as $132 on Pricewatch—a good $11 cheaper than the least expensive GeForce FX 5600 and $30 cheaper than ATI’s Radeon 9600 Pro. If Inno3D is to be competitive among other GeForce FX 5200 Ultra manufacturers, its card will have to come in at around $130.

Honestly, though, I’m a little confused as to where the GeForce FX 5200 Ultra is going to fit into the market. At $130, it’s really leaning towards the high end of the middle ground between sub-$70 vanilla GeForce FX 5200s and our reigning mid-range graphics champion, the Radeon 9600 Pro, at over $160. For anyone who’s serious about gaming, the Radeon 9600 Pro is definitely worth the extra scratch, especially since it should be better-equipped for next-generation applications.

For bargain hunters less concerned with gaming performance, the GeForce FX 5200 Ultra looks like it could be a pretty good deal, in part because it’s often a better performer, without antialiasing enabled, than the GeForce FX 5600. Antialiasing fans would be far better off with a GeForce FX 5600. And at least the GeForce FX 5600 can render 3DMark03’s “Mother Nature” water correctly; I have a sneaking suspicion that NVIDIA may never enable higher precision floating point pixel shaders in the budget GeForce FX 5200, if it even can.

However, it’s hard to ignore the price/performance ratio of the vanilla GeForce FX 5200, which costs almost half as much as the Ultra. Casual gamers will probably be satisfied by the performance of the GeForce FX 5200, and I just don’t see enough value behind the Ultra’s $65 higher price tag. The Tornado GeForce FX 5200 Ultra isn’t yet on the market, but if Inno3D expects it to compete with its own $67 (on Pricewatch) Tornado GeForce FX 5200, the Ultra’s price is going to have to drop.

In the end, the GeForce FX 5200 Ultra doesn’t have the image quality, antialiasing prowess, or overall performance to challenge the more expensive Radeon 9600 Pro for serious gamers. Nor does it have a low enough price to challenge the GeForce FX 5200 for casual gamers on a budget. It’s impressive to see the Ultra competitive with the GeForce FX 5600 in many instances, and DirectX 9 capabilities are nice to have. Still, I think consumers are going to be better off spending more on a Radeon 9600 Pro or much less on a vanilla GeForce FX 5200.

Comments closed
    • PLASTIC SURGEON
    • 16 years ago

    We can today reveal that ATi will definitely supply the graphical heart of the next-generation of Xbox, following last week’s story that this massive shift was underway. Speaking under terms of strict anonymity, a senior source close to nVidia said, “They [nVidia] simply didn’t want to meet Microsoft’s demands for the floating design for Xbox Next. It didn’t make sense to partner on the project. At this moment in time, ATi is working with Microsoft.”

    Is this true or not??? Hmmmmmm…..

    • Anonymous
    • 16 years ago

    OK, any gamer should know that 97% or so of developers program for NVIDIA because they offer better tools and better technology. Look at the 5900 Ultra: it offers more technology than the Radeon cards hands down, with Cg-based hardware and software behind it. NVIDIA’s DX9+ hardware is the only hardware to offer a true 128-bit floating-point, studio-quality pipeline, while ATI cards offer 96-bit/32-bit emulated color precision for in-game quality; NVIDIA’s is all hardware, real-time CG rendering. Sure, some may think it’s hype, but I remember when the GeForce 3 came out and all you ATI Radeon 1/2 owners and NVIDIA GeForce 256/1/2 owners said the pixel and vertex shaders were all hype, that they wouldn’t improve quality and that game makers would choose not to use them. Well, everyone who said that was wrong. And as far as Splinter Cell goes, AA works fine on my GeForce 4 Ti 4400, so try a new one. Anyway, fact: the FX cards, well, the 5600 and above for best performance, can get games created for them that look like you’re watching a movie. http://www.s.t.a.l.k.e.r.com — that game is optimized for the NV35 core to the fullest, and now the FX 5900 Ultra has technology that surpasses ATI’s basic DX9 coding.

      • Anonymous
      • 16 years ago

      yay. fp to int12. thanks nvidia, i love your dx9 -[

        • Anonymous
        • 16 years ago

        sorry, my bad, fp to int16

      • PLASTIC SURGEON
      • 16 years ago

      What acid have you been taking? What technology does the 5900 or the FX line of chips have over the R350 core? Cg graphics, like you said?? CineFX??? lmao. This has proven to be all marketing and PR hype. The simple fact that ALL DX9 ATI cards ran the Dawn demo faster than FX chips proves this. A major blow to Nvidia. And the reason a lot of developers design with Nvidia in mind (not so much these days) is because they still hold most of the market share. It’s all about the money$$$$$ Not that Nvidia offers better technology. If so, why is Nvidia still using dated FSAA? Sorry, rotated-grid, gamma-corrected FSAA is clearly superior to what Nvidia is offering. IQ still goes to the 9800 Pro, and almost every single hardware site says this.

      http://www.hardocp.com/article.html?art=NDcyLDE0 — and go to some other sites as well to see that they take the 9800 Pro’s IQ over the 5900. Yup, Nvidia has better technology. lol. Keep falling for that PR about Cg graphics, UltraShadow, and CineFX. All B.S. It has been proven; that little Dawn demo proves it, along with their intent to cheat three times. Real sad if you ask me.

        • Anonymous
        • 16 years ago

        First off, it’s not PR hype when there are several games already coming out that will use this technology; Stalker and Gunmetal 2, to name a couple, are using the Cg-based hardware on the FX cards. The CineFX engine isn’t hype. Cg-specific graphical features:

        Cg support for all materials
        Cg: ‘Motion Blur’ effect for the plane, proportional to speed
        Cg: Realistic ‘Water Refraction’ effect
        Cg: Further effects under consideration
        DX9: ‘Occlusion Query’ for optimised rendering and realistic flare effects
        DX9: Use of 128bit floating point buffers, enabling use of high contrast colour and overflows (such as retina bleach)
        And as far as the Dawn demo goes, if you compare the quality and detail between the ATI card and NVIDIA, there’s a difference, and the Dawn demo didn’t play easy on an ATI card: it uses a GL wrapper to emulate the NV30 code base so it could run, and at the time the NV30 was all that was out. The following was said about the OpenGL wrapper:

        “It runs 15% faster than NV30 on the 9800 Pro, and it also runs faster than NV35 (we are unable to personally confirm this).” So it is not confirmed that it runs faster than the NV35. And the programmers are not paid to program for NVIDIA hardware; they choose to. Look at John Carmack: he favored ATI until someone leaked the Doom 3 alpha. He ran that card for five months, until ATI’s new cards ran out of room for further programming; with the NV30, he only just now, since October, got to the end of programming. That’s three more months or so of programming the card allowed him over ATI’s. So tell me again it’s hype what it can do, because really it’s not hype.

          • PLASTIC SURGEON
          • 16 years ago

          Carmack has never favoured ATI cards. That is a given. Even when they released the 9700 Pro and it destroyed the 4600 Ti in all tests, Carmack still mentioned nothing of the card’s stellar performance. He just issued comments about how the drivers were poor (which they were not with the new Catalyst drivers).


            • Anonymous
            • 16 years ago

            To post #62: no, ATI does not do all those CineFX effects, because they are not incorporated into the hardware the way they are in the NV30 and above. Cg, “C for graphics,” was created by NVIDIA themselves; this is, I repeat, a new C coding, not like all the other C-based coding. Maybe ATI uses other C languages you may think are Cg, but they are not; Cg coding is NVIDIA-specific code. And I got the specs on Cg here: http://www.3dmax.com/gallery/still/3dmax5beta.jpg — at least, anyway, that’s the realism the NV35 can create over ATI.

          • PLASTIC SURGEON
          • 16 years ago

          Last point. Why do you think Doom 3 ran better on the 5800 and the 5900? Carmack designed the program with a GeForce 3 card in mind. This game is shader intensive, and the 5800 and 5900 have a slower shader engine than the 9800 Pro. So what do you think is going to happen if he optimizes the game for the 9800 Pro, which has a faster shader engine? I can tell you this: Doom 3 on Nvidia’s line of cards won’t be rendering in FP32, that’s for sure. Try 12 or 16, same as those DX9 tests.

            • Anonymous
            • 16 years ago

            Look, I already know this. I’m just saying that the things the FX card is said to be able to do are mostly true; otherwise I don’t think those demos would work right, like the Time Machine truck demo. Look at the detail on that truck; I can say the ATI card probably couldn’t render something that well done. I mean, most of the DX9 games coming out this year and early next year are based on NVIDIA’s technology, and I’ll tell you why: ATI’s hardware offers nothing special. It’s a basic gamer’s card with the minimum basic functions to run a game, that’s all, nothing more. Look at games like HL2: though somewhat impressive, the characters are not too detailed and the rendering is kind of stiff when they talk; it still has the old HL1 feel to it in a way, in the rendering of the muscle motions. Their Alyx character is an exact replica of an ATI demo back in 2001, not very real looking; I have seen more realism with the NVIDIA FX. Now, HL is the only Radeon 9800 Pro-optimized game coming out this year that I know of. But what’s really gonna make you pee your pants is when games such as Stalker: Oblivion Lost (which already does) and others show features and effects that you said were all PR hype. One other thing: look at 3DMark 2k3. NVIDIA didn’t cheat, and Futuremark came right out and said that, but they proved that ATI uses driver cheats to get better scores, which are not as good as NVIDIA’s.

            • PLASTIC SURGEON
            • 16 years ago

            I think you need to look up facts before you post. For one, Nvidia did cheat. FM saying they did not just allowed a small company to avoid litigation, which they cannot afford. And if you read their recent press release about this whole fiasco regarding the patch they have created to prevent cheating, they still state that this form of optimization is cheating. That’s their opinion, of course (which in the case of Nvidia is held widely by many).


            • PLASTIC SURGEON
            • 16 years ago

            But the big reason why having several fragment shader datatypes has turned out such a bust for Nvidia is not the inherent usefulness of the idea but the specs. PS2.0 and ARB_fragment_program all assume a minimum of FP24 precision, and offer only a crude per-shader mechanism to signal that FP16 would be ok, too. Worst of all for NV30, 31 and 34, FX12 is completely against both specs. And the extra precision of FP32 over FP24 goes to waste too, because no one is going to write shaders that depend on having more precision than either the spec or ATI’s substantial DX9-level installed base provide for.

            So “CineFX 1.0” is a failure not necessarily because of its intrinsic faults, but because it maps horribly to the two non-proprietary fragment shader specs out there. NV_fragment_program is of course a possibility for the content creation crowd, but any non-Doom3 engine OpenGL games are not necessarily likely to go to the extra effort of supporting a completely different fragment code path.

            Others have posited Cg as Nvidia’s desperate attempt to prevent this situation from coming to pass, but I’m not even sure how that conspiracy theory is supposed to function: doesn’t Cg have to compile to one of the standards? In any case, ATI being 6 months ahead with DX9-level hardware has obviously gotten them a lot of developer mindshare (and AGP slots), so I doubt Cg will have much of an effect.

            • Anonymous
            • 16 years ago

            you ASS. nvidia CHEATED. ati OPTIMIZED. learn how to read (techreport) dimwit

            • Anonymous
            • 16 years ago

            ok, read Futuremark’s statement, but also an Nvidia statement, as well as a joint Nvidia-Futuremark statement.

            Futuremark claims that Nvidia is not cheating. If we were Nvidia’s lawyer, we’d never have allowed the sentence that includes the phrase “and not a cheat”.
            Tero Sarkinnen, VP of Futuremark’s sales and marketing, appears to be standing by the position that there’s something wrong in benchmark paradise, and said in an interview that what his firm first described as “cheats” are now “slight optimisations”. So tell me that again; I read my stuff.

            • Anonymous
            • 16 years ago

            carmack defined it as a cheat. if the output doesn’t match the correct output, what else can we call it? cheating. this is the case with the fx products.
            in the mother nature test, the water for the fx 5200 looks like a big white bird dropping instead of water etc.
            the fx 5600 is better but it has leaves where there shouldn’t be etc.

            and if FM thinks that the clipping plane issue was ok, fine, ati and everybody else running their bench should do the exact same thing as nvidia did. even the playing field.
            the whole point of this bench should be that every card renders the output exactly as the correct version, and see which is faster.

            • PLASTIC SURGEON
            • 16 years ago

            The rewording of the statement from “cheating” to “slight optimizations” came amidst threats of future lawsuits and litigation. In the United States, to imply a corporation or company cheats is in fact to imply they are in the practice of deceiving and wrongdoing, and to prove it in a court of law would take years and many millions of dollars. Something a 21-man operation like Futuremark could not afford. It does not take a rocket scientist to figure out that Nvidia’s lawyers threatened Futuremark in regards to this. Futuremark stuck by their story for weeks when the news broke, and then all of a sudden made a 360-degree turn??? Come on. Use your head. This was plain strong-arm tactics. Tactics that would not have worked if Futuremark took them to court. They would have won. But it would have cost the small company millions of dollars to prove they did cheat and that using that word in the corporate sense is justified.

      • Anonymous
      • 16 years ago

      However, Nvidia’s clip planes and poor quality output are not valid. It is such a bad cheat that it cannot be used in a game, because of the visual errors you would see when such clipping, low quality shaders, or disabled buffer clears are used. It would be considered a severe driver bug. In this respect, using such techniques on game benchmarks when they *cannot* be used to speed up the game is a complete cheat. It is only designed to increase benchmark scores, without any kind of similar increase for the player during the game itself.
      This is completely misleading for people looking at benchmarks to try and see what the performance of the card is when running games. The benchmark in no way, shape, or form reflects what the in-game performance will be when static clip planes are not present, when buffer clears are not disabled, and (for shader-enabled games) when the normal quality shaders have not been swapped out for low quality, high speed replacements.

      I know that if I was producing a quality game, I would not want my franchise tainted by the idea that my benchmarks gave misleading information or produced worse looking images for my customers, even if the fault is caused by another company altering my code to cover for their poor hardware.

      This is where Nvidia really fall down. They think that this kind of cheating behaviour is okay. They think this cheating is valid, and so I must consider that if they are happy and willing to cheat on benchmarks, then they *do* so. This calls into question all kinds of benchmarks from Nvidia. If they are happy to cheat on 3DMark, why wouldn’t they cheat on UT2K, Q3, JK2, Serious Sam, etc? Why should I spend money on products from a company that seems to be willing to lie to me and misrepresent their products in order to get my money?

      Nvidia flung a lot of FUD around in order to muddy the waters, to make all benchmarks suspect, to hide their lack of a competitive card, but for me (and a lot of other people) the dirt has stuck to Nvidia. Strong-arming Futuremark is part of their ongoing PR war in lieu of having any decent products to sell. Nvidia is in the gutter right now, and rather than try to get out of it with better products, they’ve decided to try and drag everyone down into the gutter with them. Looks like they’ve succeeded with Futuremark.

      • PLASTIC SURGEON
      • 16 years ago

      Gee, it’s pretty freaking simple. People who claim to represent the bastion of truth don’t usually change their position so dramatically without strong-arming.
      Since the mob isn’t big in the video card industry (at least, not that I’ve heard of), and since litigation is the most common form of expressing dissatisfaction, it would take a complete IMBECILE to NOT understand why people would make such an assumption. And yes, I am singling out the morons who believe that Nvidia and Futuremark made this decision amicably, without strong-arm tactics.
      Perhaps Futuremark, and by extension ExtremeTech, do not adhere rabidly to the concepts of freedom of the press and the right to express the truth as they see it. Perhaps it is more likely that they are weak-nerved individuals who are willing to give in to a bit of whining and foot stomping about the unfairness of their opinion.
      But since most journalists and experts would sooner cut their own throats than reverse their position, any reasonable person would suspect that a managerial decision based on the likelihood of legal action is behind this sudden loss of nerve.
      Plain and simple: Nvidia cheated, and pushed their force of might to sway a small company’s opinion. Nvidia knows Futuremark does not have the resources to fight this mess in court. Sickening if you ask me.

        • Anonymous
        • 16 years ago

        Umm, hello, did you read, man? ATI is the cheater; Futuremark proved that, because there is no way for NVIDIA to cheat in 3DMark03 when, back in August, NVIDIA pulled out of the beta program, so they didn’t get to work with it prior to its release. ATI did, and they found a cheat within ATI’s drivers for 3DMark03. It’s been proven already, people.

        However, ATi did not come out of Futuremark’s audit report with completely flying colours either. Futuremark noted that with the new build of 3DMark, the score on a reference system based on a Radeon 9800 Pro also dropped, but by a much smaller margin of 1.9%.

        The interesting thing here is both companies’ reactions. ATi came out and publicly stated that it had optimised Game Test 4 of 3DMark03 and shuffled the instructions to better suit its architecture. Note that the scene was still rendered exactly how Futuremark had intended; the instructions were just moved around to suit their GPU and give them the slight boost which it had. However, despite it being a genuine optimisation, ATi still announced that it would remove this in the next release of their Catalyst drivers. That was said then, but now it’s turned around: Futuremark dug deep and knew that NVIDIA didn’t cheat at all, it’s just the way their drivers were made, but ATI did have a source-code cheat in their Catalyst 3 drivers, and so on.

          • Anonymous
          • 16 years ago

          ati should’ve used the same nvidia clipping plane cheat. according to what you’re saying, FM will think it’s just a driver optimization. oh and i guess ati should also bring their check book, driver optimizations require a lot of money, just ask nvidia.

          • PLASTIC SURGEON
          • 16 years ago

          http://www.beyond3d.com/index.php — Plain and simple: Nvidia felt it needed to do these cheats to make the 5900 and 5800 Ultra look better. Ever since the release of the 9700 Pro, Nvidia has not truly beaten ATI's flagship GPUs the same way the 9700 Pro destroyed the 4600 Ti, and that's what's really pissing Nvidia off. Look at their current PR campaign about the 5900 Ultra. They are claiming that the 5900 Ultra is "spanking" the 9800 Pro, which by all review sites it is clearly not. Not even close. It is actually losing in some tests. That is not a "spanking"; that is real close. The days of Nvidia just coming along and releasing a card right after ATI's offering to beat it handily are over, and they know it. The days of the 4600 Ti's dominance over the 8500 are done. And Nvidia is trying to do everything in its power, cheating or otherwise, to bring back those glory days. Nvidia: just bring back IQ and don't worry so much about speed and cheating to get there.

      • Anonymous
      • 16 years ago

      You’re an ignoramus if you think nvidia’s games are movie quality! It’ll be yrs before games look like saving nemo or final fantasy.

      For nvidia -[

        • Anonymous
        • 16 years ago

        Umm, excuse me, but umm, they are almost movie quality, man. Stalker raises the bar on realism, man, using the NV35 and all its so-called hyped features. I didn’t say exact, I don’t think I did, but the NV35 can do movie-quality effects and environments. I mean, right now LucasArts and many movie CG companies are testing with the NV35 Quadro series for some of their effects, seeing how well it stacks up. Now there’s, as you know, a 3ds max plug-in for NVIDIA’s Cg, and it has already won an award. And look at this list of the companies using Cg for NVIDIA:
        Bethesda Softworks
        Beyond Games
        Bioware
        Blizzard
        Cat Daddy
        CodeCult
        Codemasters
        Contraband
        Core Design
        Criterion
        Dark Black
        Destineer
        EALA
        Electronic Arts
        Elixir
        EMG
        Ensemble Studios
        Epic Games
        Fox Interactive
        Grin
        Headgate
        High Voltage
        Inevitable
        Intrinsic
        Ion Storm
        Kaboom
        Kuju
        Lego Media
        Lionhead Studios
        Lithtech
        Massive Development
        Maxis
        Microsoft Game Studios
        Nadeo NAMCO
        NDL
        NemoSoft
        Novalogic
        Pivotal
        Rage Sheffield
        Relic Entertainment
        Ritual
        Sega
        Sony Online Entertainment
        SpinVector
        Steel Monkeys
        Turbine Entertainment
        Valve
        Vulpine
        Yeti Studios

        DCC VENDORS:

        Alias|Wavefront
        Discreet
        Softimage

          • Anonymous
          • 16 years ago

          I know Stalker looks good. But are you telling me that flat-ass textured thing they use for grass is grass? The CG scenes that take days to render define every single blade with polygons. Cards can’t do this in real time yet. But Stalker’s plants look good. I wonder if they have real physics, so that they move if you walked through them. Finding Nemo in real time is still years off.

          I think somebody said that HL and Doom were demoing in the ATI booth at E3. If Nvidia keeps cheating with subpar products, that list is just another piece of worthless history.

          • PLASTIC SURGEON
          • 16 years ago

          No consumer GPU can render movie-like quality for games in real time. Not even close yet. Please tell me when a game that looks like Finding Nemo or Toy Story 2 comes out in real time; I will be there with bells on. And that list is not set in stone. ATI cards and Nvidia cards can both render quality, almost movie-like scenes for demos. That’s not the point. Technology at this point does not allow this to happen in real time. So for you to say so, you need to take a little basic GPU 3D programming course.
          And like I said, I want to see what precision Nvidia FX cards are going to run these DX9 games at. Because anything less than FP24 and that so-called “movie like” visual quality goes out the window big time.

            • Anonymous
            • 16 years ago

            No, look, it’s said the FX can do cinematic-quality effects and rendering, and some effects can be done in real time. And Stalker on an FX card has 3 million polygons per frame running through the game. There is real physics in the game: sky changes, realistic skies, rain, water, and fog, you name it; the buildings in the game are rendered in CG format. Here’s a small line from an interview about Stalker I found: “The engine not only uses detailed textures, but it also uses geometry-detailing objects for level surfaces such as grass and rocks.” Now here’s an image of what is in the DX9 render of Stalker. I recommend a peek, it’s pretty cool.

            http://www.oblivion-lost.com/community/e3/SAVE0015.jpg — and here’s the link I got this from: http://www.oblivion-lost.com/en/index.php?site=e3.php — the images at the bottom of the page there, on the second link, show images now from the NV35.

            • Anonymous
            • 16 years ago

            wow, imagine how awesome it would look on a radeon!

            • Anonymous
            • 16 years ago

            ok, post to all, here’s a good one for Stalker:

            Doupě: Can you give us some more details of your graphics engine and its capabilities?

            AS: We use our in-house X-Ray engine for the game. The engine supports all the latest technologies in 3D graphics available today. Currently we optimize the engine for DirectX 9. We also closely cooperate with Nvidia and carry out optimization for their latest Cg technologies provided with GeForce FX boards. Owing to the recently available hardware support, it has become possible to implement natural render of complex materials with proper light interaction, such as rusted metal, natural flora, glass, realistic skin, etc in the game. One of the latest features we’ve been working on is the use of so called “deferred shading” approach to break the barriers in polygon count found in the current and future leading game-engines. Having more than 2 million of fully bump-mapped polygons representing each frame and true per-pixel complex light-material interaction will make it possible to attain an immensely realistic image in S.T.A.L.K.E.R. You can find the full list of technical specifications

            • Anonymous
            • 16 years ago

            and when they optimize for radeon, schweet!

            • Anonymous
            • 16 years ago

            Actually, umm, they’re not doing ATI-optimized code at all; it will run with the basic functions needed to run the game. But that little last post I made with that interview question should answer a lot.

            • Anonymous
            • 16 years ago

            gee, might as well commit self mutilation. if they know what’s good for them, they’ll optimize for ati. why would anybody want to write sw for nvidia only, when an ati could better showcase their product.

            also, that’s a 5900. forget running it on a 5200. a 5600 will barely run it without eye candy. most ppl are not going to dish out 500 quid to get a 5900 just so they can play stalker.

            i’m guessing they’ll probably change their tune when they see just how much market share they’re losing. nvidia can’t possibly pay them enough to ignore majority market share.

            • PLASTIC SURGEON
            • 16 years ago

            Of course they are going to optimize it for the Radeon cards. If they do not, they will be missing a LARGE portion of the market, as only 1% of buyers fork over 500 bills for a high-end card. It’s the mid-range cards that make the money. And the 9600 Pro and 9500 Pro are currently the best cards out right now for the mid-range.

            • Anonymous
            • 16 years ago

            Yeah, and if a person uses an ATI card on games like Stalker that are optimized for the NVIDIA cards, with specific effects that can only be drawn on an NVIDIA card, then you gamers with Radeons will miss a whole lot of the game and its quality.

            • Anonymous
            • 16 years ago

            Are you sure you know what you are talking about? The only real difference might be some lighting effects. If that. For them not to make the game 100% compatible with ATI cards is suicide for that software company. They would lose so much money it’s not funny. So i don’t see them doing that when the game is released. And if the game turns out to be “great” which chances are it won’t, you will see a patch developed that will run it on ATI cards easy. As the R350 runs shader based programs faster. And those cards won’t default to a lower floating point precision like Nvidia FX based cards do in DX9 apps. Blame that on their drivers. Seems Nvidia just wants to put out a few extra frames per second at the cost of quality.

            • Anonymous
            • 16 years ago

            Well, let’s see, their latest drivers are solid and stable. And let’s see, I mean, let’s take Stalker again: the developers are programming for specific NVIDIA effects and environment effects, and actually large effects such as buildings and models, not just lighting effects and stuff. I’m talking the whole game; what will be rendered on an NVIDIA card and on a Radeon are two different things. With the FX line, the game will use NVIDIA’s hardware Cg coding, patented by NVIDIA, and please, most of you get Cg confused with older C languages. Now here’s a developer testimonial video on the new Cg language the NVIDIA FX cards offer; it’s a good watch, like 6 minutes I think.

            http://developer.nvidia.com/view.asp?IO=cg_developers_video — “Not only is Cg a ‘real-time’ shader language, Cg has been designed to be hardware friendly and hardware efficient. This means that Cg should run well on most modern GPUs.” Right now ATI definitely doesn’t use NVIDIA’s Cg language, so right now it’s only for NVIDIA-based cards, mostly DX9 based.

            • PLASTIC SURGEON
            • 16 years ago

            Of course it’s only used for Nvidia-based cards. It’s their application programming. And we don’t get confused between C++ and Cg graphics, if that’s what you are referring to. And the only thing with Cg graphics and its benefits is in regards to shader development. ATI hardware can do the same thing on its hardware. Proven. And do it with better IQ at that. So it’s nothing special. Lots of PR talk by Nvidia saying it is a one-of-a-kind programming application for advanced shader programming, which is far from the truth. If a developer programs specifically for one card’s hardware over the other, there will be graphic differences with some effects. That goes for ATI as well. That’s nothing new. They used to program with 3dfx in mind back in the day, just like Valve is programming with ATI cards in mind for HL2. But both Nvidia cards will run the game quite fine. And ATI has a similar tool for rendering advanced graphics. It’s called RenderMonkey.


            • Anonymous
            • 16 years ago

            You are right, but I don’t believe it to be PR talk totally. I’m just saying that with the Cg language, programmers can develop more realistic, movie-quality graphics. Now see, ATI still has a barrier, though, and that’s color precision for their games: HL2 uses a maximum of 96-bit, the most allowed with the ATI cards, while NVIDIA’s FX are using true 128-bit precision, studio-quality hardware like most of the 3Dlabs cards. I mean, damn, I know we’re a long way from having games that can be like movies, but the FX card could be used to create quality similar to what we see. Like, remember when the GeForce 4 Ti came out? It offered few things over the GF3: a faster card, real-time fur rendering, dual vertex and pixel shaders. The FX 5900 Ultra, though maybe slower at PS 2.0 renderings, is still the fastest card on the market, and I plan on buying one so I can play DX9 games and games made for it. I myself am not an NVIDIA fanboy at all; I just prefer something that’s more stable.

            • Anonymous
            • 16 years ago

            OK, this is to all: I just found this page, take a peek. Look at the first image on it, the car; that was all done on a GeForce FX.

            http://www.discreet.com/products/3dsmax/

            • PLASTIC SURGEON
            • 16 years ago

            I can also show you screen shots of ATI cards rendering images that are lifelike. What’s your point? http://mirror.ati.com/developer/demos/r9800.html http://www.ati.com/technology/wp/naturallight.html — all cards have their demos to show off their hardware. And this is a little remark about “real-time” graphics: “Pixar-level animation runs about 8 hundred thousand times slower than real-time on our renderfarm cpus. (I’m guessing. There’s about 1000 cpus in the renderfarm and I guess we could produce all the frames in TS2 in about 50 days of renderfarm time. That comes to 1.2 million cpu hours for a 1.5 hour movie. That lags real time by a factor of 800,000.) Do you really believe that their toy is a million times faster than one of the cpus on our Ultra Sparc servers? What’s the chance that we wouldn’t put one of these babies on every desk in the building? They cost a couple of hundred bucks, right? Why hasn’t NVIDIA tried to give us a carton of these things? -- think of the publicity milage [sic] they could get out of it!” Duff had a point. He hammered the point home by handicapping the amount of time necessary for NVIDIA to reach such a goal: “At Moore’s Law-like rates (a factor of 10 in 5 years), even if the hardware they have today is 80 times more powerful than what we use now, it will take them 20 years before they can do the frames we do today in real time. And 20 years from now, Pixar won’t be even remotely interested in TS2-level images, and I’ll be retired, sitting on the front porch and picking my banjo, laughing at the same press release, recycled by NVIDIA’s heirs and assigns.”

            • Anonymous
            • 16 years ago

            those images are ot images ther from ATIs site and man get real that money dosant look reeal at all

            • Anonymous
            • 16 years ago

            reply to all

            What were your goals for Cg when you started? How did those goals change?

            Cg can also be used at many places in the production process for both real-time applications like games, and interactive modeling tools. Cg can also be used to build very high quality renderers for film and special effects. The fact that there is a single, hardware accelerated language for all of these applications will dramatically change workflow. You can run the same shader for previewing, editing, and for the final game or movie. You can create families of shaders that will run in different ways on different hardware bases, but have similar looks. Development with Cg is a lot easier than the raw API calls.

            • Anonymous
            • 16 years ago

            What is the difference between Cg and RenderMonkey?

            Cg is a programming language. RenderMonkey is an application, or more accurately, an IDE (integrated development environment).

            • PLASTIC SURGEON
            • 16 years ago

            Any card can render STILL real-time visuals, Anonymous. You should know that. Whoop-de-doo. They have been doing that since the release of the GeForce2 and the Rage Fury MAXX cards.
            NVIDIA cards based on the FX engine offer nothing more than ATI's R350 and R300 chips. Matter of fact, when running games, as many have said a thousand times, full-scene anti-aliasing and picture quality are superior on ATI cards at high resolutions. Gamma-corrected, rotated-grid FSAA is simply a better, cleaner technique than what NVIDIA has for its FX line. Period. And that is where it counts: not static, prerendered images. ATI can do plenty of those. 3Dlabs can do plenty of those. Matrox can do plenty of those. It's how games look when they are running and being played. And DVD playback on the 9800 Pro and 9800 AIW cards is definitely better than what NVIDIA has to offer, hands down.
            As for Cg graphics and all the other things NVIDIA has hyped: ATI can do those same things within the hardware of its cards, and run them at full DX9 spec. None of this reduced-precision crap.
            NVIDIA is doing everything in its power to PR-spin the failure of the 5800 Ultra and exaggerate the features of the FX line of chips. They have one hell of a marketing department. But the proof is in the pudding, as they say. And right now, when games are played on the 5900 Ultra and the 9800 Pro, frames per second are real close, very little difference at all. The real difference is in the IQ area. And ATI still holds that.


            • PLASTIC SURGEON
            • 16 years ago

            Shadow culling. Nice word for it. And I thought it was smart culling. Is that not the same thing NVIDIA is doing here? 😉 Just calling it something different so they can say they invented it. 😉 More great PR.

            Simply amazing. I can just imagine if NVidia had written Einstein's theory of relativity.

            “Space tells matter how to move and matter tells space how to warp” would become

            “Benchmarks tell NVidia how fast we move and NVidia can warp any benchmark we like to move”

            I can understand FM not wanting a lawsuit, hence retracting the C word; it's impolite to mention something related to fraud to a publicly listed company.

            But FM saying they will consider allowing NVidia to drop precision below the DX9 spec if its "cinematic rendering," 128-bit-capable cards can't keep up seems just a smidgen generous to me.
            Like I always say: until they fix the quality issue, NVidia will not see my hard-earned money anytime soon.

            • Anonymous
            • 16 years ago

            Better to wait for the R360 or beyond. The fact is, NVIDIA's slow shader engine is a serious bottleneck, and using an UltraShadow whatsamawhoosit will have less benefit than if they had designed their engine correctly. And their AA is busted, their precision is not there, and their IQ is in the dumps.

            ATI cards will still be able to outperform FX cards even without the shadow feature.

            With the R360 or later, ATI will just put the shadow thing in too, and to rub salt into NVIDIA's wounds, it'll probably be better! I'm also curious about what power the R400 will pack; I heard it will have a massive number of transistors… the margin between ATI's superior performance and NVIDIA's will only grow larger with every release.

            • Anonymous
            • 16 years ago

            The R400 will have more than 122 million transistors. But no, NVIDIA's UltraShadow is a separate rendering engine just for shadows, and in the Doom 3 pre-release build id gave the benchmarkers, the card ran 40% faster than the NV30 without it and over 35% faster than the R350.

            • Anonymous
            • 16 years ago

            I heard rumours the next Radeon would boast a few extra transistors, something to do with stencil buffers… I'm guessing it will be an UltraShadow rip.
            http://www.beyond3d.com/index.php (look at "Next Generation Talk"). If this is the case, the Radeon will lay the smackdown on the 5900's candy ass.

            • Anonymous
            • 16 years ago

            Actually no. See, ATI won't have an UltraShadow-style engine, but they are going to talk about adding a third stencil buffer, which could increase shadow rendering by up to 12%. As it stands, the NV35 does double the stencil shadow calculation with the UltraShadow technology. ATI could have triple that if they do create a separate engine just to render shadows, but ATI may not implement that until the R450.

            • Anonymous
            • 16 years ago

            ATI'll have a shadow engine, PS & VS 3.0, and a speed boost. Supposedly their next gen will be twice as fast as the current one.

            • PLASTIC SURGEON
            • 16 years ago

            Actually, you will see that come in early 2004 with the R420. The R400 has been cancelled; that card would only have stayed on the release schedule if ATI had wanted to keep spending the R&D money on it next month. Instead they are forging ahead with the R420, which will be an entirely new chip, distinct from the R300 and R350.


            • Anonymous
            • 16 years ago

            Well, who is gonna spend money on a new mobo for PCI Express? Really, it already has bugs in it; they will fix them, but not soon. I recommend you stick with AGP 8X.

            • Anonymous
            • 16 years ago

            ATI cards will have an AGP-to-PCI Express bridge. NVIDIA will not, from what I've heard. NV4x will give you the shaft unless you have Express.

    • droopy1592
    • 16 years ago

    No such thing as future proof, my ass. Just like I play Counter-Strike on a GF2 MX, the 9800 Pro will play DX9 games twice as fast as a GFFX a year or two from now. If I'm a moron for saying that, so are Tech Report and its staff, which I don't think is the case. Call them morons for backing me up.

    Tech report said:
    q[

      • indeego
      • 16 years ago

      The only people “serious” about gaming are those getting paid for it.

    • 5150
    • 16 years ago

    Am I the only one that just looks at the conclusion and calls it good? I remember back in the day I used to read this stuff from front to back. I suppose that at 23, I’m just getting too old.

      • Anonymous
      • 16 years ago

      When I was 23, I was still enthusiastic about reading articles (of any sort) from beginning to end, but as I got to my late 20’s, I found it to be more and more of a chore. Now at the age of 31 I find myself skimming articles more than ever before and sometimes just skip to the conclusion the same as you do. 🙂 When it comes to computer parts like video cards, I just want the benchmarks and any comments on drivers. I can’t be bothered with the pre-test drivel of an article.

    • Anonymous
    • 16 years ago

    In this industry… there is no such thing as ‘future proof’

    so buy what makes you happy for today and tomorrow. If you’re thinking beyond tomorrow then you’re thinking too far into the future.

      • atryus28
      • 16 years ago

      Oh how true, how true.

      • NeXus 6
      • 16 years ago

      Ah, but there is such a thing as “more future proof.” Ask Droopy.

        • Anonymous
        • 16 years ago

        Droopy is a moron, plain and simple.

        If there was ever an idea of ‘more future proof’ in this industry then it’s to buy the fastest thing you can get and just hope for the best. That’s it and nothing more.

        It is inevitable that what you buy today will be obsolete tomorrow. You want ‘future proof’? Buy what you can now and shut yourself out from the outside world. What you don’t know can’t hurt you. Ignorance is bliss. And for Droopy… he is the epitome of ignorance.

    • Anonymous
    • 16 years ago

    The only trouble is that after all that Nvidia has done since ATI released their 9700 Pro video card, I view them (Nvidia) in a very dim light and have since dumped their products. Maybe when Nvidia can start writing decent, stable drivers again and quit with their marketing antics, etc., I’ll consider them again. Also, I’d want them to not continue with the practice of striking “exclusive deals” like they’ve recently done with EA. If Nvidia’s cards are truly “the way it’s meant to be played”, they should stand on their own merits rather than trying to get developers and publishers to purposely bias their games in favor of Nvidia’s hardware.

    • atryus28
    • 16 years ago

    I saw Newegg had a Radeon 9600 for $137 from some company called Connect3D. Is it just me, or is this way cheap? It also has a passive cooler on it. http://www.newegg.com/app/ViewProduct.asp?submit=manufactory&manufactory=1851&catalog=48&DEPA=1&sortby=14&order=1

      • StripSurge
      • 16 years ago

      Did you see the picture of their Radeon 9000? It contains a genuine “Redeon 9000 GPU.” If they can’t take the time to spell, God only knows what kind of quality you’re getting.

        • atryus28
        • 16 years ago

        No, I don’t see any pictures showing a misspelled Radeon.

    • Anonymous
    • 16 years ago

    ‘640k is all you need’

    • Anonymous
    • 16 years ago

    1024×768 + 4xAA + 8xAS + 60FPS = it’s all you need.

    My 5600 delivers that… and when DX9 games come out, my vanilla 5900 will still be better than a 9500 or 9600… I bet this in June 2003.

    I’ll see all you ATI fanboys in December 2003 when Doom 3 brings your 9600s down.

    -“the ag”

      • Anonymous
      • 16 years ago

      A 5900??? Fact: FX DX9 sucks. Check the 3DMurk game 4 results.

        • Anonymous
        • 16 years ago

        All you ATIdiots will be disappointed when, in two years, your damn so-called Radeon 9800 Pro chugs on new DX9 titles and the 5900 Ultra I’ll have will be flying along at 65FPS while you’ll be lucky to hit 23FPS.

          • Anonymous
          • 16 years ago

          In 2 yrs!! The fx line will be dead by then! Haha. R4xx yeah!!!

          • Anonymous
          • 16 years ago

          WTF? The 5900 doesn’t even beat a year-old card in DX9.

          • PLASTIC SURGEON
          • 16 years ago

          You are quite dumb. Even if you disregard 3DMark03, it is the only DX9 test out there that pushes cards to run DX9-style workloads, and the FX line does not do them well, especially in the shader tests, an area future games will take advantage of. And Shader 2.0 is also slower on the FX chipsets.

      • droopy1592
      • 16 years ago

      I’d have to say you are an idiot because everyone knows that GFFX anything sucks at anything related to DX9.

    • Anonymous
    • 16 years ago

    >>>>>Conclusion

    By looking at the scores above, I find it hard to convince the consumers that FX5600 would give them a better gaming experience than a good old GF4 Ti. Generally speaking, Leadtek A310 (FX5600) is around 10~35% slower than a standard GF4 Ti-4800SE. The FX5600 may come out on top if FSAA is used but that is about it.

    I am not too sure how Nvidia is going to explain to the consumers that the replacement GPU, under certain circumstances, is slower than the one being replaced. Yet, it costs more. You should not let DX9 capability influence your buying decision too much, as by the time DX9 titles hit the street, the FX5600 would be obsolete.<<<<<

    The whole fx line minus the 5900 is quite crap……

      • Anonymous
      • 16 years ago

      I’d have to agree on that. If you aren’t going to be enabling AA or AF in any case, then the GeForce4 Ti 4200, which is in the same price range, would be a much better card. It still gives great fps, sometimes comparable to the 9500 Pro, of course without AA or AF though.

      • NeXus 6
      • 16 years ago

      The same can be said for any video card on the market right now. People pimping current ATi video cards think DX9 is the main reason to buy one. I wonder how many here will still be using their 9500/9700/9800 Pro cards a year from now. Buy a video card for current games; don’t buy it for a game that doesn’t yet exist.

        • droopy1592
        • 16 years ago

        Some people only upgrade every 18 months to 2 years (like me), so they may wish for their card to be as future-proof as possible.

        There will be DX9 games in that time period. That’s why I have a 9700 Pro that’s overclocked to hell. I probably won’t buy a video card until I have an SATA/PCI Express rig going. That won’t be for a while.

          • NeXus 6
          • 16 years ago

          Well, tell me when that “time period” comes so you can express your disappointment about how you can’t play that latest DX9 game you just bought at max IQ settings. I can’t wait…

            • PLASTIC SURGEON
            • 16 years ago

            Thing is, you will only be disappointed if you want to play those games at MAX settings. Otherwise those cards will be perfectly fine. If max settings are your intention, then you need to wait for the R420 or the NV40, as the 9700 Pro, 9800 Pro, 5800 Ultra, and 5900 Ultra will not run those games at max settings at 60fps. Mid-range DX9 performance has to go to the 9500 Pro and 9600 Pro.
            I have to agree that the current line of FX chipsets, not including the 5900 Ultra, is pitiful. Nvidia really bombed on those cards. Including the mobile chip too.

            • droopy1592
            • 16 years ago

            Fact is, would you rather be playing DX9 games on a Radeon 9800 Pro or a GFFX 5900 Ultra a year from now? That’s the argument, and the question, not whether or not the 9800 Pro will play DX9 games at high res next year. That makes it more future-proof. They both play DX7 and DX8 games equally well and fast enough to satisfy anyone.

            If my 9800 Pro will run Jim Bob’s NASCAR racing DX9 title at more than double the frame rate of the 5900 this time next year, I’d much rather have an ATi card in my box.

            I had to upgrade. I was pushing a TNT2 in one box and a GF2 MX in the other.

            • NeXus 6
            • 16 years ago

            I’d say either video card, since you’ll have to turn down most of the IQ settings to get a decent frame rate. Either way you’ll be unhappy with it.

            Regarding DX9 games, you’re just making assumptions again. You can’t just assume every DX9 game will run faster on the 9800 Pro. And besides, most people are going to ditch what they have right now for the latest video cards if they want the best gaming performance (like they always do). And if you’re stuck with an older card, you’ll just have to turn down the IQ settings. BTW, you made no mention of OpenGL games. Is the FX 5900 Ultra “more future proof” because it’s faster than ATi in OpenGL?

            • PLASTIC SURGEON
            • 16 years ago

            Future proof does not really exist anymore, as long as NVIDIA and ATI keep up this insane six-month cycle. As soon as you buy a high-end or mid-range card, they bring out a new refresh or a new core. So buying a card to future-proof is quite silly and about as useful as the balls on the Pope. Buy for the now, not for the future. PC hardware technology changes too fast, and the software side changes a lot slower, so future-proofing your purchase is just not happening.
            But having said that, if I had the coin I would buy the 9800 AIW over the 5900 Ultra. Why? Because the price is the same and the AIW gives you more features and video goodies. And if you want to push the future-proof issue (hey, it’s a free world), games coming out look to be shader-intensive, more so than games of the past. So the 9800 Pro would be the wise choice with its more powerful shader engine… of course, if you want to drop $499.99 US on it. 😉 LOL

        • AENIMA
        • 16 years ago

        That really isn’t a fair argument. My friend and I both had computers: he had a 1.5GHz P4 with 256MB of RDRAM and a GeForce3; I had a 1GHz Athlon with 512MB of SDRAM and a GeForce2. I told him, “My computer plays games now just fine, why waste the extra cash on the GeForce3?” It wasn’t long at all before I HAD to upgrade. Medal of Honor would hardly run, and that’s not even DX8. Now here he is, with a GeForce3, and he can play Splinter Cell and UT2003 and BF1942. I wouldn’t dream of using my old GeForce2 on any of those titles. So here I am with a Radeon 9700 that cost $250, and my friend can still play lots of games today without upgrading. Not that I regret the $250 spent on it either. I know that years from now I will be ABLE to play future titles while people who decided to buy for now are FORCED to upgrade.

          • absurdity
          • 16 years ago

          Do a side by side comparison of your cards, and then see what you think.

          Geforce3 with Doom III or Half-Life is going to be decent with settings turned down… but you’ll be able to crank them, and then he’ll be the one needing the upgrade.

            • NeXus 6
            • 16 years ago

            That’s my point. I currently have a GF3 and I’m not going to enjoy those games at 1280 x 960 with AF or AA. Carmack even claimed Doom 3 would run decently on GF3 hardware. Yeah, if you enjoy playing at a bogus screen resolution with minimal IQ settings.

            • PLASTIC SURGEON
            • 16 years ago

            What’s decent? Decent for most people would be high resolution, maybe not with IQ pumped to the max, but good resolutions. And what else do you think Carmack is going to say?

    • Anonymous
    • 16 years ago

    Most important question of the day…

    I have a Duron 600 o/c to 1GHz. I also have a GF2MX card in it and 512MB of RAM.

    Big question of the day is what video card should I buy to gain some performance?

    Yes I know the CPU ain’t all that fast, but what vid card with that CPU makes the most of my money? I’d prefer not to spend more than $150 for a card, but less is better.

    • Anonymous
    • 16 years ago

    Apparently, with crappy AA, incomplete DX9 compatibility, and much slower performance (without overclocking the very overclockable 9600 Pro), a person would have to be brain dead not to spend 30 bucks more. Not to mention that anyone who supports NV while they are cheating, lying, and cutting corners like mad is a total fool. The review was certainly better than most; however, I don’t understand how you can compare AA performance when it’s definitely not apples to apples. NV AA sucks. This should be pointed out very clearly until NV gets it that we won’t be stupid enough to believe their numbers. Just to be fair, the 90/91/92 don’t impress me either; ATI should have made a cheaper variation on the RV350 for this market.

    BackSpaced

    PS. where are the dx9 police? arrest that board!!!

    • eitje
    • 16 years ago

    q[

    • tu2thepoo
    • 16 years ago

    I don’t know if this has been mentioned before, but a lot of the cards the 5200 has been compared to are really out of its league. I mean, who’s seriously going to consider a 5200 if they have the cash for a Radeon 9700 or GFFX 5900?

    I’d really like to see some comparisons to the Radeon 7500/8500/9000, the GeForce2/3, the Kyro I/II, and other previous-generation cards, because I wager that’s what the majority of the 5200’s intended market is currently using.

      • droopy1592
      • 16 years ago

      It’s the 5200 Ultra, and if you read the conclusion:

      s[

        • tu2thepoo
        • 16 years ago

        eitje: I’m not saying it should’ve been an entirely new comparison, but it would have been nice to see a baseline GeForce3 or Radeon 8500 system, to know how much faster it is than the previous generation.

        droopy: is this the equivalent of an “RTFM n00b” post? Like I wrote above, I think a gamer would still like some sort of “previous-generation” reference. It may just be a pipe dream of mine, but I think it’d be really nice to see a DX8-class card (GF3, Radeon 8500, etc.) included in comparisons of all but the most high-end cards.

          • droopy1592
          • 16 years ago

          Well, a 9000 Pro is about as close as you are going to get to an 8500, but it may be a bit slower. That may satisfy you. While your idea may not sound like a bad one, reviewers rarely go back to a previous generation of anything unless we just made the jump. Like “Is this year’s BMW 330i better than last year’s 328i?”

          Look at what you wrote: q[

      • eitje
      • 16 years ago

      In this review, the most expensive card with which the 5200U was compared was the 9600. Given that the price difference is only $30 there, I think TR did a very good job of a “today’s cards” review.

      I believe you HAVE to assume that the 5200U is going to perform better than a 7500 or GF2 across the board (perhaps not outperforming the most recent previous-gen cards, though), so TR’s review acts more as a “which card should I consider for an upgrade?” than a “how does this perform compared with what I have?”

      • tu2thepoo
      • 16 years ago

      Actually, I take it back. It was really early when I first thumbed through the review, and I thought it was the 9600/9500 versus the 5200 family.

      my bad!

    • droopy1592
    • 16 years ago

    q[

      • indeego
      • 16 years ago

      We’re still waiting for those next-gen apps. Last I heard, September 30th is still many months away. Until then, screw getting a new card, period.

        • PLASTIC SURGEON
        • 16 years ago

        Everyone might as well wait until late November, or even next year, when both companies release their new chips, the R420 and the NV40. Keep what you have now unless you’re running a GeForce2 card or a Rage 128… lol

    • SPOODZ
    • 16 years ago

    The 9500 Pro is a better buy than the 9600 Pro; anyone knows that. As for the Nvidiots’ cards, who gives a damn, they all suck.

    Go ATI !!

    • NeXus 6
    • 16 years ago

    AG #2 –

    Like it matters to you, me, or anyone who does a little research before they buy? NVIDIA is one of many companies that do it. It’s called making money. It may be deceptive, but it works. Don’t worry about poor Joe Six-Pack. He is what keeps these companies in business.

      • moog
      • 16 years ago

      joe six-pack will see the holy ati light soon. the views of the enthusiasts and the suggestions of informed owners of the smaller comp stores will eventually seep through the many calcified cranial layers of joey’s noodle noggin head.

    • Anonymous
    • 16 years ago

    Now one thing makes me wonder… wasn’t Splinter Cell the game that, according to the developers at Ubisoft, didn’t work with multisampling AA at all, producing artifacts on the screen when it was enabled?

    AFAIK nVidia doesn’t allow AA to be applied to SC at all, while ATi does, but it produces artifacts, making the SC 4xAA/8xAF benchmarks absolutely meaningless?

    Sorry, if my memory is playing games on me.

    • PLASTIC SURGEON
    • 16 years ago

    Stick with a 9600 Pro or 9500 Pro and spend the extra $20-$50.

    • atidriverssuck
    • 16 years ago

    how does it compare to a 4200?

      • Anonymous
      • 16 years ago

      The 4200 is better without AA. If you have a Ti 4200, don’t even bother to upgrade to that card. Or downgrade. 😉 The only thing the 5200 has going for it is DX9 features, which is a joke because these low-end cards won’t run DX9 apps decently anyway. This is just a marketing ploy for people who know squat.

        • atidriverssuck
        • 16 years ago

        Ahh, thanks. In that case I’ll be sticking with the 4200 for the foreseeable future. Nothing to see here, moving right along…

      • PLASTIC SURGEON
      • 16 years ago

      I have an MSI 4200 Ti running in my secondary PC and a 9700 Pro in my primary. After reading the reviews of this card, there is no way in hell I would swap my 4200 Ti for it. A 9500 Pro? Yes. A 9600 Pro? Yes. Maybe even the 5600 Ultra (the new core). But the 5200? Not a chance. DX9 compliant? lol. Too bad the card is too slow to run those programs when they emerge. People will be suckered into buying this card anyway. That’s the sad part. Total marketing with this DX9 nonsense.

    • Anonymous
    • 16 years ago

    Yeah, and ATI’s naming scheme isn’t deceptive… whatever. Get a clue, dude.

    • Anonymous
    • 16 years ago

    I don’t know what it’s like elsewhere, but it seems that all of the “Ultra” GeForce FX cards are hard to come by… the shelves are starting to fill up with regular FXs, however. Perhaps the Ultra boards are coming out later…

    If this isn’t the case, I find it misleading that nVidia chooses to send Ultra cards to reviewers while stocking the shelves with regular cards. Customers may go to nVidia’s website and read some reviews about how the GeForce FX 5200 Ultra performs, then go to the store and pick up a normal 5200 instead. These Ultra boards are rather pricey, but at the store, Joe will think the $80 5200 will completely smoke a 9000 Pro… It’s almost a bait-and-switch.

      • JustAnEngineer
      • 16 years ago

      Deceptive product naming has been one of NVidia’s most successful marketing ploys, starting with the TNT2 M64 and degenerating through the GeForce2 MX, GeForce2 MX 200, GeForce4 MX, GeForce4 MX 420, etc., not to mention the false promise of the GeForce4 Ti 4800SE. Can it ever reach bottom?

      Combined with increased market segmentation, confusing product naming has allowed NVidia’s ODMs to sell underperforming cards to millions of consumers who based their purchases on the good performance reviews of a similarly-named but different NVidia-based card.

      I still cannot figure out why a sane person would purchase any available GeForceFX product when Radeon 9500 Pro is readily available for less than $200. Remember that Radeon 9500 Pro handily beats the Radeon 9600 Pro shown in Dissonance’s review.

        • DemonicAngel
        • 16 years ago

        To be fair, ATi’s naming scheme isn’t exactly transparent either. You gave an example yourself – 9500 faster than 9600? Makes no sense. 🙂

        • Anonymous
        • 16 years ago

          Actually, the normal 5200 is slightly better than the 9000.

          • JustAnEngineer
          • 16 years ago

          I did not mention the Radeon 9000 or 9500. I specifically mentioned the Radeon 9500 Pro.

    • dukerjames
    • 16 years ago

    Is this the first Ultra that sucked?

    I mean, all the previous Ultra cards from NVIDIA were the top dogs back then and cost big $$$. This looks like some cheap budget card that doesn’t really deserve its “Ultra” badge.

    edit: er… 1st post?? woot!
