ATI’s Radeon 9800 Pro graphics card

ATI’S RADEON 9700 PRO HAS been a resounding success, capturing the lead in both graphics technology and performance for ATI upon its introduction last fall and holding the crown to this day—at least in terms of products shipping in volume. NVIDIA’s GeForce FX 5800 Ultra may have captured at least part of the technology and performance titles for itself, but the cards are so rare that we haven’t even been able to secure one for review.

The new Radeon 9800 Pro is all about solidly winning the graphics lead for ATI, and these cards are set to hit store shelves this month—quite possibly before any of the high-end GeForce FX cards arrive. With fast 256-bit DDR memory, improved pixel shaders, and more efficient use of memory bandwidth, the Radeon 9800 Pro looks to be the new king of the hill. Read on as we examine the 9800 Pro in detail, exploring the performance and technology behind ATI’s latest and greatest.

The R350 VPU debuts
The Radeon 9800 Pro is based on the chip code-named R350. R350 is, as you might expect, derived from ATI’s R300 chip, which powers ATI’s Radeon 9500 and 9700 lineups. We’ve already reviewed the Radeon 9700 Pro in some depth, and I will try to avoid repeating myself here. If you want to understand the technology from which the R350 is derived, please read our 9700 review.

The key things you need to know about the R350 chip are fairly basic. Like the R300, the R350 is manufactured using 0.15-micron process tech, and like the R300, it has 8 pipelines with one texture unit per pipe. The R350’s 256-bit DDR memory interface runs at a higher clock speed, which allows the chip to have even more memory bandwidth than the Radeon 9700 Pro. The Radeon 9800 Pro will debut with an effective 680MHz memory clock speed, which gives it a very healthy 21.8GB/s of memory bandwidth. The Radeon 9700 Pro, by contrast, topped out at 19.8GB/s.
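
To put those numbers in context, peak memory bandwidth is just the effective memory clock multiplied by the bus width. A quick back-of-the-envelope check (a throwaway Python snippet, purely for illustration) reproduces the figures above:

```python
def peak_bandwidth_gbps(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective DDR clock times bytes per transfer."""
    bytes_per_transfer = bus_width_bits / 8   # a 256-bit bus moves 32 bytes per transfer
    return effective_mhz * 1e6 * bytes_per_transfer / 1e9

print(peak_bandwidth_gbps(680, 256))  # Radeon 9800 Pro: ~21.8 GB/s
print(peak_bandwidth_gbps(620, 256))  # Radeon 9700 Pro: ~19.8 GB/s
```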

No, that’s not a huge gain in terms of bandwidth overall, but it’s not bad. ATI has achieved more throughput than any other consumer graphics chip, and they’ve done so without resorting to a Dustbuster appendage. Hard to argue with that.

ATI has taken several measures to allow the R350 to put its memory bandwidth to good use. The clock speed of the R350 chip is 380MHz, while the R300 peaked at 325MHz in the Radeon 9700 Pro. Also, the company has tuned the chip’s memory controller to better arbitrate reads and writes during heavy use, which should especially help performance when rendering antialiased pixels. Finally, the R350 has an improved cache for Z-buffer reads and writes, to aid in the bandwidth-intensive task of handling pixel depth information. (That is, info about a pixel’s position on the Z axis.) ATI says this cache has been optimized to work better with stencil buffer data, which should help when developers use stencil shadow volumes to create shadowing effects in future games like Doom III.

Your new graphics catch phrase: F-buffer
The most significant piece of new technology in the R350, however, is more than a simple performance tweak. One of the NVIDIA GeForce FX’s key advantages over the Radeon 9700 is its ability to execute pixel shader programs as long as 1024 instructions. The pixel shaders on the R300 chip are limited to program lengths of 64 instructions, which simply isn’t enough to create some of the more compelling shader effects developers might want to use. In order to produce more complex effects, the R300 would have to resort to multi-pass rendering. Multi-pass rendering is nifty because it overcomes a lot of technical limitations, but it’s a performance killer because it duplicates lots of work unnecessarily. Essentially, to provide really complex shader effects in real time, you want to avoid making multiple rendering passes, at least in the traditional sense of full passes through the GPU pipeline. The GeForce FX can do so, but the R300 can’t.

To understand why all of this multi-pass stuff matters, and to get a sense of why I get all hot and bothered when talking about DirectX 9-class hardware, go read my article about such things. There, I identified pixel shader program lengths as a noteworthy advantage for NVIDIA way back in August.

ATI has addressed the R300’s pixel shader limitations in the R350 by implementing something called an F-buffer. (ATI says the “F” stands for “fragment stream FIFO buffer,” in case you were wondering.) The R350’s F-buffer allows it to execute pixel shader programs of arbitrary length, bringing it at least to parity with NVIDIA’s GeForce FX. The genesis of the F-buffer idea was a paper by William R. Mark and Kekoa Proudfoot at Stanford University. Mark and Proudfoot suggested the F-buffer as a means of storing intermediate results of rendering passes without writing each pixel to the frame buffer and taking another trip through the graphics pipeline.


Source: ATI

Storing intermediate results in a FIFO buffer not only offers the potential for big performance increases over traditional multi-pass techniques, it also sidesteps a number of problems. For instance, multi-pass rendering doesn’t handle transparent or translucent surfaces particularly well. In this case, the chip must perform a color blend operation before writing the pixel to the framebuffer, which can cause problems with the look of the final, rendered output. The F-buffer, however, can store both foreground and background pixel fragments and perform additional operations on them both—no blend ops needed between passes.

The F-buffer approach does have some limitations, but they aren’t show-stoppers, from what I gather. However, as with traditional multi-pass approaches, pixel shader programs will have to be structured to account for the GPU’s per-pass rendering limitations.
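
To make the idea a little more concrete, here’s a toy sketch of the concept in Python. This is my own simplification for illustration, not ATI’s hardware logic: the “shader” is just a list of operations, the per-pass limit stands in for the R300-style 64-instruction ceiling, and a simple FIFO plays the role of the fragment stream buffer.

```python
from collections import deque

MAX_OPS_PER_PASS = 64   # stand-in for the R300-style per-pass instruction limit

def split_into_passes(program, limit=MAX_OPS_PER_PASS):
    """Chop a long shader (modeled as a list of instructions) into per-pass chunks."""
    return [program[i:i + limit] for i in range(0, len(program), limit)]

def run_with_fbuffer(program, fragments):
    """Toy model: partial results ride along in a fragment FIFO between passes,
    instead of being blended into the framebuffer and rasterized again."""
    fifo = deque(fragments)                  # the 'fragment stream FIFO'
    for shader_pass in split_into_passes(program):
        next_fifo = deque()
        while fifo:
            value = fifo.popleft()
            for op in shader_pass:           # run this pass's instructions
                value = op(value)
            next_fifo.append(value)          # keep the intermediate result around
        fifo = next_fifo
    return list(fifo)

# A 150-"instruction" program needs three passes under a 64-op limit, but in
# this model no blend operations happen between passes.
long_program = [lambda x: x + 1] * 150
print(run_with_fbuffer(long_program, fragments=[0, 10]))   # -> [150, 160]
```

The point is simply that partial results stay in the fragment stream between passes, so no blend-and-resubmit trip through the framebuffer is needed.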

Of course, in the new worlds of DirectX 9 and OpenGL 2.0, such things generally ought to be handled by compilers. Shader programs will largely be written in high-level shading languages like MS’s HLSL and broken down into passes by a runtime compiler. With high-level shading languages, developers need not think much about the hardware’s per-pass limitations.

Honestly, I didn’t expect ATI to address the R300’s 64-instruction pixel shader limit with this “half-generation” refresh chip, but they’ve apparently done so. The jury is still out on how R350’s approach compares to NVIDIA’s GeForce FX chips, mainly because we don’t yet have enough information about the NVIDIA chip to understand precisely how these two chips compare. My sense is that the NV30 chip in the GeForce FX offers a little more complexity and flexibility than the F-buffer approach, but the real-world differences in performance and rendering output are likely to be minor.

In all, the F-buffer is a crucial enhancement to the R350 that reasserts ATI’s technology leadership in graphics. The concept is fundamentally simple, as many good innovations in computers are, but the impact of the change is profound.

 

The Radeon 9800 Pro card
OK, enough of the egghead graphics geek stuff. Let’s check out the new hardware. Here are some beauty shots of the card:

The 9800 Pro card isn’t wildly different from the Radeon 9700 Pro. Many of the same bits are in the same places, although the board itself is about a half-inch longer than the 9700 cards. All the standard graphics card stuff is there, including VGA, DVI, and TV-Out ports. One especially welcome addition is that cute little heatsink on the card’s voltage regulator. The metal plate on the back side of the original 9700 cards tended to get very hot, and the heatsink seems like a better design.

Remarkably, the card’s main heatsink/fan unit is actually smaller than the stock ATI cooler on the 9700 Pro. The unit’s fan has more blades at a sharper angle than the 9700’s fan, but noise levels seem largely unchanged. As with the 9700, the Radeon 9800 Pro is barely audible amidst the noise generated by a standard CPU heatsink/fan combo and a power supply fan.

ATI uses Samsung DDR memory chips on the 9800 Pro, and unlike the 9700, the 9800 card uses a four-pin Molex-type connector for auxiliary power. These connectors are more abundant in most systems than the smaller floppy-drive power connector on the 9700 cards, so the change is welcome.

Now that we’ve ogled the card, let’s see how it performs.

 

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged.

Our test system was configured like so:

Processor: Athlon XP ‘Thoroughbred’ 2600+ 2.083GHz
Front-side bus: 333MHz (166MHz DDR)
Motherboard: Asus A7N8X Deluxe
Chipset: NVIDIA nForce2
North bridge: nForce2 SPP
South bridge: nForce2 MCP-T
Chipset drivers: 2.03
Memory size: 512MB (2 DIMMs)
Memory type: Corsair XMS3200 PC2700 DDR SDRAM (333MHz)
Sound: nForce2 APU
Storage: Maxtor DiamondMax Plus D740X 7200RPM ATA/100 hard drive
OS: Microsoft Windows XP Professional
OS updates: Service Pack 1, DirectX 9.0

The test system’s Windows desktop was set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used Catalyst revision 7.84 drivers for the Radeon 9800 Pro and Catalyst 3.1 (7.83) drivers for the 9700 Pro. For the NVIDIA cards, we used NVIDIA’s Detonator 42.68 drivers, which aren’t yet publicly available (wink, wink), but include optimizations for 3DMark03.

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

 

Synthetic tests
We’ll kick off our review with a series of synthetic tests, so we can see how the Radeon 9800 Pro’s enhancements have helped specific kinds of performance versus the 9700 Pro. We’ll also use some more general, application-based tests to gauge real-world performance.

Fill rate
First and foremost in graphics performance, as always, is fill rate. We are talking raw, pixel-pushing power here, and in this case, specs matter. Specifications aren’t destiny for a GPU, but they are a full-ride scholarship to the college of its choice plus free tutoring whenever needed. In this department, the 9800 is especially well-endowed. Behold, the trusty chip chart, sorted conveniently in order of memory bandwidth:

Card Core clock (MHz) Pixel pipelines Peak fill rate (Mpixels/s) Texture units per pixel pipeline Peak fill rate (Mtexels/s) Memory clock (MHz) Memory bus width (bits) Peak memory bandwidth (GB/s)
Radeon 9600 325 4 1300 1 1300 400 128 6.4
GeForce4 Ti 4200 8X 250 4 1000 2 2000 512 128 8.2
Radeon 9500 275 4 1100 1 1100 540 128 8.6
Radeon 9500 Pro 275 8 2200 1 2200 540 128 8.6
Radeon 9600 Pro 400 4 1600 1 1600 600 128 9.6
GeForce4 Ti 4600 300 4 1200 2 2400 650 128 10.4
GeForce FX 5800 400 4 1600 2 3200 800 128 12.8
GeForce FX 5800 Ultra 500 4 2000 2 4000 1000 128 16.0
Radeon 9700 275 8 2200 1 2200 540 256 17.3
Parhelia-512 220 4 880 4 3520 550 256 17.6
Radeon 9700 Pro 325 8 2600 1 2600 620 256 19.8
Radeon 9800 Pro 380 8 3040 1 3040 680 256 21.8

The 9800 Pro is the pimp-daddy of pixel-pushing prowess, with over 3 gigapixels per second of single-textured fill rate. With two-layer multitexturing in the picture, the GeForce FX 5800 Ultra has a higher peak texel fill rate, but the FX may lack the memory bandwidth to keep up with the 9800 Pro. In the special case of four-layer multitexturing, Matrox’s Parhelia leads the pack, but theory and reality rarely collide there.
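
Incidentally, the fill rate columns in that chart are simple products of the other columns. Here’s a two-line sanity check (again, just an illustrative Python snippet):

```python
def fill_rates(core_mhz, pipelines, texture_units_per_pipe):
    """Peak pixel and texel fill rates in Mpixels/s and Mtexels/s."""
    pixel_rate = core_mhz * pipelines
    texel_rate = pixel_rate * texture_units_per_pipe
    return pixel_rate, texel_rate

print(fill_rates(380, 8, 1))  # Radeon 9800 Pro: (3040, 3040)
print(fill_rates(500, 4, 2))  # GeForce FX 5800 Ultra: (2000, 4000)
```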

Overall, the fill rate increase (and corresponding memory bandwidth increase) from the 9700 Pro to the 9800 Pro is fairly modest. However, since the 9800 Pro supplants the 9700 Pro at the same price point, the gains are quite welcome.

Here’s how the 9800 Pro measured up in our synthetic tests.

The 9800 Pro nicely distances itself from its older sibling, and it absolutely crushes the previous-gen NVIDIA card in single-textured fill rate. With multitexturing, the 9800 Pro is still quite potent.

Diss enjoys playing theoretical versus actual with fill rate, so we can see how memory bandwidth limitations and other mitigating factors keep the chips from reaching their peak potential. (Of course, with synthetic tests, those limitations don’t always show up quite as much as they can in real applications.) I’ll indulge him. Here’s the dirt.

The 9800 Pro stays a ways below its theoretical peak pixel fill rate with single texturing, but when multiple textures are required, it’s able to achieve near-perfection in the 3DMark fill rate test.

Before we go on, I should note that for a really high-end card like the 9800 Pro, the most important sorts of fill rate performance aren’t the straight-up numbers we’re measuring here. The 9800 Pro can deliver amazing performance with anisotropic filtering and edge antialiasing enabled, which is where a card like this really excels. We will look at antialiasing and texture filtering performance later in the review, but I wanted to mention that fact now. The R350 GPU employs sophisticated techniques like 6:1 color compression to make AA performance especially smooth. In fact, our entire test suite should probably be revised to better account for antialiasing and the like, but we didn’t have time to make those revisions since the 9800 Pro arrived in our labs just this past Saturday.
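
For the curious, the general idea behind color compression for antialiased rendering is easy to sketch. What follows is a generic illustration of the principle in Python, not ATI’s actual 6:1 scheme: when no polygon edge crosses a pixel, all of its AA samples share a single color, so only one value has to be stored or moved.

```python
def compress_pixel_samples(samples):
    """Toy lossless compression of one pixel's AA samples.
    If every sample has the same color (no edge crosses the pixel), store it once."""
    if all(s == samples[0] for s in samples):
        return ("compressed", samples[0])        # one value instead of N
    return ("raw", list(samples))                # edge pixel: keep every sample

# Interior pixel: six identical samples collapse to a single stored color.
print(compress_pixel_samples([(255, 0, 0)] * 6))
# Edge pixel: samples differ, so nothing can be thrown away.
print(compress_pixel_samples([(255, 0, 0)] * 3 + [(0, 0, 255)] * 3))
```

Since the vast majority of pixels in a typical scene aren’t edge pixels, most of the extra bandwidth cost of multisampling can be compressed away.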

Occlusion detection
VillageMark tests fill rate, including the ability of a GPU to avoid drawing pixels that won’t make it onscreen—a.k.a. overdraw. These pixels are generally situated behind others in a scene and are therefore obscured. The R3x0 chips use a technique called Early Z to “virtually eliminate” overdraw. VillageMark renders a scene with gobs of overdraw, and it’s up to the cards to cope.
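
Conceptually, the trick is to reject a fragment based on its depth value before any expensive shading work happens. Here’s a toy software-rasterizer sketch of that idea in Python; it’s a generic illustration of early depth rejection, not the R3x0’s actual hardware implementation:

```python
def shade(fragment):
    # Stand-in for expensive pixel shading / texturing work.
    return fragment["color"]

def draw_with_early_z(fragments, depth_buffer, frame_buffer):
    """Depth-test each fragment *before* shading; occluded ones cost almost nothing."""
    shaded = 0
    for f in fragments:
        x, z = f["x"], f["z"]
        if z >= depth_buffer[x]:      # something nearer is already there
            continue                  # overdraw avoided: skip shading entirely
        depth_buffer[x] = z
        frame_buffer[x] = shade(f)
        shaded += 1
    return shaded

depth = [float("inf")] * 4
frame = [None] * 4
frags = [{"x": 1, "z": 0.5, "color": "far"},
         {"x": 1, "z": 0.2, "color": "near"},
         {"x": 1, "z": 0.9, "color": "hidden"}]   # would be pure overdraw
print(draw_with_early_z(frags, depth, frame))     # only 2 of 3 fragments get shaded
```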

The 9800 comes through looking good, although it doesn’t appear to be wildly more efficient than the 9700 Pro. The GeForce4 Ti chip lacks Early Z, though it does have some overdraw-reduction abilities, and the disparity shows.

 

Pixel shader performance
Next up, we have a series of pixel shader tests to see how these cards handle fancy-pants effects. Unfortunately, only one of our tests uses the pixel shader 2.0 standard from DirectX 9, and none of the tests will flex the F-buffer at all. (Baseline DirectX 9 pixel shader 2.0 programs top out at 64 arithmetic instructions, within the R300’s existing limits.) These tests will show us generally how the cards’ pixel shaders will perform in current games, however.

Guess what? The 9800 Pro is very fast, and the old GF4 Ti nearly gets a hernia trying to keep up. You can see why it’s crucial for NVIDIA to get GeForce FX products to stores soon.

Now let’s try NVIDIA’s own ChameleonMark.

For DX8-class pixel shaders, ATI’s R3x0 chips are worlds ahead of the GF4 Ti. Let’s try what’s currently our only DX9 pixel shader test, from the new 3DMark03. Because it’s a DX8-class chip, the GeForce4 Ti will have to sit this one out.

The extra clock speed and memory bandwidth give the 9800 Pro a slight edge over its predecessor here.

 
Polygon throughput and vertex shader performance
Now we’ll try out some vertex shader and lighting tests to see what kind of pain the 9800 Pro can inflict on its competition in this department.

Like R300, the R350 exhibits amazing vertex shader throughput, more than doubling the performance of the GF4 Ti 4600. Oddly, the R300 matches the R350 in the 3DMark2001 SE test until fill rate considerations intrude at higher resolutions. The 3DMark03 test, however, shows the R350 to be faster.

Let’s look at legacy transform and lighting now. As ever, all these chips use a vertex shader program to emulate an older-style T&L unit.

You were expecting something else? The 9800 Pro again prevails.

 

Quake III Arena
Now to the real game tests. We tested with a Quake III demo from a CPL match involving my bud Fatal1ty, of course. It’s a long-ish demo for benchmarking, but it should be a nice test. You can grab the demo from our server here, at least until we find out the thing is copyrighted somehow.

The GF4 Ti card puts in a relatively strong showing in this older game, but as screen resolutions increase, the 9800 Pro asserts its superiority.

Comanche 4
The shader-aware Comanche 4 offers us a chance to test DirectX gaming performance.

Here’s a case where our test system, based on an Athlon XP 2600+ with a 333MHz front-side bus and dual-channel DDR333 memory, is clearly the limiting performance factor. Only at 1600×1200 does the GF4 Ti 4600 card begin to dip below the roughly 45-47 frames per second the other cards maintain, and only by a smidgen. Oh well. Not every game is limited by graphics card performance.

Codecreatures Benchmark Pro
The Codecreatures engine is also a DirectX 8-class application, so it may give us some insights into DX8 performance that Comanche 4 couldn’t.

The 9800 Pro at 1600×1200 nearly matches the GF4 Ti card at 1024×768, and the 9800 Pro shows some decent performance gains over the 9700 Pro.

 

Unreal Tournament 2003
Now for a really real DirectX 8-class game engine.

Once more, the 9800 Pro comes out on top.

 

Serious Sam SE
We wanted to really stress the 9800 Pro for once, so this time around, we used Serious Sam’s “Extreme Quality” add-on to set graphics options. This should be largely an apples-to-apples comparison from one card to the next, but the Radeon cards are doing 16X anisotropic filtering here, and the Ti 4600 is doing its maximum of 8X aniso. However, with the adaptive aniso algorithms that ATI and NVIDIA use, the difference between 8X aniso and 16X aniso is very minor.

The 9800 Pro barely pulls ahead of the 9700 Pro here, but both Radeons deliver Serious Sam at extremely high-quality graphics settings at over 60 fps at 1280×1024—an impressive accomplishment.

Let’s look at a timescale graph to see how much the cards’ performance varies over the course of our test demo.

As you can see, the 9800 Pro delivers more than just nice frame rate averages over time. Even in our demo’s worst-case scenarios, it runs well ahead of the GF4 Ti card.

 

3DMark2001 SE
For completeness, I’ve included 3DMark2001, which is a decent DirectX 8 performance test. I won’t comment much on the results, because there’s not much more to say.

Again, the 9800 Pro rolls, especially at high resolutions.

 

3DMark03
3DMark03 has become a bit controversial. Please see my article about the controversy to better understand the issues. Nonetheless, we’ll offer results from this benchmark, because it uses more advanced rendering techniques than most anything else we can use to test.

The 3DMark03 composite score above is just a weighted average of the four game tests below. Also, note that the GF4 Ti card isn’t able to run game test 4, because that test requires DX9 compatibility. The missing score hurts the GF4’s showing in the composite.

Games 2 and 3 use stencil shadow volumes for shadowing, so the R350’s Z-cache optimizations come into play here. Nevertheless, the 9800 Pro isn’t too much faster than the 9700 Pro.

 

Edge antialiasing
Now for a real test of the 9800 Pro’s abilities. As resolutions and antialiasing sample counts go up, even the 9800 Pro should be strained.

Amazingly, the 9800 Pro never dips below 100 frames per second, even with 6X antialiasing at 1280×1024. In current and older games, the 9800 Pro should be able to run 6X AA all the time, without penalties. And ATI’s gamma-corrected AA is the best looking AA we’ve ever seen.

Texture antialiasing
Another way to enjoy the benefits of the 9800 Pro’s performance is to increase visual quality through texture filtering. We’ve already looked at how texture filtering improves images in our 9700 Pro review, complete with screenshots. The 9800 Pro’s output shouldn’t be visually different from the 9700 Pro’s, so we’ll skip to the benches.

With texture filtering, the 9800 Pro scales like what it essentially is: a faster Radeon 9700.

 
Conclusions
We will have to buy, beg, borrow, or steal a GeForce FX 5800 Ultra card before we can say with complete confidence that the Radeon 9800 Pro is the fastest, most feature-complete graphics card anywhere, but judging by our experience with the card, the FX will be hard pressed to keep up. This thing is amazingly fast, and preliminary reviews of the GeForce FX 5800 Ultra show it tying with, or slightly outrunning, the Radeon 9700 Pro. By all rights, the 9800 Pro should be the fastest card on the planet.

Technology-wise, the addition of the F-buffer is the single most important enhancement I could imagine ATI making to its R300 chip. It will probably be a very long time before we see the benefits of the F-buffer exercised in real games, but on the Quadro/FireGL side of the house, this improvement should be widely used. Importantly, ATI has found an adequate response to the GeForce FX’s ability to run longer shader programs. Perhaps the only significant remaining difference between the two is the maximum floating-point pixel shader precision, where the FX has an edge, 128 bits to 96 bits.

At $399, the Radeon 9800 Pro 128MB isn’t cheap. However, it may well arrive in stores before the GeForce FX 5800 line does, which would be a major coup for ATI. Retail availability is planned for later this month, and ATI’s ability to deliver a test card to us today may be a testament to its ability to build these cards in volume. NVIDIA is still unable to do so.

ATI is planning an “amateur” Radeon 9800 for $349 with slightly lower clock speeds (the exact speeds haven’t been finalized yet) and 128MB of memory. That version of the R350 should be available some time in the second quarter of this year. For the price of a small house, you’ll also be able to pick up a Radeon 9800 Pro card sporting 256MB of DDR-II memory. ATI claims this will be the first gamers’ card with 256MB of memory, and the new DDR-II type RAM should offer even more bandwidth than the current 9800 Pro.

If you’re a hard-core gamer who feeds on eye candy and can’t get enough antialiasing, high resolutions, and texture filtering, it’s time to put a kidney up for sale on eBay. The Radeon 9800 Pro is coming, and you won’t want to miss it.

Comments closed
    • Anonymous
    • 17 years ago

    Uhhm? What is a video card?

    • Anonymous
    • 17 years ago

    Ok im ready to buy a new video card. Ive been a Geforce fan since the begining. But my work buddy is a ATI fan, So we always fight at work. But anyways to get to the point. I was looking at reviews to porve him wrong to show him why Im buying a GeForce FX. I had all the reviews. The GeForce FX is faster then the Radeon 9700 by 6-10% in almost every test across the board. So I go print out about 6 or so reviews. Then my buddy told me im a dumb ass….go check out the new card thats coming out the Radeon 9800 and that the 9700 was a 6 month old card. So I did. Then I cried. I felt bad. I had so much hope, So much pride in my GeForce cards. Why, Why did they do this to me. The New 128mb 9800 beats the new FX card across the board by 9%-33%. Then I shivered while I went to the ATI site thinking I might have to buy an ATI card. Then I saw it, The new 256mb 9800 card! Oh No. Why do this to Nvidia? Just caused they owned Radeons the last couple of years? So it looks like im going to have to buy a Radeon 9800. I hope it lives up to its tests, or im changing it for a FX card. Mabie the Next generation of Nvidia cards will own. I hope so. But untill then…Im going with Radeon.

    Langor

    Nvidia Fan turned to the evil ATI.

    • Anonymous
    • 17 years ago

    wumpus probaly works in nVidia’s PR dept.

    • Anonymous
    • 17 years ago

    hey wumpus are u in anyway related to chanoth, lol!!!

    • Anonymous
    • 17 years ago

    wumpus i guess ur one of the many blind nvidiots who choose inferior technology because of ur loyalty to certain company

    • Anonymous
    • 17 years ago

    all you vidio card maker’s, memory maker’s and mother board and cpu maker’s take to damn long. feel as though we should all go in a time machine, set to 50 for years from now and I might see the kind of power im looking for, some one out there has all the tech, but is only feeding the public in tiny incrament’s till they have, all are money, Some one wake me when we are at, lets see 128, 256, 512, 1024, 2048 bit graffix yeah 2048 bit graffix sound nice, but there will always be something better next week.

    • Anonymous
    • 17 years ago

    absinthe, you’v got to keep waiting 🙂

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    #117 is a moron. You might want to read reviews and numbers regarding the FX Chip in comparison to the R350 and R300. Stupid.

    • Anonymous
    • 17 years ago

    ATI is gunna suck up the house. Geforce FX will blow it out of the water.

    • Anonymous
    • 17 years ago

    it’s etc. et cetera

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    What is important is which high-end card is best at the moment. And you have to hand that over to ATI and the 9700pro and the new 9800. No extra slot required. Less noise. Not even using passive cooling. Great performance. And now you can pick up a 9700pro for under $300. Can’t beat that. The 5800 Ultra has been a major let down. Due in part to Nvidia’s hype of the product being the “9700pro killer”

    • Anonymous
    • 17 years ago

    [quote]http://www.tomshardware.com/graphic/20030306/radeon9800pro-08.html ) So well there you have a difference in x8 vrs x16. I can dig up more examples but I have proved that there is a diff :)[/quote]

    I'll admit that there is a difference between 8x and 16x 9700 in those pictures. But that doesn't explain why my two examples aren't equally valid, either. I don't know why the GFFX is doing so poorly in that example, either-- it doesn't look like it's even using trilinear filtering! Sheesh. Even ye olde GF4 would at least trilinear filter that.

    I'm not really up in arms; I'll be the first to admit that ATI is clearly ahead of nVidia. I just want to make sure we're actually [i]examining[/i] their claims-- the Spinal Tap-esque "but this one goes to 16x!" statements in that HardOCP review make my blood boil.

    • IntelMole
    • 17 years ago

    Hang on… what happens to all those who were desperate for the best graphics performance and sold a kidney to get a GFFX/R9700 Pro?

    Have to get a job, I reckon 😀
    IntelMole

    • Forge
    • 17 years ago

    AG #109 – You also need to use commas instead of periods in a few places, there.

    • Forge
    • 17 years ago

    Wumpus was abused when very young by a Rage IIC, and only the Riva128 took him in. He generally has a really good grasp of the tech concepts involved, though, so I’d try to run down exactly what he’s on about before dismissing him out of hand.

    After all, he was right and I was wrong about Trilinear + Aniso on 8500, he spotted it before pretty much anybody outside ATI even figured out what he meant.

    • Anonymous
    • 17 years ago

    “We couldn’t tell any differences whatsoever between the different aniso performance settings on the GeForce FX in this game. We’re not sure what caused it. ”

    “On the Radeon, on the other hand, we could easily spot the differences between the quality settings. In 8x Performance mode, the mipmap transitions are easily discernable. In 16x, the barrier looks much crisper than in 8x. ”

    For me. It’s all gravy. Wumpus. Chill. You need a cup of joe. If you got the cash. The 9800 is the card to grab. Period. Not the 5800. Damage. Good run down.

    • Damage
    • 17 years ago

    jbirney:

    If you run a diff operation between the two source images for 8X and 16X aniso in our 9700 review there, you will find that there is not a single pixel’s difference between 8X and 16X in our example. That’s no indictment of the Radeon 9700’s adaptive aniso method, which is much improved from the Radeon 8500, but it may be an indictment of my choice in sample scenes. Only surfaces at specific angles from the viewpoint will get the all-out 16X aniso treatment on the 9700.

    In my experience, I’ve found the Radeon 9700’s aniso method largely comparable to the NVIDIA one. Both are adaptive, and both have their drawbacks, but they’re both pretty competent–none of the scanline-based weirdness from the 8500, and you can have real trilinear, too. Check the difference:

    8500:
    http://tech-report.com/reviews/2001q4/vidcards/123a/r8500-aniso.jpg

    9700:
    http://tech-report.com/reviews/2002q3/radeon-9700pro/q3a-aniso-mip.jpg

    Of course, adaptive methods mean there's little difference in terms of sampling rate or actual output between 8X and 16X aniso in most cases, as you might expect from looking at our benchmarks:
    http://www.tech-report.com/reviews/2003q1/radeon-9800pro/q3a-aniso.gif

    Note that the Radeon 9700 scales better than the GF4 Ti, but not massively so, and the R300 is a more efficient architecture (in terms of Z handling and the like). I don't think ATI is sampling at a significantly lower rate than NVIDIA.

    Bottom line: the Radeon 9700's adaptive anisotropic filtering is pretty good, but the ability to peak out at 16X aniso versus 8X for the NVIDIA chips means very little in practice. Not sure what Wumpus is up in arms about. ATI atoned for an awful lot with the Radeon 9700, and they deserve credit for it. Sounds to me like NVIDIA is cutting some corners a little too clumsily with the FX now. It is expected, I s'pose. 3D graphics is the art of cheating without getting caught, right?

    • Anonymous
    • 17 years ago

    *[http://www.tech-report.com/reviews/2002q3/radeon-9700pro/anisopics.x<]§

    • Anonymous
    • 17 years ago

    http://www.tomshardware.com/graphic/20030306/radeon9800pro-08.html

    So well there you have a difference in x8 vrs x16. I can dig up more examples but I have proved that there is a diff :)

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    One point. Wumpus what the hell are you really trying to say? That the image quality on the FX is superior then the one generated by the 9700pro and the 9800? Come out and say it if that’s your beef. For god’s sake man. But this is not what all the review sites are saying. So your babble is a mystery to me. Or do you like to see your name in print.

    • Anonymous
    • 17 years ago

    “Overall the 9700/9800 Pro has better image quality. Also look at the performance differences; we gained about 15 FPS more at 2X AA on the 9800 Pro compared to the 9700 Pro.”

    “The 9700/9800 ultimately has better Aniso then the GFFX due to the fact they can do it at 16X. ”

    “Still the R300/350 does have the most impressive antialiasing solution as of yet. Performance and quality is simply top notch.”

    Does this not say the 9800 and the 9700pro offer the best quality solutions available right now? Or is this a conspiracy brought forth by Wumpus…………………….

    • Anonymous
    • 17 years ago

    http://www.hardocp.com/image.html?image=MTA0NjgwOTI1NEdRU0hrZlkyeDRfMTJfMjZfbC5qcGc=

    9800 "8x"
    http://www.hardocp.com/image.html?image=MTA0NjgwOTI1NEdRU0hrZlkyeDRfMTJfMjRfbC5qcGc=

    Alt-tab back and forth. Look at the RENDERED PIXELS. See any difference? They are literally, pixel for pixel, identical!

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    As you know more developers are now writing their games with ATI hardware now in mind. Thanks to the performance of the 9700pro and the still no show of the 5800 Ultra.

    • Anonymous
    • 17 years ago

    Nicely put jbirney……….

    • Anonymous
    • 17 years ago

    Wumpus, Please. Buy your 5800 Ultra. True Fan-boy to the end. True Nvididiot. Bottom line: 9700PRO is better then your coveted 5800Ultra. Cheaper. Less noise. Smaller. Same performance.
    9800: Beats the 5800Ultra in all the areas that count. And it is still running on .15 die size and DD1 memory. Something you Nvidia fan-boys cannot handle. You sold your 9700. Good for you. Wait then for the 5800 vacuum cleaner then. I saw it at Sears in the appliance department.
    Individuals like you are the type of fools Nvidia loves to market their products towards. Even when they release a sub-par product, lemmings such as yourself will follow them right off the cliff. People should buy products that are worth the money spent. Regardless of the company or name behind it. And at tgis point. The 5800 Ultra is far from that. Will it is a GREAT deal for Nvididiots.
    “A FOOL IS SOON PARTED WITH HIS MONEY”

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    Please nuke that post…….damn

    • Anonymous
    • 17 years ago

    Hehehehehehehe you guys are arguing about video cards. ROFL! Please keep this discussion going because the whole fate of the world rests on this debate here! 🙂 lol

    Nvida ROOOOLZ! PHEAR TEH NVIDIA!!!!!!!

    ATI IS TEH ROXXORZZZZZ!!!!!!! IT”S TEH BIZOMB YO!!!!!!!!!!

    I was eating a bag of funjuns the other day and there was this one that was so huge that I could wear it on my head like a crown! And I did.

    What do you call a closet full of lesbians? A liquor cabinet.

    [Apu is shot.]
    Apu: Ah! the searing kiss of hot lead; how I missed you! I mean, I think I’m dying.

    ——————————————————————————–
    Bart: Milhouse, what happened?! You were supposed to be watching the factory!
    Milhouse: I was watchin’. First it started to fall over, then it fell over.

    ——————————————————————————–
    [Praying heavenward]
    Homer: I’m not normally a religious man, but if you’re up there, save me, Superman!

    ——————————————————————————–
    Lionel Hutz: Mrs. Simpson, your sexual harassment suit is exactly what I need to help rebuild my shattered practice. Care to join me in a belt of scotch?
    Marge: But it’s only 9:30 in the morning!
    Lionel Hutz: Yeah, but I haven’t slept in days.

    ——————————————————————————–
    Lisa: I still believe in protecting animal’s rights, but that still doesn’t excuse what I did. I’m sorry for wrecking your barbecue, dad.
    Homer: That’s okay, honey. I used to believe in things too.

    ——————————————————————————–
    Marge: There’s no shame in being a pariah.

    ——————————————————————————–
    [On working at the DMV.]
    Patty: Somedays we don’t let the line move at all.
    Selma: Yeah, we call those WEEKdays.

    ——————————————————————————–
    Homer: Old people don’t need companionship. They need to be isolated and studied so it can be determined what nutrients they have that might be extracted for our personal use.

    ——————————————————————————–
    Kent Brockman: I’ve said it before, and I’ll say it again: democracy just doesn’t work!

    ——————————————————————————–
    Bart: I’ve said it before, and I’ll say it again…aye carumba!

    ——————————————————————————–
    Lisa: Dad, what’s a Muppet?
    Homer: Well, it’s not quite a mop, it’s not quite a puppet, but man…
    [laughs hysterically]
    Homer: So to answer your question, I don’t know.

    ——————————————————————————–
    Lisa: As you know, we’ve been swimming. And we’ve developed a taste for it. We agree that getting our own pool is the way to go. Now before you respond, you should know that your refusal will result in months and months of…
    Bart, Lisa: CanwehaveapoolDad? CanwehaveapoolDad? CanwehaveapoolDad? CanwehaveapoolDad? CanwehaveapoolDad?
    Homer: I understand. Let us celebrate our agreement with the adding of chocolate to milk.

    ——————————————————————————–
    Bart: Christmas is the one time of year when people of all religions come together to worship Santa Claus.

    ——————————————————————————–
    Homer: To alcohol! The cause of, and solution to, all of life’s problems.

    ——————————————————————————–
    Billy Corgan: Billy Corgan, Smashing Pumpkins.
    Homer: Homer Simpson, smiling politely.

    ——————————————————————————–
    Lou: I went to the McDonalds over in Shelbyville the other day.
    Chief Wiggum: The Mc-what?
    Lou: Yeah, I never heard of it either but they say they have over 2000 locations in this state alone.
    Eddie: Hmm…Must’ve sprung up over night.
    Lou: But you know, its the little differences.
    Chief Wiggum: Example?
    Lou: Well at a McDonalds you can get a Krusty Burger with cheese. But they don’t call it a Krusty Burger with cheese.
    Chief Wiggum: Get out! What do they call it?
    Lou: A quarter pounder with cheese.
    Chief Wiggum: Quarter pounder with cheese…well I can see the cheese but? Hey, do they have Krusty’s Partially Gelatinated Gum-Based beverages?
    Lou: Yeah, they call them ‘shakes.’
    Eddie: *Pfft* ‘Shakes.’ You don’t know what you’re gettin’.

    ——————————————————————————–
    Bart: Leonard Nimoy? What are you doing here?
    Leonard Nimoy: Wherever there is mystery and the unexplained, cosmic forces shall draw me near.
    Bart: [flippantly] Uh-huh.
    Hot Dog Vendor: Hey Spock, what do you want on your hot dog?
    Leonard Nimoy: Surprise me.

    ——————————————————————————–
    Mulder: All right, Homer. We want you to re-create your every move the night you saw this alien.
    Homer: Well, the evening began at the gentleman’s club, where we were discussing Wittgenstein over a game of backgammon.
    Scully: Mr. Simpson, it’s a felony to lie to the F.B.I.
    Homer: We were sitting in Barney’s car eating packets of mustard. You happy?

    ——————————————————————————–
    Homer: The only monster here is the gambling monster that has enslaved your mother! I call him Gamblor, and it’s time to snatch your mother from his neon claws!

    ——————————————————————————–
    Homer: God bless those pagans.

    ——————————————————————————–
    [Bart & Lisa are reading a magazine at the Kwik-E-Mart.]
    Apu: Hey, hey, this is not a lending library. If you’re not going to buy that thing put it down or I’ll blow your heads off!

    ——————————————————————————–
    [George Washington appears in Lisa’s dream, urging her to reveal the truth about the town’s founder. Lisa wakes up yelling:]
    Lisa: I want to help you, George Washington!
    Bart: [walking by her room] “I want to help you… George Washington”? Man, even your dreams are square.

    ——————————————————————————–
    Bart: As God is my witness, I can pass the fourth grade.
    Homer: And if you don’t, at least you’ll be bigger than the other kids.

    ——————————————————————————–
    Homer: Weaseling out of things is important to learn. It’s what separates us from the animals… except the weasel.

    ——————————————————————————–
    Mr. Burns: You’re fired.
    Marge: You can’t fire me just because I’m married. I’m gonna sue the pants off of you.
    Mr. Burns: You don’t have to sue me to get my pants off.

    ——————————————————————————–
    Mr. Burns: Thank you, come again. Smithers, release the hounds.

    ——————————————————————————–
    Rev. Lovejoy: Wait a minute. This sounds like rock and/or roll.

    ——————————————————————————–
    Homer: Got any of that beer that has candy floating in it? You know, Skittlebrau?
    Apu: Such a beer does not exist, sir. I think you must have dreamed it.
    Homer: Oh. Well, then just give me a six-pack and a couple of bags of Skittles.

    ——————————————————————————–
    Dealer: 19.
    Homer: Hit me!
    Dealer: 20.
    Homer: Hit me!
    Dealer: 21.
    Homer: Hit me!
    Dealer: 22.
    Homer: D’oh!

    ——————————————————————————–
    Homer: D’oh!

    ————————————————————–

    • Anonymous
    • 17 years ago

    dw#90

    Anonymous Coward is already a registered poster here.

    Do not defile his name. You would not believe how many geribils can hide in a laundry hamper, just waiting for you to go nighty-night. ph33r.

    • Anonymous
    • 17 years ago

    Sure AG92, the GFFX looks better in that one.

    but it’s also ~3x slower than the R9800 Pro

    • Anonymous
    • 17 years ago

    [q]This of course leads to gems like
    http://www.hardocp.com/article.html?art=NDM5LDEy

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    Nor agreeing with himself, and offering elucidation on his own point as though from another’s view.

    • Anonymous
    • 17 years ago

    I’ve not often seen someone arguing with himself in these forums.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    http://www.tech-report.com/reviews/2003q1/radeon-9800pro/index.x?pg=9

    Notice the CPU dependency on the graph at 640x480. The GeForce is typically ahead by about 2 FPS. Notice the effect on the CPU on the graph at 1600x1200. The GeForce is behind by about 10 FPS, 25% slower than 9800, but where it matters. Any other type of benchmark is useless, and should be ignored. A strong statement, but true, when you think about it.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • JustAnEngineer
    • 17 years ago

    http://www.beyond3d.com/previews/nvidia/gffxu/

    Look at the filtering on the GFFX. It's almost completely bilinear. Image quality suffers, too.

    • Anonymous
    • 17 years ago

    Hopefully this will do sweet things to the price of the 9700pro . Now if they could sell the 9500 at gf4mx prices , I’d say Nividia would be eliminated permenantly. Screw the 9200 .

    • Zenith
    • 17 years ago

    UT2k3 is still quite CPU limited in the bot matches, in case you couldn’t tell…………………

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    http://www.hardocp.com/article.html?art=NDM5LDEy

    [quote]The 9700/9800 ultimately has better Aniso then the GFFX due to the fact they can do it at 16X.[/quote]

    Yeah. 16x > 8x. Except when it really isn't. At least the anti-aliasing scores are truly comparable, AFAIK.

    #76 -- learn how to form coherent paragraphs and maybe next time I'll read your rant.

    • Freon
    • 17 years ago

    77,

    Sharper != better quality.

    • Anonymous
    • 17 years ago

    my question is, why do ALL the screen shots from HardOCP, tech-report, anandtech, and even tomshardware show the radeon 9700 having better IQ with AF? these are all kinds of screen shots.

    • Anonymous
    • 17 years ago

    Wumpus. Are you for real? I have a 9700pro. And to this day i have had absolutely no problems with older or newer games. Period. You tell me what card out right now is better then the 9700pro for the price and performance? Now it seems your a Nvidia fan-boy. Fine. However, the quality settings for the 5800 Ultra are not even close to the 9700pro’s. And we won’t even mention the early reviews of the 9800. And you say you don’t trust ATI? Bloodyhell. Then how the hell can you trust Nvidia????? Where they not the same company that kept saying in the days of the first g-force card that they have a product that would render graphics like a TOY STORY movie. What a lie that was. Then they said the same thing about the g-force3 cards and the g-force4 line. Still waiting. And then they had the balls to say it again with the FX line of chips(which are not even out yet)…Cinematic rendering…..what a loud of rubbish. Then Nvidia’s release dates for the FX chip. LMAO. More lies. “It will be released in November.” Then they said…”It will be released before Christmas…” Then they said….”it will be on store shelves in massive quantities in Feb….” Then they said…”we are only releasing limited quantities of the 5800 Ultra(100,000) by late Feb…” And now they are saying….”10,000 units will be available in march…..”
    And you say that you don’t trust ATI??? Bloodyhell. Nvidia even lied about the pixel pipelines……At least ATI does not B.S on that large amount scale with so many back to back lies. And we won’t even get into the B.S move the P.R. department pulled about the 5800 Ultra being the 9700pro killer. That one really screwed you fan-boys. Made you wait for a vacuum cleaner that is not even out yet. And the performance is the same and sometimes worse then the older 9700pro. And everyone and there Mothers know Carmack has always been in the Nvidia’s camp for ages. And lets face it. Nvidia is farrrrrrr from perfect in the drivers department of late. If Carmack chks his code before the Nvidia’s drivers…then he is a moron. He is not a God unlike some might think. And it’s quite interesting too that the reviewers who were comparing the quality settings on the 9700pro and the 9800 said they were the best period. And they ran the card through tests. Have you? Just go buy the 5800 Ultra and let it sound like a Concord and wake up your neibours and heat your home. Wow what a deal for that card if you can get one. It will cost more then the 9800 and 9700pro. Run slower then the 9800 which is on .15 die size. Less clock speed. Slower ddr1 memory. And i smaller and does not take up a PCI slot. Same goes for the 9700pro in some cases. Yes Wumpus. Go buy your 5800 Ultra. But please, don’t tell stories that are full of crap. Until the NV35 can hit the shelves. Nvidia has squat in the high-end areana worth buying. And that is fact not fiction…….

    • Anonymous
    • 17 years ago

    [q]Pretty typical of ATI– they cheat like hell wherever they can. Anyway, I haven’t had time to look at this stuff in months. I just figured ATI was cheating based on their previous track record. Sure enough, they are. [/q]

    I thought that my list of most stupid things I’ve ever read was complete…. but wait a second!! Wumpus posting about ATI… gotta check my list again, and to add his post…

    Seriously, STOP!!! If you love nvidia that much… go an get a F#cking GFFX, please!!!!

    • Freon
    • 17 years ago

    70,

    And on top of it those screens change depending on the angle. If the tunnel were orientated square to up/down/left/right the Radeon would look sharper.

    ATI’s anisotropic is inconsistent at best. And the LOD is overly agressive and causes more moire and aliasing than Nvidia’s solution. And the sliders in their drivers are not even LOD bias. The mipmap quality slider actually downsamples the resolution of all textures, ala r_picmip in the Quake engine. And it doesn’t even do it in binary increments, but fractions. At least Nvidia’s drivers provide some level of LOD control.

    ATI’s decision to finally add trilinear to their anisotropic (performance vs quality mode) has helped, but I still prefer Nvidia’s filtering even if it doesn’t always look quite as sharp.

    Now I’m not running off and ditching my 9500 Pro. I’ll take the good with the bad, which is a game I find myself ahead on for the time being.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    http://www.ixbt.com/video2/images/r9700pro/gf4-anis8.jpg
    http://www.ixbt.com/video2/images/r9700pro/r9700-anis16.jpg

    Here's how to compare these images: at what point do the numbers become too blurry to read? That's anisotropic filtering, folks. Pretty typical of ATI-- they cheat like hell wherever they can. Anyway, I haven't had time to look at this stuff in months. I just figured ATI was cheating based on their previous track record. Sure enough, they are.

    • Anonymous
    • 17 years ago

    Thanks for taking all the time to do these reviews, they’re appreciated!

    I would love to see a more in-depth treatment of the actual signal quality that comes out of these various cards – which has importance for use in home theater pcs, graphic design, and so forth. While there are leaps and bounds in 3D performance, it’s very rare to hear a manufacturer trumpeting their improved DACs or a bigger color space.

    A review like this for the “other” ports on a board would be very helpful, too – manufacturers may plonk video outs on a board, and while there’s lots of subjective testing of these, there doesn’t seem to be very much objective testing. If nothing else, vscope screenshots of smpte color bars could be telling…

    In the same vein, it would be nice to see some sort of tests that measure (boring I know) 2d performance of all of these cards.
    (or is 2d peformance just so fast at this point amonst every card on the market that there’s almost no point in testing?)

    Has anyone heard from ATI about their plans for AIW with this new series of cards? (how about AIW with 2 monitors?!)

    thanks!

    • Anonymous
    • 17 years ago

    wumpus, I am tried of your bull$shit. Do us a favor, STOP IT

    • Anonymous
    • 17 years ago

    wumpus
    Please stop whining, ok?
    Just go buy your favourite GFFX. Never touch ATI.

    • Anonymous
    • 17 years ago

    hey droopy, i was browsing a previous article and noticed something you said about grado headphones. wanted to ask you something about them. email is dredfi2@lsu.edu or you can just toss out your email here.

    • Anonymous
    • 17 years ago

    http://www.tomshardware.com/graphic/20030306/radeon9800pro-14.html

    Codecreatures (MAYBE representative of future games?)
    http://www.tomshardware.com/graphic/20030306/radeon9800pro-26.html

    Pushing up AA and AF has its place -- albeit a minor one, that most casual gamers can't even see. What we really need are games that actually use complex graphics techniques, instead of least common denominator DX7 stuff.. of course, doing so might be financial suicide for the developer, but what the hey.

    • droopy1592
    • 17 years ago

    I ain’t nevah seen him say dat, boyyyy… *hic*

    I ain’t had nuh problums eithuh.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    yarbo
    [q]#48: because a 2600+ is a lot closer to what most of us actually have in our boxes.[/q]

    So what? Did TR review the 3.06HT cpu with a Radeon 8500 or GF4 Ti4200 because that’s close to what most people have? No, they rightfully used the fastest card available at the time.

    It’s a high end video card review, dont you want to see the performance limit of these cards? If you must, save the mid-range cpus to review with mid range video cards to reflect ‘typical user’ performance.

    Games and benchmarks are as much CPU benchmarks as they are GPU benchmarks. If you want to evaluate a vidcard, you need to try to remove as many other bottlenecks as possible. Actually, evaluating CPU scaling as #48 suggested is the best idea. Showing a select number of tests with a high and mid range CPU at various display resolutions would cover the bases (but be a hella load of work)

    • Pete
    • 17 years ago

    Nice review, Damage. Thanks.

    wumpus and Freon:

    The 8500 didn’t have RIP-mapping, even though that was the speculation for quite a while. Besides, that’s well behind us: now it’s nV who defaults to bilinear on two out of three AF choices, whereas ATi offers a much simpler, clearer choice. I don’t know what review pics you’re looking at, but the R300/350 looks much better than the NV30, especially since they can do 16x AF at the same speed as NV30 does 8x.

    Here’s an AF screenshot comparison that Freon linked (check out the weirdness with Aggressive’s MIP-maps):
    http://www.beyond3d.com/previews/nvidia/gffxu/index.php?p=20

    You want to see terrible AF quality? Start reading this portion of 3D Velocity's excellent review:
    http://www.3dvelocity.com/reviews/gffx5800u/gffx_5.htm

    As for 12x10 vs. 12x9: I think 12x10 may become more relevant as more people buy LCD's that are locked into that res (not to mention most drivers only offer 12x10 by default). The "incorrect geometry" argument is pretty much moot with 3D games--the ratios will remain correct, only at higher res.

    I wonder if a 9500 Pro will be a better card for Doom3 than a 9600 Pro, by virtue of it having double the vertex shaders.... Damage, would you be averse to a short article benching the Doom3 leaked alpha with a 9500 Pro, 9700 Pro, and 9800 Pro (to later compare with a 9600 Pro, 5600 Ultra, and 5800 Ultra)? I would only suggest doing it if you could get Carmack's approval, as a way to test stencil performance of the cards, not as a preview of D3 speed.

    Also, sorry to ask, Damage, but can you Nuke AG35's post? His link messes up the formatting of the whole page. We can use Rage3D's large link list to compensate (it includes TR, too ;) ):
    http://www.rage3d.com/#1046927026

    • Freon
    • 17 years ago

    53,

    http://www.beyond3d.com/reviews/ati/r350/index.php?p=18

    Yeah, their filtering is still iffy at best. But it looks like Nvidia is following their lead, though to a lesser degree.
    http://www.beyond3d.com/previews/nvidia/gffxu/index.php?p=20

    Look at the checkered tunnel shots. A "correct" filter would show a perfect circle. ATI's texture filter looks like an antenna propagation chart! (sorry, that's the best analogy I could come up with)

    I'm mostly happy with my 9500 Pro, but the texture filtering is still my biggest concern. If Nvidia had something even remotely close to the value of the 9500 Pro, I'd probably stick with them. But they don't.

    • Anonymous
    • 17 years ago

    For one. When you pay $400-$500 smacks for a high-end card these days it’s not just to run your games in low resolution with no effects enabled. Exactly the opposite. We spend that kind of coin for the very reason to pump up the resloution and effects settings. The 9700pro and the very soon to be released 9800 and later the 256ddr2 version runs games better then the 5800 ultra with quality settings set high. That’s what high-end G.P.U buyers want. If people are not interested in Fsaa and Ansioscopic filtering, then for them they should just stick to a 4600ti or 8500. And the fact that the 9800 is the fastest kid on the block with more features then the 5800 Ultra and does not use crazy cooling, no extra PCI slot, quiet on 0.15 die size is quite an amazing piece of engineering accomplishment. Something everyone said that could not be done. And ATI has done it. And room to spare too. Only makes you salivate for the late summer release or early fall debut of the R400. (if the engineering department can do that with 0.15 micron technology, imagine what they can do when they go high-end with 0.13) This is one area ATI has always had over Nvidia. The engineering department.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • yarbo
    • 17 years ago

    #48: because a 2600+ is a lot closer to what most of us actually have in our boxes.

    • Anonymous
    • 17 years ago

    The R350 is just a stop over until the R400 is released in late summer. And that will be on a different chipset altogether. The 9800 in its different variations…256ddr2 and 128dd1 will just add insult to the 5800 Ultra for now.

    • Anonymous
    • 17 years ago

    *[

    • Hockster
    • 17 years ago

    Excellent card! Clearly the best gaming card available.

    • Anonymous
    • 17 years ago

    Nice review.

    One comment is why wasnt a faster CPU used ie 3.06HT or 3000+ (or alternately, some CPU scaling experiments)

    We know that high end cards, whether from nvidia or ATI, need alot of cpu power to really fly. Differences in performance tend to be magnified as you shift the bottleneck more to the GPU as you see when you move up the resolution scale to 16×12.

    I suspect a 2600+ isnt coming close to maxing out the capabilities of a 9800Pro

    • Anonymous
    • 17 years ago

    *[Originally Posted by Trident

    • atidriverssuck
    • 17 years ago

    by the time of my next upgrade, PCI Express will probably be the norm. Well, maybe not…but it should be available. Bye bye AGP cards.
    http://www.theinquirer.net/?article=7885

    • Thresher
    • 17 years ago

    I’m really surprised they didn’t hold off a little bit and release this on a .13 process instead of the .15. That would have given them more headroom, less heat, and better performance.

    My bet is that this card is just a stopgap, kick-sand-in-nVidia’s-face-while-they’re-introducing-their-new-product type action.

    I would imagine that within a few months after this is released, we’ll see an uber-Radeon that is based on the smaller process.

    • R2P2
    • 17 years ago

    AG#40 — Most games only support a limited range of resolutions; 1280×1024 is usually included, and 1280×960 isn’t. The drivers for my Radeon 7500 don’t even include 1280×960 desktop support. Not much Damage can do about that sort of stuff.

    • Anonymous
    • 17 years ago

    All i know is this. For the same price or less then the 5800 Ultra(if you can get one) And less noise. Only one slot to use. Is faster where it counts. Fsaa and Ansioscopic filtering. And this was on the 9800 running only DDR1 memory. Not 256DDR2.

    • Anonymous
    • 17 years ago

    The article may now reflect the correct memory type, but the front page entry doesn’t…

    • Anonymous
    • 17 years ago

    #40, because 1280×1024 is the resolution that people use at that width on their desktop. They don’t use 1280×960.

    5:4 aspect ratio instead of 4:3.

    PAL Amigas used to use 5:4 aspect ratios all the time (320×256, 640×512 …). Kinda odd considering that TVs and monitors are 4:3 aspect ratio.

    Still, I want widescreen monitors with a 16:9 aspect ratio. Then you can complain about not running the tests at 1600×900 🙂
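
    For what it’s worth, the ratio arithmetic behind this back-and-forth is easy to check. A quick sketch (mine, not from any commenter; the mode list is just illustrative):

        from math import gcd

        def aspect(width, height):
            # Reduce width:height by the greatest common divisor.
            d = gcd(width, height)
            return f"{width // d}:{height // d}"

        # 1024x768, 1280x960, and 1600x1200 all reduce to 4:3;
        # 1280x1024 reduces to 5:4, and 1600x900 to 16:9.
        for w, h in [(1024, 768), (1280, 960), (1280, 1024), (1600, 1200), (1600, 900)]:
            print(f"{w}x{h} -> {aspect(w, h)}")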

    • Anonymous
    • 17 years ago

    Why run tests at a stretched 1280×1024?
    Please use 1280×960, which follows the same 4:3 ratio as 1024×768 and 1600×1200.

    • Damage
    • 17 years ago

    Ok, I corrected the review to note the current 128MB 9800 Pro cards use DDR memory, not DDR-II. Sorry for the flub.

    • Anonymous
    • 17 years ago

    Re: #36

    OK, the only thing I see strange here is that someone actually uses The Register as a source for hardware reviews. Their “review” *cough* is more of a joke than a comprehensive and in-depth analysis.

    “Some sites pick the FX as faster” huh? Where are the rest of them? Seriously, stop kidding around on the forums please.

    [quote]

    “Strange how some sites pick the FX as faster:

    In general terms, the game benchmarks are quite clear in their message: the 9800 is undoubtedly an improvement over the previous chip, but not by a great enough margin to currently catch up the FX in current DX8 tests. That could all of course change with the arrival of true DX9-class titles, only the likes of Doom III will be able to shed any true light on the matter.”

    [/quote]

    • Anonymous
    • 17 years ago

    The 9800 Pro uses 128MB DDR-I memory. A future (a month or two) edition of the card will be equipped with 256MB of DDR-II memory.

    Also, remember that the R350 is a i[

    • Anonymous
    • 17 years ago

    Strange how some sites pick the FX as faster:

    [q]

    In general terms, the game benchmarks are quite clear in their message: the 9800 is undoubtedly an improvement over the previous chip, but not by a great enough margin to currently catch up the FX in current DX8 tests. That could all of course change with the arrival of true DX9-class titles, only the likes of Doom III will be able to shed any true light on the matter.

    [/q]

    • Anonymous
    • 17 years ago

    This is Google News’ link to all the 9800 reports it deems fit to list:

    http://news.google.com/news?num=30&hl=en&ie=UTF-8&q=cluster:new%2estockwatch%2ecom%2fnewsit%2fnewsit%5fnewsit%2epasp%3fbid%3dB%2d211988%2dC%3aATY%26news%5fregion%3dC%26symbol%3dATY

    At the time of this comment, TR is nowhere to be found. Don't you think your report is at least as worthy as any listed there? How do you expect to see what these new bad boy servers can do if you aren't even getting noticed?

    • EasyRhino
    • 17 years ago

    Comanche seems pretty worthless as a video card benchmark. I recommend dropping it.

    (It might still be useful as a CPU/mobo benchmark, though)

    ER

    • Anonymous
    • 17 years ago

    The GF FX 5800 is aimed at the high-end non-Pro market, which means it is aimed at either the 9700 non-Pro or the 9800 non-Pro.
    The R9500 and R9600 should be compared with the GF FX 5600, where the 3DMark03 results already show ATI’s lead in that market segment.

    • Ardrid
    • 17 years ago

    Nevermind, Damage. Kyle cleared it up at the end of the review. R300 and R350 still both use 96-bit FP precision in pixel shaders.

    • Anonymous
    • 17 years ago

    I think I know who I’d believe, at first blush anyway. And it’s the place I’m posting comments.

    • Ardrid
    • 17 years ago

    Damage, about the 96-bit FP precision: the review at HardOCP claims that the 9800 brings 128-bit FP precision to the pixel shaders. Can you verify that at all, since I know you were saying it only has 96-bit FP precision in the pixel shaders?

    • Ardrid
    • 17 years ago

    Question for you Damage: Are the Samsung chips DDR-I or DDR-II? Because I’m seeing in a lot of reviews that the R9800 is equipped to support DDR-II, but is currently using DDR-I modules.


    • Kilroy1231
    • 17 years ago

    Two observations.
    First, no dual DVI on the latest and greatest card! I don’t have a monitor that uses a DVI connection, but if I had the money to get two LCD monitors with DVI connections, I sure as heck would want my graphics card to have two DVI connections!
    Secondly, Scott, did you get a chance to pull the heatsink off and see if the base makes good contact with the die?

    • Anonymous
    • 17 years ago

    Dude.

    -Rakh3

    • crazybus
    • 17 years ago

    What’s this about DDR-II memory? I thought the 256MB version was the one with that.

    • Damage
    • 17 years ago

    Well, #16, if all comparisons could boil down to “unlimited” versus “limited,” the world would probably be an easier place. However, it’s just not so. The issue of whether the NV30 or the F-buffer-equipped R350 is better at executing complex pixel shader programs won’t be settled until we can test the two cards head-to-head handling such programs with relatively mature drivers and compilers. So far, we haven’t even seen an appropriate test.

    The NV30 may well implement something like an F-buffer, or it may manage intermediate fragment data differently. NVIDIA talks a lot about registers, constants, and the like on NV30, and such things may help it handle complex shaders more efficiently in fewer instructions. The compiler output can be tuned (by NVIDIA) to take advantage of the chip’s abilities. And, of course, if 1024 instructions become a limitation, there’s always the multi-pass option. Two passes gets you 2048 instructions, and it won’t slow things down much.

    Of course, once you get into 2048-instruction range, the GFFX’s 128-bit FP pixel shader precision will start looking mighty good. I’d expect artifacts out of 24 bits per color channel in such cases. (And yes, the R350 is limited to 96 bits of precision in its pixel shaders, though it supports 128-bit FP color modes, as did the R300.)

    So I stand by my contention. The issue simply isn’t settled yet. Of course, none of this stuff will matter for games until we’ve moved beyond the confines of DirectX 9.0.

    If you have more to say, I’d suggest a different tone, one that’s more conducive to actual discussion and understanding than some oddball tirade.
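
    To put a rough number on the precision point in Damage’s comment above, here is a hypothetical little simulation (my own illustration; the operation chain and mantissa widths are assumptions, not anything measured on R350 or NV30) of how rounding drift grows over a couple thousand shader-like operations when each channel keeps roughly 16 mantissa bits, about what a 24-bit float offers, versus the 23 bits of a 32-bit float.

        import math

        def truncate_mantissa(x, bits):
            # Crudely emulate a float that keeps only `bits` bits of mantissa.
            if x == 0.0:
                return 0.0
            exp = math.floor(math.log2(abs(x)))
            scale = 2.0 ** (bits - exp)
            return round(x * scale) / scale

        def long_shader(ops, mantissa_bits):
            # Run a chain of multiply-adds, rounding after every operation,
            # the way a fixed-precision pixel pipeline would.
            value = 0.7
            for _ in range(ops):
                value = truncate_mantissa(value * 0.999 + 0.0005, mantissa_bits)
            return value

        reference = long_shader(2048, 52)   # ~double-precision baseline
        fp24_like = long_shader(2048, 16)   # ~24-bit float mantissa
        fp32_like = long_shader(2048, 23)   # ~32-bit float mantissa
        print("fp24-like drift:", abs(fp24_like - reference))
        print("fp32-like drift:", abs(fp32_like - reference))

    The absolute values mean nothing; the point is only that the narrower format drifts measurably farther from the reference as the instruction count climbs, which is why higher pixel shader precision starts to look attractive for very long shader programs.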

    • Anonymous
    • 17 years ago

    [quote]
    Yo I gues UNLIMITED and LIMITED mean the same thing to people like you eh??? Still got to find a way to make the GFFX sound better. One more time moron

    UNLIMITED = R350
    LIMITED =Nv30..

    Fing Nvidiot..

    Can you see the difference? yet in your brilliant opinion
    The GFFX is *somehow* still better.. What Fing ever..

    You also fail to mention that there is ANOTHER R350 getting introduced in April. A 400mhz core 460mhz DDR-II 256mb ram version. Another thing.. Read all the reviews. With 4x FSAA and 8x AF the 9800pro is an average of 20-30 FPS faster than the GFFX.

    Further you dont even have all your facts straight about what the R350 is capable of. Maybe you should read the [H] review and LEARN what a correct and complete review looks like..
    [/quote]

    Did you even understand what you were replying to? How is quoting the maximum number of operations per pixel shader a rebuttal to his statement about the flexibility and complexity of the NV30’s approach? Also, if you missed it, the next card based on the R350 core to be released in April will be the 256MB 9800 Pro. Weird people around at 2am (besides myself, of course :P).

    • Anonymous
    • 17 years ago

    I would like to see some benches on the 9600 to see how well it stacks up against the GF FX 5600, since that seems to be where NVIDIA hopes to make some money on its graphics cards. Looking at THG, even the R9500 is too far behind the GF FX 5800.

    • Forge
    • 17 years ago

    Err, and R350 lengthens the lead, not the F-Buffer.

    • Forge
    • 17 years ago

    Looks like another win.

    F-Buffer nullifies the shader program complexity advantage GFFX had, and further lengthens ATI’s speed lead.

    Very nice.

    I’m surprised there’s not a higher memory clock on that DDR2. There’s headroom in them thar hills.

    • Anonymous
    • 17 years ago

    #16, big fucking deal, 1000 instructions or 250 for that matter is more than will practically ever get used with this generation of hardware.

    Get a life.

    • Anonymous
    • 17 years ago

    A trivial 15% improvement. ATI was already the only choice at this end of the market, so there was no need for anything revolutionary. Still, I expected better than this. The R300 will have been out eight months by the time you can buy one of these. NVIDIA’s position isn’t looking quite so awful now.

    Verdict: Not interesting.

    • Anonymous
    • 17 years ago

    Fing Nvidiots…
    [quote]
    My sense is that the NV30 chip in the GeForce FX offers a little more complexity and flexibility than the F-buffer approach, but the real-world differences in performance and rendering output are likely to be minor.
    [/quote]
    Yo I gues UNLIMITED and LIMITED mean the same thing to people like you eh??? Still got to find a way to make the GFFX sound better. One more time moron

    UNLIMITED = R350
    LIMITED =Nv30..

    Fing Nvidiot..

    Can you see the difference? yet in your brilliant opinion
    The GFFX is *somehow* still better.. What Fing ever..

    You also fail to mention that there is ANOTHER R350 getting introduced in April. A 400mhz core 460mhz DDR-II 256mb ram version. Another thing.. Read all the reviews. With 4x FSAA and 8x AF the 9800pro is an average of 20-30 FPS faster than the GFFX.

    Further you dont even have all your facts straight about what the R350 is capable of. Maybe you should read the [H] review and LEARN what a correct and complete review looks like..

    • Freon
    • 17 years ago

    Tom’s is showing the FX getting spanked like a red-headed step child in FSAA and anisotropic tests. Ouch.

    • Coldfirex
    • 17 years ago

    woops, my bad Damage. Keep up the excellent work!

    • Anonymous
    • 17 years ago

    [quote]Oddly, the R300 matches the R350 in the 3DMark2001 SE test until fill rate considerations intrude at higher resolutions. The 3DMark03 test, however, shows the R300 to be faster. [/quote]

    I think you meant the R350 is shown to be faster. I think there is another error to the same effect shortly after that one.

    Otherwise, though, yeehaw ATI! If only I had a puter that could keep up with it, and if I had the money. Oh well.

    • gordon
    • 17 years ago

    Scott Wasson is Damage!?!?!

    • Damage
    • 17 years ago

    Coldfirex: I’m talking pixel shader precision. Both cards support lots of int and FP color modes.

    • Coldfirex
    • 17 years ago

    Well, HardOCP says it picked up 128-bit FP:

    “The R300 supports 96bit FP (Floating Point) precision, while the GeForceFX supports up to 128bit FP precision. The 9800 Pro now supports 128bit, 64bit, and 32bit FP pixel precision.”

    • Anonymous
    • 17 years ago

    Hey, how come Tom at THG got one with a black HSF?


    • gordon
    • 17 years ago

    [quote]
    Perhaps the only significant difference between the two left is the maximum floating-point pixel shader precision, where the FX has an edge, 128 bits to 96 bits.
    [/quote]

    I thought one of the other big things about the R350 was its use of 128-bit floating-point precision.

    • Anonymous
    • 17 years ago

    Freshmeat, I almost agree with you, but not quite. The GeForce was the fastest thing out, and the GeForce 2 cemented nVidia’s lead, that’s true. But the GeForce 2 was such a TREMENDOUS step up from the GeForce (67% faster clock speed, double the texturing units) that it just oozed speed everywhere. The 9800 Pro doesn’t even come close to that kind of speed differential with the 9700 Pro, leaving NV35 a chance to step up. Though when we can expect it, who knows.

    • freshmeat
    • 17 years ago

    On a more serious note, it seems as though the r300 is the new geforce. Think about it — when the geforce came out, it was one of the best performers available. Then came the gf2, and nvidia was king for years. Now, the r300 debuted as a top performer, and the r350 merely opens the gap wider. For a variety of reasons, I think it’s a little early to count nvidia out, but it’s clear that the r300 was the start of a run, not the end of one. I’m not sure what the gffx is — as I said in another thread, it may be a one-off oddity, it may be the start of an entirely new approach. Nevertheless, ati appears to be moving from being the current leader to a position of dominance.

    • Tairc
    • 17 years ago

    What I really want is a 9600 Pro. Can’t afford the uber-expense, but a nice sub-$200 card is the sweet spot for power. I hope.

    Also, I wonder what kind of OCs we’ll get on the 0.13-micron cores?

    • Anonymous
    • 17 years ago

    All this from a group that supposedly smokes “hallucinogens,” according to the president of NVIDIA.

    Looking at some of the benchmarks… wow… 20 more fps than the 9700 Pro. Talk about neck-breaking speed.

    What’s up with the HSF, though? Looks kinda weird… but it still beats the Dustbuster.

    • freshmeat
    • 17 years ago

    I’ll take it! No need to wrap it up – just send it on over.

    • Anonymous
    • 17 years ago

    ATI have the throne once again!
