ATI’s Radeon 9700 Pro graphics card

THOSE OF YOU who have read my preview of next-gen graphics chips will know that I’m optimistic about the prospects for DirectX 9-class graphics hardware. The Radeon 9700 Pro graphics card we’re reviewing today is the first DX9-class product to hit the market, so we’re naturally excited about it.

However, I was surprised to see that the Radeon 9700 didn’t ship with a single piece of code capable of taking advantage of its most important new features, including its new floating-point datatypes that allow for very high color precision. Not only that, but I searched around online, and I couldn’t find anything there, either. Microsoft’s DirectX 9 isn’t available yet, and all those fancy demos ATI showed off at the Radeon 9700 launch were apparently written in Direct3D. ATI has a set of OpenGL extensions in the works, but those aren’t ready for prime time yet.

I asked ATI about the possibility that we’d see driver features from them like those Matrox provides with the Parhelia. The Parhelia drivers will force games and the Windows desktop into a 10-bit-per-color mode. It’s a hack, but it works. Unfortunately, ATI doesn’t have any plans to provide such a thing, nor are they aware of any plans at Microsoft to make use of 10-bits-per-channel color modes on the Windows desktop anytime soon.


The Radeon 9700 Pro card

So we’ll have to review the Radeon 9700 for what it is, effectively, for today’s buyers: an especially nice DirectX 8-class graphics card with some intriguing future potential. ATI has made some compromises in the Radeon 9700’s design in order to gear it toward the DX9 future. For instance, the chip can lay down only one texture per pixel pipeline in a clock cycle. The Radeon 9700 has eight pixel pipelines, so it’s still very fast, but current cards like the GeForce4 Ti (four pipes with two texture units each) and Matrox’s Parhelia (four pipes with four texture units each) have comparable pixel-pushing power. The number of texture units per pipe may seem important now, but in the future, when pixel shader programs generate textures procedurally—just run a “wood” shader or a “marble” shader—traditional texture units will probably matter less.

Nevertheless, the Radeon 9700 should be the ultimate DX8 card in many ways. The extra precision present throughout the chip’s pixel pipeline should help image quality in a few places, and its memory bandwidth is second to none. The R9700 can run most current games fluidly at high resolutions with edge and texture antialiasing features cranked up.

Before we go on, I’m going to have to stop and admonish you to go read my article on next-gen graphics chips so you can see how innovative the Radeon 9700 really is. It’s a quick read, and it probably won’t make your head hurt too much. I covered a lot of ground in that article, and I won’t drag you back over the same territory here. Suffice to say that the Radeon 9700 should change the graphics landscape dramatically.

Also, you’ll want to go read this page to get an exposition of the Radeon 9700’s most important new features. This chip is loaded with all of the latest goodies, including AGP 8X, eight pixel pipes, four vertex shaders, a 256-bit crossbar memory interface, and killer occlusion detection. It also has all of the non-3D bits that one would expect on a modern graphics card integrated onto a single chip: dual RAMDACs for analog monitors, a TMDS transmitter for digital flat panels, and a TV decoder/encoder unit for video output and capture. We will discuss many of the card’s new features in more detail as we frame our test results below.


All the standard output ports, plus an included DVI-to-VGA adapter

Of course, all these fancy features come at a price. This wonder of Moore’s Law weighs in at roughly 110 million transistors. Manufactured on a 0.15-micron fab process, the Radeon 9700 chip has a land mass roughly equal to that of France, provided France hasn’t surrendered any land lately. To give you some perspective, have a look at the picture below, which shows a Radeon 9700 chip next to an Athlon XP processor. The Athlon XP is made up of about 37 million transistors, and it’s manufactured on a 0.13-micron process.


The Athlon XP (Thoroughbred) is dwarfed by the R300

In order to provide this massive chip with the needed juice, ATI put an auxiliary power connector on the card. The AGP slot alone just isn’t up to snuff, so—a la 3dfx’s Voodoo 5—you’ll have to plug the card into your computer’s power supply unit. That’s really no big deal, and ATI provides a pass-through cable to make it as easy as possible.

Our testing methods
As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged.

Our test systems were configured like so:

Processor Intel Pentium 4 2.8GHz
Front-side bus 533MHz (133MHz quad-pumped)
Motherboard Asus P4T533C
Chipset Intel 850E
North bridge 82850E MCH
South bridge 82801BA ICH2
Chipset drivers Intel Application Accelerator 6.22
Memory size 512MB (4 RIMMs)
Memory type Samsung PC1066 Rambus DRAM
Sound Creative SoundBlaster Live!
Storage Maxtor DiamondMax Plus D740X 7200RPM ATA/100 hard drive
OS Microsoft Windows XP Professional
OS updates None

The test systems’ Windows desktops were set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the following graphics cards and driver revisions:

  • ATI Radeon 8500 128MB — 7.74
  • ATI Radeon 9700 Pro 128MB — 7.75 (first shipping drivers)
  • Matrox Parhelia 128MB — 1.00.04.231
  • PNY Verto GeForce4 Ti 4600 128MB — Detonator 40.41

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.


The Radeon 9700 Pro’s heatsink comes with a TIM pad

Fill rate
To show exactly where a new product sits among the current crop of 3D graphics cards, we like to pull out the ol’ chip table and compare the specs. As always, these numbers can lie. The key ones, like memory bandwidth and fill rate, are theoretical peaks, not real-world numbers. Real-world performance will vary depending on implementations. Still, the chips’ specs are instructive, so here’s our trusty table:

                   Core    Pixel  Peak fill    Texture    Peak fill    Memory  Memory bus  Peak memory
                   clock   pipe-  rate         units per  rate         clock   width       bandwidth
                   (MHz)   lines  (Mpixels/s)  pipeline   (Mtexels/s)  (MHz)   (bits)      (GB/s)
Radeon 8500        275     4      1100         2          2200         550     128         8.8
GeForce4 Ti 4600   300     4      1200         2          2400         650     128         10.4
Parhelia-512       220     4      880          4          3520         550     256         17.6
Radeon 9700 Pro    325     8      2600         1          2600         620     256         19.4

The Radeon 9700 Pro leads in nearly every category. It’s endowed with gobs of memory bandwidth and a blistering pixel fill rate that’s more than double that of the closest competition.
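For the curious, the peak fill rate figures in the table are just simple arithmetic on the specs. Here’s a quick sketch in Python (purely illustrative) showing how they’re derived:

```python
# Theoretical peak fill rates are simple products of clock speed and
# pipeline counts; real-world throughput will be lower.
def peak_fill_rates(core_mhz, pixel_pipes, tex_units_per_pipe):
    """Return (Mpixels/s, Mtexels/s) theoretical peaks."""
    mpixels = core_mhz * pixel_pipes
    mtexels = mpixels * tex_units_per_pipe
    return mpixels, mtexels

# Specs from the table above: (core MHz, pipes, texture units per pipe)
cards = {
    "Radeon 8500":      (275, 4, 2),
    "GeForce4 Ti 4600": (300, 4, 2),
    "Parhelia-512":     (220, 4, 4),
    "Radeon 9700 Pro":  (325, 8, 1),
}

for name, spec in cards.items():
    mpix, mtex = peak_fill_rates(*spec)
    print(f"{name}: {mpix} Mpixels/s, {mtex} Mtexels/s")
```

Run it, and the numbers match the table.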

One of the most important items in the table above is texel fill rate, which describes a card’s ability to produce pixels with texture maps applied to them. In current games, texel fill rate is key to good performance at high resolutions. The 9700 Pro’s texel fill rate is good, but it’s not head and shoulders above the other cards. As I said above, ATI’s use of only one texture unit per pixel pipeline is a bit of a compromise. The 9700 chip can apply a single texture per clock cycle, while others can apply two or four textures per clock.

However, with eight pixel pipelines, the 9700 chip has an advantage. Matrox’s Parhelia, for instance, has the highest theoretical peak texel fill rate, with four texture units in each of its four pixel pipes. Parhelia isn’t able to use all four texture units per pipe in most current games and apps, because those apps don’t apply four textures per rendering pass. As a result, Parhelia performs much slower than its stats-sheet specs would suggest. The 9700, on the other hand, can almost always put all of its texture units to use at once, even if an app only applies one texture per pass.

Should the 9700 need to apply more than one texture per rendering pass, it can send pixels back through its pipelines up to 16 times before sending the results out to a frame buffer. This process will chew up clock cycles, but it’s not nearly as much trouble as writing the results to memory, reading them back in, and then applying another texture. (For the record, the Radeon 8500 can “loop back” pixels in this manner up to three times per rendering pass, for a total of six textures per pass. The GeForce4 Ti can do it twice, delivering four textures per pass.)
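Taking those loopback counts at face value (each trip through the pipes applies one texture per texture unit), the pass arithmetic works out like this hypothetical Python sketch; the function name and counting convention are mine, not ATI’s:

```python
from math import ceil

def rendering_passes(textures, units_per_pipe, max_loop_cycles):
    """Estimate full rendering passes (each with a costly write-out and
    read-back of intermediate results) needed to apply a given number of
    texture layers, assuming pixels can cycle through the pipes
    `max_loop_cycles` times per pass."""
    textures_per_pass = units_per_pipe * max_loop_cycles
    return ceil(textures / textures_per_pass)

# Limits described above: 9700 = 1 unit x 16 cycles, 8500 = 2 x 3,
# GeForce4 Ti = 2 x 2. Applying eight texture layers:
for name, units, cycles in [("Radeon 9700", 1, 16),
                            ("Radeon 8500", 2, 3),
                            ("GeForce4 Ti", 2, 2)]:
    print(name, rendering_passes(8, units, cycles))
```

The 9700 handles eight layers in a single pass, while the DX8 cards need two.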

Of course, none of this pixel filling ability is worth much if the VPU—visual processing unit, don’t ask—can’t write those pixels out to memory as fast as it can process them. In order to make most effective use of its memory, the Radeon 9700 includes a sophisticated crossbar memory interface very similar to the one in NVIDIA’s GeForce4 Ti chips. However, at 256 bits, the 9700 has double the number of paths to memory and thus double the raw memory bandwidth. The 9700’s memory interface incorporates four memory controllers on the VPU side and four 64-bit channels into main memory. Between the memory channels and the array of controllers is a switched fabric. Any one of the memory controllers can talk to any one (or two or four) memory channels via the switched fabric. As you might imagine, this approach is much more efficient than simply transferring data sequentially in 256-bit chunks.


A block diagram of the Radeon 9700’s memory controller. Source: ATI.

Let’s see how all of this technology translates into performance. 3DMark’s synthetic fill rate test measures pixel and texel fill rates. This test doesn’t exploit all of the advanced bandwidth conservation and pixel loopback techniques we’ve discussed, but it should give us a good idea about a card’s basic pixel-pushing prowess. We’ll test performance in real games at very high resolutions a bit later.

The theory works out rather nicely in practice with the 9700. The chip is by far the fastest in terms of pixel (or single-textured) fill rate, and it delivers the highest texel fill rate in all but one display resolution, where the Parhelia grabs a small edge. Notice, also, how much closer the R9700 Pro comes to its theoretical fill rate than the Parhelia.

Occlusion detection
Real application performance is determined by more than just raw fill rate, however. The 9700 packs the third generation of ATI’s HyperZ suite of bandwidth conservation techniques. All of these techniques have to do with the Z buffer, which stores depth information about each pixel in a scene. The HyperZ bag of tricks includes lossless Z buffer compression, fast clearing of the Z buffer, and a pair of occlusion detection methods, dubbed Hierarchical Z and Early Z.

If you read the words “occlusion detection” just now and your eyes glazed over, don’t fret. Occlusion in 3D graphics is a simple thing: when one object is in front of another, the object in back is occluded from view. Occluded objects are a problem in graphics, because 3D chips tend to draw everything in a scene, only figuring out which pixel belongs in front of another during the rendering process. Drawing pixels the viewer will never see is often referred to as overdraw, and overdraw is wildly inefficient. If a chip can figure out which pixels are occluded and avoid drawing them, lots of effort—and memory bandwidth and pixel processing power—will be saved.

ATI’s Hierarchical Z logic examines blocks of pixels to determine whether they will be occluded when a final scene is rendered. If none of the pixels in a block will be visible, the block doesn’t get drawn. Hierarchical Z doesn’t catch everything, but it can cut down on unneeded pixel processing. Early Z is a new addition in this third-gen HyperZ suite, and it does occlusion detection on a per-pixel level. ATI claims Early Z “virtually eliminates” overdraw. This new, more rigorous approach makes particular sense on a DX9 card, because performing complex pixel shading ops on occluded pixels would be especially painful.
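To make the idea concrete, here’s a toy software rasterizer in Python that applies a per-pixel depth test before any shading work, which is the essence of an early-Z scheme. The structure is illustrative only, not a description of ATI’s actual hardware:

```python
# Toy per-pixel early-Z: depth is compared *before* the expensive
# shading work, so occluded fragments never pay the shading cost.
def render(fragments, width, height):
    INF = float("inf")
    zbuffer = [[INF] * width for _ in range(height)]
    shaded = 0  # count of shading operations actually performed
    for x, y, z, color in fragments:
        if z >= zbuffer[y][x]:
            continue          # occluded: rejected before shading
        zbuffer[y][x] = z
        shaded += 1           # only visible fragments get shaded
    return shaded

# Two fragments at the same pixel, drawn front-to-back; the farther
# (occluded) one is rejected without being shaded.
frags = [(0, 0, 2.0, "blue"), (0, 0, 5.0, "red")]
print(render(frags, 4, 4))  # 1 shading op instead of 2
```

Note that the savings depend on draw order: drawn back-to-front, both fragments would pass the depth test and get shaded, which is exactly the overdraw these schemes try to avoid.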

To examine the effectiveness of the 9700’s occlusion detection, we’ll use VillageMark, which benchmarks an extreme worst-case scenario for overdraw. For what it’s worth, the Radeon 8500 and GeForce4 Ti 4600 both have limited forms of occlusion detection, but Parhelia doesn’t have any.

The Radeon 9700 Pro shows its mettle here, clearly outrunning the other cards.

Incidentally, some folks have speculated that VillageMark effectively tests texel fill rates more than anything because of its robust use of multitexturing. However, our results show something different. The Parhelia, which has the highest texel fill rate and no provisions to reduce overdraw, is slowest of the group. The R9700 Pro, with only one texture unit per pixel pipe, is fastest by a fair margin.

Pixel shader performance
Pixel shader performance is closely related to fill rate, occlusion detection, and memory bandwidth, but it’s not the same thing. Reasonably complex pixel shader programs can make the VPU the primary limiting factor in performance, and a good pixel shader implementation can be several times faster at a given task than a competing one.

The Radeon 9700’s pixel shaders, which meet the requirements for DirectX 9’s version 2.0 pixel shaders, are much more capable than the DX8-class pixel shaders in all of the competing cards we’re testing. The 9700’s pixel shaders can execute 64 color operations and 32 texture ops in a single rendering pass, which is four to eight times what the DX8-class cards can achieve. Also, the 9700 has 96 bits of precision in its pixel shaders, while older chips’ pixel shaders have no more than 48 bits of precision. More importantly for performance in current apps, the Radeon 9700 has eight pixel shaders, while the rest of the pack has only four.

This advantage in pixel shading capacity results in markedly better performance on synthetic tests, such as 3DMark’s DirectX 8.x pixel shader benchmarks.

The Radeon 9700 Pro nearly doubles the scores of the other cards, which is about what we’d expect in this case.

We’ll also use NVIDIA’s ChameleonMark to measure pixel shader performance. Although we’ve included Radeon 8500 scores, please note that ChameleonMark doesn’t produce the proper output on this card in the Glass and Shiny tests because the 8500 doesn’t support the cubemap function the program uses (ATI has its own implementation that works fine, however). The Radeon 9700 and Parhelia both run these tests flawlessly.

ATI’s new card utterly dominates NVIDIA’s pixel shader benchmark, especially in the tests where the cubemap function is employed. Still, true DirectX 9 applications with longer, more complex shader programs created in high-level shading languages should take even better advantage of the Radeon 9700’s pixel shading power. The R9700 Pro’s true increase in computational capacity isn’t fully reflected in these tests, but they do give us a taste of things to come.

Polygon throughput and vertex shader performance


A block diagram
of a Radeon 9700
vertex shader unit.
Source: ATI.

The Radeon 9700 has four DX9-class vertex shader units running at the chip’s 325MHz clock rate. That’s twice the number of DX8 shader units in the GF4 Ti and Radeon 8500 chips, although vertex shader implementations vary in their performance. Matrox’s Parhelia has four DX9-class vertex shaders running at the chip’s 220MHz clock speed. DX9’s vertex shader 2.0 spec incorporates support for flow control (branching, loops, and subroutines) in vertex programs, but no current apps can use this capability. Each of the Radeon 9700 chip’s four vertex shader pipelines has vector and scalar processing units, so they can process the two types of operations in parallel.

We’ll test vertex shader performance first with Matrox’s SharkMark. SharkMark was written by Matrox to show off the power of Parhelia’s quad vertex shader units, so it does a nice job taking advantage of the Radeon 9700’s four vertex shaders, as well.

In SharkMark, the Radeon 9700 delivers exactly twice the vertex processing ability of a GeForce4 Ti 4600, and it outruns the Parhelia, too. 3DMark2001 also includes a vertex shader test; let’s see how the 9700 performs there.

Again, the 9700 shows roughly twice the vertex processing power of competing chips. The Parhelia drops back into the pack in 3DMark for some reason, possibly because 3DMark’s vertex shader programs aren’t as complex as SharkMark’s—at least, to my eye they certainly aren’t.

Next, we’ll look at traditional DirectX 7/OpenGL style transform and lighting. I believe all of these cards implement T&L engines as vertex shader programs rather than using dedicated circuitry for T&L. Traditional transform and lighting performance is still the key to good performance in most current apps.

Here again, the 9700 leads the pack, although not by quite as much. Legacy T&L performance will matter less and less over time.

AGP write performance
For the sake of completeness, I’ll include another round of tests of AGP texture download performance. What we’re talking about here is the ability to move rendered images from a graphics card’s local memory over the AGP bus into main memory. Games don’t generally have a need to transfer data to main memory, but applications like video processing tools and high-quality rendering programs do. Please see my article on this subject if you want to know more.

Once again, none of these cards move data back over the AGP bus at anything close to an acceptable rate for real-time graphics applications. This problem can probably be fixed in software. In fact, these especially low transfer rates aren’t as much of a problem in Windows 98 or in OpenGL. But in Win2K/XP with Direct3D, AGP texture download rates are slow as molasses.

Games
Our synthetic benches have shown that the Radeon 9700 Pro is faster than previous-generation cards in nearly every key category: fill rate, occlusion detection, pixel shading, vertex processing, and traditional T&L. Now we’ll test some current games and game engines to see how these capabilities affect everyday use.

Quake III Arena
We tested Q3A using the Q3Bench utility’s “Max” quality settings. Sound was enabled with 22KHz mixing.

The R9700 Pro just outclasses the other cards. The higher the resolution, the bigger the gap between the 9700 and everything else.

Jedi Knight 2

Jedi Knight II tells an interesting story: the Radeon 9700 Pro simply isn’t fill-rate limited in this game. Raising the display resolution has almost no impact on performance. In fact, all of the cards seem limited to about 155 FPS in JK2, probably by the game engine itself.

Comanche 4

Comanche 4 is a true DirectX 8-class game with vertex and pixel shader support. The Radeon 9700 outruns the GF4 Ti 4600 here, but not by much. Again, at lower resolutions, performance is not limited by fill rate, with the possible exception of the preternaturally pokey Parhelia.

Unreal Tournament 2003
The UT 2003 demo is a late addition to this review. Epic released it right as we were finishing up, and we decided to include test results, because UT 2003 uses more advanced DX8 graphics features, more polygon detail, and larger textures than most current games. Also, we used just-released, brand-spanking-new video drivers for the UT tests—version 7.76 from ATI and version 1.01 from Matrox.

The UT demo’s default benchmarking function tests two things. The first test is a pair of fly-throughs of two of the game’s levels, and the second test is a pair of deathmatches between bots.

Things really slow down from the flyby to the botmatch, but in either case, the 9700 is generally fastest. The GF4 Ti 4600 is still very competitive, though.

Codecreatures Benchmark Pro

Here’s another DX8 application. The Radeon 9700 Pro separates itself from the pack here, no doubt thanks to its superior vertex and pixel shading power. Oddly enough, the Radeon 8500 and Parhelia are right on top of each other in all three resolutions.

Serious Sam SE
Serious Sam SE allows us a number of benchmark options, including the ability to switch between Direct3D and OpenGL, and the ability to plot second-by-second frame rate numbers.

The trouble is, the game engine auto-tunes its graphics settings to run optimally on various graphics cards, which makes apples-to-apples comparisons difficult. In the past, we’ve used Serious Sam SE as an application benchmark, allowing the game to tune itself, but we’d like to limit that effect this time around. Some reviewers have used the game’s “Max quality” add-on to defeat the engine’s auto-tuning feature, but that’s not a perfect solution. For instance, the game will use the strongest form of anisotropic filtering available to it, which varies from 2X to 16X, depending on the graphics card.

To keep things even, we’ve elected to use the game’s “Default quality” add-on. We won’t be using every possible feature and form of texture filtering, but we should have a solid basis for comparison.

The story is much the same in D3D and OpenGL. The GeForce4 is fastest, by just a little bit, in lower resolutions, probably thanks to NVIDIA’s highly optimized graphics drivers. At higher resolutions, the 9700 pulls ahead.

Next, we’ll plot frame rates over time to see something a little more revealing than a simple frame-rate average. These tests are conducted using OpenGL.

All of the cards demonstrate similar performance profiles; they all peak and bottom out at about the same places, and none of them shows any really inordinate extremes in either direction.

3DMark2001 SE

The R9700 Pro rules the roost in 3DMark. It’s nearly as fast at 1600×1200 as the Parhelia is at 640×480.

I’m not sure why the Ti 4600 is slower at 1280×1024 than at 1600×1200, but I ran this test a number of times to make sure the results were for real. This anomaly only showed up in this test, but it was consistent.

3DMark’s game tests confirm what its synthetic tests showed us: the Radeon 9700 is faster in nearly every way than the competition.

Workstation-class applications

The Radeon 9700 Pro is fastest in four out of six tests, but it’s not exceptionally faster than previous-gen cards. We’ll have to test out the FireGL X1, ATI’s workstation-class card based on the 9700 chip, to see how it performs with drivers properly optimized for workstation apps.

Speaking of drivers, when we initially tested the Parhelia in viewperf, it was extraordinarily slow; it barely scored above zero on the Unigraphics test. Matrox has just released new drivers that are certified for some of the applications in the viewperf suite, so we retested with the new 1.01 drivers and included those results above. The improvements in performance were notable. Unigraphics is still slow, but it’s better than before.

Antialiasing


Antialiased text

Edge antialiasing
We’ll cover antialiasing in its two most important forms, edge AA and texture AA. Most newer graphics chips handle these two things separately. I happen to think texture AA has the bigger impact on overall visual quality, but edge AA gets most of the attention, so we’ll start with it. I’m going to break it down on a card-by-card basis, because each card’s edge antialiasing implementation is unique.

Fundamentally, though, all of these chips’ edge AA methods are similar. They all seek to smooth out rough edges by blending colors. To do so, the chips take multiple samples of the area covered by a pixel on the screen. These samples are offset slightly at a sub-pixel level according to some kind of pattern. The sampled colors are then blended. If the sub-pixel-level samples happen to straddle an edge, the resulting pixel output color is somewhere between the colors on either side of the edge.

Presto, jaggies melt away.

At least, that’s the theory. Most of us use edge antialiasing for text in our everyday computer use. Doing AA in 3D graphics is similar, but it does get a little tricky.
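Before getting card-specific, the basic sample-and-blend idea can be sketched in a few lines of Python. The sample offsets below form an arbitrary illustrative pattern, not any particular card’s actual one:

```python
# Toy edge AA: take several sub-pixel samples per pixel, look up the
# scene color at each sample position, and blend the results. Samples
# that straddle an edge yield an in-between color.
def aa_pixel(px, py, scene_color, offsets):
    samples = [scene_color(px + dx, py + dy) for dx, dy in offsets]
    n = len(samples)
    # Average the sampled colors channel by channel.
    return tuple(sum(c[i] for c in samples) // n for i in range(3))

# A hard vertical edge at x = 0.5: black on the left, white on the right.
def edge_scene(x, y):
    return (255, 255, 255) if x >= 0.5 else (0, 0, 0)

# Four sub-pixel offsets inside the pixel (illustrative pattern only).
offsets = [(0.25, 0.125), (0.75, 0.375), (0.125, 0.625), (0.625, 0.875)]
print(aa_pixel(0.0, 0.0, edge_scene, offsets))  # a mid-gray blend
```

Two of the four samples land on each side of the edge, so the pixel comes out mid-gray rather than pure black or white, and the jaggy softens.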

  • GeForce4 — The GeForce4 Ti chips use an antialiasing algorithm known as multisampling. This algorithm singles out pixels on polygon edges by checking the Z-buffer for each pixel. If a pixel is covered by more than one polygon, the chip will do a texture read for each of the samples it’s grabbing. If not, the chip will avoid the texture read, conserving effort and memory bandwidth. The multiple samples are then blended. In this way, multisampling algorithms effectively do edge-only antialiasing. The GeForce4 Ti’s multisampled AA is so efficient, there’s barely any performance penalty for turning on 2-sample (or 2X) antialiasing.

    The GF4 Ti collects its samples according to one of two patterns. In 4X mode, the chip uses a traditional ordered grid sampling pattern (the leftmost of the two patterns at right)—like graph paper. In 2X mode, the chip samples according to a rotated grid pattern (turn that graph paper 35 degrees or so, like the rightmost pattern on the right). The human eye can lock on to patterns easily, and an ordered grid sample is an easy one to spot. Also, on-screen pixels are themselves organized in an ordered grid. The point of using an offset or tilted grid pattern is to disrupt the correspondence between the display’s grid of pixels and the underlying AA sampling. Doing so produces more effective antialiasing, especially on near-vertical and near-horizontal edges, where ordered-grid sampling patterns sometimes aren’t very effective. The GF4 Ti also has a mysterious mode dubbed “4XS” that is apparently a combination of 2X multisampling and 4X supersampling (we’ll get to supersampling in a second).

    GeForce4 Ti 4600 — No AA, 2X AA, 4X AA, “4XS” mode, diff between No AA and 4X AA

    Uncompressed versions of our edge AA sample images are available here.

    The results speak for themselves. Even 2X antialiasing produces much cleaner edges than a stock image. Look at the rightmost tile of the image above to see something a little bit unique. That’s the result of a mathematical “diff” operation between the non-AA image and the 4X AA image. You can see that the only pixels really modified by the GeForce4 Ti’s multisampling AA routine are edge pixels. Multisampling is not just efficient, but effective.

    Call the GeForce4’s two AA techniques ordered grid multisampling and rotated grid multisampling.

  • Radeon 8500 — The Radeon 8500 is a different case. It uses more sophisticated sample placement, but it doesn’t use multisampling. Instead, the 8500 uses supersampling, which is simply rendering a scene at two or four times its original size (or more) and then scaling it back down, blending two or four pixels into one. Supersampling provides full-scene AA, not just edge AA, but it’s much less efficient than multisampling—the algorithm has to do a texture read for each sample it processes.

    The Radeon 8500’s SMOOTHVISION AA uses a nifty trick for sample placement. Instead of sampling according to a grid pattern, the 8500 uses a programmable template to determine sample placement. Have a look at the two squares at the right for a visual example. The leftmost image is a sample template with six possible sample positions, and the rightmost one shows resulting sample patterns for four adjacent pixels in 4X AA mode. The 8500 uses the template to determine sample positions in a quasi-random fashion, which disrupts the eye’s ability to detect patterns. This technique isn’t completely random, but it’s much closer to true stochastic sampling than other cards’ methods.

    Radeon 8500 — No AA, 2X AA, 4X AA, diff between No AA and 4X AA

    In practice, the Radeon 8500’s 2X AA is surprisingly ineffective. The 4X mode, however, is quite good. In motion, the 8500’s 4X SMOOTHVISION AA is visibly more effective than the GeForce4’s ordered grid 4X mode. Have a look at the “diff” image on the right to see how pixels in the scene are affected by the 8500’s supersampled AA. The 8500 is antialiasing edges as well as textures. Unfortunately, it’s neither as effective nor as efficient at texture AA as texture-specific routines like anisotropic filtering.

    The 8500 also has a “performance” mode that samples on an ordered grid, so the Radeon 8500 does both ordered grid supersampling and programmable jittered supersampling.

  • Parhelia — The Parhelia’s unique Fragment AA method does edge-only antialiasing with lots of samples—sixteen, to be exact. Fragment AA does a Z-buffer check to determine which pixels are on polygon edges, then segregates edge pixels from the rest of the scene. Non-edge pixels are rendered normally, while edge pixels are sent to a fragment buffer and rendered using 16X ordered grid supersampling (see the pattern image at right). Although Fragment AA doesn’t use a rotated or quasi-random sampling pattern, it’s very effective, because sample size is more important than sample pattern. 16X AA is hard to beat. This method is theoretically more efficient than multisampling. Multisampling avoids multiple texture reads for non-edge pixels, but those pixels are still processed by the algorithm. Fragment AA operates only on edge pixels, so Parhelia can sample more often for those pixels with little performance penalty. However, Fragment AA sometimes causes visual artifacts and other compatibility problems, so Matrox included a 4X ordered-grid supersampling fallback mode. Because supersampling is less efficient, the 4X mode is slower than 16X FAA.

    Oddly, the Parhelia drivers won’t respond to Direct3D API calls for 2X or 4X AA, so users have to manually choose 16X FAA or 4X ordered-grid AA.

    Parhelia — No AA, 4X AA, 16X AA, diff between No AA and 4X AA, diff between No AA and 16X AA

    FAA at 16X does a very nice job cleaning up the near-vertical lines in our sample image. You can see from the “diff” images how the 4X supersampled mode modifies quite a few pixels, while 16X Fragment AA touches only edge pixels. Although this method is distinct from multisampling, its results are generally similar.

  • Radeon 9700 Pro — The Radeon 9700 Pro improves on the 8500 by dropping supersampling in favor of multisampling. This change should help performance immensely. The 9700 supports 2X, 4X, and 6X modes. Beyond that, ATI has included a number of innovations in the 9700’s antialiasing hardware; most should help image quality and performance.

    Notably, the Radeon 9700 retains the programmability of the 8500 in terms of sampling patterns, but as I understand it (at least with current drivers, and probably over the long term) the 9700 doesn’t vary sample points from pixel to pixel in a quasi-random fashion like the 8500 does. Still, the chip’s programmed sampling pattern is a little bit wilder than a simple rotated grid. The example at right is based on a sampling pattern shown in an ATI whitepaper, but I’m not sure of the 9700’s exact pattern in current drivers. Obviously, the pattern shown is for 6X AA modes. Also, unlike the Radeon 8500, the 9700 can make use of Z-buffer compression in conjunction with antialiasing, which should help performance.

    On the image quality front, the Radeon 9700 employs a simple but effective new twist: gamma-correct blending. When the chip blends sampled color values together, it applies a gamma curve. Gamma correction at various points in the pixel pipeline is encouraged in the DirectX 9 spec. Because graphics output ultimately lives in a gamma colorspace, this technique simply makes sense.

    The pictures below illustrate the impact of gamma correction. The first, from an ATI whitepaper, shows how applying a gamma curve ensures smooth gradients on antialiased edges. The second, generated in a paint program using a screenshot from Serious Sam SE, shows how incrementing color values with a simple brightness filter washes out colors, while a gamma-correct filter preserves contrast. This second image isn’t strictly about antialiasing, but it demonstrates clearly the impact of accounting for gamma.


    Gamma-correct blending maintains smooth gradients. Source: ATI.

    Brightness washes out colors, while gamma preserves contrast.

    The Radeon 9700 includes provisions to address one of the weaknesses of multisampling: textures with transparency. Multisampling handles antialiasing at the edges of polygons well, but it doesn’t address “edges” inside of textures with transparency. Unfortunately, these provisions appear to require application-level support, and current apps don’t provide it.

    Radeon 9700 — No AA, 2X AA, 4X AA, 6X AA, diff between No AA and 6X AA

    Nevertheless, the Radeon 9700’s antialiasing is excellent. Thanks to its irregular sampling pattern and gamma correction, the Radeon 9700’s 6X AA mode looks at least as good as Matrox’s 16X FAA to my eye—and much better than a GeForce4 Ti. Of course, that perception may be colored by the fact that the Radeon 9700’s frame rates in 6X AA are much higher than the Parhelia’s in 16X FAA, and frame rate is an important component of effective antialiasing. Without fluid motion, the AA effect doesn’t work as well.
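The gamma-correct blending described above is easy to sketch. Here’s a minimal model in Python, assuming a gamma of 2.2 purely for illustration; the function names are mine, and ATI hasn’t published the 9700’s exact curve:

```python
def blend_naive(samples):
    """Average gamma-encoded samples directly (what earlier hardware does)."""
    return sum(samples) / len(samples)

def blend_gamma_correct(samples, gamma=2.2):
    """Convert samples to linear light, average, then re-encode for the display."""
    linear = [s ** gamma for s in samples]
    avg = sum(linear) / len(linear)
    return avg ** (1.0 / gamma)

# A black/white polygon edge crossing a half-covered pixel
edge = [1.0, 0.0]
print(blend_naive(edge))                    # 0.5 -- looks too dark on a gamma display
print(round(blend_gamma_correct(edge), 2))  # 0.73 -- perceptually closer to halfway
```

The naive average lands at 0.5, which a gamma-2.2 monitor displays darker than the perceptual midpoint; blending in linear light keeps the gradient along an antialiased edge smooth.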

Now that we’ve covered the theory, we’ll look at how the various cards perform in their antialiasing modes. For the sake of simplicity, we’ve omitted a few less useful AA modes: the 3X, 5X, and 6X modes on the Radeon 8500, which are simply too slow and cumbersome to matter, and the 4xS and Quincunx modes on the GeForce4 Ti. 4xS mode is only available in Direct3D, and Quincunx is simply 2X AA plus a blurring filter; it sacrifices too much texture quality and overall sharpness to be worthwhile.

The Radeon 9700 Pro’s antialiasing performance is very impressive. The 9700 has the best combination of edge antialiasing quality, compatibility, and performance of any consumer graphics chip.
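Much of that performance edge flows straight from the switch to multisampling noted earlier: the pixel shader runs once per pixel no matter the sample count, so only bandwidth scales with the AA level. A simplified cost model (my own, purely illustrative):

```python
def shader_invocations(width, height, samples, multisample):
    """Pixel-shader executions per frame under a toy model (ignores overdraw)."""
    # Supersampling shades every sample; multisampling shades once per pixel
    # and replicates the color to whichever samples the triangle covers.
    per_pixel = 1 if multisample else samples
    return width * height * per_pixel

print(shader_invocations(1024, 768, 4, multisample=False))  # 3145728
print(shader_invocations(1024, 768, 4, multisample=True))   # 786432
```

At 4X, supersampling does four times the shading work for the same screen; multisampling pays only the extra sample storage and resolve bandwidth, which Z-buffer and color compression can further reduce.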

Texture antialiasing
ATI has cleaned up some of the texture filtering quirks from the Radeon 8500, as well. The 9700 can do anisotropic filtering up to 16X, and the 9700’s anisotropic filtering algorithm is improved.

The best way to show you the effect of anisotropic filtering is with visuals. (Plus, I spent way too much time on edge AA, so this is what you get.) The images below show how higher degrees of anisotropy provide increased texture clarity; anisotropic filtering provides this clarity without aliasing while the image is in motion. The key to anisotropic filtering is taking into account the slope of the surface from the perspective of the viewer. Conventional isotropic mip map filters don’t do this, so they have to cut out texture detail in order to reduce aliasing.
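That trade-off can be seen in a toy LOD calculation. For a pixel whose texel footprint is du by dv, an isotropic filter must pick a mip level blurry enough to cover the footprint’s long axis, while an anisotropic filter takes several probes along that axis from a sharper level. This is the textbook model, not necessarily the 9700’s exact algorithm:

```python
import math

def lod_selection(du, dv, max_aniso=16):
    """Mip LOD chosen by isotropic vs. anisotropic filtering for a du x dv footprint."""
    p_max, p_min = max(du, dv), min(du, dv)
    iso_lod = math.log2(p_max)             # isotropic: blur down to the long axis
    probes = min(math.ceil(p_max / p_min), max_aniso)
    aniso_lod = math.log2(p_max / probes)  # anisotropic: several sharper probes
    return iso_lod, aniso_lod, probes

# A floor seen at a glancing angle: footprint 16 texels long, 1 texel wide
print(lod_selection(16.0, 1.0))  # (4.0, 0.0, 16): four mip levels sharper at 16X aniso
```

For a surface viewed head-on the footprint is square, the probe count drops to one, and the two filters pick the same level; the benefit appears only on sloped surfaces, which is why it’s called anisotropic.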

GeForce4 Ti 4600 — Isotropic filtering, 2X aniso, 4X, 8X, diff between isotropic and 8X aniso

Radeon 9700 — Isotropic filtering, 2X aniso, 4X, 8X, 16X aniso, diff between isotropic and 16X aniso

If you’d like to see uncompressed versions of these images, go right here. Be warned, however, that the files are rather large.

The Radeon 8500’s anisotropic filtering method has a couple of annoying quirks. Its adaptive filtering algorithm acts selectively on certain surfaces, and sometimes, textures aren’t filtered as they should be. ATI says the Radeon 9700’s anisotropic filtering algorithm better accounts for such surfaces, so filtering is applied more thoroughly.

Also, the 8500 couldn’t apply anisotropic filtering and trilinear filtering simultaneously. Users were forced to choose between blurry textures with smooth mip map transitions (bilinear + trilinear filtering) or sharp textures with stark mip map transitions (anisotropic). Personally, I couldn’t stand playing Serious Sam with anisotropic filtering on a Radeon 8500 for this very reason. That “line” out on the ground in front of me everywhere I went was just too much to take. Ugh.
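That “line” is the hallmark of snapping to a single mip level per pixel: the level switch shows up as a hard seam on the ground. Trilinear filtering instead blends the two adjacent mip levels by the fractional LOD. A one-value-per-mip toy model (the names and values are mine, for illustration only):

```python
import math

def nearest_mip(lod, mips):
    """Snap to one mip level -- produces the visible seam at transitions."""
    return mips[min(int(lod + 0.5), len(mips) - 1)]

def trilinear(lod, mips):
    """Blend the two adjacent mip levels by the fractional part of the LOD."""
    lo = min(int(math.floor(lod)), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    return mips[lo] * (1.0 - f) + mips[hi] * f

mips = [1.0, 0.5, 0.25]  # average brightness of each toy mip level
print(trilinear(0.5, mips))  # 0.75: halfway between levels 0 and 1, no seam
```

As the LOD sweeps from 0 to 1, the trilinear result ramps smoothly while the nearest-mip result jumps at the midpoint, which is exactly the transition the colorized mip map shots below make visible.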

Colorized mip maps show anisotropic + bilinear filtering

Colorized mip maps show anisotropic + trilinear filtering

Happily, the 9700 addresses this quirk, although ATI requires users to check a “quality” radio button in its drivers in order for aniso and trilinear to work together. I’d prefer they do it like everyone else and allow the application settings for trilinear filtering to determine what happens. Nevertheless, the 9700’s image output is positively gorgeous. Our screenshots with colored mipmaps show long, smooth trilinear gradients with anisotropic filtering active.

Mip map transitions are handled beautifully

As you might expect, the Radeon 9700’s texture filtering performance is excellent, as well.

All told, the Radeon 9700 Pro provides more and better texture antialiasing, at higher frame rates, than anything else on the market.

Conclusions
I believe I’ve said enough about the Radeon 9700 Pro by now. You have seen the results and screenshots for yourself, and you know that it’s got more pixel-pushing power, faster pixel shaders, better vertex shaders, more poly throughput, and better real-world performance than any other graphics card you can buy. The image quality is second to none, especially with antialiasing enabled. I’ve racked my brains and raked this thing over the coals trying to find a significant weakness, and I’ll tell you what: I haven’t found one.

Oh, sure, there are always odd driver bugs, incompatibilities, and other teething problems, and I won’t pretend to be able to keep tabs on all of those things. (The card was problem-free in my testing.) If you’re thinking about buying a Radeon 9700 Pro card, you’ll definitely need to factor those considerations into your decision.

But as a 3D graphics chip, the Radeon 9700 is darn near perfect. ATI has taken the time with this chip to increase precision and expand registers and tweak functional units to the point where everything works as advertised. The Radeon 8500, good as it was, was nowhere near this good.

We may run into some problems or limitations when DirectX 9 arrives in force. I’m still a little wary about how this first generation of chips with support for 64- and 128-bit color depths will perform at those depths. The data throughput and computational requirements for 128-bit pixels are enormous, and that presents every opportunity to fudge by splitting 128-bit chunks into two and the like. Tricks like that could sap performance and cause problems with next-gen games. Of course, those games probably won’t be arriving for months and months.
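Some back-of-the-envelope arithmetic shows the scale of the problem. Just writing the color buffer once per frame, ignoring Z traffic, blending reads, and texture fetches, costs (my own illustrative figures, not a measured workload):

```python
def color_write_gbs(width, height, bits_per_pixel, fps):
    """GB/s required just to write the color buffer once per frame."""
    bytes_per_frame = width * height * bits_per_pixel // 8
    return bytes_per_frame * fps / 1e9

print(round(color_write_gbs(1600, 1200, 32, 60), 2))   # 0.46 GB/s at 32-bit color
print(round(color_write_gbs(1600, 1200, 128, 60), 2))  # 1.84 GB/s at 128-bit color
```

Quadrupling the pixel size quadruples that traffic before overdraw even enters the picture, which is why splitting 128-bit work into halves is such a tempting shortcut.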


110 million transistors of joy

At the end of the day, the Radeon 9700 Pro is the culmination of an era and the beginning of something new. The 9700 is the last in a line, the ultimate graphics chip as a collection of fixed functional units that accelerate specific parts of the rendering process in hardware. But it also ushers in the era of generally programmable graphics chips, VPUs as general SIMD machines. As programmability and high-level shading languages gain a foothold, graphics chips will probably become both simpler and more powerful—general SIMD processors with less custom circuitry for accelerating specific effects.

The 9700 bridges these two eras, running today’s games better than anything else, and providing the potential for artists to create graphics unlike anything you have ever seen done in real time.

Comments closed
    • Anonymous
    • 17 years ago

    All of these tests and not ONE dealing with TV out!

    Unbelievable!

    • Anonymous
    • 17 years ago

    Got a rotation tab in your driver?

    Hi – I’ve got at 9700 connected to a Samsun Syncmaster 171P, nice – but I’m missing the “Rotation” tab in the control panel’s “Advanced” tab, as it is shown on several screenshots found on the ‘net, so that I can utilize the LCD’s pivot ability – SO – if you GOT the rotation tab, please let me know your driver version, the latest catalyst 3.0 seems not to have it.

    Regards
    Bjarne Andersen, bjarne@myrealbox.com

    • Anonymous
    • 17 years ago

    2nd this guy ALL the WAY UP TO THE TOP. Nice card, lousy driver support, heck ATI support in general is poor, try calling them, I swear they have 1 line and one lady answering the phone. They are ONLY available Mon-Fri 9-5, so those of us with Jobs can look forward to great ATI support. It took more than 3 emails and 10 days to get a response from ATI, which was canned and did not EVEN touch on the issue I wrote about.

    As a side note I play 3d games hardcore and the 9700 support is at its worst there. My CAD program looks great and renders a screen much faster than the ti4600 I had. Maybe ATI should outsource their driver support to say maybe NVIDIA 🙂

    Heck even the 3rd party hacked voodoo drivers work better than the catalyst mess. Just try and follow the 19 pieces you have to install in a specific order to get the rest of the functions to work…..and Sigh I should have waited but oh well live and learn.

    If anyone has the dvd playback running correctly I could use some tips 🙂

    • Anonymous
    • 17 years ago

    Wow what an understatement regarding the ATI 9700 Pro, I’ve had driver issues like mad….DVD playback is problematical, the supplied DX8.1 WDM drivers won’t install under WINME because of signing issues.

    MS CFS3, lord of the rings, NWN all have various nasty problems with the ATI card. While the hardware is top notch, ATI HAS NOT gotten any better at writing drivers or just plain responding to support questions. Heck it was 2 weeks after I got the card that ATI finally put the drivers up on their site, and added the card as a vaildated field so you could even get support. I am regretting buying this POS, I should have stayed with the GF2 until the NV30 chipsets hit the market. The ATI config utility still shows I have a directx8.1 problems with WDM drivers, when my DX version is NOW 8.2 with Direct play update, while oddly enough DXDIAG finds no problems.

    Maybe ATI will suddenly develop some reliable support and technical resources, but I doubt it. On a side note, the card DOES FLY for things it works on, but ATI has never resolved all of the ALL-IN-WONDER support issues, and the Newest member of the family is no exception. If anyone has gotten the thing to work correctly under WinME, (Iknow Iknow, but I can’t scrap it it is not my PC) please let me in on the secret…

    archfeld@hotmail.com or ICQ 6740411

    • Pete
    • 17 years ago

    121, mind sharing your sources?

    • Anonymous
    • 17 years ago

    The NV30 will not be fully DX9.1 compliant (ie. PS 3.0 and VS 3.0). The R400 WILL be the first DX9.1 compliant asic and ATI will (twice in a row) beat Nvidia to a new API generation.

    • Pete
    • 17 years ago
    • Pete
    • 17 years ago

    Great review, Scott! As others have said, the new charts are a nice improvement. The side-to-side AA/AF shots are also well done.

    jbirney, thanks for those screen shots. Way to make me despise my Xpert 128 all the more. 😉

    As for the rest of you trolls, please find your way back to whatever fanboy haven you crawled out of. The 9700 is the fastest card available, bar none. Its speed increase over a 4600, particularly with AA, AF, or both, is greater than the price increase over a 4600. The image quality, as quite a few sites and forum users have attested to, is noticably superior (particularly with gamma correction and higher-order AA and AF). It is not overkill as it provides a real and, for ATi, astonishing performance jump over the previous generation. At the same price a 4600 went for a few short weeks ago, the 9700 is the first top end card I consider worth the price premium. You’re getting 2-3x the performance over the previous high end, the 4600, and improved image quality. The 9700 has issues, no doubt–don’t expect brand new PC tech (particularly software) to be problem-free at launch–but every single review I’ve read has been noteworthy in that they all mentioned how [i]few[/i] problems (if any) they’ve had with the 9700. And ATi certainly seems to be treating the AGP 8x problem correctly, encouraging users to call in for a replacement. I’m discouraged by the lack of AA in 16-bit games, though. We’ll see if NV30 supports this.

    For now, the 9700 reigns unchallenged, and it brought with it lower prices for all other cards. You nVidia fans should be grateful at least for that–the 4600 dropped in price by $80+ overnight. However, all indications point to the NV30 raising the bar again. A Creative employee was quoted as saying the NV30 will be “significantly faster” than the 9700 Pro. OTOH, he also said it would come with 800+MHz 128-bit DDR2 RAM, which many take to be nonsensical (DDR2 is not quad-pumped, and 128-bit RAM would starve the NV30’s 8×2 pipeline). I expect it to be faster simply because it’ll be clocked higher (.13u process), but I’m not expecting a speed boost on the magnitude of 4600->9700Pro. I also expect ATi to answer with a DDR2 (but still .15u) part shortly afterward, if the NV30 shows up in the next two months. If it shows up in 2003, unless DDR2 is much cheaper, I’d guess ATI would just focus on the NV30-beating R350 (a .13u, upclocked, and more fully-featured R300).

    The R300 is DX9, BTW. NV30 is expected to be DX9.1. As these are first-gen DX9 parts, I expect the difference to be purely academic–they’re probably both too slow to fully exploit their DX9 featuresets. The next half-gen should be very interesting, though, as both companies refine their chipsets. Maybe they’ll be smart enough to make their $400 cards dual-DVI by then. 😛

    • Anonymous
    • 17 years ago

    what’s wrong with the yamaha banner? everyone needs to keep thier site running. I don’t think that means they automatically get paid off.

    • Anonymous
    • 17 years ago

    droopy1592 , Nice BIG banner on the HardOCP site eh? When will people put two and two together? It’s not that tough,

    • Anonymous
    • 17 years ago

    oh guys, dont forget 8x agp is artificial anyway, the memory is straight from the damned card not the cpu nor ram, nor motherboard, so 8x isnt much at all, maybe a 2-3% differenc, guys give your head a shake..

    • sativa
    • 17 years ago

    does anyone know if there is a site that benches the radeon 9700 on AGP 8x versus AGP 4x? i wonder what (if any) performance differences there are.

    • droopy1592
    • 17 years ago

    From hardocp

    ATi Statement on AGP8X:
    ATI is aware of a small number of reports of incompatibility between the RADEON 9700 PRO graphics accelerator and certain AGP 8X motherboards. It is important to emphasize that the number of incidents is very low. Extensive testing has been done, in conjunction with motherboard chipset manufacturers, motherboard manufacturers and with retail available motherboards, to determine if compatibility issues exists. From these tests, it has been concluded that no compatibility issues exist with the current or soon to be released AGP 8X motherboards.

    Some end users have reported issues to our Customer Service department that we are investigating. From the data that has been gathered, it appears that issues that have been reported are caused by individual board issues and are not indicative of a general compatibility issue. RADEON 9700 PRO product owners are encouraged to contact our Customer Service Department should they have an operational problem with their product.

    In addition, it is important to ensure that appropriate system preparation is done prior to installing the RADEON 9700 PRO. We recommend installing the latest motherboard BIOS in the system as well as the latest available AGP drivers. Many of the 8X issues have been resolved with an updated BIOS from the motherboard manufacturer.

    from kyle:
    From what we have seen here and from what other webmasters have shared with us it does seem to be the mainboard and PSU builders “fault” that certain compatibility issues exist. We are still trying to get a grasp on this ourselves.

    As for thinking true AGP8X was going to come to market without any glitches to handle is certainly a pipe dream. If we all hang in there and make sure we report the problems we see back to ATi and the company that builds your particular mainboard, we will certainly get things worked out sooner. Remember, teamwork, as a community is key for smoothing out the AGP8X road. So let’s all be part of the solution, and not part of the problem. Report your problems to the appropriate support department with full system, OS, and driver specs.

    from me:
    I tried to tell you ATi haters what the problem was, but in all your all-knowingness (hehe) you refused to understand and instead blamed ATi’s QA and Engineering for the boo boo.

    • Anonymous
    • 17 years ago

    Possibly the best video card review I’ve seen, well done Scott.

    -Malakin

    • Anonymous
    • 17 years ago

    Very nice article.

    • WaltC
    • 17 years ago

    Cowboy_X,

    i[

    • Anonymous
    • 17 years ago

    [quote]Donald over at 3DGPU was able to post a bunch of SS on SS:SE in the pit level in that one room were the floor rotates 45 degrees on a GF4 and a R9700. We found that at about 22 defrees the R9700 dropped down to about x8 AF. THe GF4 stayed at x8 thought out the all of the rotation. So it looks like the R9700 AF is much much improved and at its worse its the same as the GF4. [/quote]
    Interesting. Sounds like a bunch of good info over there; I’d still like to hear the ATI developer comments on why we can’t have >32bpp precision in current titles, too.

    Hope it comes back soon.

    • Anonymous
    • 17 years ago

    *[

    • Ryu Connor
    • 17 years ago

    [quote]Of course it’s a personal choice, but that’s how I see it. I guess in a perfect world we’d have 1920×1440 with 16xAA at a constant 60fps[/quote]

    And fifth order filters all around. 🙂

    BTW, nice post.

    • WaltC
    • 17 years ago

    Scott,

    How dumb can a guy be? I apparently didn’t bother to read your name as it is so clearly spelled out in the article byline, and gave the credit to Ryu (which Ryu was kind enough to point out.) So, allow me to rephrase:

    Excellent review, b[

    • Anonymous
    • 17 years ago

    Uh, not to MAC bash. Macs suck, but I still want an Apple cinema display.

    • Anonymous
    • 17 years ago

    [quote]It’s said that you would need to exceed 4000×4000 to eliminate jaggies without AA.[/quote]
    Eh. I don’t see a whole lot of jaggies at 1600×1200.

    For one thing, when the two surfaces the edge are on have very little contrast, you don’t need AA. On most typical 3D game scenes I’d estimate only perhaps 5% of the edges are “noisy”, eg, black against white and so forth. Nobody ever talks about this when considering AA, but that’s the reality.

    Furthermore, compare the total percentage of on-screen pixels that are improved by anisotropic filtering with the total percentage of on-screen pixels that are improved by AA (by this I mean medium to high contrast edges only). You’ll then see that AA is still way, WAY too costly in terms of performance when you consider the marginal benefits it provides in real world gaming.

    Higher resolution (eg more pixels) also makes everything on screen look better because of the higher precision it affords, whereas AA only makes a very, very few pixels look better.

    Of course it’s a personal choice, but that’s how I see it. I guess in a perfect world we’d have 1920×1440 with 16xAA at a constant 60fps, but we’re not quite there yet.

    • droopy1592
    • 17 years ago

    It’s always been that way. You can get a good graphics card for cheaper than retail just like you can get any computer part cheaper than retail. What else is new. ATi is getting good market share with this “enthusiast part” to the point where the supply is drying up. Atleast they are getting decent yields.

    • Anonymous
    • 17 years ago

    http://www.hardforums.com/showthread.php?s=b6f79076e61a7559543e3d42df42d8ed&threadid=493870

    • Anonymous
    • 17 years ago

    Forge won’t ever admit there’s anything wrong with ATi… not as long as they keep giving him the long hard shaft…

    Sorry pal but your dead wrong… the 9700 is a TURD…. the card itself rocks, but the drivers are HORRENDOUS…

    I repeat… ati = suck… and the claven does too…

    • droopy1592
    • 17 years ago

    har har, sativa.

    har har, AG#96

    • Anonymous
    • 17 years ago

    [quote]www.enpc.com has them 9700 PRO OEM for $322 with free ground shipping. Thats where I got mine[/quote]
    Thanks. I think I will pick one up. My impatience always ends up costing me hundreds of dollars.. at least it’s not hookers. This time.

    • Anonymous
    • 17 years ago

    [quote]There is no difference in the viewable area in BF1942 between a person running 1600×1200 or 800×600. You both see the same amount of the screen[/quote]
    What a weird comment. Obviously the viewable area is the same; the quality of that viewable area is what’s at stake here. On a 21″ monitor, I don’t care to dip below 1280×960 if I can help it. Anti-alias ’em all you want, but the pixels are just too damn big and obvious below that.

    And don’t forget there ARE resolutions > 1600×1200; I’ve tried up to I think 2048 x 1536 successfully. It’s friggin’ amazing looking. Who needs AA?

    • Anonymous
    • 17 years ago

    I bought my radeon9700pro to replace my gf3 ti500… I went to controll panel, add/remove software and uninstalled the nvidia software through there, shut down, installed the ati, rebooted ran the install and everything has worked fine for me… get 3dmark of 11600 now and can play any game i’ve tried so far at 16×12 with 6xAA and 16xAF, its beautiful 🙂

    as far as drivers go, well its a brand new type of card, sure there will be some issues… when I first got my gf3 I couldnt play daoc at all, crashes every 30 seconds… then it worked for a while, then nvidia broke it again and I had to wait 4 driver revisions before it was fixed…

    I think all you people talking about how smooth nvidia’s drivers are, are forgetting the fact that they’ve had quite a while now to update them, and have already gone through tons of revisions…

    now tell me this… if nvidia’s drivers are so good, why do they have to keep updating them???

    • Anonymous
    • 17 years ago

    [quote]I like to completely take apart my computer each time too. Everything down to the heatsink and floppy. [/quote]
    LOL! You should consider immersing everything in mineral oil as well to purge any “negative energies” that might hinder the new video card install.

    • Freon
    • 17 years ago

    That last graph on page 15 is pretty convincing.

    9700, SS51G, 9700, SS51G…

    • sativa
    • 17 years ago

    [quote]Meh, just be like me and reinstall windows everytime I install a new video card for any real length of time. Especially of a different name brand. I have spent hours going from nvidia-> matrox-> ATi and I am tellin’ ya you will save more time and have less frustration if you just format and reinstall windows
    [/quote]I like to completely take apart my computer each time too. Everything down to the heatsink and floppy.

    • Ryu Connor
    • 17 years ago

    [quote]I’m guessing I could run 1600×1200 with 4x or 8x aniso at ~ 60fps no problem, whereas I am “stuck” with 1280×960 with 2x aniso on the 4600 for the same framerate. I know, I know, life is tough all over, isn’t it?[/quote]

    Lower your resolution and increase the aniso and AA. There is no difference in the viewable area in BF1942 between a person running 1600×1200 or 800×600. You both see the same amount of the screen.

    • Anonymous
    • 17 years ago

    [quote]my 3d mark scores went from 9050/gf4 to 10656/rd9700, [/quote]
    A whopping 18% improvement! Be still my beating heart!

    The 9700 isn’t very interesting performance wise until you kick in aniso and/or AA. Then it really pulls away.

    • Anonymous
    • 17 years ago

    [quote] Meh, just be like me and reinstall windows everytime I install a new video card for any real length of time. Especially of a different name brand. I have spent hours going from nvidia-> matrox-> ATi and I am tellin’ ya you will save more time and have less frustration if you just format and reinstall windows[/quote]
    This is ridiculous. And unnecessary.

    • Anonymous
    • 17 years ago

    I istalled my ati radon in five min no problems and my 3d mark scores went from 9050/gf4 to 10656/rd9700,

    • liquidsquid
    • 17 years ago

    The process that has always worked well for me is:
    1. Leaving the old card in, remove the reference to the drivers, and use Window’s base VGA drivers instead. This should remove all references to the old card
    2. Restart in Safe mode, see if the old card is still referenced there. Sometimes I will see multiple references of hardware that I can remove now.
    3. Power down, change cards.
    4. Power up, install new drivers.

    I have never had problems with this method that I am aware of. I hate re-instals myself, and haven’t performed one in quite some time by using the above method for hardware changes.

    -Mark W.

    • droopy1592
    • 17 years ago

    Forge. Meh, just be like me and reinstall windows everytime I install a new video card for any real length of time. Especially of a different name brand. I have spent hours going from nvidia-> matrox-> ATi and I am tellin’ ya you will save more time and have less frustration if you just format and reinstall windows. I know I will get flamed for this one, but I am no windows registry/ driver removal genius.

    • Anonymous
    • 17 years ago

    http://www.enpc.com has them 9700 PRO OEM for $322 with free ground shipping. Thats where I got mine.

    • Anonymous
    • 17 years ago

    You’ll be the first one buying a NV30, Forge.

    • Forge
    • 17 years ago

    It’s hard to take anyone seriously when they say things like ‘ Forget the 9700 and think NV30.’… I’d love to be able to cross-index NV employees’ home IPs and the posting IPs on this thread.

    Not saying they’d do such a thing, but it would be fun/interesting to check.

    • Forge
    • 17 years ago

    BladeRnR10 – If you are a happy 8500 owner, then I’m appalled at how little it takes to turn your opinion against your own experiences. How can you be put off/scared by the people griping about ‘took out my GF4 w/o any uninstall and dropped in the 9700 and things aren’t 100% perfect!!’ when you know damn well the same thing almost always happens with 8500 as well!?

    I’ve been in favor of a full fdisk/format/reinstall on major hardware changes for a very long time, and graphics cards are officially in that category for me now. Anyone who does less deserves the trouble they’ll get.

    I was running an 8500 and I dropped a 4600 in and performance sucks! I was getting almost 10K 3Dmarks with the Radeon, and I can only barely break that now with the 4600!! GeForce 4 sucks! Nvidia sucks! Nvidia drivers suck! I’ve removed and reinstalled my drivers 500 times, and I STILL haven’t corrected the underlying configuration problem that got me into this mess!!!

    • Anonymous
    • 17 years ago

    #82,

    ATi are well known for their driver problems. This isn’t to say NVIDIA doesn’t have them. It’s just that ATi has a bad reputation at releasing new harware with buggy and/or incomplete drivers. And, this has been going on for years. They also have a horrid driver install. I’ve seen more complaints about this than anything else.

    Now, NVIDIA’s drivers are more polished and everything works like it should when they release a new video card. I’ve never had a problem with installing new drivers. Everything from a TNT to a Ti500 has worked flawlessly for me. Actually, if you decide to buy now, a Ti4600 is a good choice since it’s stable in both hardware and drivers. And, you can buy one for as low as $238 from Newegg (EVGA model). So, stop with the bullshit about price. Forget the 9700 and think NV30.

    • Anonymous
    • 17 years ago

    I think it’s funny to listen to bleeding edge people actually say, “This card is too fast for games right now and it’s not worth it” but I can sure as sh*t bet my next paycheck if the NV30 came out tomorrow you’d be all over it and it’s enormous price tag. Another gerbil had the gall to say that 9700 shouldn’t be compared to the 4600 yet I still see every review still compare the 8500 to the GF4 line. I believe we all know that it came out to compete with the GF3 line and did it’s job but I don’t see you all complaining about that. Fact is ATI has the sweet spot for now and the battle will keep on going back and forth which is good for us so why complain. I’d like to know about the gerbil who complained about the price, it was only a few weeks ago I saw the cheapest 4600 was still over $360 and that was ok but $340 for something better is not? I STILL see 4600’s at EB for $400 that’s a waste. I had a half ass tnt2 and it’s drivers sucked ass with my system at the time, every 10 minutes the menu’s in windows blurred and it had to be rebooted. I had an ATI rage 128 pro in there with no problems. I guess I’m lucky but I have’t had and probs with my radeon 64 ddr VIVO and I still don’t have any probs with my 8500 maybe some of you people just have buggy systems.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    Based on this specs!!!

    Should DDR platform be faster at least in application terretory.
    ———————————————————————————-
    If you have 1024MB ECC DDR 266MHz memory. Using Asus
    P4B533-E motherboard.
    -V.S-
    Asus P4T533-C motherboard;
    but only Installed 512MB intead 1024MB RAM; but using Rimm
    Rambus PC1066MHz memory.
    ———————————————————————————-
    DDR ECC 1024MB Total mem -v.s- Rambus 512MB Total mem

    DDR 266MHz -v.s- Rimm 1066MHz

    My CPU: Intel Pentium 4 Northwood B 2.26GHz (2292.10 exact)
    [134.83 x 17] 539.32MHz QDR

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    Yes, ECC ram is a little bit slower because it sends checksum data with every clock cycle, so its a little bit slower. This is similar to how cdroms can actually hold ~800mb worth of date, its just that the format requires ~100mb worth of checksum data to recover from sctratches etc.

    • freshmeat
    • 17 years ago

    Isn’t ECC ram of whatever sort slower than the similar non-ECC ram?

    • Anonymous
    • 17 years ago

    Or bettter yet, wait for dual channel ddr. The mobo will be a little bit more expensive, but you’ll be able to get inexpensive pc2100 or PC2700 a lot cheaper then high test PC1066.

    • droopy1592
    • 17 years ago

    Rambus board will. With games, the sweet spot for RAM seems to be around 384MB with my experience. Anything more can slow the games do just a tad. The RAMBUS board is going to put the juice down and ECC ram ain’t much faster than standard DDR. But then again, with the slower P4, the improved latency of DDR gets it close but as the proc revs get higher RDRAM kicks DDRs ass more and more. Like a 2.533 Ghz with RDRAM would give a 3Ghz with DDR a run for it’s money on some applications. Anything memory intesive the slower processor would win. So RdrAM would be your best bet if you have the money.

    • Anonymous
    • 17 years ago

    Can someone answer a question for me??? -Honestly Please-

    If you have 1024MB ECC DDR 266MHz memory. Using Asus
    P4B533-E motherboard.

    -V.S-

    Asus P4T533-C motherboard; but only Installed 512MB intead
    1024MB RAM; but using Rimm Rambus PC1066MHz memory.

    Who will get higher score in 3DMark2001SE using Radeon9700.

    DDR ECC 1024MB Total mem -v.s- Rambus 512MB Total mem

    Thanks for reading this.

    My CPU: Intel Pentium 4 Northwood B 2.26GHz (2292.10 exact)
    [134.83 x 17] 539.32MHz QDR

    • Anonymous
    • 17 years ago

    Apparently the demo doesn’t look as good on the 9700 as it does on the GF4. I saw the water and it was kinda sad on the 9700.

    • LiamC
    • 17 years ago

    Yeah,

    but one of my fav old games is Master of Magic and I can’t figure out how to force 4xAA + AF in a DOS window…

    Oh, not that old. Sorry, I go now…

    🙂

    • Anonymous
    • 17 years ago

    People are waiting for NV30 because they’ve lost all trust in ATi. That is, those that have been burned with crap drivers after spending 300+ for a video card. So far, NVIDIA’s driver support has been great. I’ve never been burned by a NVIDIA product since the TNT. Why switch and take another gamble?

    • JustAnEngineer
    • 17 years ago

    Great review. As others have noted, the X-Y charts are much more accurate than bar charts with non-zero axes.

    I am also glad that you emphasized the image quality effects of anti-aliasing and filtering rather than focusing purely on running the same old games at the same image quality at 300+ frames per second. Anti-aliasing can make your old games look new again.

    • Anonymous
    • 17 years ago

    Someone ain't had no stuff in a while. Go get some, Tbird. Save yourself.

    • rabbit
    • 17 years ago

    welcome to the internet, tbird

    • Tbird
    • 17 years ago

    ONO!!! Phear my opinion… I have all the hardware, I read all the reviews and the Hardware I’VE GONE with is OBVIOUSLY superior to any of you dolts…

    There ya have it… that's what you guys sound like… retarded, huh… no one is going to be "right" with everyone else's opinions, maybe more informed (always a good thing), but that doesn't make anyone right or wrong… you people need to chill out and just enjoy your f'ing games… I remember when we were proud and very lucky to get anywhere near 60FPS with ANY game….

    I've gone with NVIDIA for years now and I've loved it, but don't think I would give a crap about the brand name; if for some reason something about the 9700 appealed to me, I would buy it in a second.

    • droopy1592
    • 17 years ago

    People have been having the exact same problems with the Xabre AGP8x card also.

    • droopy1592
    • 17 years ago

    Seems that AGP 8x boards are having problems with AGP 8x cards, period. Not just the ATi Radeon 9700.
    From HardOCP:

    [q] Not All Roses with 9700:
    We mentioned last week that we were having issues getting our 9700 Pro to operate on our Asus A7V8X. Seems as though we are not alone as Nick Palmer pointed out this official ATi page.

    In certain cases, after installing the RADEON 9700 PRO 128MB in an AGP 8x capable motherboard, the system will not post, or boot up.

    This is one of the things that comes with being first to market with new technology and you guys need to be aware of it. The Asus KT400 is the only board we have found so far that it does not work with.

    Our buddies over at Asus had their two cents to add as well.

    The problem is caused by a combination of power supply and motherboard +5V voltage ramp curves creating a situation where certain process corners in the ASIC will intermittently return an incorrect value for the ASIC ID after a cold boot.

    Fix:
    * Short-term solution is diligent screening prior to shipping.
    * Long-term there are 2 options:

    1. A modification to the motherboard BIOS (system BIOS) to create an extra read of the ASIC ID, the return value of which is ignored. The second read of the ASIC ID will then return the correct value

    2. A board modification to monitor the read of the ASIC ID using programmable logic which will generate a NAK command back to the Northbridge controller while allowing the Radeon 9700 Pro to act as if the read command were executed successfully in order to cause the Northbridge to reissue the read command.

    * Expected availability of HW fix: early September 2002.[/q]
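    Asus's first BIOS option quoted above — an extra read of the ASIC ID whose return value is thrown away — can be sketched as a toy simulation in Python. The class, register values, and names below are all invented for illustration; they stand in for a chip whose first ID read after a cold boot is unreliable:

```python
class FlakyAsic:
    """Toy model of a chip whose first ASIC ID read after a cold
    boot can return garbage (names and values are hypothetical)."""
    CORRECT_ID = 0x4E44  # made-up ASIC ID

    def __init__(self):
        self._cold = True

    def read_id(self):
        if self._cold:
            self._cold = False
            return 0xFFFF  # bogus value on the very first read
        return self.CORRECT_ID

def probe_with_dummy_read(asic):
    """BIOS-style workaround: issue one throwaway read, ignore its
    result, and trust only the second read."""
    asic.read_id()         # extra read; return value deliberately ignored
    return asic.read_id()  # the second read returns the stable value
```

    The second (hardware) option in the quote is the same idea pushed down a layer: instead of the BIOS discarding the first read, glue logic NAKs it so the northbridge reissues the read itself.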

    • Anonymous
    • 17 years ago

    to #56

    I'd say that you're wrong, since this is the only card that I can use to play recent games at full res, full image quality with full AA and AF, and get a smooth game. Can't do it with the 4600… yeah, some cool features are not in any games yet… but… and it's a big but… the ability to play games the way they are meant to look, with full eye candy: well, I'd say this is the ONLY choice if you want a card that can do it all today.

    • Cowboy_X
    • 17 years ago

    What do you Radeon users think of the new “catalyst” driver program? I swore off ATI cards a couple years ago due to their crappy drivers, and this whole Catalyst thing just sounds like the ATI marketers trying to be like nVidia. Or is that unfair, have the Catalyst drivers really redeemed ATI?

    • Anonymous
    • 17 years ago

    re #25 BladeRnR10

    I have a 9700 Pro, been using it since the 29th of August… I've experienced zero problems with it… took it to a LAN party and played multiple LAN games for 15 hours, zero crashes, zero problems, ran all the games at 1600 res and highest quality with zero frame lag…

    I love this card.

    • droopy1592
    • 17 years ago

    Actually, the ATi Radeon is a DX9.1 part.

    Has anyone actually compared the [hypothetical] specs between the 9700 and the NV30? I don’t know much about 3d programming, but it appears that ATi would still have an edge when it came to pure polygon processing power, while the NV30 would have an edge in color instruction sets, pixel shader precision and texture addressing.

    Someone enlighten me.

    • atidriverssuck
    • 17 years ago

    Good review. A lot of the risk/trust still falls on ATI's shoulders. I personally feel they haven't earned my trust, so needless to say I'll be waiting for NVIDIA's newie to see how it performs (and for prices to drop in half…).

    • Yoweigh
    • 17 years ago

    Well, I got my Radeon 9700 in a few days ago and I couldn't be happier! Haven't had any driver issues yet; been playing UT2K3 at 1024×768 with 8xAA and 16x anisotropic. Absolutely gorgeous! Considering that I'm upgrading from a pansy-ass Radeon 7200, I don't really have any basis for comparison, but [i]damn[/i] this card is fast. GTA3 with AA and anisotropic filtering is also beautiful, but I think that game sorta wishes it had more CPU horsepower (I've got a 1.5GHz P4), 'cause the game occasionally slows down regardless of the resolution and other video settings. It did this with my older Radeon also. =( I find it sort of amusing to be griping over a 1.5GHz processor… maybe I should try GTA in Win98 and see what happens.

    I hope Hammer doesn't get delayed anymore… come on, birthday presents!!! 😛 I'd have one hell of a rig! It's been a while since I've been able to call my computer top-notch.

    -Yoweigh

    • Ryu Connor
    • 17 years ago

    [quote]Excellent review, Ryu[/quote]

    Scott, that is.

    I didn’t have a hand in this one.

    • WaltC
    • 17 years ago

    Excellent review, Ryu—about as good as they get, I think. I bought a 9700 a few days ago and put my Ti4600 in my wife's box and, like you, I've been looking for overt flaws and haven't found a single one. Everything I've asked it to run, it's run without a qualm. Likewise, I looked for overt flaws in your review and couldn't find a single one there, either (although we'll have to agree to disagree on our respective definitions of MSAA….;)) Cosmetically, like someone else said, your use of x-y charts is just so much nicer than bar charts; it's like a breath of fresh air! Not once did I have to say to myself: "Aha! Just look how this bar is 3x longer than that one, when the difference between them is 2%." It was wonderful freedom to enjoy your review…;)

    • Mr Bill
    • 17 years ago

    An exceptional review. The change to X-Y plots with labeled values makes trends more apparent.

    • Anonymous
    • 17 years ago

    [quote]wumpus – The Radeon 9700 *does* have 256bit DDR. 512bit maybe, you meant? [/quote]
    No, that is what I meant (my post wasn't clear, though). Since we have no way of enabling >32bpp color depth, we have no idea what the performance hit will be… that's what I'm saying. It may very well turn out that 256bit DDR isn't enough. Or not.

    Wish there was some way to find out.. if anyone gets that Beyond3D link referenced earlier (the ATI driver guy explaining), please forward it to me.

    • Anonymous
    • 17 years ago

    Damage,

    Great article, and the change to x,y graphs from bar charts is a big improvement in my book.

    One snide question, why thank Corsair for DDR memory when the single test rig used RIMMs? <grin>

    Keep up the good work guys.

    T-Dawg (wished my account worked)

    • Anonymous
    • 17 years ago

    masterb ATI on, heh

    • Anonymous
    • 17 years ago

    Doesn’t this card have tremendous problems running in AGP 8X motherboards? Sure it’ll run in my current 4X system, but my next board will be 8X- will I have to buy a new vid card then as well? I want one of these cards, but not if it won’t work in my next rig.

    • Anonymous
    • 17 years ago

    Yup, lotsa complaints about Geforce problems. Can’t use this feature, can’t use that, system won’t boot, card died after 1 week, display quality low, can’t go above such and such resolution, game locks same part of the game, system reboots while playing, bla bla bla.

    • Anonymous
    • 17 years ago

    Some of the posts here are complete drivel. First off, if you enjoy playing games with lots o' jaggies and smeared textures, go ahead, nvidiots. nvidiots, go on over to nvnews and see people having the same amount of DRIVER problems as the Radeon; sheesh, talk about brain piercing, or is that brainwashing?

    When I play a game, I want AA, I want AF, I want what a graphics card is supposed to do. The 9700 does it. It's 30 to 300% faster than any card out and superior in terms of IQ when AA and AF are combined. If you want a regular graphics card with no appeal, looking through waxed paper, buy a GeFarce; you want to wait till NV30, heh, go ahead, it ain't gonna be much faster, doorknobs, as it would be CPU-limited.

    NVIDIA has made a dire mistake redesigning the NV30, and it's about time ATi got their just reward…

    • Anonymous
    • 17 years ago

    [q]Buggy drivers, core stepping issues – bah who wants that?[/q]

    Ah, but you learn to live with it since they only release buggy drivers. No ATi fan boy will ever admit it, though. They’re too busy benchmarking…

    • droopy1592
    • 17 years ago

    BladeRnR10 – It’s ok for NVidia not to have the best card sometimes. That AA is just so pretty. It makes old games look good. Even plain ol’ Warcraft 3 looks good @ 1280X1024 full eye candy on. Is that an earring clasp I see? Doom 3 will be so gorgeous. I am still waiting for that ATi 3DS max plugin though.

    • dmitriylm
    • 17 years ago

    Yeah, but too bad every modern 3D accelerator can run UT2K2 at a decent framerate.

    • Anonymous
    • 17 years ago

    Why do people keep saying, "Don't buy it, there's nothing to take advantage of the speed/features"? My mommy always buys my clothes too big so that I can grow into them. UT2003 is a good indicator of a next-gen game, and it will be here very soon; I'm sure it's going to push every card to its limit.

    • Anonymous
    • 17 years ago

    I want to render graphics like you would see in movies such as Shrek (my daughter's favorite) in real time. Can we do that yet? Or do I have to wait for the ATI Radeon 97,000 or the NV300? 256-bit computing at 100GHz? What will it take? Sign me up.

    PS: I like the charts; stick with 'em. Easier to grab the numbers/feeling behind the card and go.

    Zach from the Inq.

    • Dposcorp
    • 17 years ago

    Excellent review Damage. I appreciate the hard work.
    I just bought a 9700 a little over a week ago, and so far I have had no problems with it, and the image quality is outstanding.

    It cost a lot, but for once I wanted to be on the cutting edge, and for at least a short time I will have a kick-ass rig
    (2.0A@2.5, 512MB DDR, 9700 Pro, dual 80GB Maxtors in a mirror array).

    I almost sprang for the Parhelia when it came out, because I value image quality more than speed, but I am glad I waited. I can see the 9700 retaining a lot of its value for a long time.

    Quality products (like my G400 & G550) seem to retain their value much better than plain-jane video cards when it comes to resale.

    • Anonymous
    • 17 years ago

    So, Forge, do you just buy the latest video card for the hell of it (deep pockets, bragging rights), or is there some game or program that you run that actually needs that much power? Seriously.

    Other than better FSAA and anisotropic filtering scores, is there any other reason to buy this thing? Doom 3? This card will be collecting dust by the time that game ships.

    • elmopuddy
    • 17 years ago

    Cripes Forge! I laughed so hard, coffee almost came out my nose!

    🙂
    EP

    • Forge
    • 17 years ago

    BladeRnR10 – Don't worry, man. That burning sensation is just being #2, it's not gonna kill you.

    What’s that? You’re a proud GeForce owner, yet you don’t want to mention that **mark2001 anymore?

    C’mon, if you’re gonna troll, at least try and be original.

    • elmopuddy
    • 17 years ago

    Great review as usual Damage!

    Personally, I don't compare the 9700 to the 4600… I'll compare the 9700 to the NV30… The 9700 is a great card, but it's way overkill for *most* people… I'm sure 4xAA and 16x AF is nice at 16×12 (maybe), but like someone said before, the games that need this kind of power are not here yet…

    Since I have a sickness, if I had the $$ I’d upgrade..hehe

    I went from TNT->GF1SDR->GF2->GF3->8500->4600… it's almost logical to get the 9700! 🙂

    I'll buy 2 new car seats for my kids first, I think 🙁

    EP

    • Anonymous
    • 17 years ago

    The Radeon 9700, as impressive as it is, is not necessary at this point in time. DirectX 9’s release is still a couple of months away, and there are no games out there which a GeForce4 Ti series card can’t handle. When I upgraded my computer a couple of months ago, I just bought myself a GeForce3 Ti500 video card on clearance (at an excellent price) to hold me over until the DirectX 9 cards are all out in full force and when software starts to hint at actually benefiting from DirectX 9 cards in terms of either features or beneficial performance gains. And no, I don’t consider a boost from 100 frames per second in a game to 120 (for example) to be necessary or beneficial. A jump from 30 to 50 would be.

    • Anonymous
    • 17 years ago

    [q]Good card. I’ll keep an eye open for it when it finally reaches the bargain bin! :)[/q]

    True. The performance gain over the Ti4600 just isn’t worth the high price. If you’re just buying it for better FSAA then that’s about all it’s worth. I’ll wait for NV30.

    • Anonymous
    • 17 years ago

    Check out the forums over at http://www.rage3d.com, who have set up one specifically for this new card. Once you've reviewed the multitude of problems users are having with all kinds of applications, you'll soon grasp the reality of the situation. Add to that the fact that the card runs hot enough to cook a barbecue on, and you can rest assured I won't be touching it with a 10-foot barge pole. Buggy drivers, core stepping issues - bah, who wants that? I'll keep my GF4 Ti4600 for now, which is ample until NVIDIA release their NV30, which will once again dominate modern graphics as we know them. ATi's stay in the sun will be short and sweet. Nice review, though. Sadly, the facts are very different. Cheers

    • Ricardo Dawkins
    • 17 years ago

    Ey… Forge… can u ship that card as a Xmas gift for me???

    • kyboshed
    • 17 years ago

    Forge:

    What make/model is the Ti4600?
    Would you ship internationally?

    • FluffydUCK
    • 17 years ago

    Dammit, my Ti4400 is lame now. Wish the 9700 had a driver bug that made it melt into a puddle of goo when you enabled AA… that would make me feel better.

    At least I got my card nice and cheap.

    • Forge
    • 17 years ago

    wumpus – The Radeon 9700 *does* have 256bit DDR. 512bit maybe, you meant?

    • Ricardo Dawkins
    • 17 years ago

    9000 or 9500 for me !!!!

    • EasyRhino
    • 17 years ago

    Gee, this might be a good card to replace my Voodoo3 with… a mere $400.

    Wonder how much I can sell a kidney for?

    ER

    • EasyRhino
    • 17 years ago

    BOOYAH!

    (I shall now read the article)

    • Anonymous
    • 17 years ago

    You can get a new 4600 shipped for $239.

    • Anonymous
    • 17 years ago

    you Trident fanboys have *NO* idea what the Bitboys have up their sleeve…you’ll be flabbergasted when/if their holy grail video card is released!!

    • Forge
    • 17 years ago

    dmitriylm – The *current* chip could probably blow Parhelia out of the water. I understand framerates vs. image quality, but that thing is always slow!

    • Damage
    • 17 years ago

    Wumpus:

    As I said in the opening paragraphs of the review, I asked ATI about this possibility, and they said they didn't have any way to enable extended color depths in current games. Matrox does it, so long as you don't care about alpha, but ATI doesn't.

    I expect in certain scenarios (with lots of multitexture “loopbacks” in a single rendering pass or with pixel shaders) the 9700 will deliver output superior to older cards, but I haven’t seen a really stark example of that yet. I can try getting screenshots if you have some really good candidates.

    • Ricardo Dawkins
    • 17 years ago

    uh…oh…..ooooooooohhhhhhhhhhh !!! Damn

    • dmitriylm
    • 17 years ago

    Yes, the new Trident chip will indeed blow everything out of the water, especially the Parhelia!

    • Anonymous
    • 17 years ago

    The new Trident card will blow the 9700 out of the water. Heed my warning!!!

    • Jim_2CPU
    • 17 years ago

    So… this is faster than my Matrox G450 then?

    /me coughs.

    I’ll have to get ATI to send me one so I can test it in my duallies over at 2CPU.com.

    • mattsteg
    • 17 years ago

    I love mine…

    • Forge
    • 17 years ago

    $250 shipped, as-is, functional; the fan on the stock HSF is dead.

    • dmitriylm
    • 17 years ago

    Uh, yeah…

    • Forge
    • 17 years ago

    I need one. Badly.

    Anyone want to buy a Ti4600 cheap?
