BOTH ATI AND NVIDIA HAVE recently updated their graphics product lines. Now even a heapy-cheapy graphics card is more powerful than anything you could buy a couple of years ago, and the really high-end cards are absolutely hoss. Packed with 128MB of RAM, these cards are faster than a French surrender. Today, we’ve rounded up a total of nine (count ’em) different graphics card configurations, ranging from the Radeon 7500 and GeForce4 MX to the new 128MB Radeon 8500 cards and GeForce4 Titaniums. We’ve tested them all together in a massive, teeming, silicon-based pack, and we’re here to show you who’s fast, and who’s faster.
Because we’re dealing with a whole lotta cards here, we’re going to limit our focus to performance. We’ll address the newest wrinkles in image quality and anti-aliasing in more depth in a future article, where we can devote more attention to how every mip gets mapped. And we’ve already covered most of the highfalutin 3D theory in our previous articles. We’ve compared the GeForce3 to the Radeon 8500, and we’ve charted the changes contained in the GeForce4. Not only that, but we’ve dug deep into the Radeon 8500’s GPU to see what makes it tick. So we’ll dispense with the theory here. If you’re not up to date on that stuff, do yourself a favor and go read our previous articles before you go on.
Now, let’s take a look at some of the cards we’ll be comparing and see what we’ve got.
ATI’s Radeon 7500
The ATI Radeon 7500 occupies the top rung of the low end of ATI’s retail video card lineup. (Repeat that five times fast.) This card is based on ATI’s original Radeon GPU, but in this implementation, the chip is clocked at 290MHz, over 100MHz faster than the original Radeon. Similarly, the card’s DDR memory runs at 230MHz, or 460MHz in DDR-speak. To put these numbers into perspective, the original Radeon chip was a little more advanced than the GeForce2, so this combination of elements is nothing to sneeze at. With 7.4GB/s of memory bandwidth plus ATI’s Hyper-Z suite of bandwidth-conserving technologies, this thing ought to outclass a GeForce2 Ultra.
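If you want to check that 7.4GB/s figure yourself, the arithmetic is dead simple: effective memory clock times bus width in bytes. Here’s a quick back-of-envelope sketch (my own calculation, not anything from ATI’s spec sheets):

```cpp
#include <cstdio>

// Peak memory bandwidth = effective (DDR) memory clock x bus width in bytes.
int main() {
    const double ddr_clock_mhz = 230.0 * 2;  // 230MHz DDR = 460MHz effective
    const double bus_bytes     = 128 / 8;    // 128-bit bus = 16 bytes per transfer
    printf("Radeon 7500 peak bandwidth: %.1f GB/s\n",
           ddr_clock_mhz * 1e6 * bus_bytes / 1e9);  // ~7.4 GB/s
    return 0;
}
```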
The price? About 75 bucks from online retailers.
ATI’s Radeon 7500 card

Pick your jaw up off the floor for a second and consider this: the Radeon 7500 comes complete with dual video outputs: a VGA-out port and a DVI-out connector that can double as a second VGA output. (ATI includes the DVI-to-VGA adapter in the retail box.) With ATI’s HydraVision software, you can drive a pair of monitors in tandem. Not a bad deal for a “budget” card.
The Radeon 7500’s outputs: S-Video, DVI, and VGA
The Radeon 8500 line gets refreshed
ATI’s answer to the new GeForce4 Ti cards is simple: double the RAM on the Radeon 8500. The Radeon 8500 GPU already has many of the new features NVIDIA added to the GeForce4 Ti, like dual vertex shaders and more advanced pixel shaders with dependent texture addressing. [Ed: No theory, please.]
Sorry about that.
But to combat the GeForce line in stores, ATI added a card to the mix, the Radeon 8500LE 128MB. The Radeon 8500LE is nothing more than a Radeon 8500 with core and memory clock speeds at 250MHz, or 25MHz lower than the Radeon 8500. To cut costs, ATI also removed the DVI-out port on the LE card. Retailing for about $199, the 8500LE promises to give GeForce4 cards some serious competition in the bang-for-your-buck category.
ATI’s Radeon 8500LE 128MB
The Radeon 8500LE card has Infineon memory in a square BGA package

For now, the original Radeon 8500 remains at the high end of ATI’s lineup. Like the LE cards, the 8500 now comes with 128MB. In theory, the extra memory ought to allow for better performance handling complex scenes with lots of texture and geometry data.
Abit’s Siluro GF4 MX 440
Most of you are probably familiar with our gripes about NVIDIA’s GeForce4 MX chip. It’s more of a GeForce2 on steroids than it is a GeForce4-anything (or even a GeForce3-anything). Considering how NVIDIA has spearheaded the move to a new approach to real-time 3D with vertex and pixel shaders, the GeForce4 MX is a bit of a disappointment. Look at it as direct competition for the Radeon 7500, though, and NVIDIA’s thinking begins to make more sense. Clearly, the GF4 MX isn’t a gamer’s card, but it matches up well against the competing product from ATI.
Abit’s version of the GeForce4 MX 440 comes in a stunning shade of blue with a very nice cooler.
The Abit Siluro GF4 MX 440
Abit’s GF4 MX 440 includes 64MB of DDR memory clocked at 400MHz and an S-Video output. Unfortunately, there’s no secondary output, DVI or otherwise, so this card can’t support multiple monitors, even though the GF4 MX chip can. Abit is planning a “VIO” version of this card with a DVI output and a TV encoder.
Of course, the Siluro GF4 MX 440 includes all of the good things the NV17 chip brings to the table, including Accuview anti-aliasing, NVIDIA’s efficient memory architecture, and a full MPEG2 decoder for effortless DVD playback.
VisionTek’s Xtasy GeForce4 Ti 4600
Finally, we have VisionTek’s rendition of the GeForce4 Ti 4600. Not long ago, these cards hit retail, and the weak among us hardware enthusiasts could be seen out on street corners trying to sell our first-born on the open market to fund the purchase of one of these puppies. This is the highest of the high-end consumer video cards, with faster RAM and better bragging rights than just about anything.
Like any product that follows the NVIDIA GeForce4 Ti reference design, this card is large. I’m not sure whether you plug it into a motherboard or plug a motherboard into it.
Top: GeForce4 Ti 4600. Bottom: Radeon 8500LE.
Only the Voodoo 5 5500 can make this thing look small

Either way, the VisionTek card comes with most of what you’d need, including a DVI output, a VGA out, and video in and out ports (via an included splitter cable) for real VIVO functionality. VisionTek even throws in CyberLink’s PowerDirector video editing suite. VisionTek’s lifetime warranty is part of the package, as well. The only thing missing from the box with our review sample was a DVI-to-VGA converter, which you might have to hunt up separately if you want to drive a pair of CRT monitors.
NVIDIA’s reference cooler is mighty fancy

As you’d expect, the GeForce4 Ti 4600 chip promises to make this beast one of the fastest cards you can buy.
The rest of the field consists of variations on the cards shown above. GeForce4 Ti 4400 cards use the same chip as the Ti 4600 cards, just with lower core and memory clock speeds. And we’ll test the 128MB Radeon 8500 variants against a Radeon 8500 64MB to see whether the extra RAM really helps performance at all.
To see exactly how these cards stack up in terms of vital stats, have a look at the table below. I’ve sorted the contenders by peak memory bandwidth, since memory tends to be the single biggest performance bottleneck in most 3D graphics.
| Card | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GeForce4 MX 440 | 270 | 2 | 540 | 2 | 1080 | 400 | 128 | 6.4 |
| GeForce3 Ti 200 | 175 | 4 | 700 | 2 | 1400 | 400 | 128 | 6.4 |
| Radeon 7500 | 290 | 2 | 580 | 3 | 1740 | 460 | 128 | 7.4 |
| GeForce3 Ti 500 | 240 | 4 | 960 | 2 | 1920 | 500 | 128 | 8.0 |
| Radeon 8500LE | 250 | 4 | 1000 | 2 | 2000 | 500 | 128 | 8.0 |
| GeForce4 Ti 4400 | 275 | 4 | 1100 | 2 | 2200 | 550 | 128 | 8.8 |
| Radeon 8500 | 275 | 4 | 1100 | 2 | 2200 | 550 | 128 | 8.8 |
| GeForce4 Ti 4600 | 300 | 4 | 1200 | 2 | 2400 | 650 | 128 | 10.4 |
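Every figure in that table falls out of two or three multiplications, so you can sanity-check it yourself (or plug in an overclocked configuration). Here’s a little sketch that reproduces a few of the rows; the inputs are the table’s own clock and width numbers, not new measurements:

```cpp
#include <cstdio>

// Peak pixel fill = core clock x pipelines. Peak texel fill also multiplies
// in the texture units per pipe. Bandwidth = memory clock x bus bytes.
struct Card { const char* name; double core_mhz, pipes, tmus, mem_mhz, bus_bits; };

int main() {
    const Card cards[] = {
        {"GeForce4 MX 440",  270, 2, 2, 400, 128},
        {"Radeon 7500",      290, 2, 3, 460, 128},
        {"GeForce4 Ti 4600", 300, 4, 2, 650, 128},
    };
    for (const Card& c : cards) {
        printf("%-16s %5.0f Mpixels/s %5.0f Mtexels/s %5.1f GB/s\n", c.name,
               c.core_mhz * c.pipes,                    // e.g. 300 x 4 = 1200
               c.core_mhz * c.pipes * c.tmus,           // e.g. 300 x 4 x 2 = 2400
               c.mem_mhz * (c.bus_bits / 8) / 1000.0);  // e.g. 650 x 16 = 10.4 GB/s
    }
    return 0;
}
```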
Hardware specs matter, but they aren’t destiny, as you’ll see in the benchmark results below. However, this table helps show us some of the key performance matchups. Among them:
- Radeon 7500 vs. GeForce4 MX 440: These two cards are close competitors, and it would appear the Radeon 7500 has the upper hand; it’s got faster memory, gobs more multitextured fill rate, a higher clock speed, and (at present) a lower price tag.
- GeForce4 Ti 4400 vs. Radeon 8500: These cards will be priced roughly equal to one another, and they happen to share the same basic specs in terms of clock speed, memory speed, pixel pipelines, and peak fill rates. Here’s a chance to see ATI’s and NVIDIA’s best chips square off on equal footing.
- The various Radeon 8500 flavors vs. each other: We already know the 8500LE’s price tag is sweet. Does an extra 25MHz really matter? And how much does an extra 64MB help?
Those aren’t the only interesting matchups, but they are worth keeping your eye on as the test results unfold.
Our testing methods
As ever, we did our best to deliver clean benchmark numbers. All tests were run at least twice, and the results were averaged.
The test system was built using:
| Component | Specification |
| --- | --- |
| Processor | Intel Pentium 4 2.2GHz |
| Front-side bus | 100MHz (400MHz quad-pumped) |
| North bridge | 82845 MCH |
| South bridge | 82801BA ICH2 |
| Memory size | 512MB (2 DIMMs) |
| Memory type | Micron PC2100 DDR SDRAM (CAS 2) |
| Sound | Creative SoundBlaster Live! |
| Storage | Maxtor DiamondMax Plus D740X 40GB 7200RPM hard drive |
| OS | Microsoft Windows XP Professional |
For comparative purposes, we used the following video cards and drivers:
- ATI Radeon 7500 64MB AGP with 184.108.40.20637 drivers
- ATI Radeon 8500 64MB AGP with 220.127.116.1137 drivers
- ATI Radeon 8500LE 128MB AGP with 18.104.22.16837 drivers
- VisionTek Xtasy 6964 (NVIDIA GeForce3 Ti 500) with Detonator XP 28.32 drivers
- Abit Siluro GF4 MX 440 64MB AGP with Detonator XP 28.32 drivers
- VisionTek Xtasy GeForce4 Ti 4600 with Detonator XP 28.32 drivers
We also included a “simulated” GeForce3 Ti 200, because we could. We underclocked our GeForce3 Ti 500 card to Ti 200 speeds and ran the tests. The performance of the card at this speed should be identical to a “real” GeForce3 Ti 200. Likewise, we underclocked the GF4 Ti 4600 card to test it at GF4 Ti 4400 speeds. And perhaps most heinously, we overclocked the Radeon 8500LE 128MB card in order to simulate a Radeon 8500 128MB. (The card showed no signs of problems at the 8500’s 275MHz clock speed; it was perfectly stable.) If you can’t handle the concept of a simulated graphics card, pretend those results aren’t included.
We used the following versions of our test applications:
- 3DMark 2001 SE
- Giants: Citizen Kabuto 1.4
- Quake III Arena 1.30
- Serious Sam 1.05
- Comanche 4 demo benchmark
- SPECviewperf 6.1.2
The test system’s Windows desktop was set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.
All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.
Quake III Arena
First up is the well-worn favorite, Quake III. We’ll start at low resolutions, where polygon throughput and driver bottlenecks are the primary limiting factor, and move up quickly to 1600×1200, where pixel-pushing power (or fill rate) is the name of the game.
Obviously, we’re hitting a system-level bottleneck here; something other than the graphics card is limiting performance. This result actually bodes well for the ATI cards. Last time around, the ATI cards were measurably slower at low resolutions. Perhaps ATI has made some progress on its drivers.
As the resolution increases, the pack begins to separate, and we can see how things shake out. At 1600×1200, especially, the picture is clear. Despite a disadvantage in memory bandwidth, the GeForce4 MX 440 outpaces the Radeon 7500. Likewise, the GF4 Ti 4400 is over 20 frames per second faster than the Radeon 8500 at the same clock speed. In both cases, the NVIDIA cards have the performance edge.
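To put the fill-rate crunch into numbers, here’s a rough sketch of what 1600×1200 demands. The 3x overdraw factor is my guess at a Quake III-class scene’s average depth complexity, not a measured figure:

```cpp
#include <cstdio>

// Rough fill-rate demand at 1600x1200 with an assumed ~3x overdraw.
int main() {
    const double pixels   = 1600.0 * 1200.0;  // 1.92M pixels per frame
    const double overdraw = 3.0;              // assumed average depth complexity
    const double fps      = 100.0;            // a competitive Quake III target
    printf("Required fill rate: %.0f Mpixels/s\n",
           pixels * overdraw * fps / 1e6);    // ~576 Mpixels/s
    return 0;
}
```

That works out to roughly 576 Mpixels/s, which is already beyond the GeForce4 MX 440’s 540 Mpixels/s peak. No wonder the budget cards sag first as the resolution climbs.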
Giants: Citizen Kabuto
Giants is one of a select few Direct3D games with a decent built-in benchmark function. Let’s see if the field looks any different here than in Quake III.
At low res, the NVIDIA cards have a slim but consistent advantage.
Even at 1600×1200, there’s not too terribly much difference between the cards. The middle of the pack is especially tight, with the Radeon 8500 flavors and the GeForce3 Ti 500 all stacked up with only a few frames per second separating them.
Serious Sam SE
We’ll use Serious Sam SE’s magical benchmarking tools to look at the test data a different way. The graphs below plot frame rates over time in 1-second intervals. It’s a bit of a jumble to read, but bear with me, because there’s something to be learned here.
In order to make this interesting, the results below come from a single representative test run, not an average. I did it this way to demonstrate something. Watch for peaks and valleys in the frame rates, because those things (especially the valleys) affect playability much more than an “average frame rate” number.
It’s hard to read, but what you’re seeing here is the NVIDIA cards all bunched up just above the ATI cards.
Here, at about 5 seconds into the test, the Radeon 8500 64MB has a sharp spike downward in its frame rate compared to the other cards. It happens again, to a lesser degree, with the Radeon 8500 128MB at about 17 seconds. This kind of problem is a serious playability killer, and it’s something I’ve seen while playing Serious Sam SE on a Radeon 8500 card. I was pleased to be able to capture it in a graph. Check out the next two tests.
All of the Radeon 8500 cards are susceptible to downward spikes in frame rates, and the likelihood it will happen goes up as the screen resolution increases. Note that the 128MB cards seem just as prone to the problem as the 64MB cards. With these kinds of slowdowns, you’ll get smoother gameplay out of a GeForce3 Ti 200 card than out of any flavor of Radeon 8500. That’s something an FPS average just won’t show you.
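It’s easy to see why an average hides this. Here’s a toy sketch with made-up per-second numbers (illustrative only, not our measured data) showing two runs with identical averages and very different valleys:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Two hypothetical runs with the same average frame rate; the second has
// the kind of one-second valley the Radeon 8500 shows in Serious Sam SE.
int main() {
    const std::vector<std::vector<double>> runs = {
        {61, 59, 60, 61, 59, 60},  // steady card
        {70, 72, 15, 71, 70, 62},  // same 60 fps average, one nasty dip
    };
    for (const auto& run : runs) {
        double sum = 0, low = run[0];
        for (double fps : run) { sum += fps; low = std::min(low, fps); }
        printf("average %.1f fps, minimum %.1f fps\n", sum / run.size(), low);
    }
    return 0;
}
```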
I hesitate to speculate much about the cause of these performance spikes on the Radeon 8500 cards. I’ll leave the heavy lifting to folks more concerned with application-level tuning and the like. It’s possible Serious Sam’s self-tuning nature is causing a problem for the Radeon 8500 here. We use Serious Sam SE as an application-level benchmark, so we let the game engine tune itself. Serious Sam adjusts some video settings for individual graphics chips, and it’s possible the settings for the Radeon 8500 are holding it back. For one thing, certain models in the game are set to use ATI’s Truform technology by default, and those models will have much higher polygon counts than the stock in-game models. However, I tried turning off Truform as I was playing the game myself, and it did nothing to eliminate the intermittent slowdowns. Using smaller texture sizes seemed to help more.
I’m hopeful ATI and the game’s developers can alleviate the problem, but right now, the game isn’t very playable on an 8500.
UPDATE: Just before we went to press, we discovered this thread in the Serious Sam forums, which explains that the problem has to do with the texture upload routine in ATI’s driver. Apparently, the problem will be fixed in future driver releases.
Comanche 4 demo
This benchmark is a late addition to the review. We actually held back the release of this article in order to add results for Comanche 4. This game uses pixel and vertex shaders, if available, to produce some stunning effects. It’s gotta be one of the most graphically advanced games out there. The nifty thing about this game engine is that it will use pixel shaders if it can, but if not, it will use multipass rendering to achieve the same effects, even on an older graphics chip without pixel shaders. Rendering in multiple passes can degrade performance and image quality, but Comanche doesn’t show any serious image quality degradation on a Radeon 7500 or GF4 MX.
This test is very much CPU-bound at lower resolutions; there’s not much difference at all between 640×480 and 1024×768. Only the higher resolutions really slow down the slower chips, and the Ti 4600, amazingly, is unfazed even at 1600×1200.
What sticks out like a sore thumb here is the massive performance gap between the Radeon 8500 64MB and the 128MB version. Somehow, Comanche 4’s test scene stresses the memory in the Radeon 8500 pretty seriously. Yet the GF3 Ti 500 card with 64MB memory doesn’t have that problem.
The low-end cards don’t seem to suffer too much without pixel or vertex shaders. They are slower, but not much slower than they are in other games.
3DMark 2001 SE

3DMark uses DirectX 8.1’s new features to good effect, and it packs loads of polys into many of its scenes. I consider it a decent predictor of performance in upcoming games that take advantage of vertex and pixel shaders.
The GeForce4 Ti cards run away with it, and the Radeon 8500 cards distinguish themselves with a relatively strong performance, as well. Not coincidentally, both the GeForce4 Ti and Radeon 8500 chips have dual vertex shaders and solid pixel shader implementations.
Without either sort of shaders, the Radeon 7500 and GeForce4 MX cards are simply outclassed here.
Here’s how the cards performed on 3DMark’s individual game tests.
In the first three game tests, the top two slots and the bottom two slots are pretty well consistent. The GF4 Ti cards are always fastest, and the Radeon 7500 is almost always just a little slower than the GeForce4 MX.
In between, the Radeon 8500/LE cards and the GeForce3 cards battle it out.
The Nature test makes extensive use of pixel shaders, so the low-end cards just aren’t able to run it. Remarkably, the Radeon 8500 128MB outpaces the GF4 Ti 4600 to take the top spot. This scene uses loads of complex geometry and some nice-looking pixel shader effects, so both vertex and pixel shaders are working hard in this one.
Fill rate and pixel shader performance
I’ve tried to group 3DMark’s synthetic tests together as logically as possible. The first two of these tests measure fill rate.
This is pretty much the same pattern we’ve been seeing in games at higher resolutions, except that the Radeon 8500 cards turn in relatively stronger performances on this synthetic test than they do in most games.
The next two tests measure performance with the two most popular forms of bump mapping. Newer games are supposed to make extensive use of bump mapping, so watch carefully.
Notice that the GeForce4 MX wasn’t able to complete the Environment Bump Mapping test. That’s because the GeForce2/GeForce2 MX 3D engine never could handle this form of bump mapping. With a 3D core lifted right out of the GeForce2 MX chip, the GeForce4 MX can’t do it, either.
Speaking of things the GF4 MX can’t do, let’s look at pixel shader performance.
Running at the same clock and memory speeds, with the same amount of memory, the Radeon 8500 and the GF4 Ti 4400 are neck-and-neck in pixel shader performance. However, the next test was added specifically to capitalize on the more advanced pixel shader 1.4 capabilities of the Radeon 8500. The GeForce3/4 chips can execute the test, but they have to resort to multi-pass rendering (drawing the scene multiple times before pushing it out to the frame buffer) in order to do it.
Whoa. That wasn’t what we were expecting. So what gives?
Q: I have an ATI Radeon 8500, which should draw the water surface in the Advanced Pixel Shader test in a single pass, compared to my friend’s system with DX8 hardware that should draw it in two passes. Still I don’t see much performance difference. Shouldn’t my system be twice as fast?
A: The Advanced Pixel Shader test is what we call a Feature Test, which means that we, above all, want to present some new technology. It was decided that a fall-back was to be included in addition to the 1.4 pixel shader, since the same-looking effect can be achieved using pixel shader 1.0 hardware. These two different modes of that same test work a bit differently and should, therefore, not be directly compared. Both modes could be optimized to show more performance either way, but now the test is just optimized for maximum compatibility. Vertex shader performance also affects the score, somewhat, due to this compatibility optimization.
So the preceding graph was pretty much meaningless. Made you look, though.
Poly throughput and vertex shader performance
Finally, we’ll look at T&L and vertex shader performance. Keep in mind that the low-end cards here both have fixed-function T&L engines to remove some burden from the CPU, but they can’t handle the more complex operations a vertex shader can. As a result, the low-end cards will fall back on the 2.2GHz Pentium 4 processor in our test system to execute vertex programs. In DirectX 8, this fallback mode is automatic, because unlike pixel shaders, vertex shaders can be emulated with relative ease.
Meanwhile, the rest of the cards have vertex shaders, and they implement fixed-function T&L as a vertex program. Essentially, they’re emulating a traditional T&L unit in software, and that software runs on the chip’s vertex shader. Therefore, sometimes older chip designs with hard-wired T&L units can perform very well next to the newer chips.
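For the curious, here’s roughly what that choice looks like from a Direct3D 8 application’s side. The runtime does the emulating, but the app typically signals which path it wants at device-creation time. This is an illustrative sketch, not code from any of our test games; error handling is omitted, and hwnd and pp are assumed to be set up elsewhere:

```cpp
#include <d3d8.h>

// Pick hardware vertex processing when the chip exposes a 1.1 vertex shader;
// otherwise use mixed mode, so fixed-function T&L can stay on the chip while
// vertex shader programs run on the host CPU.
IDirect3DDevice8* CreateDeviceWithBestVP(IDirect3D8* d3d, HWND hwnd,
                                         D3DPRESENT_PARAMETERS* pp) {
    D3DCAPS8 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    const DWORD flags = (caps.VertexShaderVersion >= D3DVS_VERSION(1, 1))
                            ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                            : D3DCREATE_MIXED_VERTEXPROCESSING;

    IDirect3DDevice8* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd, flags, pp,
                      &device);
    return device;
}
```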
Anyhow, the theory police are on to me, so I’ve gotta move on.
These two tests use only fixed-function T&L, and you can see the GF4 MX outpacing the GF3 Ti 500. However, the GF4 Ti and Radeon 8500 cards are all faster than the GF4 MX and Radeon 7500.
Here’s a true vertex shader test, and yes, even the GF3 Ti 200 can outperform the Pentium 4 2.2GHz running DX8’s software vertex shader routines. As in the Nature test, the Radeon 8500 beats the Ti 4600 here. No doubt ATI’s vertex shaders are formidable. The GeForce3, with only a single vertex shader unit, can’t keep up with the dual-shader chips.
Point sprites or particles are generally handled by the vertex shader, so we’ll lump them in here. Our contestants line up just about like you’d think, but the Radeon 7500 practically has a nervous breakdown. Ouch.
Antialiasing performance

We’ll close out our massive 3DMark comparo with some antialiasing action. We’ll cover AA in all of its forms in a future article, but the tests below will give you a good sense of how these chips perform with AA turned up.
At 640×480, even 4X mode doesn’t cause serious slowdowns with most of these cards. They have ample fill rate to handle 4X mode at this resolution.
Now we’re starting to see the pack separate. Look at the GeForce4 Ti cards go! The GF4 Ti 4400, with the same memory and clock speeds, delivers double the frame rate of the Radeon 8500. Yes, NVIDIA’s multisampling AA method is more efficient than ATI’s supersampling, but this is amazing.
At this resolution, only NVIDIA’s GF4 Ti cards with 128MB can handle 4X AA mode. Oddly, the Radeon 8500/LE 128MB cards won’t do it; they drop back into 2X AA as if they didn’t have enough frame buffer memory. Perhaps the Radeon 8500’s drivers just haven’t been updated to use the extra RAM for anti-aliasing at higher resolutions yet.
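A back-of-envelope memory count makes the frame buffer theory plausible. The layout details below (32-bit color, a 32-bit Z buffer, a 4X supersampled render target) are my assumptions; drivers will differ:

```cpp
#include <cstdio>

// Rough frame buffer footprint for 4X supersampling at 1600x1200.
int main() {
    const double mb      = 1024.0 * 1024.0;
    const double samples = 1600.0 * 1200.0 * 4;           // 4X supersampled target
    const double color   = samples * 4 / mb;              // ~29.3MB of color
    const double zbuf    = samples * 4 / mb;              // ~29.3MB of Z
    const double display = 2 * 1600.0 * 1200.0 * 4 / mb;  // front+back, ~14.6MB
    printf("~%.0fMB before a single texture is uploaded\n",
           color + zbuf + display);                       // ~73MB
    return 0;
}
```

Call it roughly 73MB before textures. That won’t fit on a 64MB card, but it fits easily in 128MB, which makes the 128MB Radeons’ refusal look like a driver limitation rather than a hardware one.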
Anyhow, look how fast the Ti 4600 is. In 2X AA mode, it’s nearly as fast as a Radeon 8500 without antialiasing. NVIDIA has produced an AA monster.
SPECviewperf

I’ve about had enough benchmarks, but we’ll add one more to the list for the sake of completeness. Viewperf’s workstation-class tests use OpenGL for both wireframe and solid rendering tasks, which makes them quite different from our gaming benchmarks.
It’s a back-and-forth battle between the ATI and NVIDIA cards, which is good news for ATI. The Radeon 8500 cards have made great strides in OpenGL performance since last time we tested a Radeon 8500, when we used an older driver set. Advanced Visualizer (AWadvs) and MedMCAD show substantial performance improvements, and if you’re planning on using ProCDRS for industrial design, get a Radeon 8500. (Well, OK, you might want to get a workstation-class card, instead, but you see what I’m saying.)
Conclusions

I’ve been around 3D graphics on the PC since the beginning, and it’s hard not to like what we’re seeing here, from top to bottom. Even the slowest card of the bunch, the $75 Radeon 7500, is breakneck fast at mid-level screen resolutions.
The Radeon 7500 and GF4 MX can’t always cut it when all of the next-gen graphics features are turned up. Then again, they performed pretty well in the Comanche 4 demo, so maybe vertex and pixel shaders won’t be as necessary as we had once thought. I wouldn’t bank on that, though. If you can afford it, move up to a card with real vertex and pixel shaders.
One of the most striking results of our tests is how much performance NVIDIA is able to wring out of a given hardware spec. The GeForce4 MX card has a lower clock speed and slower RAM than the Radeon 7500, but it outperforms the 7500 more often than not, especially in fill rate-limited scenarios like high resolutions. You’d think that with faster RAM, the Radeon 7500 would be faster in such situations. Similarly, the GF4 Ti 4400 matches the Radeon 8500 clock-for-clock, but the Ti 4400 is faster nearly across the board.
Obviously, NVIDIA’s bandwidth conservation methods, like the GF4 line’s crossbar memory controller and occlusion detection abilities, are much more effective than ATI’s. The GeForce3 has the same basic set of bandwidth-saving features as the GF4 line, but those features are greatly improved in the GeForce4 Ti. The progress is most evident when antialiasing is in use. The GeForce4 Ti cards are practically magic when it comes to antialiasing. I thought the line about turning on 2X AA and not seeing any performance drop was just a marketing spiel. Turns out that spiel is mighty close to the truth.
The GeForce4 Ti series doesn’t bring a whole lot of new 3D technology to the scene, but the refinements since the GeForce3 deliver gobs of real-world performance. VisionTek’s Ti 4600 card pretty much blew away everything else in our tests. It’s really no contest. If you want the fastest card on the planet, get a Ti 4600.
That said, ATI has some very solid products in the Radeon 8500 series. Right now, there’s a gaping hole in the middle of NVIDIA’s product lineup, because the GF4 MX 460 is apparently stillborn (I challenge you to find a GF4 MX 460 for sale anywhere). Its apparent replacement, the GF4 Ti 4200, isn’t quite here yet. ATI forced NVIDIA’s hand with the Radeon 8500LE 128MB, which is a bargain at $199 or less. Even after the Ti 4200 cards arrive, the Radeon 8500LE 128MB will be an attractive card for the price. The extra RAM makes the Radeon 8500LE 128MB perform about like a Radeon 8500 64MB, despite the 25MHz clock speed gap. ATI has made notable progress on its drivers, and the Radeon 8500 GPU is still more advanced, in some ways, than even the GeForce4. I’m finding myself recommending ATI cards to friends and readers who want a good all-around card at a decent price.
As for the GF4 Ti 4200, it should hit store shelves near the end of April. The cards will come in two basic configurations: a 250MHz core and 64MB of DDR memory at 500MHz for around $179, and a 250MHz core paired with 128MB DDR memory at 444MHz (don’t ask) for about $199. Once those cards arrive, the Radeon 8500LE will have some real competition. I expect the prices of both the LE and the Ti 4200 to drop pretty quickly.
So that’s the story on performance. We’ll explore some of the intricacies of edge and texture antialiasing in a future review, and we’ll revisit the performance scene once we have a GF4 Ti 4200 card in our hot little hands, which should be very soon.