NVIDIA’s GeForce4 chips with AGP 8X

THE TWO NEW graphics chips from NVIDIA we’re looking at today, previously code-named NV18 and NV28, ought to look mighty familiar. They’re essentially chips from the GeForce4 lineup—both MX and Ti—with a new AGP interface grafted on. Oddly enough, these new AGP 8X-capable GeForce4 chips only come in two flavors: GeForce4 MX 440 with AGP 8X and GeForce4 Ti 4200 with AGP 8X. (Catchy names, eh?)

NVIDIA has also taken this opportunity to tweak the clock speeds of these GeForce4 chips, so they should be a little bit faster overall, even without AGP 8X support.

By my count, this newest GeForce4 MX is the eighteenth incarnation of the original GeForce GPU, and the new Ti 4200 is the seventh GeForce3-derived product. Not counting Quadros. Mighty familiar, indeed.

So the questions are: What does the move to AGP 8X get you? What about the additional clock speed? Can these latest revisions of NVIDIA’s aging GPUs keep pace with ATI’s new Radeons? Keep reading to find out.

The skinny on the GeForce4 chips with AGP 8X
To understand these new revisions of the GeForce4 line, you’ll need to understand the previous chips. We previewed the GeForce4 chips for you when they were launched, and we followed up with this review of a GeForce4 MX 440-based product. Essentially, the GeForce4 MX 440 is a GeForce2 MX chip on steroids.

OK, maybe that’s not fair.

The GeForce4 MX 440 is more like a GeForce2 hopped up on a cocktail of steroids, Xanax, caffeine, Metabolife, and some sort of fish paralyzer. The GeForce4 MX has two pixel pipelines and a transform and lighting unit essentially unchanged from the GeForce2, but it packs a revamped memory interface, improved antialiasing, and reworked video- and display-oriented bits and pieces. The GF4 MX also runs at a much higher clock speed. In the case of the original GF4 MX 440, the GPU ran at 270MHz with a 400MHz memory clock. The new “with AGP 8X” model runs at 275MHz with memory at 512MHz.

So the new rev of the MX 440 should be a little faster than the last one, especially when it comes to running apps fluidly at higher resolutions. Beyond that, it’s still a DirectX 7-era graphics chip, with none of the new abilities of DX8- or DX9-class chips, like vertex shaders or floating-point color datatypes. That puts the MX 440 in a tenuous position, because it has to compete with ATI’s Radeon 9000, a DX8-class chip with real vertex and pixel shaders. When it comes down to it, the Radeon 9000 ought to be faster and more capable when running next-gen games.

The MX 440 8X reference card doesn’t need active cooling

However, leaving out all those features does give the GF4 MX one advantage: it’s a very small chip, so it’s cheap to make and easy to cool. In fact, NVIDIA’s reference card for the GF4 MX 440 with AGP 8X has only passive cooling—no fan needed. I’d expect many of the retail cards to arrive with active cooling in order to appeal to overclockers, but the MX 440 does indeed work without a fan.

The GeForce4 Ti 4200 with AGP 8X is a different story. You can read our review of the GF4 Ti 4200 to familiarize yourself, if you somehow missed the chip that’s dominated the middle of the graphics market for the past six months. This chip needs active cooling, and NVIDIA hasn’t bothered to increase the stock clock speed on the chip.

The GeForce4 Ti 4200 8X reference card looks very much like the original 4200 card

They have, however, bumped up the stock memory speed. Previously, 64MB versions of the Ti 4200 came with 500MHz memory, while 128MB versions came with memory clocked at 444MHz. Our new “GeForce4 Ti 4200 with AGP 8X” reference card arrived with 128MB of memory running at 512MHz. As with the MX 440, the extra memory speed should help the chip run more smoothly at higher resolutions or in games with more intensive texturing and rendering.

The GF4 Ti 4200, of course, is a true DirectX 8-class chip with dual vertex shaders and real pixel shaders. It’s a GeForce3 that’s been bonging Miracle-Gro.


Introducing AGP 8X
The AGP bus connects the graphics card to the rest of the computer, and it provides a speedy, dedicated connection for transferring all the data a GPU needs to make movie magic—vertex data to describe scenes, texture data to give surfaces their essence, and instructions about how to manipulate these things.

At its heart, AGP 8X is a simple step up from AGP 4X: it provides twice the bandwidth of AGP 4X. Under the surface, things are a little more complex. Both AGP 4X and 8X (along with the older 1X and 2X standards) have a common clock rate of 66MHz, but AGP 4X “strobes” four times per common clock cycle to provide an effective data rate of 266MHz. AGP 8X strobes eight times per common clock cycle to achieve a 533MHz data rate. Since AGP is a 32-bit bus, the effective bandwidths of AGP 4X and 8X are 1.06GB/s and 2.1GB/s, respectively.
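The arithmetic behind those bandwidth figures is simple enough to check. Here’s a quick sketch in Python; the 66MHz common clock, strobe multipliers, and 32-bit bus width all come from the paragraph above, and the small differences from the quoted 1.06GB/s and 2.1GB/s figures are just rounding:

```python
# AGP bandwidth = common clock x strobe multiplier x bus width (32 bits = 4 bytes).
COMMON_CLOCK_MHZ = 66.67   # AGP base clock, nominally 66MHz
BUS_WIDTH_BYTES = 4        # AGP is a 32-bit bus

def agp_bandwidth_gbps(strobe_multiplier):
    """Peak AGP bandwidth in GB/s for a given strobe multiplier (1, 2, 4, or 8)."""
    data_rate_mhz = COMMON_CLOCK_MHZ * strobe_multiplier  # effective transfers per second, in millions
    return data_rate_mhz * BUS_WIDTH_BYTES / 1000.0

for mode in (1, 2, 4, 8):
    print(f"AGP {mode}X: {agp_bandwidth_gbps(mode):.2f} GB/s")
# Prints roughly 1.07 GB/s for 4X and 2.13 GB/s for 8X.
```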

An intimidating drawing from the AGP 3.0 spec paper

The AGP 3.0 standard, which defines AGP 8X mode, does several things. It describes the changes required to double AGP bandwidth, of course. The standard also removes some (apparently unneeded) features and complexity from the previous AGP 2.0/4X standard. The AGP 3.0 spec defines how chips that implement AGP 8X can maintain backward compatibility, as well. (All the core-logic chipsets that currently support AGP 8X have “Universal” AGP implementations capable of supporting both AGP 2.0 and 3.0 signaling.) Finally, AGP 3.0 adds some new capabilities, including isochronous mode operation for improved data streaming and—get this—support for multiple AGP 3.0 ports, with multiple AGP devices per port, in a single system.

These improvements may well be needed to drive next-generation GPUs producing cinematic-quality rendering in real time. The more immediate question is: what can AGP 8X do for a GeForce4 chip running current games? That’s tough to say, because nearly every game out there is written and tuned for graphics cards with 64MB of memory or less. Game developers have to keep a working set of data that will fit into most graphics cards’ local memory, so their games won’t choke on anything but the very fastest high-end systems. As a result, current games don’t seem likely to show off the benefits of shuffling data back and forth from main memory faster via AGP 8X.

That said, we’re going to try our best to see how AGP 8X affects performance in these new GeForce4 chips. Let’s see what we find.


Our testing methods
Because new AGP standards often present compatibility problems, we tested for both compatibility and performance. We tested the new GeForce4 chips with AGP 8X against their older, AGP 4X revisions, and we threw in some ATI products where appropriate. In our compatibility tests, we used an ATI Radeon 9700, which is currently ATI’s only AGP 8X-capable card. We included a Radeon 9000 Pro in our performance tests as a foil for the GeForce4 MX 440. (ATI doesn’t yet have a direct competitor for the GeForce4 Ti 4200, but the Radeon 9500 should be very close to release.)

We used two different core-logic chipsets for compatibility tests: the SiS 648 and VIA’s P4X400. These are the only two AGP 8X-capable chipsets commercially available. We do have an nForce2 motherboard in house, but it’s an engineering sample, and hardly a suitable candidate for compatibility tests.

To isolate AGP 8X as a variable, we tested the new GeForce4 cards at the same clock speeds as their predecessors. We also tested the new GF4 cards at their new stock clock speeds.

As ever, we did our best to deliver clean benchmark numbers. Tests were run at least twice, and the results were averaged.

Our test systems were configured like so:

                 SiS 648                             VIA P4X400
Processor        Pentium 4 2.53GHz                   Pentium 4 2.8GHz
Front-side bus   533MHz (133MHz quad-pumped)
Motherboard      Abit SR7-8X                         VIA P4PB 400
North bridge     SiS 648                             VT8754
South bridge     SiS 963                             VT8235
Chipset drivers  SiS AGP 1.11                        4-in-1 4.43
Memory size      512MB (1 DIMM)                      512MB (1 DIMM)
Memory type      Corsair XMS3200 PC2700 DDR SDRAM
Sound            Creative SoundBlaster Live!
Storage          Maxtor DiamondMax Plus D740X 7200RPM ATA/133 hard drive
OS               Microsoft Windows XP Professional
OS updates       Service Pack 1

We used the VIA P4PB 400-based system for the majority of our testing. The SiS 648 board was used only for testing compatibility.

The test systems’ Windows desktops were set at 1024×768 in 32-bit color at an 85Hz screen refresh rate. Vertical refresh sync (vsync) was disabled for all tests.

We used the CATALYST 7.76 drivers for the ATI cards and NVIDIA’s Detonator 40.72 drivers for the NVIDIA cards.

We used the following versions of our test applications:

All the tests and methods we employed are publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.


Compatibility tests
Our compatibility tests were pretty simple. We ran each card through our series of benchmark programs on both of our test platforms, based on the SiS 648 and VIA P4X400 chipsets. If the system didn’t crash, freak out, or spit the card out of the AGP slot, we considered the test a success.

None of the configurations gave us any trouble with basic installation—just make sure you have the latest AGP drivers for your motherboard’s chipset. For the most part, compatibility was very good. Everything ran well in our VIA P4X400-based board. The GeForce4 chips with AGP 8X were quite stable in the SiS 648 system, as well. But the Radeon 9700 did not get along with the SiS 648 board. We even tried installing and fiddling with the latest BIOS provided by Abit, which features some settings especially geared toward fixing Radeon 9700-related compatibility problems. Nothing helped. The Radeon 9700 locked up hard in Unreal Tournament 2003, and it had occasional problems running other 3D apps.

The Radeon 9000 worked just fine in our SiS 648-based board, so the problem is definitely an AGP 8X-related SiS 648/ATI problem. We’ve heard some rumblings about the causes of these problems, and I’m not sure where exactly to lay the blame. For now, however, I’d avoid the combination of a SiS 648 and an ATI AGP 8X graphics card.

Fill rate tests
We’re going to do a theory/practice thing here next. It’s just too exciting, isn’t it? The higher clock speeds on the new AGP 8X GeForce4 cards change the landscape a little bit, so let’s whip out our trusty chip table to see how things compare—in theory.

                         Core clock  Pixel      Peak fill rate  Texture units  Peak fill rate  Memory clock  Memory bus    Peak memory
                         (MHz)       pipelines  (Mpixels/s)     per pipeline   (Mtexels/s)     (MHz)         width (bits)  bandwidth (GB/s)
GeForce4 MX 440          270         2          540             2              1080            400           128           6.4
GeForce4 Ti 4200 128MB   250         4          1000            2              2000            444           128           7.1
GeForce4 Ti 4200 64MB    250         4          1000            2              2000            500           128           8.0
Radeon 8500LE            250         4          1000            2              2000            500           128           8.0
GeForce4 MX 440 8X       275         2          550             2              1100            512           128           8.2
GeForce4 Ti 4200 8X      250         4          1000            2              2000            512           128           8.2
Radeon 9000 Pro          275         4          1100            1              1100            550           128           8.8
GeForce4 MX 460          300         2          600             2              1200            550           128           8.8
GeForce4 Ti 4400         275         4          1100            2              2200            550           128           8.8
Radeon 8500 128MB        275         4          1100            2              2200            550           128           8.8
GeForce4 Ti 4600         300         4          1200            2              2400            650           128           10.4

The GeForce4 Ti 4200 with AGP 8X now has 1.1GB/s more memory bandwidth than previous incarnations with 128MB memory did. That’s pretty simple. The new Ti 4200 should be faster in situations where the old cards were memory bandwidth limited.

The trickier matchup is the new GF4 MX 440 versus the Radeon 9000 Pro. To keep these value chips cheap, both companies cut down the 3D pipelines. ATI removed one of the texture units in each of the Radeon 9000’s four pipes, while NVIDIA cut the number of pipelines down to two. The result is that, at the same 275MHz core clock speed, these chips have the same peak fill rate with multitexturing. They have comparable memory bandwidth, too, so it’s very close on paper.
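The peak figures in the table above boil down to two products: fill rate is core clock times pipelines times texture units per pipeline, and memory bandwidth is effective memory clock times bus width. A quick sketch of the arithmetic, using the MX 440 8X and Radeon 9000 Pro numbers from the table:

```python
def peak_fill_mtexels(core_mhz, pipelines, tex_units_per_pipe):
    """Peak multitextured fill rate in Mtexels/s."""
    return core_mhz * pipelines * tex_units_per_pipe

def mem_bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s (effective DDR clock x bus width in bytes)."""
    return mem_clock_mhz * (bus_width_bits // 8) / 1000.0

# GeForce4 MX 440 8X: 275MHz core, 2 pipes x 2 texture units, 512MHz memory, 128-bit bus
print(peak_fill_mtexels(275, 2, 2))   # 1100 Mtexels/s
print(mem_bandwidth_gbps(512, 128))   # 8.192 -> the table's ~8.2 GB/s

# Radeon 9000 Pro: 275MHz core, 4 pipes x 1 texture unit, 550MHz memory, 128-bit bus
print(peak_fill_mtexels(275, 4, 1))   # 1100 Mtexels/s -- a dead heat on paper
print(mem_bandwidth_gbps(550, 128))   # 8.8 GB/s
```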

Let’s put the theory in practice using 3DMark’s fill rate tests.

These numbers aren’t quite up to the theoretical peaks, but they show generally the sort of relative performance we’d expect. The GeForce4 MX chips don’t do so well in the single-texturing test, because they have only two pixel pipelines. However, multitextured performance is usually more important, anyhow.

Here the MX 440 runs much closer to the Radeon 9000 Pro. Both of the new GF4 cards benefit from their higher clock speeds. Also, in both tests, AGP 8X offers a slight performance advantage over AGP 4X at the same clock speed.


Comanche 4
Now we’ll move into our real gaming tests and see how these new cards compare.

As the display resolution increases, we can see a slight advantage for the AGP 8X cards, but it’s pretty minuscule. Let’s plot this data another way to get a visual sense of what’s going on.

AGP 8X doesn’t do much, but the clock speed bumps do. The higher core and memory clocks make the GF4 MX 440 more competitive with the Radeon 9000 Pro, but the R9000’s pixel and vertex shaders probably give it a bit of an edge at low and intermediate resolutions.


Unreal Tournament 2003
UT2003 has some of the largest textures and most spectacular effects of any game around. Will it lean on the AGP bus hard enough to show us a real difference with AGP 8X?

The Ti 4200 again shows a slight improvement with AGP 8X, but the MX 440 is actually slower, clock for clock, with AGP 8X. Let’s see if the pattern holds in UT2003’s botmatch test.

The pattern does hold, but the good news is that the clock speed increase more than offsets this strange effect at higher resolutions.


Serious Sam SE
With Serious Sam SE, I took the liberty of using the game’s “Extreme Quality” add-on in an attempt to stress the AGP bus a little. Using this add-on means scores from different basic 3D chip architectures won’t be comparable. For instance, the Radeon 9000 can’t do anisotropic filtering with trilinear filtering, so its scores will be artificially high. The GF4 Ti 4200 cards are actually slower here because they are more capable.

So the main use of the scores below should be to compare AGP 8X with AGP 4X, and to see the effect of clock speed increases on a chip.

Well, I’ll let you draw your own conclusions there. Obviously, with “Extreme Quality” in action, Serious Sam SE doesn’t benefit much from minor clock speed changes or a faster AGP bus.


3DMark2001 SE

Without pixel and vertex shaders, the MX 440 can’t keep up with the Radeon 9000 Pro in 3DMark. AGP 8X seems to hurt performance for both GeForce4 cards, especially at lower resolutions.


Workstation-class applications
Desperately searching for differences between AGP 4X and 8X, I ran the Ti 4200 cards through SPECviewperf to see if any of its tests might show us something interesting.

Well, I suppose that’s interesting. The AGP 4X card is markedly faster in four of the six tests. Only the last test, Unigraphics, benefits appreciably from AGP 8X.


We saw pretty much what we expected out of the benchmarks with AGP 8X. The faster AGP mode just isn’t stressed by current games. I was a little surprised to see the AGP 8X cards turning in lower scores than the AGP 4X cards at the same clock speeds. However, that’s probably to be expected. AGP 8X implementations are not yet mature—especially, I suspect, in the chipset and GPU drivers that glue everything together.

The AGP 3.0 spec makes some changes that could compromise performance somewhat. For instance, the “long” transaction type present in the AGP 2.0 spec is removed in 3.0. With AGP 8X, the GPU must split up its request for larger data transfers into 64-byte requests. (I suspect this change was made to enable AGP 3.0’s isochronous transfer mode.) The additional overhead associated with breaking transfers down into multiple requests could slow texture uploads, for instance.
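To get a rough feel for what that splitting means, here’s a back-of-the-envelope sketch. The 64-byte request size is from the AGP 3.0 change described above; the texture size is just an illustrative example:

```python
import math

REQUEST_SIZE_BYTES = 64  # AGP 3.0 splits large transfers into 64-byte requests

def requests_needed(transfer_bytes, request_size=REQUEST_SIZE_BYTES):
    """Number of bus requests needed to move a transfer of the given size."""
    return math.ceil(transfer_bytes / request_size)

# A single uncompressed 512x512 texture at 32 bits per pixel is 1MB...
texture_bytes = 512 * 512 * 4
# ...which takes 16,384 separate requests under AGP 3.0's 64-byte rule.
print(requests_needed(texture_bytes))  # 16384
```

Each of those requests carries some overhead, which is one plausible place for the small clock-for-clock deficits we measured to come from.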

But I’m just speculating, and probably doing it poorly. I have to wonder if AGP 8X will ever matter much to GeForce4-class GPUs, especially the MX 440 with its aging GeForce core. Who knows? Perhaps AGP 8X will help older cards stave off obsolescence for a few more months when a new wave of games with larger textures arrives. Perhaps.

But at the end of the day, it doesn’t matter much. These products aren’t exactly world-beaters coming out of NVIDIA. They are just another spin of the GeForce4 chips intended to slot into NVIDIA’s lineup below the upcoming NV30 chip. With these chips, NVIDIA delivers a key “checklist” feature for big PC manufacturers to throw around on their spec sheets. That’s the reason these chips exist.

As far as I’m concerned, the MX 440 isn’t much more compelling with its new clock speeds and AGP 8X. I’d still rather have a Radeon 9000 Pro. The R9000 has more advanced features, better image quality, and twice the pixel pipes of the MX 440. This one is a no-brainer.

However, the GeForce4 Ti 4200 with AGP 8X looks pretty good. I’m pleased to see the increase in memory speed and the corresponding jump in performance, especially in 128MB configurations. Also, the GeForce4 Ti GPU stands to take better advantage of AGP 8X at some point down the road. Assuming the price is no higher for the AGP 8X version of the GF4 Ti 4200, these are all welcome improvements. But will the Radeon 9500 spoil the party? 

Comments closed

    • Anonymous
    • 17 years ago

    i liked the atricle…
    but to the average user, what do i buy? Not only that, when is Nvidia coming out with its new chip? jan of 03? should i wait until then to buy one? Why does Radeon chips always have a hard time with cmpatability?

    • Anonymous
    • 17 years ago

    so were does the GeForce4 Ti 4600 fit in is it a better profmer than the 8x card

    • Khopesh
    • 17 years ago

    Sorry just had to post to get that last post off the top, if we’re lucky no one will read it. 🙂

    • Anonymous
    • 17 years ago

    Why are the newest comments listed first?

    [quote]Q. Shouldn

    • Anonymous
    • 17 years ago

    I like the fact that you guys tried to determine how much of the speed gain was due to AGP8X. Unfortunately, the test methodology was flawed. If you want to do it right, you have to overclock the 4X cards to the same speed as the 8X cards, not downclock the 8X cards. Why? Because the cards may be hitting the 4X bandwidth limit at the higher clock speeds, but not at the lower speeds. Moreover, if it hits the bandwidth limit at either speed, the degree of difference will be lower at the lower clock speeds than at the higher speed. This happens for the same reason a P4 gains less from a 200MHz speed increase when paired with PC133 than it would when paired with PC1066.

    • Anonymous
    • 17 years ago

    And it has been slashdotted. Run away!

    • Freon
    • 17 years ago


    I’m guessing the 8X parts will displace the 4X models in production. Expect the normal Ti4200 and MX440 models to disappear only to be regurgitated as “SE” or “New And Improved!!11” models.

    • sativa
    • 17 years ago

    [quote] i guess we might as well except the implimentation of new, if not necessary, technology. [/quote]Sorry, just woke up from a nap, meant to say ‘accept’ lol

    • Anonymous
    • 17 years ago

    AG20 here

    Thanks Damage, I will stay tuned. And good review as always. This is my favourite hardware site on the net. 🙂


    • sativa
    • 17 years ago

    well even though we don’t really see the performance benefits of AGP 8x right now, 4x eventually has to be a limitation.

    And if the costs of the cards are the same, i guess we might as well except the implimentation of new, if not necessary, technology.

    • R2P2
    • 17 years ago

    …and his conclusion will be: It still blows. But I’ll read the article anyway. 🙂

    • Damage
    • 17 years ago

    I’ll address AGP texture download performance separately soon.

    • R2P2
    • 17 years ago

Serious Magic’s benchmark was the one that tested transfers [i]from[/i] the video card, whereas normal benchmarks only test transfers [i]to[/i] the video card. There was an article a couple of months ago about why one might care about that.

    • Anonymous
    • 17 years ago

    So, what is Serious Magic?

    • R2P2
    • 17 years ago

    As of 7:30AM ATL this morning, Serious Magic’s benchmark was listed. I swear, it was right above Serious Sam: SE. (Yes, I thought Serious Magic then Serious Sam was funny too) It must have been corrected after I pointed it out.

    • Anonymous
    • 17 years ago

    Do these 8X cards cost more $$ than the 4X counterparts?

    The take home message here seems to be if you want the max performance, spend a bit of time researching which card has the best memory overclockability and forget the 8X AGP.

    • Anonymous
    • 17 years ago


    I didn’t keep up because I saw enough to know AGP speed is pretty meaningless. I think the most I ever saw was a few percent at 640x480x16. Whoopteefriggindo.

    I doubt there is even a 10% difference between 2x and 4x at high quality settings in UT2003 on a Ti4600.

    • Anonymous
    • 17 years ago

    Nothing will ever touch VLB. Locked at processor speed…

    • Unanimous Hamster
    • 17 years ago

    … and while we’re at it, let’s do VESA local bus and ISA as well.

    • R2P2
    • 17 years ago

    AG14 — If it’s “been a while” since you saw an AGP 2X vs 4X comparison, then I suspect the benchmarks that were used for that comparison probably didn’t use T&L as much as the newer ones do, so there wasn’t as much vertex data, etc. being thrown around as there is in some of the newer benchmarks. The extra data could have made the results look at bit different.

    So, you know what this calls for: An AGP 2X vs 4X vs 8X comparison with modern benchmarks. Heck, throw in PCI and AGP 1X, too.

    • droopy1592
    • 17 years ago

    OH NO!!! There can’t be problems with Nvidia drivers!!! Oh god, the world is going to end.

    My ATi 9700, up to this present time, has had no problems. It’s a great overclocker too!

    Just saying that Nvidia AND Ati have driver issues.

    • Anonymous
    • 17 years ago


    Probably because AGP speed doesn’t mean a heck of a lot unless you’re wastefully cramming vertex data over the bus (IOW, non-game benchmarks).
    And AGP texturing isn’t really needed consistantly with 64 and 128MB cards. Even when it is, 8X is still too slow. 8X just means it might stutter slightly less, which might be noticable, though.

    Last test I saw comparing 2x and 4x didn’t even show much of a difference. Granted that has been a while and I don’t really bother keeping up.

    • Ryu Connor
    • 17 years ago


    Ah, yes. The thread where people can’t figure out which driver is to blame. Look at this beauty for example.

    [quote]At this stage I am only recieving the BSOD with Battlefield 1942 and Hitman 2. Screen goes black but there is still a signal going to the monitor.

    Other games I play like Jedi Knight 2, quake 3 and Soldier Of Fortune 2 work perfectly.[/quote]

    Geez. D3D crashes, but OpenGL works. Quick, let’s blame the audio driver!


    Of course you’re free to reach your own conclusion.

    • Anonymous
    • 17 years ago

    Why were there no AA tests?

    Since AA requires a large portion of the VRAM to be allocated for the sample buffer that this would restrict space for textures, causing them to overflow into system memory, so wouldn’t this be the ideal test for AGP 8X?

    • Anonymous
    • 17 years ago

    Can anyone explain why 8X doesn’t help much in some of the tests? I assume it is because they dont benefit from the extra bandwith……


    • Unanimous Hamster
    • 17 years ago

    Zzzzzz zzzzzzzzz zzzzzzzzz …….

    /me wakes up
    /looks around

    What, NV30 ain’t here yet? Wait me up when it is.

    /me goes back to sleep

    • wesley96
    • 17 years ago

    Oh, and if you ask a 3DMark junkie what 1% would mean, that difference is so crucial. I know people live for increasing just 10 points (~0.1%). Seeing these results, I’m pretty sure none of the peeps would get these AGP8x cards. Hehe…

    • wesley96
    • 17 years ago

    [quote]You are talking about a difference of LESS THAN 1% which is much more likely from the different clock generators on each card than anything else. At most, 1-3% performance difference is meaningless and isn’t something you or anyone else could notice except via a benchmark.[/quote]
    I wouldn’t expect the clock generator be that far off, since we can, in fact, change the clock speed by the MHz. However, a minor datapath change can influence the performance, too, so I’m gonna hold the judgment until there are multiple products out to compare with.

    • R2P2
    • 17 years ago

    Serious Magic’s benchmark is on the list of tests, but it doesn’t seem to have actually been used. Did I miss something? The results couldn’t have been left out because they didn’t say anything interesting, because, well, neither did any other results.

    • Anonymous
    • 17 years ago

    Oh come on, as soon as system builders find out you cant play games on the nForce, they will flock to it. Super stabil until you try a game. perfect for the office.

    smell the sarcasm?

    • fejji
    • 17 years ago

    [q]AGP 8X seems to hurt performance for both GeForce4 cards, especially at lower resolutions. [/q]

    You are talking about a difference of LESS THAN 1% which is much more likely from the different clock generators on each card than anything else. At most, 1-3% performance difference is meaningless and isn’t something you or anyone else could notice except via a benchmark.

    Thanks nVidia for a NON-product and for f*#$ing the launch of the nForce 2 even more than the nForce 1. The curse of the 3dfx engineers continues! I hope they don’t count this as a new product or a refresh – wait I’ve got it – Q3 marketing refresh!

    • Forge
    • 17 years ago

    Radeons always spoil the party for NV fans. Sometimes the Radeon is a little early (9700), sometimes it’s a little late (8500), but it always keeps NV moving.

    Mid-range buyers should thank their lucky stars for the 8500-prompted Ti4200. Now all we need is for NV to realize that having a full and up-to-date feature set at lower clock speeds (Ti4200) is far more desirable to the gaming crowd than old tech hopped up on new processes (GF4 MX).
