A look at NVIDIA’s GeForce4 chips

JUST ABOUT ACCORDING TO schedule, NVIDIA has unveiled a top-to-bottom refresh of its entire desktop graphics product line. The new NVIDIA chips, dubbed GeForce4 Ti and GeForce4 MX, bring with them a number of new features and better performance, which is always a good thing. However, they do little to advance the state of the art in 3D graphics, nor has the GeForce4 Ti unambiguously recaptured the technology lead from ATI’s Radeon 8500.

As always, the GeForce4 chips have been launched with much fanfare—NVIDIA knows how to throw a mean party—and with a torrent of new “marketing terms” to help describe the chip’s technology to the public. And as always, our analysis of the GeForce4 will go beyond the marketing terms to give you the skinny on the GeForce4 scene. Read on to find out what’s new—and what’s not—in NVIDIA’s latest GPUs.

The modular approach
NVIDIA’s rise to the top in graphics has been fueled by the company’s aggressive six-month product development cycle. Typically, NVIDIA launches a truly new product once a year, with a minor refresh (usually involving higher clock speeds) at six months. One reason this approach has been successful is that NVIDIA’s chip designs are modular, so functional units can be reused from one chip to the next. Lately, NVIDIA has taken this approach to an extreme, integrating and interchanging a number of technologies across the GeForce3, the Xbox GPU, and the nForce core logic chipset. Doing so has allowed the company to introduce a number of extremely complex chips in relatively short order.

By my watch, it’s time for NVIDIA to launch another truly new product. Instead, NVIDIA has elected to introduce two brand-new chips, and though they share some technology, they’re fundamentally very different from one another. Naturally, the GeForce4 MX is the low-end chip, and the GeForce4 Ti is the new high-end chip. These new chips do incorporate some new and improved functional units, but they’re not what you might be expecting from NVIDIA a year after the launch of the GeForce3.

We’ll look at each chip in some detail to see which bits have been recycled from previous products. Before we dive in deep, however, we’d better pull out the trusty ol’ chip chart to see how these chips stack up in terms of basic pixel-pushing power.

Here’s how the hardware specs match up:

| | Core clock (MHz) | Pixel pipelines | Peak fill rate (Mpixels/s) | Texture units per pixel pipeline | Peak fill rate (Mtexels/s) | Memory clock (MHz) | Memory bus width (bits) | Peak memory bandwidth (GB/s) |
|---|---|---|---|---|---|---|---|---|
| GeForce2 GTS | 200 | 4 | 800 | 2 | 1600 | 333 | 128 | 5.3 |
| Radeon 64MB DDR | 183 | 2 | 366 | 3 | 1098 | 366 | 128 | 5.9 |
| GeForce3 Ti 200 | 175 | 4 | 700 | 2 | 1400 | 400 | 128 | 6.4 |
| GeForce3 | 200 | 4 | 800 | 2 | 1600 | 460 | 128 | 7.4 |
| Radeon 7500 | 290 | 2 | 580 | 3 | 1740 | 460 | 128 | 7.4 |
| GeForce2 Ultra | 250 | 4 | 1000 | 2 | 2000 | 460 | 128 | 7.4 |
| GeForce3 Ti 500 | 240 | 4 | 960 | 2 | 1920 | 500 | 128 | 8.0 |
| Radeon 8500LE | 250 | 4 | 1000 | 2 | 2000 | 500 | 128 | 8.0 |
| GeForce4 MX 460 | 300 | 2 | 600 | 2 | 1200 | 550 | 128 | 8.8 |
| Radeon 8500 | 275 | 4 | 1100 | 2 | 2200 | 550 | 128 | 8.8 |
| GeForce4 Ti 4600 | 300 | 4 | 1200 | 2 | 2400 | 650 | 128 | 10.4 |

Now remember, as always, that the fill rate (megapixels and megatexels) numbers above are simply theoretical peaks. The other peak theoretical number in the table, memory bandwidth, will often have a lot more to do with a card’s actual pixel-pushing power than the fill rate numbers will. ATI and NVIDIA have implemented some similar tricks to help their newer chips make more efficient use of memory bandwidth, so newer chips will generally outrun older ones, given the same amount of memory bandwidth.
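If you want to see where the numbers in the chart come from, the arithmetic is simple enough to sketch in a few lines of Python. The function below is ours, purely for illustration, with the GF4 Ti 4600 and GF4 MX 460 rows used as examples:

```python
# Back-of-the-envelope math behind the "theoretical peak" columns above.
# Real cards rarely hit these ceilings; bandwidth limits and overdraw intervene.

def peak_specs(core_mhz, pipelines, tex_units_per_pipe, mem_mhz_effective, bus_width_bits):
    pixel_fill = core_mhz * pipelines                            # Mpixels/s
    texel_fill = pixel_fill * tex_units_per_pipe                 # Mtexels/s
    bandwidth = mem_mhz_effective * (bus_width_bits / 8) / 1000  # GB/s
    return pixel_fill, texel_fill, bandwidth

# GeForce4 Ti 4600: 300MHz core, 4 pipes, 2 texture units per pipe,
# 650MHz effective DDR memory on a 128-bit bus.
print(peak_specs(300, 4, 2, 650, 128))   # (1200, 2400, 10.4)

# GeForce4 MX 460: same 300MHz core, but only 2 pipes.
print(peak_specs(300, 2, 2, 550, 128))   # (600, 1200, 8.8)
```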

In fact, the chart above captures so little of the real story that we’ll augment it with another chart showing a few key features of most newer GPUs.

| | Vertex shaders | Pixel shaders? | Textures per rendering pass | Z compression? | Hardware occlusion culling? |
|---|---|---|---|---|---|
| GeForce2 GTS/Pro/Ultra/Ti | 0 | N | 2 | N | N |
| Radeon/Radeon 7500 | 0 | N | 3 | Y | Y |
| GeForce3 | 1 | Y | 4 | Y | Y |
| GeForce4 MX | 0 | N | 2 | Y | Y |
| Radeon 8500/LE | 2 | Y | 6 | Y | Y |
| GeForce4 Ti | 2 | Y | 4 | Y | Y |

That’s by no means everything that’s important to know about these chips, but it’s what I could squeeze in with confidence about the precise specs. Also, the implementations of many of these features vary, so the fact that both the GeForce4 and the Radeon 8500 have “hardware occlusion culling” doesn’t say much all by itself. The GF4’s culling might be much more effective, or vice versa.

Still, this chart is revealing. As you can see, there are a few surprises for those of us familiar with the GeForce3. The GeForce4 Ti includes a second vertex shader unit, while the GeForce4 MX has no vertex shader at all.

What does it all mean? Let’s take it one chip at a time.

 

The GeForce4 Ti
NVIDIA describes the features of the “Titanium” version of the GeForce4 with a number of marketing terms, including Lightspeed Memory Architecture II, nFinite FX II Engine, Accuview AA, and nView dual-display support. To better understand how these pieces fit into the whole, have a look at NVIDIA’s diagram of the GeForce4 Ti chip:

By and large, this chip is very similar to the GeForce3. The most important changes are simple ones: the chip now runs at speeds as high as 300MHz, and memory runs as fast as 325MHz (or 650MHz DDR). GeForce4 cards will come with memory chips in a ball-grid array (BGA) package, which should cut costs and improve heat dissipation. ATI’s new 128MB Radeon 8500 cards will have BGA memory, too. It’s hip.

GeForce4 Ti cards will come in several flavors. The GF4 Ti 4600 will feature 128MB of memory and the aforementioned 300/325MHz core/memory clock speed. At these speeds, the Ti 4600 cards will be unquestionably faster, in terms of fill rate, than ATI’s top-end Radeon 8500.

GF4 Ti 4400 cards will be clocked slower, but NVIDIA’s not stating exactly how much slower; it may be up to the discretion of card makers.

The GF4 Ti chip itself weighs in at 63 million transistors, which is hefty by any measure. Still, the GeForce3 was 57 million transistors, which ain’t exactly small. (An Athlon XP processor, for reference, is only about 37.5 million transistors.) Like the GeForce3, the GeForce4 Ti is manufactured on a 0.15-micron fab process.

So what did NVIDIA add with those six million new transistors? Nothing radically new. NVIDIA’s engineers have tweaked a few things here and there, however. With some pointed questioning and a little work, we’ve been able to determine what most of the changes are.

  • Vertex shaders — The vertex shader unit itself is the same as the one in the GeForce3, but the GF4 Ti includes two of ’em. That puts the GF4 Ti on par with the Xbox GPU and ATI’s Radeon 8500. We’ve already seen how a second vertex shader can help: the Radeon 8500 is an absolute beast in 3DMark 2001.

    Also, since the GeForce3’s fixed-function T&L—useful for running apps written to take advantage of the GeForce/GeForce2-class T&L engine—is implemented as a vertex program, the GeForce4 Ti ought to be able to accelerate fixed-function T&L very well.
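To see why that matters, here’s a rough conceptual sketch (in Python, with made-up names; this is not NVIDIA’s microcode): fixed-function T&L boils down to a short per-vertex routine, so hardware that runs vertex programs quickly will chew through legacy T&L workloads just as happily, and a second vertex unit simply works on two vertices at once.

```python
import numpy as np

# Conceptual sketch only: GeForce/GeForce2-class fixed-function T&L expressed
# as a per-vertex "program". The names here are illustrative, not NVIDIA's.
def fixed_function_tnl(position, normal, mvp, light_dir, diffuse):
    clip_pos = mvp @ np.append(position, 1.0)       # transform to clip space
    n_dot_l = max(float(normal @ light_dir), 0.0)   # one directional light
    return clip_pos, diffuse * n_dot_l              # lit vertex color

# With two vertex units, the chip can run a routine like this on two
# vertices in parallel, which is why legacy T&L titles benefit too.
```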

  • Pixel shaders — NVIDIA’s pixel shaders are perhaps most in need of improvement. Most of us are familiar with the controversy over pixel shader versions between DirectX 8 and 8.1. ATI lobbied for some changes to the pixel shader API to better support the Radeon 8500, and NVIDIA countered that ATI’s “new” pixel shader versions didn’t add any real functionality.

    However, ATI has since demonstrated some convincing advantages of its pixel shader implementation. I keep going back to an old .plan update by John Carmack, written at the introduction of the GeForce3, that I think has since been overlooked. What he said is enlightening.

    Now we come to the pixel shaders, where I have the most serious issues. I can just ignore this most of the time, but the way the pixel shader functionality turned out is painfully limited, and not what it should have been.

    DX8 tries to pretend that pixel shaders live on hardware that is a lot more general than the reality.

    Nvidia’s OpenGL extensions expose things much more the way they actually are: the existing register combiners functionality extended to eight stages with a couple tweaks, and the texture lookup engine is configurable to interact between textures in a list of specific ways.

    I’m sure it started out as a better design, but it apparently got cut and cut until it really looks like the old BumpEnvMap feature writ large: it does a few specific special effects that were deemed important, at the expense of a properly general solution.

    Yes, it does full bumpy cubic environment mapping, but you still can’t just do some math ops and look the result up in a texture. I was disappointed on this count with the Radeon as well, which was just slightly too hardwired to the DX BumpEnvMap capabilities to allow more general dependent texture use.

    Enshrining the capabilities of this mess in DX8 sucks. Other companies had potentially better approaches, but they are now forced to dumb them down to the level of the GF3 for the sake of compatibility. Hopefully we can still see some of the extra flexibility in OpenGL extensions.

    ATI’s pixel shaders do offer more general dependent texture addressing, and their superior flexibility and precision allows for more convincing effects. (See here for an example.) The long and short of it: NVIDIA’s pixel shaders could stand some improvements.

    Apparently, the GeForce4 Ti’s pixel shaders have been improved incrementally in order to address some of these concerns. There are a few new pixel shader instructions in the GF4 Ti, including dependent texture lookups into both 2D and 3D textures and Z-correct bump mapping. These changes may or may not bring NVIDIA’s pixel shaders up to par with ATI’s, but the addition of more general dependent texture addressing is a step in the right direction, even if it isn’t a major improvement over the GeForce3.
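If “dependent texture addressing” is a new term, here’s roughly what it means, sketched in Python with made-up textures (this illustrates the concept, not any vendor’s actual instruction set): the result of some per-pixel math is itself used as the address for a second texture fetch.

```python
import numpy as np

# Toy sketch of a dependent texture read: fetch a value, do some math on it,
# then use the *result* as coordinates for a second lookup. Textures are fake.
H, W = 64, 64
bump_map = np.random.rand(H, W, 2)        # per-pixel (du, dv) perturbations
env_map  = np.random.rand(256, 256, 3)    # texture we do the dependent read into

def dependent_lookup(x, y, base_u, base_v, scale=0.1):
    du, dv = bump_map[y % H, x % W]              # first texture read
    u = (base_u + scale * du) % 1.0              # math on the fetched value...
    v = (base_v + scale * dv) % 1.0
    ty = int(v * (env_map.shape[0] - 1))         # ...which becomes the address
    tx = int(u * (env_map.shape[1] - 1))
    return env_map[ty, tx]                       # second, dependent read

color = dependent_lookup(10, 20, 0.5, 0.5)
```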

  • Texturing — The GeForce3 can apply two textures per pixel in a single clock cycle, but through a “loop back” method it can apply an additional two textures per rendering pass. That handy trick improves performance and image quality by keeping the chip from having to resort to multi-pass rendering. The GF4 Ti offers improved support for three and four textures per pass. NVIDIA has optimized the GF4 Ti chip’s caches and pipelines to improve this “loopback” feature, especially when anisotropic and/or trilinear filtering is in use.
  • LMA-II — The GF4 Ti includes the same crossbar memory controller as the GeForce3. NVIDIA has tweaked its Z-buffer compression routine so that it now provides lossless compression as tight as an 8:1 ratio. (At its launch, NVIDIA claimed 4:1 for the GeForce3’s Z compression.)

    Also, NVIDIA says the GF4 Ti’s occlusion detection is improved, though when pressed about how it’s improved, all they would say is that it’s “more aggressive.” We’ll seek more detail if we can. These are hardware-level changes, though, not just driver tweaks.

 
  • Accuview anti-aliasing — The GeForce4 Ti’s basic approach to anti-aliasing—multisampling—is the same as the GeForce3’s, but NVIDIA has made some very worthwhile tweaks to the AA implementation in the GF4 Ti, and they’ve given it a new name. Accuview AA uses NVIDIA’s multisampling approach, which effectively provides edge-only antialiasing in a more efficient manner than the traditional super-sampling approach.

    Among the improvements:

    • NVIDIA claims the GF4 Ti includes wider internal data paths, so it can accommodate multisampled anti-aliasing with very little performance loss.
    • The sample patterns for Accuview are improved. The GeForce3 used a rotated grid sample pattern for 2X AA and an inferior ordered grid pattern for 4X mode. The GeForce4 family uses rotated grid patterns for both 2X and 4X modes. NVIDIA denotes this difference in 4X mode by dubbing the rotated grid 4X mode “4XS”. The grid rotation is undoubtedly a good thing; it interrupts the regularity of the pixel grid, helping fool the eye into perceiving less aliasing (there’s a small sketch of the idea after this list). Unfortunately, Accuview doesn’t include a semi-randomized sampling pattern like the Radeon 8500’s SMOOTHVISION does—that would be even better than a rotated grid.
    • Also, as you can see in the diagrams below, Accuview’s AA sampling points are more centered in the pixel. At the subpixel level, the GeForce3 was sampling at the very center and at the edge of the pixel. NVIDIA claims these new sampling patterns provide more accuracy than the previous arrangement.


      GeForce3 2X anti-aliasing sample patterns


      Accuview 2X anti-aliasing sample patterns

      My understanding is that these new sample patterns ought to cause NVIDIA’s multisampling routine to decide to do blending more often, which is probably why NVIDIA didn’t use these patterns with the GeForce3—out of a concern for performance.

    • Quincunx remains, for those of you who prefer 2X AA plus full-screen blurring. NVIDIA claims turning on Quincunx doesn’t slow performance at all versus 2X mode. I hear adjusting your monitor to run out of focus doesn’t, either.
    • Accuview AA now encompasses anisotropic filtering as well as edge AA. This combination of edge antialiasing (multisampling) and texture antialiasing (anisotropic filtering) is simply The Right Thing To Do. Multisampling is just as effective as supersampling for edge AA, and anisotropic filtering is more effective than supersampling for texture AA. The GeForce3 could do both things at once, but the features weren’t logically grouped together as they are with Accuview.

    All in all, Accuview is a sensible upgrade to the GeForce3’s multisampling AA. The combination of efficient edge AA, better sample patterns, and anisotropic filtering (with trilinear, if you so choose) probably puts the GF4 Ti on par with the Radeon 8500 for AA. The Radeon’s edge AA may look a little nicer, but it’s probably a fair amount slower than Accuview. We’ll see.
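To make the sample-pattern discussion concrete, here is a small sketch of ordered-grid versus rotated-grid 2X sample offsets (the specific offsets and angle are illustrative, not NVIDIA’s published positions):

```python
import math

# Illustrative 2X multisampling positions in pixel-relative coordinates,
# where (0, 0) is the pixel center. These offsets are examples, not specs.
ordered_2x = [(-0.25, 0.0), (0.25, 0.0)]   # both samples on one horizontal line

def rotate(point, degrees):
    a = math.radians(degrees)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Rotating the pattern gives the two samples distinct X *and* Y offsets, so
# near-horizontal and near-vertical edges (the most visible cases) produce
# more intermediate coverage steps per pixel than an axis-aligned pattern.
rotated_2x = [rotate(p, 26.6) for p in ordered_2x]

print(ordered_2x)   # [(-0.25, 0.0), (0.25, 0.0)]
print(rotated_2x)   # roughly [(-0.224, -0.112), (0.224, 0.112)]
```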

  • nView multi-display support — The GeForce4 chips both incorporate dual RAMDACs, dual TMDS transmitters, and a TV-out encoder, so they can drive a wide variety of display combinations, from a single VGA monitor to dual digital flat panels. The GF4 cards are the first cards outside of the Matrox G550 to have dual DVI output capability, which puts them into an industry-leading position. This is also the first time NVIDIA has emphasized dual-display features in one of its high-end graphics chips. On this front, the GF4 moves ahead of the dual-display Radeon 8500, which lacks a second DVI out.


    The GeForce4 Ti reference card with dual DVI outputs

    NVIDIA has underscored the utility of multi-monitor support by introducing a new feature set in its drivers that helps manage multiple displays, virtual desktops, and the like. The nView software suite was designed by former Appian engineers, so it ought to be very nice. The nView feature set will extend back to existing NVIDIA cards, as well.

    We were able to confirm that the GF4 will be able to run displays concurrently in multiple, independent resolutions. However, we’ll believe it can happen in Windows 2000/XP when we see it, because such things are famously difficult.

And that’s about it for the GeForce4 Ti. The improvements aren’t anything special in terms of 3D capabilities, but they do bring NVIDIA’s high-end product offering up to snuff feature-wise versus the Radeon 8500. And the GF4 Ti 4600 will no doubt be a supremely fast graphics card.

 

The GeForce4 MX
The GeForce4 MX chip, also known as the NV17, is the new low-end GPU from NVIDIA, and it’s an intriguing combination of technology culled from the GeForce2, GeForce3, and GeForce4 Ti. The fastest GF4 MX variant will be the GF4 MX 460, clocked at 300MHz with 64MB of DDR memory at 275MHz (550MHz DDR). From there, it gets hazy. The GF4 MX 440 will also use DDR memory, and it will run somewhat slower than the 460. The GF4 MX 420 will be mated with plain ol’ SDRAM.

The strangest thing about the GeForce4 MX is that its 3D rendering core is ripped directly from its predecessor, the GeForce2 MX. The GF4 MX has two pixel pipelines capable of laying down two pixels per clock, and it has a fixed-function T&L engine. There aren’t any pixel or vertex shaders in sight (unless you count the GeForce2’s register combiners as primitive pixel shaders, I suppose). In terms of 3D technology, the GF4 MX is significantly less advanced than the GeForce3 or the Radeon 8500.

It’s possible NVIDIA might try to implement a software vertex shader. DirectX 8 has its own set of routines to handle vertex shader programs on the CPU if no vertex unit is present. NVIDIA might choose to write its own, highly optimized software vertex shader—perhaps making some use of the GF4 MX’s fixed-function T&L unit—to help improve performance. However, the fact remains: the GeForce4 MX lacks a vertex shader.

And pixel shaders can’t really be emulated.
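A bit of napkin math shows why (the workload numbers below are assumptions chosen for scale, not measurements): a CPU can plausibly keep up with per-vertex work, but per-pixel work is more than an order of magnitude bigger.

```python
# Rough scale argument for why vertex shaders can be emulated on the CPU
# but pixel shaders cannot. All numbers here are assumptions, not benchmarks.
fps = 60
vertices_per_frame = 100_000           # a fairly detailed 2002-era scene
pixels_per_frame = 1024 * 768 * 2      # 1024x768 with roughly 2x overdraw

vertex_programs_per_sec = fps * vertices_per_frame   # ~6 million
pixel_programs_per_sec = fps * pixels_per_frame      # ~94 million

print(vertex_programs_per_sec, pixel_programs_per_sec)
# At a few hundred CPU cycles per program, the vertex load fits in a slice of
# a GHz-class CPU; the pixel load would need tens of billions of cycles per
# second, many times more than the whole CPU can supply.
```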

Beyond that, the NV17 does include some worthwhile new tech, like the Lightspeed Memory Architecture bits lifted from the GF3/GF4 Ti. The one big modification to LMA for NV17 is that two of the four memory controllers in the crossbar config are eliminated. Those controllers are paired up with rendering pipelines, and the NV17 has only two pipes. Even so, with Z compression, occlusion detection, and fast DDR memory, the GF4 MX 460 ought to bring a monster fill rate to the party. If all you want to do is push pixels in DirectX 7-class games, the NV17 will certainly do so.

The GF4 MX also lifts the antialiasing and display units from the GF4 Ti, so the chip will include one of the better AA implementations around, plus excellent dual-display output capabilities and the nView feature set.

The one piece of unique technology in the NV17 is a nod to the fact the NV17 will find its way into lots of laptop PCs. The chip includes a full MPEG2 decoder, so DVD playback should require almost no CPU overhead. NVIDIA says they left the MPEG2 decoder out of the GF4 Ti because big, fat desktop PCs with GF4 Ti cards don’t need much help with DVD playback. Makes sense to me.

What doesn’t make sense to me is why in the world NVIDIA is introducing this product, with this 3D rendering pipeline, at the beginning of 2002. One would expect a “GeForce4 MX” to include a cut-down version of the GeForce3/4 rendering pipeline, perhaps with two pixel shader/rendering pipes and a single vertex shader. Instead, we’re getting a card that’s incapable of taking advantage of all of the new 3D graphics programming techniques NVIDIA pioneered with the GeForce3.

With every GF4 MX that NVIDIA sells, the installed base for yesterday’s 3D technology will grow, and resistance against truly ground-breaking games and other software will be strengthened. Not only that, but attaching the “GeForce4” name to a chip with a GeForce2 MX rendering core seems deceptive to me, especially since the correlation between the GeForce2 and the GeForce2 MX was pretty tight.

Yes, the GF4 MX will be fast; it will have nice dual-display capabilities; and it will be cheap. But this cheap and easy date will be a nightmare the morning after.

 

Conclusions
NVIDIA will be “gracefully” phasing out its current products in favor of this new lineup. For the most part, that’s a fine thing. The GF4 Ti is much stronger competition for the Radeon 8500 than the Ti 500 was. With dual vertex shaders, Accuview AA, and dual-display support, the GF4 Ti has few weaknesses now. And with the core and memory clock speeds NVIDIA is suggesting, the GF4 Ti ought to be the fastest GPU on the planet.

Still, I can’t imagine recommending that any current GF3 or GF3 Ti 500 owner upgrade to a GF4 Ti. The only really solid reason to do so would be for the dual-display capabilities. Beyond that, most of the improvements are too minor, too incremental to justify an upgrade.

The loss of the GeForce3 Ti 200, however, will be a step backwards. Unless the GF4 Ti 4400 cards can get very cheap very soon, NVIDIA may be stuck with the GeForce4 MX 460 card as its mainstream $149-199 product. Given that ATI has just introduced a Radeon 8500LE card with 128MB RAM for $199 list, the choice may be a simple one. The GF4 MX matches up nicely against the Radeon 7500, but with real vertex and pixel shaders, the Radeon 8500LE will blow it away.

Whatever the case, it’s safe to say this is one of NVIDIA’s least memorable “spring cycle” product releases. After the GeForce3 last year and the original GeForce the year before that, the GeForce4 family is a little underwhelming. The GeForce killed off NVIDIA competitors in droves, and the GeForce3 was a revolutionary product. Now that the GeForce4 has arrived, though, ATI is still in a pretty good position. After 3dfx and some of the other 3D chipmakers kicked the bucket, many of us were wondering whether there would be two major players in the graphics market or only one. Looks like it’s gonna be two. 

Comments closed
    • Anonymous
    • 18 years ago

    The GeForce4 GPU itself is 256-bit, not 512-bit

    Memory bits are different from GPU bits

    YOU SAID???———————————————————–
    GeForce3/4 : 4x 64 bit DDR memory path, gets chunks of 32 and 64 bits of data. Equals 512 bit.

    You are talking about “MEMORY BITS” but not GPU bits!!!

    You got confused with bits…

    • Anonymous
    • 18 years ago

    The GeForce4 Ti (4xxx) is a 256-bit GPU.

    Nvidia has not released a 512-bit GPU architecture.

    ————————————————
    Nvidia first generation: 64bit GPU
    Nvidia TNT: 128bit GPU
    Nvidia TNT2, Pro, Ultra: 128bit GPU
    Nvidia Geforce 1: 256bit GPU
    Nvidia Geforce 2, MX, GTS, Pro and Ultra: 256bit GPU
    Nvidia Geforce 3, Titanium: 256bit GPU
    Nvidia Geforce 4, Titanium: 256bit GPU

    But where is 512bit GPU + 8x AGP support.
    It is obsolete card.
    It does not have/support 500Mhz/550Mhz RAMDAC
    It does not have/support DDR-2 /533Mhz
    It does not have/support 0.13/0.11 micron tech.

    • Anonymous
    • 18 years ago

    Hm…it’s not a 512 bit?
    Well, with all kudos to boy wonder Anand, LMA and LMA II are NOT 32-bit DDR pathways…four of them, to be more exact.

    GeForce3/4 : 4x 64 bit DDR memory path, gets chunks of 32 and 64 bits of data. Equals 512 bit.

    GeForce4 MX: 2x 64 bit DDR memory path, gets chunks of 32 and 64 bits of data. Equals 256 bit.

    ATi’s R8500: 2x 128 bit DDR memory path, gets chunks of 64 and 128 bits of data. Equals 512 bit.

    ATi’s R7xxx: 2x 64 bit DDR memory path, gets chunks of 32, 64 and 128 bits of data. Equals 256 bit.

    Don’t bog me down, I am just a journalist who these two companies… hate 🙂

    • Anonymous
    • 18 years ago

    The only thing Geforce 4 TI (4xxx) is missing?

    It does not have/support AGP 8x
    It does not have/support 512bit GPU.
    It does not have/support 500Mhz/550Mhz RAMDAC
    It does not have/support DDR-2 /533Mhz
    It does not have/support 0.13/0.11 micron tech.

    If Geforce 5 will not support this? then it is obsolete card.

    • Anonymous
    • 18 years ago

    Yo, corrosive #62. Check this presentation slide if you think the GF3 will continue on. Looks like it vanishes to me. The Ti200 will be the low end until supplies dry up.

    http://www.hardocp.com/reviews/vidcards/nvidia/gf234/direction.html

    • R2P2
    • 18 years ago

    MadManOriginal – Actually, the companies that don’t have the time or money to use shaders in their games are the ones that churn out derivative games as quickly as they can. i.e. The kinds of companies that started cranking out FPSs when they were at their peak, then went on to RTSs when Command and Conquer was popular, etc. When you’re trying to make money off a “hot” genre, you need to be able to get a game out the door while it’s still hot, and that means no time for fancy stuff.

    Then there’s id, whose stuff tends to *define* what’s hot, so they can take their sweet time and let Carmack do whatever he wants.

    • corrosive23
    • 18 years ago

    Damn, some of you have no idea what you are talking about. They have not cancelled the GF3 line. The GF3 Ti200 is now going to be the “low-end” card, and the Ti500, which was a rip-off to begin with, is being phased out.

    • MadManOriginal
    • 18 years ago

    [q]As for devs making games that use the shaders. Um duh they do that already they just as with everything else make sure they have fallbacks. [/q]

    Not necessarily. With the tight budgets and time constraints placed on game devs today, why would they spend extra time coding for shaders that very few have? heck, most have a hard enough time releasing a workable product as it is. Sure, the “big guys” like Id, et al., have the resources, but they are few and far between. It is the little houses that are truly innovating that do not have the $$ or time to waste programming for shaders.

    • Anonymous
    • 18 years ago

    Many TVs have different types of filters on them now with all sorts of marketing names so that you can’t tell what they actually do, but for the most part TVs don’t do the de-interlacing thing because it won’t be needed when they all move to digital/HDTV formats. Besides, if I were playing a reflex-driven console game I wouldn’t want that delay, because I would never be able to make any jumps or anything. 20ms may not seem like that much, but I know I notice the 20ms difference in games online and I think it would be just as obnoxious on a TV.

    • TwoFer
    • 18 years ago

    [q]Well, given there is a market of people who are perfectly happy with the MX200 level graphics the IGP of their nForce provided them[/q] Then why don’t they keep the MX200 in production? But they don’t… they are even planning to drop the GF3 cards, and force everyone to GF”4″ prices — despite the fact that they’ve doubtlessly written off the GF3 setup costs.

    It’s about more money for them, and less choice for everyone else. And that’s cool — capitalism, it’s called. But it doesn’t change the fact they’re playing a large fraction of the user population for suckers.

    • Steel
    • 18 years ago

    #54, I guess what I was referring to was the old, old consoles like the NES and SNES that have a vertical resolution of only 240 lines or so. It could have been a trick of the eye but the images didn’t look interlaced at all – I could clearly see the scan lines.

    • Anonymous
    • 18 years ago

    55

    I agree. It seems all the new generation offers is 15-20% more performance. Whoopee? I’m not impressed. I guess it is difficult to expect Voodoo 1 to 2 or Geforce SDR to DDR jumps in performance any more. Graphics cards are finally catching up to the level of CPUs, with slower gains over time and granulated speed ratings. At least we don’t have the megaHertz myth to deal with.

    • Hallucinosis
    • 18 years ago

    AG #54:

    While this is true for the most part, in 1992 a few TVs hit the market that would digitally process TV images (incurring a 20ms delay) and make them non-interlaced. My parents then bought and still own such a TV. It’s a 32″ by Toshiba. Unfortunately they faded from the market as most people weren’t willing to pay $3000 for a 32″ TV when one could be had for $1500-2000.

    The picture quality of this 9 year old TV is much, much better than my 1 year old, interlaced, 27″ Sony Wega. I don’t understand why TVs don’t feature this now as the technology has become much cheaper.

    • Rousterfar
    • 18 years ago

    Glad I am not the only one NOT excited about this generation of hardware. Maybe I have been just too spoiled by the current generation of console systems’ lush, fully realized graphics to really care about these cards. They still have not fully taken advantage of hardware 2 years old, and they expect me to buy this? Riiight…

    • Anonymous
    • 18 years ago

    [q]It’s only recently that the consoles have been using interlaced mode on TVs (PS2, XBOX, GameCube and maybe Dreamcast) and TVs are better at handling interlacing than computer displays. I’ve never had a chance to see 1080i in person, maybe interlacing doesn’t look too bad on a HDTV screen. [/q]

    Umm, perhaps I misread this, so forgive me if I’m arguing nothing, but R2P2 is right. Pretty much all TVs out there (with the exception of HDTVs, plasmas and some other fangled TVs I don’t know about) work in interlaced. Only with the new sets that have come out has progressive even been an option. Regular TVs haven’t done it because cable, antenna, satellite and even DVD (initially) didn’t support it. Consoles have always been running in interlaced mode on TVs because that is what the TVs support, so it doesn’t make sense to add hardware for something that won’t be used, ESPECIALLY when you’re losing money on the hardware to begin with. Also, TVs aren’t really better at handling interlacing; their images just suck so bad that interlacing isn’t really noticeable, whereas looking at a computer screen at our super high resolutions would cause interlacing to stand out like a red thumb.

    • Anonymous
    • 18 years ago

    All NVIDIA is doing is replacing the MX2 line with the MX4 line and the GF3 Ti line with the GF4 Ti line. Tho I have a hunch the GF3 Ti200 might stay around awhile.

    Simple as that.

    As for devs making games that use the shaders. Um duh they do that already they just as with everything else make sure they have fallbacks.

    Only now instead of falling back to the mx2 in about a year they will fall back to the mx4 level of power. Quite a leap up frankly so shut your whiney yap and sit down!

    The mx line is for BUDGET cards and the shaders are too big to cram into such a small chip. The prices as always will drop a ton and in fact are the same prices the mx 2 started at so pththththt.

    I haven’t seen so many clueless people milling about screaming since I accidentally switched on C-SPAN.

    • Anonymous
    • 18 years ago

    #48

    Yeah, that’s impressive that it pulls off a fair efficiency increase. Although as some other posters have and will have stated, it’s been about a year. The speed increase is moderate, and the features increase is not worth mentioning.

    Sadly, it’s not like anyone is going to greatly eclipse them either.

    • TwoFer
    • 18 years ago

    From Damage’s article: [q]What doesn’t make sense to me is why in the world NVIDIA is introducing this product, with this 3D rendering pipeline, at the beginning of 2002.[/q] Simple. Because they know there’s one born every minute… and Nvidia has shown they’re not above fleecing the market.

    • Pete
    • 18 years ago

    [i]This is NOT just a higher clocked GF3 good people.[/i]

    Well, they [i]have[/i] had a year to tweak the design. A [b]year[/b]. That doesn’t sound like a six-month product cycle to me, nor does it sound like a new architecture per year, and I don’t see the bar set too high for ATi to compete. It just sounds like a natural progression, much like AMD and Intel bump clock speeds. I have high hopes for the R300, but I’m not sure I’ll wait that long to upgrade.

    But the GF4 is still impressive, moreso because nV will phase out the GF3 completely.

    • Anonymous
    • 18 years ago

    Check out the article on Anandtech. He benches a Ti4600 against a Ti500 and clocks the 4600 DOWN to the same core/mem speed as the 500.

    The 4600 generally shows a 10-15% increase in performance across the board over the Ti500 using the same speed settings.

    The other thing you will notice is that because of the improvements in memory handling, running at HIGH resolutions with AA turned on is not only possible, you can do so at frame rates that are more than acceptable.

    This is NOT just a higher clocked GF3 good people.

    • Anonymous
    • 18 years ago

    Until ATI can support its products with quality drivers, they are no competition for nVidia. And what is ATI’s product development cycle like? Not good enough to match nV’s, I’m sure. What is needed is a new player. ATI will probably never be able to compete with nVidia over the long term. They’ll fall too far behind, like 3dfx.

    • Steel
    • 18 years ago

    [q]Steel — Um, someone correct me if I’m wrong, but I believe normal TVs only display interlaced, so if you’ve ever played a game on a TV you’ve probably played it interlaced. I haven’t heard anyone complain about the interlacing on their PS2 (or XBOX, or GameCube, or SNES, or…) giving them a headache. [/q]It’s only recently that the consoles have been using interlaced mode on TVs (PS2, XBOX, GameCube and maybe Dreamcast) and TVs are better at handling interlacing than computer displays. I’ve never had a chance to see 1080i in person, maybe interlacing doesn’t look too bad on a HDTV screen.

    • Anonymous
    • 18 years ago

    #28 Nvidia has “Nvidia distance fog,” which works, not like ATI’s “Truform,” which doesn’t.

    The new GF4s are great, more fps, always more fps, plz!

    • Anonymous
    • 18 years ago

    There’s a reason why there are no games, and it’s mostly because companies like NVIDIA remove the features. No company would write code to use a feature if no one has it.

    • Anonymous
    • 18 years ago

    [q]1920x1080i = 960x540p [/q]

    Huh? There is no halving of horizontal resolution. Think half, not quarter.

    • Anonymous
    • 18 years ago

    [q]but since it takes twice the time to fill those lines in the same frame of time as a progressive [/q]

    It was my understanding that an interlaced signal at 30 frames per second was actually 60 fields per second. So an entire frame (2 fields) is filled in the same time as a progressive image at 30 frames per second.

    I’m not arguing that progressive is not better, though.

    • Anonymous
    • 18 years ago

    I foresee long-winded, dumbed-down technical arguments with friends as I try my dearest to stop them from buying a GF4 MX.

    “But, It’s a Geforce 4, dude!”

    “Yeah, they suck unfortunately. See if you can still find one of those nifty Geforce 3 Ti-200s online. They’re only like $15 more than an MX460.”

    “But, the Geforce 4 has more features, right? I want this card to last me 2 years.”

    “Em. Actually, Nvidia screwed the pooch with the Geforce 4 MX. The Geforce 3 Ti200 is faster and has a lot better feature set.”

    etc…

    • R2P2
    • 18 years ago

    #36 — Yes, but interlacing only effectively halves the Y axis, so I think Ryu’s right about the halving and wrong about the effective resolution. AFAIK interlacing draws alternating scanlines on alternating frames, so it’s like using Voodoo2s in SLI, except with chip 1 drawing its half now, and chip 2 drawing its half afterwards, rather than both drawing at the same time. If that makes any sense.

    • Despite
    • 18 years ago

    maybe I’m missing something here Ryu, but when you halve both X and Y, you’re essentially quartering XY, aren’t you?

    • Pete
    • 18 years ago

    The GF4Ti4200 for $200 is the runaway price/perf. winner of this entire generation (especially in a few months when it’ll be much easier to overclock to 4400 and maybe 4600 levels, when memory tech is faster). I’m doubting its existence, tho, as I don’t see nV coming out with a $200 GF4Ti when they sell a GF4MX for $180. But I can hope.

    BTW, the GF4MX looks to be a purely money-making proposition, as some companies offer it with two bundled games. Two! How long has it been since you’ve seen a GFx card with any bundled games, let alone two? (Absolute and Elsa, maybe, but I haven’t heard a lot about the former recently, and the latter is withdrawing from the consumer market, no?)

    • Ryu Connor
    • 18 years ago

    [q]How do you figure that? Just because the display is interlaced doesn’t mean the resolution is cut in half.[/q]

    I do that because it takes two passes to complete one image, whereas progressive provides the entire image in one frame.

    Yes, you are actually sitting with more lines, but since it takes twice the time to fill those lines in the same frame of time as a progressive image half the size I’ve always considered interlaced images to be exactly half their represented size. More fairly I should probably just divide the refresh rate of the interlaced resolution in half, but hell, any way you cut it everything in interlaced takes twice as long.

    I won’t go into all the other problems of interlaced such as interline twitter, line crawling and field aliasing.

    • Anonymous
    • 18 years ago

    [URL]http://www.3dcenter.org/artikel/ati_128mb_praesentation/index5.php[/URL]

    In this presentation, ATI has really cranked up the volume in its new marketing strategy and prepares a frontal attack against the release of the new GeForce4 MX/Ti series.

    ATI calls the GeForce4 the “GeForce 3.5” in this presentation. ATI also stresses that it

    • R2P2
    • 18 years ago

    Steel — Um, someone correct me if I’m wrong, but I believe normal TVs only display interlaced, so if you’ve ever played a game on a TV you’ve probably played it interlaced. I haven’t heard anyone complain about the interlacing on their PS2 (or XBOX, or GameCube, or SNES, or…) giving them a headache.

    • Anonymous
    • 18 years ago

    For those complaining about ClearType being a waste, you should look into trying ClearTweak. The program lets you choose just how much anti-aliasing text should have (I think the default is 1400 or something). 1000 works perfectly for me on my 21″, while I prefer 1400 on my laptop’s 15.1″.

    I found the program via fileflash.com, but as the site is dog slow for me, I’d rather not look it up. If it can’t be found there, Google will return some matches, I’m sure.

    As for the whole GeForce4 thing, I can’t say I care much for any of the new cards for their performance. My Ti200 (at 500 speeds) serves me just fine for now. Dual monitor output is something I’m interested in, though, especially since my KT266A-based motherboard doesn’t seem to want to run with an additional PCI VGA adapter. Well, not stable anyway. I’m stuck with using my laptop as my secondary monitor/computer now, which is sweet, but I’d really like to get a nice 15″ LCD up and running next to my big 21″ beast.

    – timm

    • Anonymous
    • 18 years ago

    Everyone that’s waiting for cheap GF3’s can probably just keep waiting. From what I’ve heard, GF3 chip production ceased quite some time ago and the GF4 line is replacing the GF3 line, not pushing it downmarket as in the past. I bet that GF3’s drop a little more in price and then just disappear. How far the price drops depends on how many GF3’s are in the pipeline already and how long they last.

    • Anonymous
    • 18 years ago

    http://www.anandtech.com/video/showdoc.html?i=1583&p=12

    More for those looking way ahead than anything else. If you can stand the five banners per page, ugh.

    • Anonymous
    • 18 years ago

    Too bad Nvidia didn’t implement something like Ati’s TRUFORM.

    • Steel
    • 18 years ago

    [q]1920x1080i = 960x540p[/q]How do you figure that? Just because the display is interlaced doesn’t mean the resolution is cut in half.

    Aside from that, playing games on an interlaced display sounds headache inducing.

    • Ryu Connor
    • 18 years ago

    [q]but I imagine most people would be able to tell the difference between the three resolutions.[/q]

    An important word was missing from that sentence.

    [i]but I imagine most people would [b]not[/b] be able to tell the difference between the three resolutions.[/i]

    Gomen.

    • Ryu Connor
    • 18 years ago

    [q]Imagine playing games at 1920x1080i on a 50+ inch TV (it would make that 19 inch monitor look pathetic), or scaling a DVD to HiDef resolutions (would easily beat a 480p standalone player) but no graphic card makers easily support this.[/q]

    1920x1080i = 960x540p

    A 19″ monitor is going to smoke that for pure resolution. The only thing you gain is a larger viewing area.

    My DVD’s decoder does 480p, 720p, and 1080i. I run in 720p because it’s the highest available resolution, but I imagine most people would be able to tell the difference between the three resolutions. It’s not even all that noticeable that you are in HDTV until you see a DVD whose material was left in NTSC and you get an interlacing effect in certain scenes.

    [q]And Ryu, admit it, in the previous post, you just wanted to type “poppycock”, didn’t you? :)[/q]

    Yes, it’s true. I couldn’t resist the opportunity.

    [q]Ryu, that’s why I love hardware DVD Decoders. :)[/q]

    Yes, they rock.

    [q]The cleartype crap that you say does help the text quality on laptop screen.[/q]

    Yeah, but who uses those things? 😉

    • Anonymous
    • 18 years ago

    I am still disappointed that there is a gap in the graphics market.

    I would like the ability to connect a graphics card through the DVI, to component cables into my HDTV. But the drivers do not support 1920x1080i, and there are no DVI-to-component cables (though ATI keeps stating that they will release one).

    Imagine playing games at 1920x1080i on a 50+ inch TV (it would make that 19 inch monitor look pathetic), or scaling a DVD to HiDef resolutions (would easily beat a 480p standalone player) but no graphic card makers easily support this.

    Forget the s-video out to the TV, give us some HDTV connectivity! I feel this is a market segment waiting to happen.

    If I could do this with the GF4 I might upgrade. For now I will stick to my current “slow” card….

    • Anonymous
    • 18 years ago

    Featurewise the GF4 Ti is not much of an improvement, however there are very few games out there which make good use of DX 8 and AFAIK none which use 8.1, and I bet there won’t be any “must-have” titles until November at the earliest. So featurewise you won’t need anything better than a GF3 or Radeon 8500 for the next 18 months or so.

    Speedwise the GF4 Ti is a significant improvement over the GF3 or Radeon 8500 especially if you run AA. The speed improvement of a GF4 over a GF3 is more than the improvement of a GF3 over a GF2.

    • R2P2
    • 18 years ago

    cRock — Save your pity for the people who buy the GF4MX expecting something like a GF4. Some people might actually *want* a card that’s “only” competitive with the Radeon 7500 (I just bought a 7500 a month ago, so I have to insist that it’s still good enough ;)). Of course, Joe SixPack probably *is* stoopid enough to buy a GF4MX expecting a GF4, and there’s probably several other reasons why he deserves your pity. ;p

    • Anonymous
    • 18 years ago

    So, okay, if the GF3 Ti200 is the price/performance sweet spot (or will be in a bit due to the new cards), which company makes the best one? 2d image quality being a high priority. Or should I go Matrox, since the only 3d game I play (and not very often at that) is Q3A?

    -Rick

    • cRock
    • 18 years ago

    1. The only standout product (in terms of price/performance) from nVidia right now is the GF3 Ti200.

    2. It looks like ATI’s R300 can overtake nVidia. The GF4 is pretty disappointing, more like a GF3+.

    3. The MX cards are a joke now. I pity the fool that buys one.

    • MadManOriginal
    • 18 years ago

    YAY! I see a GF3 Ti200 for ~$100 in my near future. Or maybe a Radeon 8500LE. Who needs an extra vertex shader when hardly any games use them anyway?
    Thankfully, with game development taking 1.5-2 years, graphics cards are the one area where being behind top of the line is acceptable…

    • Anonymous
    • 18 years ago

    NVIDIA says they left the MPEG2 decoder out of the GF4 Ti because big, fat desktop PCs with GF4 Ti cards don’t need much help with DVD playback. Makes sense to me.
    ———–
    Are we talking about the onboard iDCT function here or something else? Just want clarification on that.

    Ryu brought up some good points…I didn’t know about that.

    • EasyRhino
    • 18 years ago

    Thanks for the writeup, and dishing it out quick. Now I don’t have to bother with the fluff for another six months 🙂

    And Ryu, admit it, in the previous post, you just wanted to type “poppycock”, didn’t you? 🙂

    ER

    • Ryu Connor
    • 18 years ago

    [q]Quincunx remains, for those of you who prefer 2X AA plus full-screen blurring. NVIDIA claims turning on Quincunx doesn’t slow performance at all versus 2X mode. I hear adjusting your monitor to run out of focus doesn’t, either.[/q]

    Sort of like the ClearType crap XP offers.

    [q]. NVIDIA says they left the MPEG2 decoder out of the GF4 Ti because big, fat desktop PCs with GF4 Ti cards don’t need much help with DVD playback. Makes sense to me.[/q]

    Poppycock. You’ll drop frames every time you do something even momentarily CPU intensive if you’re watching the movie on one monitor and doing something else on another monitor. Or perhaps you’re the type that has the TV on the desk with the computer and decides to output to the TV while you work on the PC; the same aforementioned rules apply. There’s something to be said about true multi-tasking and leaving the CPU completely unburdened. Especially when you’re already paying $500 for the privilege of owning a particular graphics card.

    • pudd
    • 18 years ago

    “…is an absolute beast in 3DMark 20001.”

    Minor correction needed. 😛

    • Anonymous
    • 18 years ago

    Well, it’s about time Nvidia integrated some MPEG2 support. But once again they’ve fallen behind! Sigma Tech has announced an MPEG4/2/1 card with TV-out and a remote control. Cool stuff for us DiVX/SVCD/DVD-watching maniacs.

    • Anonymous
    • 18 years ago

    bah, haven’t read the article. Have the prices on gf3 cards gone down yet? 🙂 That’s the real news. eheh

    • R2P2
    • 18 years ago

    I don’t quite follow this:[q]The sample patterns for Accuview are improved. [b]The GeForce3 used a rotated grid sample pattern for 2X AA and an inferior ordered grid pattern for 4X mode.[/b] [b]The GeForce4 family uses ordered grid patterns[/b] for both 2X and 4X modes. NVIDIA denotes this difference in 4X mode by dubbing the rotated grid 4X mode “4XS”. The grid rotation is undoubtedly a good thing; it interrupts the regularity of the pixel grid, helping fool the eye to detect less aliasing. Unfortunately, Accuview doesn’t include a semi-randomized sampling pattern like the Radeon 8500’s SMOOTHVISION does

    • glappkaeft
    • 18 years ago

    [q]The loss of the GeForce Ti 200, however, will be a step backwards. Unless the GF4 Ti 4400 cards can get very cheap very soon, NVIDIA may be stuck with the GeForce4 MX 460 card as its mainstream $149-199 product.[/q]

    Some places (e.g. THG, shudder) list a GF4 Ti4200 at a list price of $199. It has a 225MHz core clock and 500MHz memory, so it should be similar to the GF3 Ti500 in performance and a much better buy than the GF4 MX 460.

    Patrik

    • Anonymous
    • 18 years ago

    [quote]Fools! Talking about shooting themselves in the foot.[/quote]

    I agree . . . the sooner Nvidia can make new features widely available on desktops (e.g. through lower end cards), the more willing game developers will be to use such features. If only ultra high-end desktops had 3D acceleration, I doubt we’d have as many FPS’es as we do today.

    Kinda throws some doubt on whether or not a GF4 Ti card will ever be worth the purchase, if the features in it are just there for fluff.

    • glappkaeft
    • 18 years ago

    [q]Quincunx remains, for those of you who prefer 2X AA plus full-screen blurring. NVIDIA claims turning on Quincunx doesn’t slow performance at all versus 2X mode. I hear adjusting your monitor to run out of focus doesn’t, either. [/q]

    Now that is just plain evil. I love it!

    • Dodger
    • 18 years ago

    Page 2 of the Article, I’m impressed that the Radeon is so good in Madonion’s 3dMark 20,001…I didn’t expect that to be released for another 18,000 years.

    • Anonymous
    • 18 years ago

    Geforce4 Ti? YAAAAWWN.

    GF4MX? Shame on Nvidia for that piece of crap. Why would anyone buy a GF3/4 if games will never support the new vertex shaders? And then Nvidia releases a “new” chip without full DX8-capabilities. Fools! Talking about shooting themselves in the foot.

    • Anonymous Coward
    • 18 years ago

    Dual DVI would be sweet. I wonder if that’s dual DVI-I? Hmmm, I better go check their Linux drivers (probably aren’t out so soon).

    • Kevin
    • 18 years ago

    Hmm….so now what do I buy? I was hoping that the GF4 was going to be a bit more spectacular. I guess I’ll stick with my GF2 until Doom3 comes out (or maybe even Unreal 2). Or until something is really going to use all these vertex and pixel shaders.
