Dissecting the 3DMark03 controversy

RECENTLY, FUTUREMARK RELEASED the latest version of its 3DMark benchmark, and the PC enthusiast world took notice. Previous versions of 3DMark had developed quite a following, in part thanks to publications like us using the tests in evaluating PC hardware. FutureMark claims over 1.5 million copies of the benchmark were downloaded within 72 hours of its release. Steve over at Fileshack told me 3DMark03’s release caused the biggest spike in download traffic in Fileshack’s history—even bigger than the last major Counter-Strike release.

Meanwhile, just as FutureMark was introducing its new benchmark, graphics heavyweight NVIDIA initiated a public relations campaign aimed at undermining 3DMark03’s credibility as a benchmark and discouraging use of the test in the enthusiast press. NVIDIA’s first move was to mail out a whitepaper outlining its criticisms of 3DMark03. NVIDIA asked members of the press not to redistribute this document, only to paraphrase or offer excerpts. The document registered some specific complaints about 3DMark03’s methodology, but its primary thrust was an overall critique of FutureMark’s approach to 3DMark03 and of synthetic benchmarks in general.

The impact of NVIDIA’s PR push was immediate and impressive. A number of web sites published articles raising questions about 3DMark03—in some cases, unfortunately, repeating NVIDIA’s claims about the test without attribution and without critical evaluation of those claims. Since that time, a number of players, including FutureMark themselves, have weighed in with responses to NVIDIA’s criticisms.

During the past couple of weeks, I’ve talked with representatives of NVIDIA, FutureMark, and ATI about this controversy in an attempt to better understand the issues involved. Also, over the past few days, some intriguing new details about the architecture of NVIDIA’s new GeForce FX chip have come to light, and those revelations may help explain why NVIDIA has objected so strenuously to 3DMark03’s design. I’ll try to cover what’s happened and why it matters. Let’s start with some background on FutureMark, NVIDIA, and the creation of 3DMark03.

FutureMark, NVIDIA, and the genesis of a conflict
FutureMark is a small company based in Finland whose business depends on two primary sources of income: sales of the “Pro” versions of its benchmarks to end users and sales of memberships in its beta programs to independent hardware vendors (IHVs) like AMD, Intel, ATI, and NVIDIA. The beta program has several membership tiers, with pricing tied to level of participation. Broad participation in the beta program has been key to FutureMark’s success. The beta program member list on FutureMark’s website reads like a Who’s Who of PC performance hardware. Tier-one participants include ATI, AMD, Intel, and Microsoft. Other members include graphics players like Matrox, S3 Graphics, SiS, Imagination Tech, and Trident, plus PC OEMs like Dell and Gateway.

The months-long process of developing a new revision of 3DMark involves input and feedback from beta program partners about a series of design documents, alpha builds, and beta builds of the benchmark. As I understand it, NVIDIA had been a top-tier FutureMark beta program member during the development of 3DMark03 until the first of December, when NVIDIA’s membership renewal came due. At that time, NVIDIA elected not to renew its membership. 3DMark03 was in the beta stage of development at this point, and was essentially feature-complete.

By all accounts, NVIDIA’s decision not to renew its membership was triggered by its dissatisfaction with the 3DMark03 product and with FutureMark’s responses to NVIDIA’s input on 3DMark03’s composition. Clearly the two parties had substantive disagreements over how 3DMark03 should be built. The questions now are, what were those disagreements, and who was right?

Was NVIDIA miffed because 3DMark03 wouldn’t give its new GeForce FX chip a fair shake? Or because the test would disadvantage NVIDIA’s current products in the GeForce4 line? Early benchmark results from 3DMark03 aren’t as instructive as one might expect. HardOCP tested the GeForce FX against the Radeon 9700 Pro, and the results were mixed. In the first round of tests, the Radeon 9700 Pro won handily. A second set of tests with updated drivers from NVIDIA, however, showed the GeForce FX taking a narrow lead in the overall game score.

Our own testing with NVIDIA’s current generation of 3D chips, the GeForce4 line, didn’t look too good for NVIDIA:

But such things are to be expected when one’s competitor is a technology generation ahead, especially in a benchmark that purports to be forward-looking. Besides, NVIDIA told me straight up its complaints aren’t about 3DMark03’s performance on its GF4 cards.

 

NVIDIA’s beef
NVIDIA was kind enough to allow me time to speak at length with two key employees, Tony Tamasi, the company’s General Manager of Desktop Graphics Processors, and Mark Daly, Director of Technical Marketing, who manages the teams responsible for benchmarking and making NVIDIA’s graphics technology demos. Daly and Tamasi were very helpful in stating NVIDIA’s case against 3DMark03 and very patient in answering my (sometimes-boneheaded) questions. They were also both very consistently “on message,” sticking to the company line on 3DMark03 like George Bush sticks to a Karl Rove script on the campaign trail. I mention this fact because it’s so very, well, remarkable coming from techie types talking tech.

NVIDIA’s problems with 3DMark03 seem to encompass nearly everything about the benchmark. That is, the company sees very little good in the test as it exists now. However, NVIDIA’s complaints generally fall into two categories: general, overarching criticisms and specific, technical critiques. NVIDIA’s big-picture complaints can be summed up in two points:

  • 3DMark03 is a bad benchmark — This is a big point with lots of little sub-points, but the complaints all fall easily under this banner. NVIDIA’s key contention is that 3DMark03 isn’t representative of actual games. Near as I can tell, that means not now, nor ever in the future, although there is some ambiguity on this point. NVIDIA’s specific technical criticisms seem to bounce around between talking about now and talking about the future without much discernible pattern. NVIDIA suggests synthetic benchmarks are not a useful component of a graphics performance test suite, and recommends testing only with “actual games.”
  • Wasted resources — Optimizing for 3DMark03, says NVIDIA, pulls critical software engineering resources away from other tasks. Because 3DMark03 isn’t representative of actual games, optimizations for 3DMark are in no way beneficial for actual games. What’s more, online reviewers and editors who choose to use 3DMark in their performance evaluations create an irresistible need for NVIDIA to keep wasting resources optimizing code paths never used by real applications.

These larger complaints only make sense if NVIDIA’s more targeted technical criticisms of 3DMark03 hold up. I won’t cover all of the technical complaints in exacting detail, but in truth, NVIDIA’s whitepaper essentially makes four main complaints about 3DMark’s four game tests. A weighted average of these four tests alone determines the “overall” 3DMark score most users like to compare between systems.

  • Not enough multitexturing in game test 1 — The first game test is a WWII-era air battle scene supposedly representative of legacy DirectX 7-class games, and much of what’s on screen at any given time is simply sky or ground. These elements are made up of very few polygons, and only one texture is applied to the skybox and ground surfaces. As a result, NVIDIA claims, Game 1 is largely a test of single-textured (or pixel) fill rate, which isn’t representative of current or future games. Furthermore, 3DMark2001 was more “forward looking” than this test, because it employed multitexturing in its three DX7-class game tests.
  • The stencil shadow volumes implementation — Game tests 2 and 3 use the same basic rendering paths, and they both use stencil shadow volumes to create a realistic shadowing effect. However, NVIDIA’s whitepaper claims 3DMark03’s rendering method is “bizarre” because it requires objects to be skinned many times in the vertex shader for each frame rendered:

    3DMark03 uses an approach that adds six times the number of vertices required for the extrusion. In our five light example, this is the equivalent of skinning each object 36 times! No game would ever do this. This approach creates such a serious bottleneck in the vertex portion of the graphics pipeline that the remainder of the graphics engine (texturing, pixel programs, raster operations, etc.) never gets an opportunity to stretch its legs.

    The paper suggests caching the results of the vertex skinning operation between passes would be more efficient, and more like John Carmack’s implementation in Doom III. (A back-of-the-envelope comparison of the two approaches follows this list.)

  • Too much pixel shader 1.4 — Game tests 2, 3, and 4 all use pixel shader programs based on the pixel shader 1.4 specification from DirectX 8.1. In the case of game tests 2 and 3, pixel shader 1.4 is inappropriate because pixel shader 1.4 “is virtually non-existent in DX8 games.” Furthermore, if 1.4 pixel shaders aren’t available, the benchmark falls back to pixel shader 1.1 instead of 1.3 in order to render the scenes.
  • Not enough DirectX 9 — The game 4 test, dubbed “Mother Nature,” doesn’t use enough of DX9’s new features. Only two of the nine pixel shaders use the new PS 2.0 spec; the other seven use PS 1.4. Thus, “the amount of DX9 represented in the 3DMark03 score is negligible. It’s not a DX9 benchmark.”
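
As promised above, here is a back-of-the-envelope model of the skinning dispute. The only hard numbers in it are NVIDIA’s (the 6X vertex factor for the extrusion and the “36 skins” figure for five lights) and FutureMark’s later claim that at most two lights reach a character at once; the formula that reconciles those numbers is my own guess at how the whitepaper arrives at 36, not something either company has published.

    // Skinning work per character per frame, expressed as multiples of skinning
    // the mesh once. The decomposition (one depth/ambient pass, then per light a
    // shadow-volume pass over 6x the vertices plus a lighting pass over the base
    // mesh) is an assumption that happens to reproduce NVIDIA's 36x figure.
    #include <cstdio>

    double skinning_work(int lights, double extrusion_vertex_factor, bool cache_skinned_verts)
    {
        if (cache_skinned_verts)
            return 1.0;  // skin once into a buffer, reuse it for every subsequent pass
        return 1.0 + lights * (extrusion_vertex_factor + 1.0);
    }

    int main()
    {
        printf("NVIDIA's five-light example, no caching: %.0fx\n", skinning_work(5, 6.0, false)); // 36x
        printf("FutureMark's two-light worst case:       %.0fx\n", skinning_work(2, 6.0, false)); // 15x
        printf("Cached skinning (the whitepaper's fix):  %.0fx\n", skinning_work(5, 6.0, true));  //  1x
        return 0;
    }

Whether that extra vertex work actually matters depends on whether the vertex shader is the bottleneck at all, which is exactly what FutureMark’s resolution test (next page) tries to settle.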

While presenting these concerns, Tamasi acknowledged the difficulty of FutureMark’s task in constructing a good benchmark, and he expressed a deep skepticism about the feasibility of ever building a good forward-looking synthetic test representative of future games. He pointed out the difficulty of FutureMark’s business model, as well. NVIDIA seemed to be concerned that this one company could have so much power in determining the industry’s performance metrics. He pointed to the example of the SPEC committee as a possible alternative to FutureMark’s approach.

Tamasi also stressed the need for developers to include performance tests in their games, and said NVIDIA’s developer relations team has long been encouraging just that and offering resources to help make it possible.

I believe that’s a fair summation of NVIDIA’s complaints about 3DMark03. These things were, according to NVIDIA, problem enough to prompt the company to remove itself from FutureMark’s beta program and begin discouraging use of the benchmark created by a former partner.

 

FutureMark’s answer
Three days after 3DMark03’s release, FutureMark published its response to NVIDIA’s criticisms (as reported by the enthusiast press). This paper restates the case for synthetic benchmarks generally as a part of overall 3D performance evaluations, and it addresses some of NVIDIA’s specific complaints. Let’s leave the general arguments about benchmarking aside for now and look at FutureMark’s response to the specific tech issues.

  • Not enough multitexturing in game test 1 — FutureMark contends game test 1 is typical of current games in using a single texture for a skybox, and lists several games as examples: Crimson Skies, IL-2 Sturmovik, and Star Trek: Bridge Commander. The paper also shows signs of a past conflict with NVIDIA over this issue:

    As this issue was brought up already during 3DMark03 development, we did a test by adding a second texture layer to the skybox. The performance difference stayed within the error margin (3%), and in our opinion the additional layer did not significantly add to the visual quality of the test. Thus, there were no game development or technical reasons for implementing a multitextured skybox.

    Obviously, FutureMark and NVIDIA had indeed been at odds over this issue.

  • The stencil shadow volumes implementation — FutureMark takes on NVIDIA’s whitepaper directly here, arguing that the efficiency of vertex shader skinning justifies its approach. What’s more, NVIDIA’s example doesn’t quite fit FutureMark’s implementation, as explained in sparkling Finnish English:

    Since each light is performance-wise expensive, game developers have level designs optimized so that as few lights as possible are used concurrently on one character. Following this practice, 3DMark03 sometimes uses as many as two lights that reach a character concurrently, not five as mentioned in some instances.

    …instances like, perhaps, NVIDIA’s whitepaper? Hmm.

    To back up its claims, FutureMark suggests running 3DMark03 in different resolutions, to see whether game tests 2 and 3 are bottlenecked by vertex shader performance. “If the benchmark was vertex shader limited, you would get the same score on all runs, since the amount of vertex shader work remains the same despite the resolution change.”

    That’s easy enough. Let’s have a look.

    Indeed, the game test results scale with fill rate, suggesting vertex shaders are not a primary performance limiter here, especially in the case of the DirectX 9-class GPUs. This fact may not completely justify FutureMark’s stencil shadow volumes implementation, but it certainly shoots down some claims made in NVIDIA’s whitepaper. (A toy model of why this resolution experiment reveals the bottleneck follows this list.)

  • Too much pixel shader 1.4 — Because pixel shader 1.4 is a standard forged by ATI and Microsoft to accommodate ATI’s R200-series chips, we looked at 3DMark03’s use of pixel shader 1.4 with some skepticism. After all, other GPUs like the Matrox Parhelia and SiS Xabre support PS1.3, but of the non-DX9 chips, only ATI hardware supports PS1.4. Rather than refer to FutureMark’s whitepaper, let me offer our question for FutureMark’s Tero Sarkkinen and his direct response:

    TR: Why did you use pixel shader 1.4 with a fallback to 1.1 instead of 1.3? Doesn’t this choice unfairly disadvantage NVIDIA cards and other non-ATI GPUs?

    Tero Sarkkinen: Firstly, when we design a benchmark, we do not care which manufacturer happens to have what type of hardware out there. We follow DirectX standard and what game developers are doing. Pixel shader 1.4 is NOT an ATI specific technology, it is technology that belongs in the DirectX standard.

    Fallback to 1.3 (instead of fallback to 1.1) would not have changed the performance at all. We tried it. There is very very little change from 1.1 to 1.2 to 1.3, the real change comes from 1.3 to 1.4. The 1.4 pixel shader only needs a single rendering path for each light (and the depth pass, which is similar to how Doom3 works). Note that 1.3 pixel shaders only add a few instructions to 1.1 pixel shaders. However, 1.4 pixel shaders allow 6 texture stages, compared to 4 in 1.1 (or in 1.3) pixel shaders. 1.4 shaders further allow each texture to be read twice.

    That’s FutureMark’s story. We’ll explore the issue of pixel shader versions in more depth below.

  • Not enough DirectX 9 — FutureMark contends game test 4 uses an appropriate mix of pixel shader types. “Because each shader model is a superset of the prior shader models, this will be very efficient on all DirectX 9 hardware.” Also, the scene’s most striking elements, the water, sky, and leaves, use 2.0 shaders.

    Furthermore, FutureMark claims the test’s workload is appropriate for DX9-class hardware, with an average of 780,000 polys and “well over 100MB of graphics content” per frame. The paper states with confidence that “there will be a clear correlation between 3DMark03 and game benchmark results” once 3D games start using pixel and vertex shaders more thoroughly.
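
Returning to the vertex shader question for a moment, FutureMark’s resolution experiment works because vertex work per frame is fixed while pixel work grows with resolution, so whichever stage is slower sets the frame rate. Here is a toy model with made-up timings (nothing below is measured from 3DMark03) showing the two signatures: a vertex-bound scene holds the same score at every resolution, while a fill-bound scene slows down as the pixel count rises.

    #include <algorithm>
    #include <cstdio>

    // Frames per second when a frame costs 'vertex_ms' of vertex work (fixed per
    // frame) and 'pixel_ms_base' of pixel work at 1024x768 (scales with resolution).
    double fps(double vertex_ms, double pixel_ms_base, double w, double h)
    {
        double pixel_ms = pixel_ms_base * (w * h) / (1024.0 * 768.0);
        return 1000.0 / std::max(vertex_ms, pixel_ms);  // the slower stage sets the pace
    }

    int main()
    {
        const double res[][2] = { {640, 480}, {1024, 768}, {1600, 1200} };
        printf("resolution    vertex-bound scene    fill-bound scene\n");
        for (const auto& r : res) {
            // Hypothetical scene A: 40 ms of vertex work, 10 ms of pixel work at 1024x768.
            // Hypothetical scene B:  5 ms of vertex work, 30 ms of pixel work at 1024x768.
            printf("%4.0fx%-8.0f %10.1f fps %17.1f fps\n",
                   r[0], r[1], fps(40.0, 10.0, r[0], r[1]), fps(5.0, 30.0, r[0], r[1]));
        }
        return 0;
    }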

FutureMark defends the usefulness of its benchmark and claims the test’s impartiality is key. The implication is clear: sometimes it’s not easy being a benchmark house that produces unbiased products.

 

Evaluating the claims
When I began work on this article, I intended to offer my own attempt at an evaluation of the competing claims of FutureMark and NVIDIA. Now, Dave at Beyond3D has already offered an extensive evaluation with more detail than I was prepared to offer, so I will have to embarrass myself otherwise. I won’t attempt to match all of his analysis, but I will try to offer my thoughts on the four basic tech issues I’ve identified among NVIDIA’s complaints.

  • Not enough multitexturing in game test 1 — Complaints about game test 1 are a large part of NVIDIA’s case against 3DMark03. I’m persuaded by NVIDIA’s argument that FutureMark concentrated here on making a nice demo rather than a test representative of current games (and this first game test is indeed meant to represent current games). The percentage of pixels onscreen that are part of a skybox is probably a bit excessive. And anyone who’s seen the 3DMark03 demo can see how FutureMark’s developers could have taken a liking to the game 1 models and scene layout. This part of the 3DMark03 demo mode is really, really cool.

    However, FutureMark’s gaffe doesn’t seem too severe. Many current games are fill-rate bound, and many are bound by single-textured fill rate. Truly extensive multitexturing doesn’t dominate the scene quite like one might expect. Take, for instance, the poor Matrox Parhelia with its quad texture units per pipe and massive theoretical texel fill rate; two to three of those units are doomed to sit idle. This is one reason why ATI’s 8-pipe design for the R300 (Radeon 9700) chip makes so much sense.

    I’d prefer to have seen more complex geometry in this test to give it a little bit better balance. But to claim it’s not representative of current games isn’t entirely fair.

  • The stencil shadow volumes implementation — We’ve already looked at FutureMark’s response to NVIDIA’s claim here, and we’ve seen benchmarks which prove the test isn’t bound entirely by vertex shader performance. I’ll leave the fight over the best methods of vertex skinning to graphics developers, but this one seems like a victory for FutureMark.
  • Too much pixel shader 1.4 — This is a tough one, because it’s an old fight (PS 1.1/1.3 vs. PS 1.4) between NVIDIA and ATI, yet it’s a very current fight about the immediate future (the next 6-12 months or so) of 3D games.

    The primary reason pixel shader 1.4 has proven so useful to FutureMark is its ability to deliver per-pixel lighting effects in a single pass. As ATI pointed out to me in our conversation about 3DMark03, John Carmack’s now-famous .plan file update on the subject describes PS 1.4’s ability here:

    The fragment level processing is clearly way better on the 8500 than on the Nvidia products, including the latest GF4. You have six individual textures, but you can access the textures twice, giving up to eleven possible texture accesses in a single pass, and the dependent texture operation is much more sensible. This wound up being a perfect fit for Doom, because the standard path could be implemented with six unique textures, but required one texture (a normalization cube map) to be accessed twice. The vast majority of Doom light / surface interaction rendering will be a single pass on the 8500, in contrast to two or three passes, depending on the number of color components in a light, for GF3/GF4

    So PS 1.4 allows for single-pass rendering with per-pixel lighting, while pixel shader 1.1 and 1.3 require multiple passes to achieve the same effect. FutureMark apparently found the same thing in developing 3DMark03. I should note that no one has credibly claimed pixel shader 1.3 would reduce the number of rendering passes required versus PS 1.1 in 3DMark’s game tests. (A rough pass-count comparison follows this list.)

    However, it’s quite possible the introduction of tools like high-level shading languages and all the advanced features of DirectX 9-class hardware could cause a rather sharp break between DX8-class games and really out-there DX9-only games with gobs of complex pixel shaders of the 2.0 variety. In this case, PS 1.4 would never see widespread use.

    I tend to think we will see an earnest transition period in which a mix of 1.1, 1.4, and 2.0 pixel shaders along the lines of those used in 3DMark03 will be common practice, with different rendering paths used for different types of hardware. FutureMark had to do some guessing here, and they haven’t yet been proven wrong.

  • Not enough DirectX 9 — The simple reality is, FutureMark could only go so far in making a “full DX9” test. DirectX 9 is a young API, and the tools are just now coming together. In light of this fact, 3DMark03’s Mother Nature test seems like a decent first crack at a DX9 scene, and the procedural shaders in the PS 2.0 feature test are the kinds of complex shader programs one would hope to see. I asked FutureMark a couple of questions about the 2.0 shaders used in 3DMark03. The answers are worth repeating here.

    TR: How many instructions long are the pixel shader programs in the Mother Nature and PS 2.0 tests?

    FutureMark: We haven’t published the actual shader code, but I can reveal that the ps2.0 test’s procedural texture generation shaders are about as much as you can fit into a 2.0 pixel shader.

    TR: Did FutureMark use Microsoft’s High Level Shading Language or a similar tool in developing any of 3DMark03’s tests?

    FutureMark: All shaders are written in the assembly like shader language. This is because HLSL produces a pretty optimized shader code, but you can optimize even further manually.

    I have to admit, I’d rather see more and better shader programs written in HLSL, compiled at runtime, and running onscreen concurrently on the various cards. However, those are respectable answers for a first-generation DX9 benchmark. 3DMark03 isn’t the end-all, be-all DX9 test, but it seems like a reasonable start. FutureMark’s point about game test 4’s workloads being designed for DX9 class hardware is persuasive, as well.
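
To put the pixel shader 1.4 argument above into concrete terms, here is a rough pass count for per-pixel lighting, assuming the scheme Carmack and Sarkkinen describe: one depth pass, then a single pass per light under PS 1.4 versus two or three passes per light under PS 1.1/1.3. The factor of two below is the optimistic end of Carmack’s range; none of this comes from 3DMark03’s actual shader code, which FutureMark hasn’t published.

    #include <cstdio>

    // Rendering passes per frame for per-pixel lighting with N lights.
    int passes_ps14(int lights) { return 1 + lights; }      // depth pass + one pass per light
    int passes_ps11(int lights) { return 1 + 2 * lights; }  // depth pass + at least two passes per light

    int main()
    {
        for (int lights = 1; lights <= 4; ++lights)
            printf("%d light(s): PS 1.4 needs %d passes, PS 1.1/1.3 needs %d or more\n",
                   lights, passes_ps14(lights), passes_ps11(lights));
        return 0;
    }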

 

Revelations about GeForce FX
In the course of all the hubbub over 3DMark03, some intriguing revelations about the GeForce FX chip have surfaced. In part because of 3DMark03’s own fill rate tests, some folks have raised questions about the architecture of the FX. NVIDIA has essentially led the world to believe the GeForce FX has 8 pixel pipelines like the Radeon 9700 by claiming the FX can deliver 8 pixels per clock. However, now that the first few cards have trickled out to developers and select press, folks are finding that the chip performs more like a 4-pipeline design with two texture units per pipe. That’s a significant difference, because the chip’s pixel fill rate is apparently half what we originally understood it to be.

Let me explain quickly with a trusty chip chart showing the before and after scenarios.

                                    Radeon            GeForce FX        GeForce FX
                                    9700 Pro          5800 Ultra        5800 Ultra
                                                      (BEFORE)          (AFTER)
  Core clock (MHz)                  325               500               500
  Pixel pipelines                   8                 8                 4
  Peak fill rate (Mpixels/s)        2600              4000              2000
  Texture units per pixel pipeline  1                 1                 2
  Textures per clock                8                 8                 8
  Peak fill rate (Mtexels/s)        2600              4000              4000
  Memory clock (MHz)                620               1000              1000
  Memory bus width (bits)           256               128               128
  Peak memory bandwidth (GB/s)      19.8              16.0              16.0

As you can see, the pixel-pushing power of the FX in cases where only one fixed texture is being applied per poly is lower than the 9700 Pro’s, essentially erasing the FX’s clock speed advantage. Only in cases where multiple textures are being applied per polygon does the FX outrun the 9700 Pro.

This revelation goes a long way toward explaining why the GeForce FX isn’t much faster than the Radeon 9700 Pro in scenarios where, based on specs, many of us expected the FX to be faster.
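
If you want to check the table’s derived columns yourself, the arithmetic is simple: peak pixel fill rate is core clock times pipelines, peak texel fill rate multiplies in the texture units per pipe, and peak memory bandwidth is the effective memory clock times the bus width in bytes. Here is a quick sketch, using only the clock rates, pipe counts, and bus widths from the table above, that reproduces those numbers.

    #include <cstdio>

    struct Gpu {
        const char* name;
        double core_mhz, pipes, tex_units_per_pipe, mem_mhz, bus_bits;
    };

    int main()
    {
        const Gpu gpus[] = {
            { "Radeon 9700 Pro",                325, 8, 1,  620, 256 },
            { "GeForce FX 5800 Ultra (BEFORE)", 500, 8, 1, 1000, 128 },
            { "GeForce FX 5800 Ultra (AFTER)",  500, 4, 2, 1000, 128 },
        };
        for (const Gpu& g : gpus) {
            double mpixels = g.core_mhz * g.pipes;                    // peak pixel fill rate (Mpixels/s)
            double mtexels = mpixels * g.tex_units_per_pipe;          // peak texel fill rate (Mtexels/s)
            double gbps    = g.mem_mhz * (g.bus_bits / 8.0) / 1000.0; // peak memory bandwidth (GB/s)
            printf("%-32s %5.0f Mpixels/s  %5.0f Mtexels/s  %4.1f GB/s\n",
                   g.name, mpixels, mtexels, gbps);
        }
        return 0;
    }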

Now, some caveats. While the GeForce FX appears to perform like a 4×2 design, the reality seems a little more complex. We don’t know the exact layout of the GeForce FX’s internals, because NVIDIA has elected not to make that information public yet. When we asked NVIDIA about the exact pipeline configuration of the FX, we received this reply:

GeForce FX 5800 and 5800 Ultra run at 8 pixels per clock for all of the following:

a) z-rendering
b) stencil operations
c) texture operations
d) shader operations

For advanced applications (such as Doom3) *most* of the time is spent in these modes because of the advanced shadowing techniques that use shadow buffers, stencil testing and next-generation shaders that are longer and therefore make the apps “shading-bound” rather than “color fill-rate” bound.

Only color+Z rendering is done at 4 pixels per clock, all other modes (z, stencil, texture, shading) run at 8 pixels per clock.

The more advanced the application, the less percentage of total rendering is color, because more time is spent texturing, shading and doing advanced shadowing/lighting.

So the FX can only deliver 4 pixels per clock in more traditional rendering scenarios, but it’s able to do 8 pixels per clock in some cases. Based on all the evidence, the GeForce FX is apparently a very complicated design, in some ways less conventional than ATI’s R300. The chip seems to have many functional units capable of interacting together flexibly, in programmable ways. We don’t know exactly how flexible or limited the FX is, and it’s possible we may not know exactly for a good, long time. We do know that NVIDIA talked a lot before the introduction of the FX about how compiler optimizations would play a very important role in determining the performance of future GPUs, and about how new performance metrics would be needed to evaluate such chips.
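
Treating NVIDIA’s statement as a specification, the throughput picture looks like this. The enum and function below are mine, not NVIDIA’s; they simply restate the quoted claim in a form that makes the 4×2-versus-8×1 question concrete.

    #include <cstdio>

    enum class RenderMode { ZOnly, StencilOp, TextureOp, ShaderOp, ColorPlusZ };

    // Pixels per clock as described in NVIDIA's reply: 8 for z, stencil, texture,
    // and shader operations; 4 only when color+Z is being written.
    int pixels_per_clock(RenderMode m)
    {
        return (m == RenderMode::ColorPlusZ) ? 4 : 8;
    }

    int main()
    {
        const double core_mhz = 500.0;  // GeForce FX 5800 Ultra
        printf("color+Z rendering:     %4.0f Mpixels/s\n", core_mhz * pixels_per_clock(RenderMode::ColorPlusZ));
        printf("z/stencil/shader work: %4.0f Mpixels/s\n", core_mhz * pixels_per_clock(RenderMode::ZOnly));
        return 0;
    }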

This radical design may have a great many technical advantages over the R300, but it appears to have some tangible disadvantages, too. First and foremost among them: the FX acts in many cases as a 4×2-pipe design like the GeForce4 Ti.

3DMark03 and the real GeForce FX
One of the most puzzling questions I’ve been asking myself about this whole controversy is: Why is NVIDIA upset over the content of 3DMark03? Yes, I understand the criticisms the company has offered to the press in its whitepaper, but those haven’t seemed entirely worthy of the fuss. Now that we understand a little bit more about the exact capabilities of the GeForce FX, however, the reasons behind NVIDIA’s complaints come into sharper focus.

For starters, the complaints about game test 1’s skybox full of single-textured pixels seem much more relevant. The GeForce FX 5800 Ultra’s 600Mpixel/s disadvantage in pixel fill rate versus the Radeon 9700 Pro doesn’t bode well for NVIDIA here. Too much emphasis on single-textured fill rate could make this test—which comprises 26% of the 3DMark03 overall game score—a source of endless trouble for NV30-derived architectures.

Similarly—and this is pure, wild speculation here—complaints about FutureMark’s use of pixel shader 1.4 could be related to this fill-rate limitation. 1.4 shaders can deliver per-pixel lighting in a single pass, which PS 1.1/1.3 cannot do. However, NVIDIA’s Mark Daly mentioned to me NVIDIA’s attempts to get FutureMark to use “simpler techniques” to achieve “a visually similar result.” One way to achieve such a result would be the technique NVIDIA advocates in its 3DMark03 whitepaper: the use of precomputed lightmaps, which would achieve similar results by laying down an additional, fixed texture in each rendering pass. Of course, precomputed lightmaps need not use pixel shaders at all, but they might help the FX’s showing.

Like I said, that’s pure speculation. NVIDIA may well have had a different shader-based technique in mind. My point here is simply to emphasize that we have, until very recently, not known about the FX’s four-pixels-per-clock limitation, and we still don’t know very much about how the GeForce FX really works. Our understanding of the conflict between FutureMark and NVIDIA may well sharpen as we learn more about the FX’s true strengths and weaknesses.

 

The future of the 3DMark03 controversy
I should say here that I mean no insult by not entirely taking NVIDIA’s complaints at face value. To the contrary, the fight over 3DMark03 may well be, in a sense, a proxy fight over the direction developers will take in writing upcoming games. NVIDIA’s claim that 3DMark03 doesn’t represent actual (future) games may jibe quite well with what NVIDIA’s developer relations team is currently recommending to game developers. The trouble is, the GeForce FX’s radical design may require some serious mindset adjustments among developers, and such things typically take some time to sink in.

Of course, this fight is primarily over 3DMark03’s widespread acceptance as a successor to 3DMark2001, and the issue is far from settled. FutureMark has several key constituencies to win over, including end users, members of the media, and its beta program members.

I can’t comment much on the status of its beta program members other than to say that NVIDIA is a very big loss. Losing the graphics market sales leader will hurt the credibility of FutureMark’s graphics test, without a doubt. The fact NVIDIA’s primary rival, ATI, remains a first-tier member of the beta program will raise questions about undue influence as long as the situation persists. However, no other beta members have, to my knowledge, broken ranks yet. NVIDIA may be seen as the primary problem here, which could actually enhance FutureMark’s credibility, if other beta members see FutureMark as standing up to a bully.

Hardware reviewers are a more complicated case. NVIDIA’s initial PR push was successful, in part, because of the nature of the task at hand: taking apart a synthetic benchmark and criticizing the design choices the authors made in building it. By nature, the deconstructive task is easier than the constructive one. And we are a skeptical lot. I believe many hardware testers (and certainly their readers) carry in them an innate insecurity about the validity of their own methods, which is, of course, in some ways key to avoiding pitfalls in performance testing. We often find the ground shifting under us as IHVs exert influence on makers of popular applications and benchmarks, which makes us skittish. These dynamics helped NVIDIA’s criticism fall on fertile ground.

As for me and my house, we will keep an eye on this controversy, but barring any unforeseen changes, we will use 3DMark03 as we have used 3DMark2001. That is, we will continue to offer 3DMark results as part of a wide range of tests. We’ll continue to present 3DMark results in more detail than most publications, so readers can see the scores behind the score, and we’ll offer context wherever we can. We will also keep looking for new and better benchmarks, especially those from new games and game engines. And I can’t wait for better DX9 pixel shader tests.

As for end users, well, I’ll let you all decide. 

Comments closed
    • Anonymous
    • 17 years ago

    CONTROVERSY IS GOOD! The more NVidia and ATI compete, the better their cards will be in the future.

    Who needs a stupid benchmark? Just test 2 different cards using our favorite games!

    -Unicron

    • sativa
    • 17 years ago

    [quote]So the Nv35 will be like the R350…..LOL. Little too late as the R400 will be out soon too……….. [/quote]So the R350 will use 500mhz DDRI with a 256bit bus?

    • Anonymous
    • 17 years ago

    So the Nv35 will be like the R350…..LOL. Little too late as the R400 will be out soon too………..

    • Ardrid
    • 17 years ago

    Some info in from the rumor mill: NV35 will use a 256-bit bus and produce half as much heat as NV30. Supposedly NVIDIA was showing it off to ppl at IDF.

    • Anonymous
    • 17 years ago

    Rather than get into the bashing, trashing, ranting and raving, my sole comment:

    Thank you for an extremely well written and well researched explanation of the subject matter. This was an objective, in-depth article which obviously took a great deal of time and effort. The piece helped explain to those of us who wouldnt know a pixel shader from a european brown toad, the two positions of the commercial gladiators, who have both made gargantuan steps forward in the last few years to all our benefit. Laude’

    Think I shall wait till all the cards are played before I pick a winner. That, in my opinion, will not occur until the competitors hole cards are seen: Ati’s 350 and Nvidia’s FX 35.

    • Anonymous
    • 17 years ago

    So the Nv30 will run DOOM3 well. And fall behind in other games compared to the 9700pro and the r350…. WOW. Good investment i say. Like herpes…….

    • JustAnEngineer
    • 17 years ago

    The water in game test 4 looks great. At least you can see something using vertex shader 2.0 and say “wow” about it.

    3DMark03 may become irrelevant when Doom3 arrives. The analysis at Beyond3D suggests that GeForceFX’s general implementation of DX9 shaders is crappy, but that NVidia’s approach [b]is[/b] highly optimized for the precise way that Doom3 renders, so NV30 may suck at many tasks, but it should run Doom3 very well.

    • Anonymous
    • 17 years ago

    the_xolf has it down exactly right.

    the graphics were decent, but graphics that look almost as good or better run fine on my machine.

    what gives ? I highly suspect that games and apps are not being optimized enough for the technology that already exists, and that goes for cpus as well.

    and on a side note, what a shitty bunch of benchies.

    just did nothing for me. 3dmark2000 and 3dmark2001 made me go, wow…this new one just sucks.

    • Anonymous
    • 17 years ago

    The reason Nvidia is optimizing it’s drivers for 3D marks is plainly for the benifit to push its hardware towards companies like DELL and others. As you know, those companies look at 3D marks as a benchmark to determine which high-end cards will be in their boxes. The truely real reason Nvidia is pissed at 3d marks 2003. It makes the card look bad and in turn, you just might find the new line of ATI cards in all of DELLS and the “other” boxes real soon. As most already have the 9700pro. Yup its a sad day when Nvidia is concerned over a bench mark test to enhance the drivers and not to enhance the games. I remember even ATI doing something of that nature with Quake3 tests.

    • sativa
    • 17 years ago

    AG63, I completely agree. these ‘pull up your pants’ people obviously didn’t read the article.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    I would like to say… the important part of the article is that 3dmark takes away from improvement of drivers for games. I hate to think that Nvidia workers actually have to sit around and think how to optimize 3dmark scores just to sell more product. When they should be working on other more important things like fixing drivers for counter strike. lol

    • Anonymous
    • 17 years ago

    “Pull up you PANTS!” 61. It’s spelled “Pull up you PANTS!”

    • Anonymous
    • 17 years ago

    Boohoo! Pull up your pants!

    You know, that IS really catchy. I’m going to hafta work that one in sometime. Example:

    “Did you catch the weather forcast today?”

    “Yeah, more rain. What a bummer. I was really hoping to sunburn my butt today.”

    “Boohoo! Pull up your pants!”

    • Pete
    • 17 years ago

    No problem, fm. I’m guessing NV35 will fix all that ails NV30 (a la TNT->TNT2 & GF->GF2).

    BTW, R(V)350 should be announced March 5th.
    http://www.extremetech.com/article2/0,3973,899157,00.asp
    ATi already wants to know if we're ready: http://www.ati.com/

    • freshmeat
    • 17 years ago

    Thanks, Pete!

    I remain curious as to whether the nv30 represents a poor implementation of the wave of the future, an evolutionary dead-end, or a kludge thrown together out of desperation. If the first, then presumably nvidia’s next effort will build on this and show its promise, if the second, it will remain a mere curiosity, and if the third, nvidia’s in big trouble.

    I think it’s too early to tell, and I want to see nvidia’s next step, as that will in large part answer the above question, but I’m not waiting for an upgrade 🙂

    • Anonymous
    • 17 years ago

    What a Crock!

    • Anonymous
    • 17 years ago

    Whoa!

    Whoever is the Human Resource manager in charge of hiring marketing people better shift his skill in evaluating engineering candidates! I mean really!!! I am becoming more and more suspicious that nVidia is just a big marketing company.

    But anywho – want some cheese with that whine Nvidia???

    • Anonymous
    • 17 years ago

    *[

    • Pete
    • 17 years ago

    [quote]I could be wrong but UT2K3 uses pretty heavy use of PS in DX8.[/quote]UT2K3 has shader support, but last I read hardware shaders provide only minimal speed benefits and no IQ benefits. I’m not even sure UT2K3 is DX8, I think it might be DX7 (thus why shaders were added late, and provide no real benefit).

    [quote]Question: since when did Microsoft dictate how video cards work? If I remember correclty DX was only a layer to the graphic subsystem and the video card dictated what features it had and prgrammers only had to write to DX instead of the video card directly?[/quote]DX is an API, like OGL. Different manufacturers present hardware targets, and MS picks and chooses to set a minimum standard. MS went with nV for the past few designs, but based DX9 on ATi HW, as they still want to control the dominant Windows API and maintain competition in the market. They don’t want nV to get so powerful that they can introduce their own Glide and thus exert a more powerful influence on PC (and now Xbox) game development.

    • Pete
    • 17 years ago

    [quote]Why so? I would have assumed that if PS2 was required to be backwardly compatible with PS1.4, that PS1.4 functionality constituted a subset of PS2 capabilities, and thus PS1.4<PS2. After all, x86 processors are backwardly compatible, yet you wouldn’t say 8086=P4, would you? I understood nvidia’s criticism as follows: In the future, all manufacturers will implement PS2.0. However, at this time, when no-one has implemented PS2.0, using PS1.4 is unfair because it benefits only ati. By using ps1.4, it allows ati to look as though it is a dx9 card ‘o the future, yet ati has yet to make the jump to PS2.0, just like eveyone else. If 3DM required the use of PS2.0, then the ground would be level, as anyone can implement PS2.0 (and, of course, no-one has yet) [/quote]
    PS2.0 is required to support all lower PSx.x, so PS2.0-compliant hardware is by definition PS1.4 compliant as well. 3DM03 is a next-gen benchmark, not current gen–FutureMark is aiming for the next gen of video cards, not at making a benchmark that makes the most cards look good (3DM01SE already does that). FutureMark also said they aimed to use the minimal codeset nec’y; some effects obviously didn’t need PS2.0, so FutureMark coded them in PS1.4, knowing that all PS2.0 hardware supports PS1.4 as well.

    ATi’s R200/RV250 (8500/9100/9000) series may benefit some from their forward-looking hardware (I’ve heard PS1.4 called “PS2.0 lite”), but nVidia’s NV25 (GF4Ti) still outperforms them.

    • d0g_p00p
    • 17 years ago

    wumpus, I could be wrong but UT2K3 uses pretty heavy use of PS in DX8. Anyhow like most people here have said, I look at how the games play right now on my computer and I could care less about the benchmarks. Right now I have 2 computers that have a Ti4600 in one and a 9700Pro in the other. If you want to know, yes I use the 9700P for gaming, only because I like AA and Anso. However with the games I play without the AA and Anso, both cards are plenty fast for what they do. Hell I am a nvidia person only because I like “things to work” but I did purchase an Ati card because right now it satisfies my gaming “experience”

    Geforce FX. yeah a let down to me, only because 1 you cannot purchase it and 2 because of that insane cooling crap on it. Call me a “nvidiot” or whatever but like I said I just like things to work and nvidia has yet to fail me on that (keep in mind I dont upgrade to whatever it the latest det driver).

    Again though, I went with Ati this time because the tech and the price is right. Little drunk here so ignore the spelling and rambling.

    Question: since when did Microsoft dictate how video cards work? If I remember correclty DX was only a layer to the graphic subsystem and the video card dictated what features it had and prgrammers only had to write to DX instead of the video card directly?

    bah.

    • Anonymous
    • 17 years ago

    Yep, I think it’s been well known that HardOCP has been an obedient slave to nVidia’s PR machine for quite some time.

    • crazybus
    • 17 years ago

    freshmeat
    If all dx9 hardware supports ps1.4, why would it make a difference in the test as it requires full dx9 support?

    • Anonymous
    • 17 years ago

    I read with interest the comment from nVidia “we will have to waste time writing drivers for the test instead of for games”.

    What is the point of this test anyway? To test the raw capabilites of the card? Remember a few years ago when a company was caught fudging drivers to make benchmarks look better?

    Not everyone wants drivers that will scream on a few games, but suck when you need to do work.

    Ray/Layton, Utah

    • indeego
    • 17 years ago

    That’s a good slogan, but I think:
    [quote] NO i[

    • Anonymous
    • 17 years ago

    *[

    • sativa
    • 17 years ago

    Yeah props to damage. good article.

    • WaltC
    • 17 years ago

    Excellent summation, Scott.

    • Anonymous
    • 17 years ago

    It’s funny, I was reading the article(well done Tech-Report), and I got the feeling that nVidia is trying to tell FutureMark how to make a benchmark utility. What’s next? nVidia telling John Carmack how to program a game?

    I think if nVidia wants to do just that, keep having influence and a say in FutureMark’s utilities, they should have remained a member of the Beta program.

    • Anonymous
    • 17 years ago

    #14 the_xolf: have you considered that games 2 & 3 looking little better than 1 on your ti4400 may be because your card can’t manage to do everything that these games require for full visual effect? My radeon 9700 pro shows 2 & 3 to be of substantially higher quality.

    • eitje
    • 17 years ago

    [i]Something EVERYONE seems to be missing is that [H]ard|OCP has said they won’t be using the 3DMark03 overall score for comparison purposes in reviews. That doesn’t mean they can’t run the benchmark and look at the individual test results, and try to get some useful information out of them.[/i]

    which is exactly what Damage said he’d be doing here @ TR – looking at things at a more detailed level.

    of course, we all know he’s only doing it that way so that he can have more charts & graphs. 😉

    • freshmeat
    • 17 years ago

    Doh! You folks are correct! Now I’m even more interested in getting a 9700 pro . .

    So why doesn’t 3DM just use PS2.0 functions exclusively on its dx9 test? That would stop nvidia’s whining (well, answer it, anyway, I’m not sure whining ever stops) and remove any questions about ati unfairly benefiting from the structure of the test.

    • Anonymous
    • 17 years ago

    ATI’s entire R300 based product line, 9500, 9500Pro, 9700, 9700Pro are ALL PS/VS 2.0 compliant. They’ve supported DX9 PS and VS 2.0 compliant since day one.

    –|BRiT|

    • Anonymous
    • 17 years ago

    freshmeat,

    what do you mean by this : “yet ati has yet to make the jump to PS2.0, just like eveyone else.” ?

    As far as i know R9700 is DX9 compliant including PS2.0…

    Adi

    • Anonymous
    • 17 years ago

    Summary: Current GF FX D3D drivers use FP16 (16 bit floating point) as default when precision is not specified for registers, however DX9 specifications require FP24 (24bit floating point). Currently, if Nvidia desires to be WHQL certified, they will have to use FP32 which runs at half the speed of FP16 in the NV30.

    Speculation: This could mean that the NV31 and NV34 will not be DX9 compliant if they do not have FP32 capabilities (which some have reason to suspect they don’t/won’t) and the GF FX will be even slower when run as a DX9 compliant part.

    Taken straight from Beyond3D forums — http://www.beyond3d.com/forum/viewtopic.php?p=87010#87010 — "As of this moment thats wrong, the D3D spec have been up'ed to FP24 minimum for temp registers (i.e. FP16 is only allowed with the partial precision flag). It MAY be changed back later, but I heardn't anything yet (I brought the whole issue up with nVidia and MS and I've got a book article to write (they know this) ). Last I heard nVidia is still in discussion with MS about a solution. Just to reiterate as this point the official MS line is FP24 is default. Sorry that I keep going on about it but until something changes the current GFFX driver should only be registering PS_1_4 (the hardware is able to do PS_2_0 but the driver isn't). Alot of developer don't understand how bad this would make development of future games. Currently you can't guarentee results from CG or HLSL with GFFX D3D driver! It will produce incorrect visuals (both it and HLSL assume float=FP24/FP32), if thats what you intended (i.e. depth mapping) the algorithm will FAIL if forced to use FP16. You may be lucky, and be o.k. with half being swapped behind your back for floats, but if your not it will render incorrectly!"

    • freshmeat
    • 17 years ago

    Oh, and did I mention? I’m getting a 9700 pro as soon as I can scrape together teh monies. Maybe I’ll catch nvidia the next time around . . . 🙂

    • freshmeat
    • 17 years ago

    Nice article — glad to see some effort to examine the issue. Not to kick the gerbils when they’re down, but the validity of a claim is an issue independent of the motivations of the claimant — claims stand or fall on their own merit.

    Obviously, for most of us, the key consideration in purchasing a new video board is “how will it run the games I play”, so I have little use for synthetic benchmarks. Of course, even with game benchmarking, there can be issues of optimization or outright cheating, but nevertheless, it remains a truer test of how useful a card will be to the individual consumer.

    And Pete, a clarification please:

    [q]Firstly, PS1.4 = PS2, as all PS2 hardware is required to be backwards-compatible. So nVidia’s complaint about there being too much PS1.4 and too little DX9 doesn’t make sense to me[/q]

    Why so? I would have assumed that if PS2 was required to be backwardly compatible with PS1.4, that PS1.4 functionality constituted a subset of PS2 capabilities, and thus PS1.4<PS2. After all, x86 processors are backwardly compatible, yet you wouldn’t say 8086=P4, would you? I understood nvidia’s criticism as follows: In the future, all manufacturers will implement PS2.0. However, at this time, when no-one has implemented PS2.0, using PS1.4 is unfair because it benefits only ati. By using ps1.4, it allows ati to look as though it is a dx9 card ‘o the future, yet ati has yet to make the jump to PS2.0, just like eveyone else. If 3DM required the use of PS2.0, then the ground would be level, as anyone can implement PS2.0 (and, of course, no-one has yet)

    • Anonymous
    • 17 years ago

    Something EVERYONE seems to be missing is that [H]ard|OCP has said they won’t be using the 3DMark03 [b]overall score[/b] for comparison purposes in reviews. That doesn’t mean they can’t run the benchmark and look at the individual test results, and try to get some useful information out of them. Which is good, because you don’t really get a sense of how much faster a 9700 (for example) is in DX8 tests than a Ti 4600 just by looking at the overall score, because of the extra points from Game 4.

    • Anonymous
    • 17 years ago

    Maybe they should have seperate scores for the three DX types.

    • Anonymous
    • 17 years ago

    Heh, I forgot to say, nice article by the way :).

    • Anonymous
    • 17 years ago

    Wow, another example showing that Hard|OCP are a bunch of bandwagon-jumping puppet-morons. Ahh… I like their link to the article too… They carefully pick the one line that suits them. Idiots.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    #28 what the smart people here are trying to tell you.. is the 3DM is irrelavent.. who cares if u get a 10,000 or a 2,000 in 3DM??? What matters is how the cards plays in game.. If u play UT2k3 and the nVIDIA card gets 100 fps and great IQ but only a 2000 in 3dm and the Radeon get 10000 in 3dm but only 20 fps in UT2K3 (or vise versa) which card are you going to get? Real World test are what matters in the end..

    nVIDIA has a list of games on their site, I want to know how their card plays in those games.. if I play Delta Force Black Hawk Down, I want to know is the GFFX better in that game, or is the 9700pro.

    • Anonymous
    • 17 years ago

    #28
    Don’t worry, GF4 is dx8 card, 3Dmark03 is DX9 benchmark, and Ti4600 still has better score than Radeon 8500/9000

    • Anonymous
    • 17 years ago

    Just Brew It!
    Please don’t buy ATI, you won’t be able to make it work. ATI sucks, ATI driver suck

    • Anonymous
    • 17 years ago

    Very interesting read indeed, many thanks for all your comments too!

    I did the new 3D benchmark test on my PC:

    Ti4600 GF4
    2.26 Ghz P4 (533 BUS)
    1024 MB DDR RAM
    SB Audigy 2

    Whilst watching the test, I felt… ASHAMED of my GF4!!! I was watching a bloody slide show most of the whole thing:(

    Which makes me think though, is 3D Benchmark biased against nVIDIA cards??? Or is it simply the fact that Radeon are indeed the future today with their 9700 Pro for example?

    To be honest it is all confusing, because my above system specs, shined on the previous version of the 3D Benchmark.

    More comments please:)

    • Samlind
    • 17 years ago

    Great article!

    I’d urge you to not only use 3DMark 2003, but continue to use 3DMark 2001 for a least the next year. Helps us get some perspective on how things are changing, and how far things have come. Most new games will be mostly DX8 tittles for a while anyway. Cutting edge games using DX9 exclusively have too few potential buyers yet.

    • Anonymous
    • 17 years ago

    *[

    • Anonymous
    • 17 years ago

    it will be nice when DooM III finally does come out and put an end to what future games will be like and what type of hardware will be needed. after all, a large majority of games will be using the DooM III engine for the next few years.

    • Anonymous
    • 17 years ago

    Exactly! If the GeforceFX were a couple of thousand points ahead of the 9700Pro, nVidia wouldn’t be complaining like this would they? The only reason they are complaining, is because they ended up building a vacuum cleaner instead of a video card! Boohoo, who’s fault is that? Not Futurmark’s!

    Try to imagine if the GeforceFX kickin’ 9700Pro’s a** in 3DMark03, would nVidia come out and say the benchmark is unfair? No way!

    Everybody hates a sore looser nVidia, don’t you guys know that? Did ATI go crying when they came out with the 8500? No, they pulled up their pants, and went back to work! And now they’re getting the benifits of all this hard work.

    Boohoo! Pull up you PANTS!

    • atidriverssuck
    • 17 years ago

    AG21….you’d think more people would understand that too, but no.

    • Anonymous
    • 17 years ago

    Finally an unbiasis opinion!

    I’ve read Tom’s Hardware and HARD-OCP articles before this.
    My overall impression of TW and HOCP is that they were PAID to do so. While Anandtech, the most intelligent of the 3 major Tech Site took the cowardly way out by staying silence….probably don’t want to anger the all-mighty nVidia.

    Kudo for Tech-Report for the Guts to published this Articles.
    I’m sure nVidia won’t be please.

    Furthermore, it probably won’t be long before nVidia starting to discrediting “Tech-Report.com” as an ATI-Biasis Lover Site with too much emphasize on Theortical Benchmark instead of the Actual Games.

    • Anonymous
    • 17 years ago

    /me never bothered looking at 3dmark anyway. Game benchmarks that i play are what matters. and a decent 2d output.

    • Anonymous
    • 17 years ago

    Alternatively, we could all just PLAY FREAKIN’ GAMES with our computers and actually have some fun with the stuff we spent our hard earned cash on.

    Or we could sit around all day discussing how “suddenly” a synthetic benchmark has become irrelevant.

    Tough choice *pops in C&C: Generals*

    • Anonymous
    • 17 years ago

    [q]More Benchmark Madness:
    I personally have made up my mind that we will not be using the overall 3DMark03 score for video card testing, but others are finding it an issue still worth talking about at TechReport.

    Losing the graphics market sales leader will hurt the credibility of FutureMark’s graphics test, without a doubt.[/q]
    :: [H]ardOCP

    omfg. they’re making false notions! Misleading ppl(who not taking time to read the original article) to think the conclusion of the original article is FM’s credibility at risk. wtf wrong with [H]? are they really getting fed by nV?

    • Pete
    • 17 years ago

    Firstly, PS1.4 = PS2, as all PS2 hardware is required to be backwards-compatible. So nVidia’s complaint about there being too much PS1.4 and too little DX9 doesn’t make sense to me–unless they’re looking out for their non-DX9 line, like maybe the NV31/34.

    Or unless they’re looking to run 3DM in FP16 rather than FP32, “behind” 3DM’s “back,” which is a possibility given that they said they’re specifically telling the drivers how to interpret 3DM03’s commands, ostensibly differently than how normal apps would run. This runs into the whole optimization issue, as it seems NV30 may need much more hand-tuning than R300, because of its unusual architecture (like P4+SSE2 vs. Athlon’s “brute force”). So nV might have a valid concern, in that they’d rather get their optimizers working on current and upcoming games that haven’t been made using Cg (with NV30’s “twitchiness” in mind).

    Secondly, in terms of HLSL vs. assembly, well, you’d have to imagine that the longer devs have to tune HLSL, the closer it’ll be in performance to hand-tuned shader assembly. Sure, HLSLs will help game devs to spend more time on AI and music and other things, but FutureMark really only has to focus on graphic rendering, so HLSL doesn’t seem so pressing. Not to mention a few of the FutureMark crew were coders in the demo scene, AFAIK, so they’re probably pretty adept with the assembly thing. 😉

    Most importantly, why is NVIDIA ALL CAPS IF IT’S NOT AN ACRONYM? We have [i]n[/i]VIDIA on one side, and AT[i]i[/i] on the other–these companies understand they don’t have to copy each other in [i]every[/i] way, right?

    In the end, I’m more interested in game benchmarks than 3DM03 in determining how well a card will play games. As long as review sites realize that, and take the time to bench and [i]play[/i] as wide a variety of games as possible with review hardware (rather than running 3DM and Q3 overnight and rushing to Excel the results), Joe Sixpack (and I) will be better served.

    • Kart
    • 17 years ago

    Very nice article.

    Personally, I’m not going to pass any judgement on either company. I don’t feel it is my place as a gamer to do so. In the past the top game developers have done us well, and I believe that they will continue to do so.

    Kudos to FutureMark for holding their ground.

    Kudos to Nvidia for speaking out.

    • Anonymous
    • 17 years ago

    *[

    • droopy1592
    • 17 years ago

    And I thought [H] OCP were better for not using the benchmark. Boy was I wrong. Looks like they weren’t using it because Nvidia is having a hissyfit over then bench, and since Kyle and the gang seem to been funded partially by Nvidia, it helps me to understand better what’s really going on.

    • Anonymous
    • 17 years ago

    *[

    • EasyRhino
    • 17 years ago

    And whoever’s right gets let out of the closet? Sweet!

    I don’t like how 3dmark-3 leaves old hardware with super-few benchmarks to run, yet still goes and spits out a score.

    3dMark 2001, if you have a DX7 card, you could run 2/3 of the game tests, and most of the individual tests (cept for pixel shader)

    ’03, if you have a DX 7 card, you can only run 1/3 of the tests, and a few individuals. A DX 8 card can run more, and only a DX 9 card can run them all?

    Why bother? I think Future mark should have either made a DX 8 card the effective minimum requirement, or let the DX 7 cards have more to play with.

    ER

    • Anonymous
    • 17 years ago

    *[Originally Posted by Trident

    • My Johnson
    • 17 years ago

    Kyle’s article on 3DMark03 says that he wasn’t yet authorizing use of it in testing for {h}’s reviews. Will Scott’s deft analysis of the benchmark change his mind? Does anyone still read {h}’s articles?

    • Anonymous
    • 17 years ago

    *[

    • Ardrid
    • 17 years ago

    Give the man a break…he’s probably tired 🙂

    • Anonymous
    • 17 years ago

    [q]…in part thanks to publications this like us using the tests in evaluating PC hardware.[/q]
    Like this? Like us? Hmmph, never a grammar cop around when you need one.

    • indeego
    • 17 years ago

    Good article. And like [H], Tom’s, and Anand, they all will not base a card on a single bench via 3dmark, and to my knowledge never have.

    Lots of hot air over tech that most gamers won’t see for years, in reality. It’s ultimately up to the -[

    • Ardrid
    • 17 years ago

    Excellent article, Damage. Fine work as usual…with a little humor thrown in 🙂 It does seem that as we find out more and more about the FX and it’s architecture, nVidia’s criticisms tend to become much clearer. I wonder though if we’ll ever know the “real” story behind all of this.

    • Anonymous
    • 17 years ago

    Hey Zenith, you can always wait for the NV40. LOL.

    • Anonymous
    • 17 years ago

    Great article scott. Nice to see a site balancing the various PR claims against eachother in a fair manner.

    *tips hat

    • Zenith
    • 17 years ago

    AG#2 – NV isn’t the only one crying……..I am because i don’t want to have to buy a ATi card to get the best…eww.

    • Anonymous
    • 17 years ago

    Good article. Nvidia crying the blues because they are not on the top of the heap this time. If the FX performed better then the 9700pro. I don’t think Nvidia would be crying like a little bitch…..

    • Ardrid
    • 17 years ago

    Damage is on the trail. Now, it’s time for me to read this article 🙂
